Data grid
A data grid is an architecture or set of services that gives individuals or groups of users the ability to access, modify and transfer extremely large amounts of geographically distributed data for research purposes. Data grids make this possible through a host of middleware applications and services that pull together data and resources from multiple administrative domains and then present them to users upon request. The data in a data grid can be located at a single site or at multiple sites, where each site can be its own administrative domain governed by a set of security restrictions as to who may access the data. Likewise, multiple replicas of the data may be distributed throughout the grid outside their original administrative domain, and the security restrictions placed on the original data must be applied equally to the replicas. Specifically developed data grid middleware handles the integration between users and the data they request by controlling access while making the data available as efficiently as possible.
Middleware
Middleware provides all the services and applications necessary for efficient management of datasets and files within the data grid while giving users quick access to those datasets and files. There are a number of concepts and tools that must be available to make a data grid operationally viable. At the same time, not all data grids require the same capabilities and services, because of differences in access requirements, security and the location of resources relative to users. In any case, most data grids will have similar middleware services that provide a universal namespace, a data transport service, a data access service, data replication and a resource management service. Taken together, these services are key to the data grid's functional capabilities.
Universal namespace
Since the sources of data within the data grid consist of data from multiple separate systems and networks using different file naming conventions, it would be difficult for a user to locate data within the data grid and know they retrieved what they needed based solely on existing physical file names (PFNs). A universal or unified namespace makes it possible to create logical file names (LFNs) that can be referenced within the data grid and that map to PFNs. When an LFN is requested or queried, all matching PFNs are returned, including possible replicas of the requested data. The end user can then choose from the returned results the most appropriate replica to use. This service is usually provided as part of a management system known as a Storage Resource Broker (SRB). Information about the locations of files and the mappings between LFNs and PFNs may be stored in a metadata or replica catalogue. The replica catalogue contains information about LFNs that map to multiple replica PFNs.
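The LFN-to-PFN mapping described above can be sketched as a small replica catalogue. This is a minimal illustration only, assuming an in-memory dictionary; the LFN and PFN strings are invented, and a real SRB keeps this mapping in a metadata database.

```python
class ReplicaCatalogue:
    """Toy replica catalogue mapping logical file names to physical replicas."""

    def __init__(self):
        self._mapping = {}  # LFN -> list of PFNs (one per replica)

    def register(self, lfn, pfn):
        """Record a physical replica location for a logical file name."""
        self._mapping.setdefault(lfn, []).append(pfn)

    def lookup(self, lfn):
        """Return all physical replicas that match a logical file name."""
        return list(self._mapping.get(lfn, []))


catalogue = ReplicaCatalogue()
catalogue.register("lfn://climate/run42.dat", "gsiftp://siteA/data/run42.dat")
catalogue.register("lfn://climate/run42.dat", "gsiftp://siteB/mirror/run42.dat")

# A query returns every matching replica; the user picks the best one.
print(catalogue.lookup("lfn://climate/run42.dat"))
```

Querying an unknown LFN simply returns an empty list, leaving the choice of replica entirely to the caller.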
Data transport service
Another middleware service is that of providing for data transport or data transfer. Data transport encompasses multiple functions that are not limited to the transfer of bits, including such items as fault tolerance and data access. Fault tolerance can be achieved in a data grid by providing mechanisms that ensure data transfer will resume after each interruption until all requested data is received. Possible methods range from restarting the entire transmission from the beginning to resuming from the point where the transfer was interrupted. As an example, GridFTP provides fault tolerance by resending data from the last acknowledged byte without restarting the entire transfer.
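The resume-from-last-acknowledged-byte behaviour can be illustrated with a short sketch. The transfer loop, the byte source and the injected failure are all invented for illustration; this shows only the restart-marker idea, not the GridFTP protocol itself.

```python
def transfer(source, received, fail_after=None):
    """Copy bytes from source into received, resuming at len(received).

    len(received) acts as the restart marker: the offset of the last
    acknowledged byte. fail_after injects a simulated link failure.
    """
    offset = len(received)
    for i in range(offset, len(source)):
        if fail_after is not None and i - offset >= fail_after:
            raise ConnectionError("link dropped")
        received.append(source[i])  # each stored byte counts as acknowledged


data = b"x" * 1000
buf = bytearray()
while len(buf) < len(data):
    try:
        transfer(data, buf, fail_after=300)  # the link drops every 300 bytes
    except ConnectionError:
        pass  # resume from the last acknowledged byte, not from the start

assert bytes(buf) == data  # all bytes arrive despite repeated interruptions
```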
The data transport service also provides for the low-level access and connections between hosts for file transfer. The data transport service may implement the transfer in any of several modes: parallel data transfer, where two or more data streams are used over the same channel; striped data transfer, where two or more streams access different blocks of the file for simultaneous transfer; or use of the underlying built-in capabilities of the network hardware or specially developed protocols to support faster transfer speeds. The data transport service might optionally include a network overlay function to facilitate the routing and transfer of data, as well as file I/O functions that allow users to see remote files as if they were local to their system. The data transport service hides the complexity of access and transfer between the different systems from the user, so the data grid appears as one unified data source.
Data access service
Data access services work hand in hand with the data transfer service to provide security, access controls and management of any data transfers within the data grid. Security services provide mechanisms for authentication of users to ensure they are properly identified. Common forms of authentication include the use of passwords or the Kerberos protocol. Authorization services are the mechanisms that control what the user is able to access after being identified through authentication. Common authorization mechanisms can be as simple as file permissions, but more stringently controlled access to data is provided by Access Control Lists (ACLs), Role-Based Access Control (RBAC) and Task-Based Authorization Controls (TBAC). These types of controls can provide granular access to files, ranging from limits on access times or duration of access down to controls that determine which files can be read or written. The final data access service that might be present to protect the confidentiality of the data transport is encryption. The most common form of encryption for this task has been the use of SSL while the data is in transport. While all of these access services operate within the data grid, the access services within the various administrative domains that host the datasets still stay in place to enforce access rules. The data grid's access services must be in step with the administrative domains' access services for this to work.
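As a sketch of the simplest of these authorization mechanisms, the following models per-file ACL entries as sets of permitted access modes. The file names, principals and modes are invented; real grid middleware layers such checks on top of the site-local access services rather than replacing them.

```python
# Hypothetical ACL table: logical file name -> principal -> permitted modes.
acl = {
    "lfn://medical/trial7.dat": {
        "alice": {"read", "write"},
        "bob": {"read"},
    },
}


def authorized(user, lfn, mode):
    """Return True if the ACL grants this user the requested access mode."""
    return mode in acl.get(lfn, {}).get(user, set())


assert authorized("alice", "lfn://medical/trial7.dat", "write")
assert authorized("bob", "lfn://medical/trial7.dat", "read")
assert not authorized("bob", "lfn://medical/trial7.dat", "write")
```

Unknown files and unknown users fall through to an empty mode set, so access is denied by default.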
Data replication service
To meet the needs for scalability, fast access and user collaboration, most data grids support replication of datasets to points within the distributed storage architecture. The use of replicas allows multiple users faster access to datasets and preserves bandwidth, since replicas can often be placed strategically close to or within the sites where users need them. However, replication of datasets and creation of replicas is bound by the availability of storage within sites and bandwidth between sites. The replication and creation of replica datasets is controlled by a replica management system. The replica management system determines user needs for replicas based on input requests and creates them based on the availability of storage and bandwidth. All replicas are then catalogued, with their locations added to a data grid directory that users can query. To perform these tasks, the replica management system needs to be able to manage the underlying storage infrastructure. The data management system also ensures that changes to replicas are propagated to all nodes in a timely manner.
Replication update strategy
There are a number of ways the replication management system can handle the updates of replicas. The updates may be designed around a centralized model, where a single master replica updates all others, or a decentralized model, where all peers update each other. The topology of node placement may also influence the updates of replicas. If a hierarchical topology is used, then updates flow in a tree-like structure through specific paths. In a flat topology, it is entirely a matter of the peer relationships between nodes as to how updates take place. In a hybrid topology consisting of both flat and hierarchical topologies, updates may take place through specific paths and between peers.
Replication placement strategy
There are a number of ways the replication management system can handle the creation and placement of replicas to best serve the user community. If the storage architecture supports replica placement with sufficient site storage, then it becomes a matter of the needs of the users who access the datasets and a strategy for placement of replicas. Numerous strategies have been proposed and tested on how to best manage replica placement of datasets within the data grid to meet user requirements. There is not one universal strategy that fits every requirement best; it is the type of data grid and the user community's requirements for access that determine the best strategy to use. Replicas can even be created with the files encrypted for confidentiality, which would be useful in a research project dealing with medical files. The following section contains several strategies for replica placement.
Dynamic replication
Dynamic replication is an approach to placement of replicas based on the popularity of the data. The method has been designed around a hierarchical replication model. The data management system keeps track of available storage on all nodes. It also keeps track of the requests (hits) for data that clients (users) at a site are making. When the number of hits for a specific dataset exceeds the replication threshold, it triggers the creation of a replica on the server that directly services the user's client. If the directly servicing server, known as the father, does not have sufficient space, then the father's father in the hierarchy becomes the target to receive a replica, and so on up the chain until it is exhausted. The data management system algorithm also allows for the dynamic deletion of replicas that have a null access value, or a value lower than the frequency of the data to be stored, to free up space. This improves system performance in terms of response time and number of replicas, and helps balance load across the data grid. This method can also use dynamic algorithms that determine whether the cost of creating the replica is truly worth the expected gains given the location.
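The hit counting and father-chain placement can be sketched as follows. The threshold, node capacities and dataset size are invented for illustration; the sketch shows only the climb-up-the-hierarchy behaviour when the directly servicing server lacks space.

```python
REPLICATION_THRESHOLD = 3  # hits above this trigger replica creation (assumed)


class Node:
    def __init__(self, name, capacity, father=None):
        self.name = name
        self.free = capacity   # available storage on this node
        self.father = father   # parent node in the hierarchy
        self.replicas = set()


def place_replica(node, dataset, size):
    """Walk up the father chain until a node can hold the replica."""
    while node is not None:
        if node.free >= size:
            node.replicas.add(dataset)
            node.free -= size
            return node
        node = node.father     # father is full: try the father's father
    return None                # chain exhausted, no replica created


root = Node("root", capacity=100)
edge = Node("edge", capacity=10, father=root)  # directly services the client

hits = 0
target = None
for _ in range(4):             # four client requests for the same dataset
    hits += 1
    if hits > REPLICATION_THRESHOLD and target is None:
        target = place_replica(edge, "ds1", size=50)  # too big for "edge"

print(target.name)  # the replica climbs to the father node instead
```

Deletion of cold replicas to free space would run alongside this, but is omitted here for brevity.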
Adaptive replication
This method of replication, like dynamic replication, has been designed around the hierarchical replication model found in most data grids. It works on a similar algorithm to dynamic replication, with file access requests being a prime factor in determining which files should be replicated. A key difference, however, is that the number and frequency of replica creations is keyed to a dynamic threshold computed from the request arrival rates from clients over a period of time. If the number of requests on average exceeds the previous threshold and shows an upward trend, and storage utilization rates indicate the capacity for more replicas, more replicas may be created. As with dynamic replication, replicas that fall below the threshold and were not created in the current replication interval can be removed to make space for the new replicas.
Fair-share replication
Like the adaptive and dynamic replication methods before it, fair-share replication is based on a hierarchical replication model. Also like the two before, the popularity of files plays a key role in determining which files will be replicated. The difference with this method is that the placement of replicas is based on the access load and storage load of candidate servers. A candidate server may have sufficient storage space but be servicing many clients for access to stored files; placing a replica on this candidate could degrade performance for all clients accessing it. Therefore, placement of replicas with this method is done by evaluating each candidate node's access load to find a suitable node for the replica. If all candidate nodes are rated equivalently for access load, with none more or less accessed than the others, then the candidate node with the lowest storage load is chosen to host the replicas. Methods similar to those of the other described replication strategies are used to remove unused or less requested replicas if needed. Replicas that are removed might be moved to a parent node for later reuse should they become popular again.
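The candidate evaluation can be sketched as a simple ranking: lowest access load first, with storage load as the tie-breaker. The candidate names and load figures are invented for illustration.

```python
def choose_candidate(candidates):
    """Pick a placement node from (name, access_load, storage_load) tuples.

    Access load is compared first; equal access loads are broken by the
    lower storage load, as in fair-share placement.
    """
    return min(candidates, key=lambda c: (c[1], c[2]))[0]


candidates = [
    ("siteA", 0.9, 0.2),  # lightly filled but heavily accessed server
    ("siteB", 0.4, 0.7),  # lighter access load, fuller storage
    ("siteC", 0.4, 0.3),  # same access load as siteB, more free space
]

print(choose_candidate(candidates))  # siteC wins the tie on storage load
```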
Other replication
The three strategies above are only a few of the many possible replication strategies that may be used to place replicas within the data grid where they will improve performance and access. Below are some others that have been proposed and tested, along with the previously described replication strategies.
Static – uses a fixed replica set of nodes with no dynamic changes to the files being replicated.
Best Client – Each node records the number of requests per file received during a preset time interval; if the number of requests for a file exceeds the set threshold, a replica is created on the best client, the one that requested the file the most; stale replicas are removed based on another algorithm.
Cascading – Used in a hierarchical node structure where the requests per file received during a preset time interval are compared against a threshold. If the threshold is exceeded, a replica is created at the first tier down from the root; if the threshold is exceeded again, a replica is added to the next tier down, and so on like a waterfall effect, until a replica is placed at the client itself.
Plain Caching – If the client requests a file it is stored as a copy on the client.
Caching plus Cascading – Combines two strategies of caching and cascading.
Fast Spread – Also used in a hierarchical node structure, this strategy automatically populates all nodes in the path to the client that requests a file.
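The cascading strategy above can be sketched as a per-interval threshold check that moves the replica one tier closer to the client for each interval in which requests exceed the threshold. The tier names and threshold value are invented.

```python
THRESHOLD = 5                 # requests per interval that trigger a cascade
TIERS = ["root", "regional", "site", "client"]


def cascade(requests_per_interval):
    """Return the deepest tier holding a replica after the given intervals."""
    level = 0                 # the replica starts at the root only
    for count in requests_per_interval:
        if count > THRESHOLD and level < len(TIERS) - 1:
            level += 1        # waterfall: one tier down per hot interval
    return TIERS[level]


print(cascade([6, 6, 6]))  # three hot intervals push the replica to the client
print(cascade([6, 2, 6]))  # a quiet interval leaves it one tier short
```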
Task scheduling and resource allocation
Characteristics of data grid systems such as large scale and heterogeneity require specific methods of task scheduling and resource allocation. To resolve the problem, the majority of systems use extended versions of classic scheduling methods. Others employ fundamentally different methods based on incentives for autonomous nodes, such as virtual money or the reputation of a node.
Another characteristic of data grids is their dynamic nature: nodes continuously connect and disconnect, and local load imbalances arise during the execution of tasks. This can render the results of an initial resource allocation for a task obsolete or non-optimal. As a result, many data grids use execution-time adaptation techniques that allow the system to respond to dynamic changes: balance the load, replace disconnecting nodes, take advantage of newly connected nodes, and recover task execution after faults.
Resource management system (RMS)
The resource management system represents the core functionality of the data grid. It is the heart of the system that manages all actions related to storage resources. In some data grids it may be necessary to create a federated RMS architecture, because of differing administrative policies and the diversity of capabilities found within the data grid, in place of a single RMS. In such a case the RMSs in the federation employ an architecture that allows for interoperability based on an agreed-upon set of protocols for actions related to storage resources.
RMS functional capabilities
Fulfillment of user and application requests for data resources based on type of request and policies; RMS will be able to support multiple policies and multiple requests concurrently
Scheduling, timing and creation of replicas
Policy and security enforcement within the data grid resources to include authentication, authorization and access
Support systems with different administrative policies to inter-operate while preserving site autonomy
Support quality of service (QoS) when requested if feature available
Enforce system fault tolerance and stability requirements
Manage resources, e.g. disk storage, network bandwidth and any other resources that interact directly with or as part of the data grid
Manage trust concerning resources in administrative domains; some domains may place additional restrictions on how they participate, requiring adaptation of the RMS or federation
Support adaptability, extensibility and scalability in relation to the data grid
Topology
Data grids have been designed with multiple topologies in mind to meet the needs of the scientific community. Four topologies commonly used in data grids are described below; each has a specific purpose in mind for where it will be best utilized.
Federation topology is the choice for institutions that wish to share data from already existing systems. It allows each institution control over their data. When an institution with proper authorization requests data from another institution it is up to the institution receiving the request to determine if the data will go to the requesting institution. The federation can be loosely integrated between institutions, tightly integrated or a combination of both.
Monadic topology has a central repository that all collected data is fed into. The central repository then responds to all queries for data. There are no replicas in this topology as compared to others. Data is only accessed from the central repository which could be by way of a web portal. One project that uses this data grid topology is the Network for Earthquake Engineering Simulation (NEES) in the United States. This works well when all access to the data is local or within a single region with high speed connectivity.
Hierarchical topology lends itself to collaboration where there is a single source for the data and it needs to be distributed to multiple locations around the world. One project that benefits from this topology is CERN's Large Hadron Collider, which generates enormous amounts of data. The data is located at one source and needs to be distributed around the world to organizations that are collaborating in the project.
Hybrid topology is simply a configuration that contains any combination of the previously mentioned topologies. It is used mostly in situations where researchers working on projects want to share their results to further research by making them readily available for collaboration.
History
The need for data grids was first recognized by the scientific community concerned with climate modeling, where terabyte- and petabyte-sized datasets were becoming the norm for transport between sites. More recent research requirements for data grids have been driven by the Large Hadron Collider (LHC) at CERN, the Laser Interferometer Gravitational-Wave Observatory (LIGO), and the Sloan Digital Sky Survey (SDSS). These scientific instruments produce large amounts of data that need to be accessible to large groups of geographically dispersed researchers. Other uses for data grids involve governments, hospitals, schools and businesses, where efforts are under way to improve services and reduce costs by providing access to dispersed and separate data systems through data grids.
From its earliest beginnings, the concept of a data grid to support the scientific community was thought of as a specialized extension of the "grid", which itself was first envisioned as a way to link supercomputers into meta-computers. That vision was short-lived, however, and the grid evolved to mean the ability to connect computers anywhere on the web to access any desired files and resources, similar to the way electricity is delivered over a grid by simply plugging in a device: the device gets electricity through its connection, and the connection is not limited to a specific outlet. From this, the data grid was proposed as an integrating architecture capable of delivering resources for distributed computations. It would also be able to service numerous, even thousands of, queries at the same time while delivering gigabytes to terabytes of data for each query. The data grid would include its own management infrastructure, capable of managing all aspects of the data grid's performance and operation across multiple wide area networks, while working within the existing framework known as the web.
The data grid has also been defined more recently in terms of usability: what must a data grid be able to do in order to be useful to the scientific community? Proponents of this approach arrived at several criteria. One, users should be able to search for and discover applicable resources within the data grid from amongst its many datasets. Two, users should be able to locate the datasets within the data grid that are most suitable for their requirements from amongst numerous replicas. Three, users should be able to transfer and move large datasets between points in a short amount of time. Four, the data grid should provide a means to manage multiple copies of datasets within the data grid. And finally, the data grid should provide security with user access controls, i.e. which users are allowed to access which data.
The data grid is an evolving technology that continues to change and grow to meet the needs of an expanding community. One of the earliest programs begun to make data grids a reality was funded by the Defense Advanced Research Projects Agency (DARPA) in 1997 at the University of Chicago. This research has continued down the path of creating open-source tools that make data grids possible. As new requirements for data grids emerge, projects like the Globus Toolkit will emerge or expand to fill the gap. Data grids, along with the "Grid", will continue to evolve.
Staples Inc.
Staples Inc. is an American office retail company. It is primarily involved in the sale of office supplies and related products, via retail channels and business-to-business (B2B)-oriented delivery operations. At some locations, Staples also offers a copy and print service.
The company opened its first store in Brighton, Massachusetts on May 1, 1986. By 1996, it had reached the Fortune 500, and it later acquired the office supplies company Quill Corporation. In 2014, in the wake of increasing competition from the e-commerce market, Staples began to close some of its locations. In 2015, Staples announced its intent to acquire Office Depot and OfficeMax. However, the purchase was blocked on antitrust grounds due to the consolidation that would result.
After the failed acquisition, Staples began to refocus its operations to downplay its brick-and-mortar outlets, and place more prominence on its B2B supply business. In 2017, after its sale to Sycamore Partners, the company was effectively split into three "independently managed and capitalized" entities sharing the Staples name, separating its U.S. retail operations, and Canadian retail operations, from the B2B business.
History
Staples was founded by Leo Kahn and Thomas G. Stemberg, who were former rivals in the New England retail supermarket industry, and Myra Hart.
The idea for Staples originated in 1985, while Stemberg was working on a proposal for a different business. He needed a ribbon for his printer, but was unable to obtain one because his local dealer was closed for the Independence Day holiday. A frustration with the reliance on small stores for critical supplies combined with Stemberg's background in the grocery business led to a vision for an office supply superstore.
The first store was opened in the Brighton neighborhood of Boston in 1986. Staples started with backing from private equity firms including Bain Capital; Bain co-founder Mitt Romney served on the company's board of directors for the next 15 years, helping shape their business model.
In 1991, Staples founded its Canadian subsidiary, The Business Depot, and began opening stores under that name, though over a decade later, all stores were renamed as "Staples". The first store opened in Vaughan, Ontario, north of Toronto. The following year, Staples began expanding into Europe, and opened its first British store in Swansea.
During its tenth anniversary in 1996, Staples became a member of the Fortune 500 companies as sales surpassed $3 billion. On September 4, 1996, Staples and Office Depot announced plans to merge. The Federal Trade Commission decided that the merged company would unfairly increase office supply prices despite competition from OfficeMax, because OfficeMax did not have stores in many of the local markets that the merger would affect. Staples argued that chains such as Walmart and Circuit City represented significant competition, but this argument did little to sway the FTC. Following the denial of the merger by the FTC, a rivalry formed between the two companies.
Staples acquired the naming rights for the Staples Center in Los Angeles shortly before construction began in 1998. Staples also acquired Quill Corporation, an online and catalog retailer of office supplies, for about $685 million in cash and stock. Between 1999 and 2001, unsuccessful attempts to enter the telecommunications business were made as Staples created Staples Communications after the purchase of Canada-based company, Claricom, from an investment group. The company was later sold to Platinum Equities and renamed NextiraOne.
In 2002, Staples acquired Medical Arts Press, which became a subsidiary of Quill. By 2004, Staples expanded to Austria and Denmark and in 2007, Staples opened its first store in India.
In March 2005, Staples and Ahold announced a plan to include a Staples branded store-within-store section in all Stop & Shop Supermarkets and Giant Food stores throughout the Northeast. In August 2006, Ahold announced the addition of the Staples section to all Tops Friendly Markets locations as well.
In 2008, Staples acquired Dutch office supplies company Corporate Express, one of the largest office supply wholesalers in the world. Staples also launched 11 concept stores in the New England area featuring a large focus on small business and technology related services.
Attempted merger with Office Depot, sale of UK division
On March 6, 2014, Staples announced it would close up to 225 stores in North America by the end of 2015, in order to cut $500 million in costs annually, and focus more on e-commerce.
On February 4, 2015, Staples announced a plan to once again acquire Office Depot, which itself had recently acquired OfficeMax in a bid to compete against Staples. CEO Ron Sargent stated that this purchase would "[enable] Staples to provide more value to customers, and more effectively compete in a rapidly evolving competitive environment", and would result in at least $1 billion in "cost synergies" within three years.
It was reported that the deal could face antitrust scrutiny for its monopolization of the office supply market, unless growing competition from online retailers was considered a factor as well. In December, the FTC filed a lawsuit to halt the merger, arguing that it would harm competition in the commercial office supply market; as of January 2016, the FTC had not changed its stance.
At the end of January 2016, it was announced to employees that Staples would be laying off hundreds of workers at their headquarters location. The layoffs were seen by some analysts as a preemptive tactic in case the merger did not receive regulatory approval from the Federal Trade Commission. On May 10, 2016, the U.S. District Court for the District of Columbia granted the FTC a preliminary injunction against the merger. As a result, the sale was called off, and Staples was required to pay a $250 million breakup fee.
In November 2016, it was announced that Staples had sold its 106 British stores to Hilco Capital for a "nominal" amount, as part of an effort to streamline its international operations following the failed merger. Hilco stated that it would discontinue the Staples brand in the region; the stores were rebranded as "Office Outlet", a new brand retaining the Staples chain's red and white color scheme. In August 2018, the chain closed some of its stores under a company voluntary arrangement, and underwent a management buyout the following month. In March 2019, Office Outlet went into administration, citing that it had "recently experienced a reduction in credit from key suppliers, given the economic outlook which has severely impacted the financial position of the company."
In January 2021, Staples announced that it would again try to buy Office Depot.
Pivot to B2B, sale to Sycamore Partners
Following the aborted acquisition, Staples began to reposition its operations by promoting itself as a "solutions partner" for the business market, and placing a stronger focus on its B2B-oriented delivery and e-commerce businesses. In May 2017, the chain began a new advertising campaign with the slogan "It's Pro Time", which largely downplayed its retail operations.
In 2017, Sycamore Partners acquired Staples for $6.9 billion, of which $1.6 billion was funded as equity, with the remaining deal value raised as debt. As part of the purchase, Sycamore implemented a major restructuring of the company, under which the chain's B2B business (Staples North American Delivery, also known as simply "Staples"), retail locations (Staples U.S. Retail), and Staples Canada would be split into three "independently managed and capitalized" entities under Sycamore.
On April 9, 2019, Sycamore Partners conducted a dividend recapitalization, refinancing $5.4 billion in debt against its ownership of Staples, producing a $1 billion one-time dividend for the private equity firm. A Bloomberg report on this refinancing noted that the deal allowed Sycamore to recover roughly 80% of its equity investment in Staples in less than two years, compared to the typical profit-taking exit timeframe of five to eight years for most private equity buyouts. That month, Staples also unveiled a new logo, which features an icon representing both an unused staple and an office desk. The company also announced that it would introduce a new line of store brands, including Tru Red, Coastwide Professional (facility supplies), NXT Technologies (technology accessories), Perk (office break room supplies), and Union & Scale (furniture), as well as a new catalog known as The Loop. With the rebranding, CEO Sandy Douglas (who joined the company in 2018) stated that Staples was now being marketed as a "worklife fulfillment" company, which he explained was "about helping businesses of all sizes as they create the most dynamic and productive work environments for their teams."
The following month, CEO Mike Motz (who joined the company in 2019 to head Staples U.S. Retail) unveiled a new store concept known as "Staples Connect": it is aligned with a similar store concept being trialed by Staples Canada, featuring "Staples Studio" co-working areas and an auditorium-style "Spotlight" theater (which can be rented for sessions and events). The new concept will be trialed in the Boston area, while elements of the concept will be implemented chain-wide. As part of a partnership with radio broadcaster iHeartMedia, Staples also added recording studios intended for podcasting to six of these stores, with access to recording engineers and a partnership with Spreaker to offer discounted hosting and distribution services to its customers.
Advertising
Throughout most of the company's history, Staples employed, in its American commercials and advertising promotions, the slogan "Yeah, we've got that.", signifying their wide selection of products. This slogan was retired in 2003, to be replaced with "That was easy". Expanding on that theme, 2005 adverts featured a large red push-button marked "easy". In the United Kingdom, Staples had used the slogan "You want it. We've got it"; this changed to "That was Easy".
Originally, the "Easy Button" was only intended to be a fictitious button with 'magical' properties, featured in their television advertisement campaign. However, when the adverts appeared, customers began contacting the company to inquire how they could buy one. The company responded by making the "Easy Button" a real product (available in English "easy", French "simple", Spanish "fácil" and German "einfach easy").
These buttons were shipped to stores in the United States, Canada and Germany starting in the fall of 2005. Sales of the buttons reached 1.5 million by the end of 2006. The button has been referred to as a "Marketer's Dream", effectively turning millions of Staples customers into advertisers, resulting in greatly increased brand recognition.
The Staples Sno-Bot was an advertising character that appeared in the United States, in television and print advertising during the Christmas seasons from 1995 until 2001.
The Sno-Bot was a robot shaped like a snowman who refuses to let go of the inkjet printer he has fallen in love with. After the printer is wrestled from his grasp, the robot utters a monotone "Weeping. Weeping." He is consoled by a Staples employee who offers him a surge protector or a computer mouse (depending on the ad) instead.
The robot's "Weeping. Weeping." catchphrase briefly became a popular meme on the Internet, and the ad itself was parodied in a 2002 Christmas advertisement for Dell Computers, in which a robot hassles a shopper (including striking him with a candy cane) when he attempts to purchase a PC at an unnamed office supplies retailer.
Another advertising style was used during its annual back-to-school campaigns from 1995 until 2005, in which the Christmas song "It's the Most Wonderful Time of the Year" plays while a father joyously shops for school supplies for his sullen-faced children.
Later, Alice Cooper appeared in a back-to-school campaign from August 2004. Within the ad, a hand is seen selecting various supplies while a girl looks on unhappily. She finally says, "I thought you said, 'School's out forever.'" Alice is shown behind the cart, saying, "The song goes 'School's out for summer'. Nice try, though." The hit song then plays as supplies are shown. The tagline, "That was easy", is heard playing over the company logo, formed to resemble a stapler.
During the 2008 holiday season, Staples advertising for the first time engaged Facebook, Twitter, YouTube, and other social media platforms. The company created a character named "Coach Tom" to promote its "Gift it for Free" sweepstakes, in which 10,000 Staples customers won up to $5,000 in merchandise.
Acquisitions and divestitures
1992: Workplace stores based in Lakeland, Florida
1994: National Office Products based in Hackensack, New Jersey
1994: Spectrum Office Products based in Rochester, New York
1994: MacIsaac Office Products based in Canton, Massachusetts
1994: Philadelphia Stationers based in Philadelphia
1995: Macauley's Business Resources based in Canton, Michigan
1996: Staples Office Products based in Texas
1998: Quill Corporation, the largest mail order office supply retailer in the United States. Headquartered in Lincolnshire, Illinois, Quill offers products including school and office supplies, office machines, furniture, technology, cleaning and break room, as well as custom-printed and promotional products.
2002: Medical Arts Press, a United States supplier of front office and exam-room products for healthcare facilities.
2004: United Kingdom-based chain Office World, owned by Globus Group.
2006: Chiswick, which distributes industrial and retail packaging, shipping and warehouse products to thousands of small and mid-sized manufacturers, distributors and retailers throughout the United States and Canada. The company offers over 7,500 industrial and retail packaging and shipping products; its product line includes a wide variety of polyethylene bags, corrugated boxes, tape, labels, protective packaging, mailers, retail shopping bags and related packaging supplies. Sales channels include catalog/direct mail, the Internet and outside sales. It is now branded as Staples Industrial.
2007: Thrive Networks, an IT services company that provides IT support to small and mid–level businesses.
2007: American Identity, one of the largest global distributors of corporate branded merchandise. American Identity has since been re-branded as Staples Promotional Products.
2008: Corporate Express, a Dutch company that supplies office products to businesses and institutions. The firm was known as Buhrmann prior to April 20, 2007, when it changed its trading name to that of its best known brand, taken from the United States-based corporation it acquired in 1999. The company was acquired by Staples Inc. in August 2008, and has been integrated into the Staples Advantage brand.
2010: Miami Systems, a commercial printing company based in Cincinnati, Ohio, with 250 employees.
2014: PNI Digital Media, a Canadian software maker that powers in-store kiosks for printing photographs, calendars and wedding invitations. Staples spent $67.3 million in an all-cash deal for this acquisition.
2016: Staples divested its Australian and New Zealand branches as part of a strategic shift to focus on its US-based retail store format. The company's Australian and New Zealand businesses were rebranded as Winc. In May 2016, a proposed $6.3 billion merger between Staples and key rival Office Depot was blocked by the Federal Trade Commission.
2018: HiTouch Business Services, offering office supplies, workspace design services and IT solutions.
2019: Essendant, a national wholesale distributor of office supplies, and DEX Imaging, an independent document imaging technology dealer in the United States.
2020: 360 Office Solutions, a Montana office supply company with locations in Billings, Bozeman, Butte, Great Falls and Helena.
2021: Staples divested its branches in Germany, Portugal, the Benelux, Finland, Norway, Sweden, Denmark, Austria, Great Britain and Poland, resulting in the brand's disappearance from most of Europe. The Staples brand will continue to be used in the Benelux.
Community relations
In August 2002, the company started the Staples Foundation for Learning, which supports youth groups and education. It also is a partner of Boys & Girls Clubs of America, Ashoka, Earth Force, Initiative for a Competitive Inner City, Hispanic Heritage Foundation and through Staples, ReadBoston.
In August 2005, Staples introduced the "Easy Button", a novelty item for offices which is advertised as a fun way of relieving stress. The button does nothing other than say "That was easy" when pressed. The first US$1 million of profits each year from the Easy Button are donated to the Boys & Girls Clubs of America. As of December 2006, it was sold for US$4.99 to $6.99 in all US and Canadian stores (where profits go to Special Olympics in Canada) and on the company's website. Donations also went to the Children's Fund. Staples has reportedly sold more than $7.5 million worth of Easy Buttons.
Environmental record
Staples is ranked in the top 25 of EPA's Green Power Partner list. In 2006, Staples offered more than 2,900 different office products incorporating recycled content.
Staples is currently trying to pursue developing Staples brand products with green raw materials. In response to a two-year campaign targeting the company, Staples adopted an environmentally friendly paper policy, in hopes of increasing the amount of post-consumer recycled paper made available for sale, phasing out products originating from endangered forests.
The Hanover, Maryland fulfillment center is powered by a 1.01 megawatt rooftop solar installation. Its Savi Ranch store in Yorba Linda, California also has a sizeable rooftop solar installation. Staples has also implemented power reduction strategies in all of its Copy & Print Centers, where the copiers enter sleep mode as little as 15 minutes after use.
This technique will save Staples nearly $1 million a year in reduced energy costs and prevent over 11 million pounds of carbon dioxide from being released into the atmosphere. In November 2014, Staples partnered with EnergySage to give Staples giftcards to homeowners and businesses that installed solar panels.
Recycling
Staples accepts all used ink and toner cartridges for recycling. Prior to 2008, the only cartridge brands that could be recycled were HP, Kodak, and Dell, and customers were given a $3 coupon for the store, with the maximum number of coupons to be given, or redeemed, at any one time being 25. Since 2009, ink recycling has been a part of the Staples Rewards program.
As of July 2010, Staples gives back two dollars (on Staples.com) for all ink and toner cartridges. Ink recycling credit comes to Rewards members as a separate monthly coupon, instead of the normal quarterly rewards check. Most customers may trade in ten cartridges per month for credit, while Staples Plus and Premier Rewards members may trade in twenty per month.
As of February 28, 2013, Staples announced that in order to receive $2 per ink cartridge recycled, customers would be required to spend at least $30 at Staples in ink purchases within 180 days of recycling.
Price discrimination
A 2012 study by the Wall Street Journal found that Staples displayed different prices to customers in different locations (distinct from shipping prices), based on proximity to competitors like OfficeMax and Office Depot. This generally resulted in higher prices for customers in more rural areas, who were on average less wealthy than customers seeing lower prices.
Security breaches
On October 20, 2014, KrebsOnSecurity reported a suspected breach at Staples after multiple banks identified a pattern of card fraud suggesting that several Staples office supply locations in the Northeastern United States were dealing with a data breach. At the time, Staples would say only that it was investigating "a potential issue" and had contacted law enforcement.
On December 19, 2014, Staples reported that criminals had deployed malware to point-of-sale systems at 115 of their retail stores in the United States. At 113 stores, the malware may have allowed access to this data for purchases made from August 10, 2014, through September 16, 2014. At two stores, the malware may have allowed access to data from purchases made from July 20, 2014, through September 16, 2014. Overall, the company believed that approximately 1.16 million payment cards may have been affected.
On July 14, 2015, numerous news outlets began reporting a suspected data breach at retailers served by online photo software from PNI Digital Media, Staples' recent acquisition. The first reported victim was Walmart Canada, followed by CVS, Rite-Aid, Costco US, Costco Canada and Tesco UK.
During the period July 13 to 28, 2015, Staples Inc.'s share price fell from $15.59 to $13.97, a 10% fall that reflected a reduction in Staples' market value of almost $1 billion.
Store layout
Print and marketing services
In addition to selling office supplies, business machines, and tech services, Staples also offers a copy and print center for photocopies, digital printing, faxing, custom business cards, custom rubber stamps, promotional products, binding, lamination, folding, cutting and engraved products. While many products can be produced in-store, larger, more complex jobs, or jobs requiring special materials such as PVC signs, are routed to production facilities in various locations throughout the country.
Most locations have a limited-service UPS shipping center offering air and ground services (DHL in United Kingdom stores; Canada Post, FedEx, and Purolator in Canadian stores), open during store hours. Canada and the UK offer international shipping, whereas in US stores this service is limited to Canada and Mexico. UPS services in US-based stores cannot handle AT&T or Dish Network returns with a label, or QR codes from Amazon returns.
In Canada, most web submission jobs and larger orders, including business cards, posters and books, are produced in a central production facility in each region. The production facilities operate on a 24-hour basis and orders are shipped to most stores within the region within a day. The regions in Canada are BC/Yukon, Alberta/NWT, Saskatchewan, Ontario, Quebec, and the Maritimes. The Copy & Print Center was also the first print center to offer custom business cards printed in store: known as "Instant Business Cards", they allow customers to have custom business cards in a matter of hours. Staples also operates stand-alone Print & Marketing Stores (currently four New York City locations and one in Salem, Massachusetts), where Print & Marketing Services is a brand of Staples.
Tech services
Some stores also feature Staples Tech Services (formerly EasyTech) an in-store and on-site service for PC repair, PC upgrades, home and office networking setup, and PC tutorials.
Starting in November 2005, Staples ran a test called "Heavy Up", primarily using stores in New York state, to experiment with expanding the offerings of the Staples Tech Center. A subsequent test known as "Double Up" was planned for an unspecified test market and was scheduled to begin in the first half of 2006. The tests were run to compete with Best Buy's Geek Squad and Circuit City's Firedog.
Beginning in early 2006, Staples also launched the "Easy Resident Tech" program, employing one to two resident computer repair technicians to do in-store repair during normal business hours.
On January 30, 2007, Staples launched Staples EasyTech, rebranding the "Easy Mobile Tech" service, with plans to install an 11' x 17' kiosk in every store. The kiosk may vary from store to store depending on its size and volume; most kiosks take up part of the customer service desk. Within the kiosk, Easy Resident Techs offered repair service as well as sold products. These technicians wore gray "Easy Tech" polo shirts to distinguish them from regular Staples workers. While there was typically one tech per store, a second tech may have been employed for high-volume stores.
Beginning in July 2008, Staples launched a new program labeling all technology workers as "EasyTechs". Under the new guidelines all technology workers are required to have the skills necessary to perform basic services such as memory installation and PC configuration. In addition, all technology workers wear black polo shirts with green "EasyTech" emblems to set them apart from other store workers. The change was due to the company's new focus on services, allowing more customers to be assisted in less time. Most stores will still have a main "EasyTech" who performs most of the more complex tasks.
Beginning in November 2008, eleven concept stores featuring a broader array of small business technology services were launched, which are known within the company as Best Tech stores. EasyTechs and sales workers were now referred to as "Tech Advisors" and "Solutions Advisors". The concept stores carry many more technology related products such as digital signage, small business servers, NAS (Network Attached Storage), and business networking. Staples also partnered with an on demand IT service provider, with such services as network monitoring, advanced network configurations, and server setup.
These concept stores are mainly based in New Hampshire, with stores in Nashua, Seabrook, West Lebanon, Dover and Concord. Some stores with this new concept also opened in Massachusetts, including the Auburn store. Other existing stores have been renovated to include Best Tech's services, including the Newington, Connecticut, and Natick stores.
See also
Stationery
Staples (Canada), known as Bureau en Gros in Québec.
Winc
Notes
References
Dalkir, S. and F. Warren-Boulton. 2003. "Market Definition and the Price Effects of Mergers: Staples-Office Depot (1997)", in The Antitrust Revolution: Economics, Competition and Policy. (John E. Kwoka and Lawrence J. White, eds.) Oxford University Press, 4th edition.
External links
1986 establishments in Massachusetts
2017 mergers and acquisitions
American companies established in 1986
Companies based in Framingham, Massachusetts
Companies formerly listed on the Nasdaq
Office supply retailers of the United States
Private equity portfolio companies
Retail companies established in 1986 |
3794087 | https://en.wikipedia.org/wiki/EGABTR | EGABTR | EGABTR (EGA for enhanced graphics adapter), sometimes pronounced "Eggbeater", was a Trojan horse program that achieved some level of notoriety in the late 1980s and early 1990s. Allegedly a graphics utility that would improve the quality of an EGA display, it actually was malware that deleted the file allocation tables on the hard drive. This deletion was accompanied by a text message reading "Arf! Arf! Got you!". Coverage about this virus has translated in languages such as German, Chinese and Indonesian. Various sources disagree as to the exact wording.
In the 1980s, Richard Streeter, a CBS executive, learned of EGABTR while visiting electronic bulletin boards in hopes of finding something to improve his operating system, and unknowingly downloaded the Trojan.
References
External links
Google Books
Trojan horses |
189744 | https://en.wikipedia.org/wiki/Douglas%20McIlroy | Douglas McIlroy | Malcolm Douglas McIlroy (born 1932) is a mathematician, engineer, and programmer. As of 2019 he is an Adjunct Professor of Computer Science at Dartmouth College.
McIlroy is best known for having originally proposed Unix pipelines and developed several Unix tools, such as spell, diff, sort, join, graph, speak, and tr. He was also one of the pioneering researchers of macro processors and programming language extensibility. He participated in the design of multiple influential programming languages, particularly PL/I, SNOBOL, ALTRAN, TMG and C++.
His seminal work on software componentization and code reuse makes him a pioneer of component-based software engineering and software product line engineering.
Biography
McIlroy earned his bachelor's degree in engineering physics from Cornell University, and a Ph.D. in applied mathematics from MIT in 1959 for his thesis On the Solution of the Differential Equations of Conical Shells (advisor Eric Reissner).
He taught at MIT from 1954 to 1958.
McIlroy joined Bell Laboratories in 1958; from 1965 to 1986 was head of its Computing Techniques Research Department (the birthplace of the Unix operating system), and thereafter was Distinguished Member of Technical Staff.
From 1967 to 1968, McIlroy also served as a visiting lecturer at Oxford University.
In 1997, McIlroy retired from Bell Labs, and took a position as an Adjunct Professor in the Dartmouth College Computer Science Department.
He has previously served the Association for Computing Machinery as national lecturer, Turing Award chairman, member of the publications planning committee, and associate editor for the Communications of the ACM, the Journal of the ACM, and ACM Transactions on Programming Languages and Systems. He also served on the executive committee of CSNET.
Research and contributions
Macro processors
McIlroy is considered a pioneer of macro processors. In 1959, together with Douglas E. Eastwood of Bell Labs, he introduced conditional and recursive macros into the popular SAP assembler, creating what is known as Macro SAP. His 1960 paper was also seminal in the area of extending any programming language (including high-level ones) through macro processors. These contributions started the macro-language tradition at Bell Labs ("everything from L6 and AMBIT to C"). McIlroy's macro processing ideas were also the main inspiration for the TRAC macro processor.
He also coauthored the M6 macro processor in FORTRAN IV, which was used in ALTRAN and later ported to and included in early versions of Unix.
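The core idea behind conditional and recursive macros can be illustrated with a toy expander. The macro syntax and names below are invented for illustration only; this is not Macro SAP or M6, just a sketch of how a macro body may itself invoke other macros and be expanded until a fixed point is reached:

```python
import re

def expand(text, macros, depth=0, max_depth=50):
    """Expand macro calls of the form NAME(a,b) using definitions in `macros`.

    `macros` maps a name to (params, body); a body may itself contain
    macro calls, so expansion recurses -- the key idea of recursive macros.
    """
    if depth > max_depth:
        raise RecursionError("macro expansion too deep")
    pattern = re.compile(r"(\w+)\(([^()]*)\)")  # matches innermost calls only

    def repl(m):
        name, raw_args = m.group(1), m.group(2)
        if name not in macros:
            return m.group(0)                   # not a macro: leave untouched
        params, body = macros[name]
        args = [a.strip() for a in raw_args.split(",")] if raw_args else []
        for p, a in zip(params, args):          # naive textual substitution
            body = body.replace(p, a)
        return expand(body, macros, depth + 1, max_depth)

    out = pattern.sub(repl, text)
    if out != text:                             # keep going to a fixed point
        out = expand(out, macros, depth + 1, max_depth)
    return out

macros = {
    "TWICE": (["X"], "X X"),
    "GREET": (["N"], "hello N"),
    "BOTH":  (["N"], "TWICE(GREET(N))"),        # defined via other macros
}
print(expand("BOTH(world)", macros))            # -> hello world hello world
```

The fixed-point loop is what makes recursion work: each pass expands the innermost calls, and expansion repeats until no macro names remain.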
Contributions to Unix
Throughout the 1960s and 1970s McIlroy contributed programs for Multics (such as RUNOFF) and Unix operating systems (such as diff, echo, tr, join and look), versions of which are widespread to this day through adoption of the POSIX standard and Unix-like operating systems. He introduced the idea of Unix pipelines. He also implemented TMG compiler-compiler in PDP-7 and PDP-11 assembly, which became the first high-level programming language running on Unix, prompting development and influencing Ken Thompson's B programming language and Stephen Johnson's Yacc parser-generator.
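The pipeline idea McIlroy proposed (each program reads a stream of lines and writes a stream of lines, and programs compose end to end) can be sketched in pure Python. The stages below are simplified stand-ins for tr, sort and uniq, not the real tools:

```python
def pipeline(source, *stages):
    """Compose stream-to-stream stages, like `cmd1 | cmd2 | cmd3` in a shell."""
    stream = iter(source)
    for stage in stages:
        stream = stage(stream)      # each stage consumes and yields a stream
    return list(stream)

def tr_lower(lines):                # crude stand-in for `tr A-Z a-z`
    for line in lines:
        yield line.lower()

def sort(lines):                    # stand-in for `sort`
    yield from sorted(lines)

def uniq(lines):                    # stand-in for `uniq`: drop adjacent dups
    prev = object()
    for line in lines:
        if line != prev:
            yield line
        prev = line

words = ["Banana", "apple", "banana", "Apple"]
print(pipeline(words, tr_lower, sort, uniq))    # -> ['apple', 'banana']
```

Because every stage shares the same interface (an iterable of lines in, an iterable of lines out), stages can be recombined freely, which is exactly the composability that made Unix pipes influential.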
McIlroy also took over from Dennis Ritchie compilation of the Unix manual "as a labor of love". Particularly, he edited volume 1 of the manual pages for Version 7 Unix. According to Sandy Fraser: "The fact that there was a manual, that he [McIlroy] insisted on a high standard for the manual, meant that he insisted on a high standard for every one of the programs that was documented".
Computer language design
McIlroy influenced the design and implementation of SNOBOL programming language. His string manipulation macros were used extensively in the initial SNOBOL implementation of 1962, and figured prominently in subsequent work, eventually leading to its machine-independent implementation language SIL. The table type (associative array) was added to SNOBOL4 on McIlroy's insistence in 1969.
In the 1960s, he participated in the design of the PL/I programming language. He was a member of the IBM–SHARE committee that designed the language and, together with Robert Morris, wrote the Early PL/I (EPL) compiler in TMG for the Multics project.
Around 1965, McIlroy, together with W. Stanley Brown, implemented the original version of ALTRAN programming language for IBM 7094 computers.
McIlroy also significantly influenced the design of the programming language C++ (e.g., he proposed the stream output operator <<).
Algorithms
In the 1990s, McIlroy worked on improving sorting techniques; notably, he co-authored an optimized qsort with Jon Bentley.
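One of the key techniques in Bentley and McIlroy's engineered qsort ("Engineering a Sort Function") is partitioning keys equal to the pivot into a middle band, so inputs with many duplicates sort quickly. The following is a minimal Python sketch of that three-way partitioning idea only, not their actual C implementation:

```python
def quicksort3(a, lo=0, hi=None):
    """In-place quicksort with three-way (fat-pivot) partitioning.

    After one pass, a[lo:lt] < pivot, a[lt:gt+1] == pivot, a[gt+1:hi+1] > pivot,
    so the equal band never needs to be touched again.
    """
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return a
    pivot = a[(lo + hi) // 2]
    lt, i, gt = lo, lo, hi
    while i <= gt:
        if a[i] < pivot:
            a[lt], a[i] = a[i], a[lt]
            lt += 1
            i += 1
        elif a[i] > pivot:
            a[i], a[gt] = a[gt], a[i]
            gt -= 1                 # a[i] is unexamined; do not advance i
        else:
            i += 1
    quicksort3(a, lo, lt - 1)       # recurse on the strictly-smaller band
    quicksort3(a, gt + 1, hi)       # recurse on the strictly-larger band
    return a

print(quicksort3([3, 1, 2, 3, 1, 2, 3]))    # -> [1, 1, 2, 2, 3, 3, 3]
```

On an array of n equal keys this sketch finishes in a single partitioning pass, whereas a classic two-way quicksort degrades toward quadratic behavior.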
In 1969, he contributed an efficient algorithm to generate all spanning trees in a graph (first discovered by George J. Minty in 1965).
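To make the problem concrete, here is a naive Python enumeration of all spanning trees, checking every (n-1)-edge subset with union-find. This brute force is for illustration only and is far less efficient than the Minty and McIlroy approaches:

```python
from itertools import combinations

def spanning_trees(n, edges):
    """Enumerate all spanning trees of a graph on vertices 0..n-1.

    A subset of n-1 edges is a spanning tree exactly when it is acyclic,
    which union-find detects by refusing to merge two already-joined roots.
    """
    trees = []
    for subset in combinations(edges, n - 1):
        parent = list(range(n))

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]   # path halving
                x = parent[x]
            return x

        acyclic = True
        for u, v in subset:
            ru, rv = find(u), find(v)
            if ru == rv:
                acyclic = False                 # edge would close a cycle
                break
            parent[ru] = rv
        if acyclic:                             # n-1 acyclic edges span n vertices
            trees.append(subset)
    return trees

# The triangle graph has exactly 3 spanning trees (drop any one edge).
print(len(spanning_trees(3, [(0, 1), (1, 2), (0, 2)])))   # -> 3
```

Efficient generators such as McIlroy's avoid examining the exponentially many non-tree subsets by walking from one spanning tree to the next via edge exchanges.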
Awards and recognition
In 1995, he was elected as a Fellow of the American Association for the Advancement of Science. In 2004, he won both the USENIX Lifetime Achievement Award ("The Flame") and its Software Tools User Group (STUG) award. In 2006, he was elected as a member of the National Academy of Engineering.
Views on computing
McIlroy is attributed the quote "The real hero of programming is the one who writes negative code," where "negative code" is taken in a sense similar to the famous team anecdote about Apple developer Bill Atkinson: a change to a program's source that decreases the number of lines of code ('negative' code) while improving its overall quality, readability or speed.
See also
Darwin (programming game)
Homoiconicity
Unix philosophy
Literature
References
External links
Doug McIlroy's homepage (archive homepage at Bell Labs website)
Biography
Doug McIlroy Facts
McIlroy's History of Unix speech (audio), includes many autobiographical notes, along with discussion of many of the major Unix authors
Ancestry of Linux - How the Fun Began, presentation November 2005: (presentation) (audio) (video)
Original unix spell source code, written by Doug McIlroy
Publications by M. D. McIlroy - https://www.cs.dartmouth.edu
Dartmouth College faculty
Cornell University College of Engineering alumni
Massachusetts Institute of Technology School of Science alumni
Living people
Members of the United States National Academy of Engineering
Scientists at Bell Labs
Multics people
Unix people
Plan 9 people
1932 births
Date of birth missing (living people) |
5417886 | https://en.wikipedia.org/wiki/Mapbender | Mapbender | Mapbender is a graduated project of the Open Source Geospatial Foundation. It was awarded OGC web site of the month in 2008. It is used by PortalU and several federal states to implement the INSPIRE regulation. Many municipalities use Mapbender as City Map Services and it is used as the mapping framework for online cycle route planners.
Introduction
Mapbender is a web mapping software implemented in PHP and JavaScript; its configuration resides in a data model stored in a PostgreSQL/PostGIS or MySQL database. It is developed as an open-source project and licensed under the GNU GPL as free software. Mapbender is a framework for managing spatial data services standardized following the OGC specifications OWS, WMS and WFS, using the formats GeoRSS, GML and Web Map Context. The framework implements user management, authentication and authorization; management interfaces for user, group and service administration are stored as configurations in the database.
The software is used to display, overlay, edit and manage distributed Web Map Services. The maps themselves are generated by server software; from this perspective, Mapbender is a client. The client interfaces are generated dynamically by PHP scripts on the Mapbender server.
User Interface
User interfaces are created using web-based forms. They contain elements (buttons, maps, legends, links), each with associated HTML attributes and a path to PHP modules or JavaScript code, all stored in the database. Basic modules implement:
zoom in and out
pan map
click and query (OGC WMS GetFeatureInfo)
turn layers on and off
move to coordinate (zoom to)
get coordinate (mouse click)
digitize (add new points, lines, polygons; this requires transactional WFS)
load map services (OGC WMS)
reorder and remove map services
show legend
print
search interfaces
store current map composition as OGC Web Map Context document
User interfaces can be started parameterized with a bounding box, set of services and set of activated layers.
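Under the hood, a client interface like this issues standard OGC WMS requests to the map server. Below is a minimal sketch of building a WMS 1.1.1 GetMap URL; the endpoint and layer names are hypothetical, while the query parameters follow the WMS specification:

```python
from urllib.parse import urlencode

def wms_getmap_url(base_url, layers, bbox, size, srs="EPSG:4326",
                   fmt="image/png", version="1.1.1"):
    """Build an OGC WMS 1.1.1 GetMap request URL.

    `bbox` is (minx, miny, maxx, maxy) in the units of `srs`;
    `size` is (width, height) of the requested map image in pixels.
    """
    params = {
        "SERVICE": "WMS",
        "VERSION": version,
        "REQUEST": "GetMap",
        "LAYERS": ",".join(layers),
        "STYLES": "",                                # default styles
        "SRS": srs,
        "BBOX": ",".join(str(c) for c in bbox),
        "WIDTH": str(size[0]),
        "HEIGHT": str(size[1]),
        "FORMAT": fmt,
    }
    return base_url + "?" + urlencode(params)

# Hypothetical endpoint and layers, solely for illustration.
url = wms_getmap_url("https://example.org/ows", ["roads", "rivers"],
                     (7.0, 50.0, 7.2, 50.2), (800, 600))
print(url)
```

A "zoom" or "pan" in the client is then just a re-issue of this request with a different BBOX, which is also why interfaces can be started parameterized with a bounding box.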
Administration Interfaces
Administration interfaces are user interfaces with administration modules. This makes administration highly flexible and multi client capable (both multiple interfaces and user/group permission). Administration modules include management (add, edit, remove) of:
users
groups
interfaces (GUI)
WMS services
WFS and transactional WFS services
OWS Security Proxy
Metadata
Log and protocol
Service monitor
Categorization
Mapbender is designed to manage loosely coupled web services in a service-oriented architecture. Due to historical inconsistencies in how GIS software has handled coordinate systems, Cartesian coordinate systems and surveying conventions, this can sometimes be somewhat complex.
The Mapbender software covers the following topics:
Web-GIS Client (OGC WMS, WFS, Catalog Service Client)
Geo-CMS (Content Management System)
Web-based map digitizing and editing functionality (OGC WFS-T Client)
Service Meta Information Broker (ISO 19100 series)
Catalog System (ISO 19119 Service Meta Data)
Security Management (Authentication, Authorization, SSO Secure Service)
Accounting Management (Logging)
Spatial Web Services Orchestrating
References
External links
Mapbender on Ohloh
Free GIS software
Maps
Web mapping |
14072113 | https://en.wikipedia.org/wiki/Andy%20Rubin | Andy Rubin | Andrew E. Rubin is an American computer programmer, entrepreneur, and venture capitalist. Rubin founded Android Inc. in 2003, which was acquired by Google in 2005; Rubin served as a Google vice president for 9 years and led Google's efforts in creating and promoting the Android operating system for mobile phones and other devices during most of his tenure. Rubin left Google in 2014 after allegations of sexual misconduct, although it was presented as a voluntary departure rather than a dismissal at first. Rubin then served as co-founder and CEO of venture capital firm Playground Global from 2015–2019. Rubin also helped found Essential Products in 2015, a mobile phone start-up that closed in 2020 without finding a buyer.
Rubin was nicknamed "Android" by his co-workers at Apple in 1989 due to a love of robots, with the nickname eventually becoming the official name of the Android operating system. Before Android Inc., Rubin also helped found Danger Inc. in 1999, another company involved in the mobile space; Rubin left Danger to work on Android in 2003, and Danger was eventually acquired by Microsoft in 2008.
In 2018, The New York Times published an article revealing the details of Rubin's 2014 departure from Google: it had been forced rather than voluntary due to credible allegations that he had sexually harassed female employees, and Google had paid Rubin a $90 million severance package to expedite the process. Google's large severance payment attracted significant controversy.
Early life and education
Rubin grew up in Chappaqua, New York as the son of a psychologist who later founded his own direct-marketing firm. His father's firm created photographs of the latest electronic gadgets to be sent with credit card bills. He attended Horace Greeley High School in Chappaqua, New York from 1977 until 1981 and was awarded a Bachelor of Science degree in computer science from Utica College, Utica, New York in 1986.
Career
Andy Rubin worked at Apple from 1989 to 1992 as a manufacturing engineer.
General Magic
Rubin joined General Magic in 1992, where he worked as a lead engineer developing the Motorola Envoy.
Google
After Android was acquired by Google in 2005, Rubin became the company's senior vice president of mobile and digital content, where he oversaw development of Android, an open-source operating system for smartphones. On March 13, 2013, Larry Page announced in a blog post that Rubin had moved from the Android division to take on new projects at Google, with Sundar Pichai taking over Android. In December 2013, Rubin started management of the robotics division of Google (including companies such as Boston Dynamics, which Google owned at the time). On October 31, 2014, he left Google after nine years at the company to start a venture capital firm for technology startups.
Sexual harassment allegations
According to The New York Times, while the departure was presented to the media as an amicable one where Rubin would spend more time on philanthropy and start-ups, CEO Larry Page personally asked for Rubin's resignation after a sexual harassment claim by an employee against Rubin was found to be credible during an investigation by Google; the employee, with whom Rubin had an extramarital relationship, accused him of coercing her into oral sex in a hotel room in 2013. Rubin strongly disputed these reports and denied wrongdoing, stating, "these false allegations are part of a smear campaign to disparage me during a divorce and custody battle". The incident, among others, led to the 2018 Google walkouts from Google's employee workforce over Rubin reportedly receiving a $90 million "exit package" to expedite his separation from the company. Google responded by sending a memo to employees saying no employees dismissed due to sexual harassment concerns after 2016 had received payouts.
After Google
After being forced out of Google, Rubin founded Playground Global in 2015 along with Peter Barrett, Matt Hershenson and Bruce Leak. The company is a venture capital firm and studio for technology start-ups, providing funding, resources, and mentorship. In 2015, Playground Global raised a $300 million fund from investors including Google, HP, Foxconn, Redpoint Ventures, Seagate Technology and Tencent, among others. It has invested in several companies such as Owl Labs. Rubin left Playground Global in May 2019.
Rubin eventually joined and helped create the Android phone start-up Essential Products. In November 2017, he took a leave of absence from Essential Products after reports of the inappropriate relationship from his time at Google surfaced. In December 2017, he returned to Essential Products.
Rubin and his ex-wife, Rie Hirabaru Rubin, owned and operated Voyageur du Temps, a bakery and cafe in Los Altos, California, which closed in September 2018.
Timeline
Carl Zeiss AG, robotics engineer, 1986–1989.
Apple Inc., manufacturing engineer, 1989–1992.
General Magic, engineer, 1992–1995. An Apple spin-off where he participated in developing Magic Cap, an operating system and interface for hand-held mobile devices.
MSN TV, engineer, 1995–1999. When Magic Cap failed, Rubin joined Artemis Research, founded by Steve Perlman, which became WebTV and was eventually acquired by Microsoft.
Danger Inc., co-founder, 1999–2003. Founded with Matt Hershenson and Joe Britt. The firm is most notable for the Danger Hiptop, branded for T-Mobile as the Sidekick, a phone with PDA-like capabilities. Microsoft acquired the firm in February 2008.
Android Inc., co-founder 2003–2005. Android was acquired by Google in 2005.
Google, 2005–2014: Senior Vice President in charge of Android for most of his tenure. From December 2013, he managed the robotics division of Google (which included companies bought by Google, such as Boston Dynamics).
Playground Global, 2014–2019: Founder. The venture focuses on artificial intelligence and on creating new generations of hardware.
Redpoint Ventures, 2015–2017: Partner.
Essential Products, 2015–2020: Founder and lead. Rubin launched the Essential phone through this company in late June 2017. On February 12, 2020, Essential announced in an update on their blog that the company was ceasing operations.
References
External links
"Designing Products Your Customers Will Love", Andy Rubin speaks at Stanford University
"Android on the March", Financial Post September 17, 2010
"Android Invasion", Newsweek October 3, 2010
1963 births
American computer businesspeople
American computer programmers
American investors
American software engineers
American technology chief executives
American technology company founders
American venture capitalists
Apple Inc. employees
Businesspeople from New York (state)
Businesspeople in software
Google employees
Living people
People from Chappaqua, New York
Utica College alumni
Businesspeople from the San Francisco Bay Area
Horace Greeley High School alumni
Android (operating system) |
2174272 | https://en.wikipedia.org/wiki/TiMidity%2B%2B | TiMidity++ | TiMidity++, originally and still frequently informally called TiMidity, is a software synthesizer that can play MIDI files without a hardware synthesizer. It can either render to the sound card in real time, or it can save the result to a file, such as a PCM .wav file.
TiMidity++ primarily runs under Linux and Unix-like operating systems, but it also runs under Microsoft Windows and AmigaOS. Distributed under the GPL-2.0-or-later, TiMidity++ is free software.
Features
TiMidity++ can read a number of file types and devices: primarily ordinary .mid files, but also .kar (MIDI with karaoke lyrics), Recomposer files, and module files. It is one of the few programs that can play MIDI .mid files using the MIDI Tuning Standard. TiMidity++ also has support for SoundFonts, rendering synthesized MIDI notes with their recorded SoundFont samples and directing the output to the sound card. Input can come from standard input, local files, archive files, or the network (over HTTP, FTP or NNTP).
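The behavior described above can be illustrated with a few typical command-line invocations (a sketch based on TiMidity++'s documented options; the file names are placeholders):

```shell
# Play a MIDI file through the sound card in real time
timidity song.mid

# Render the same file to a PCM RIFF WAVE file instead of playing it
timidity -Ow -o song.wav song.mid

# Read the MIDI data from standard input
cat song.mid | timidity -

# Use the ncurses interface (-in) while playing
timidity -in song.mid
```

Here `-Ow` selects RIFF WAVE output mode and `-o` names the output file; the `-i` flag selects among the program's interfaces.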
The program has various interfaces, including but not limited to bare text, ncurses, X11 (Motif, Xaw, GTK+ and Tk) and even an Emacs interface that shows played notes in real time.
TiMidity++ has some support for microtonal music.
History
The original version of TiMidity was written in 1995 by Tuukka Toivonen. After he stopped updating the program, Masanao Izumo and other contributors started to work on the program, renaming it to TiMidity++.
See also
FluidSynth
WildMIDI
References
Citations
Bibliography
External links
TiMidity++ home page
Free audio software
Free software programmed in C
Software synthesizers for Linux
Open source software synthesizers
Audio software that uses GTK
Software that uses Tk (software)
Software that uses ncurses |
1281628 | https://en.wikipedia.org/wiki/De%20La%20Salle%E2%80%93College%20of%20Saint%20Benilde | De La Salle–College of Saint Benilde | De La Salle–College of Saint Benilde (Filipino: Dalubhasaan ng De La Salle San Benildo; French: Collège De La Salle de Sainte Benilde), also known as Benilde and abbreviated DLS–CSB or simply CSB, is a private, Catholic research college run by De La Salle Brothers located in Malate district of Manila, Philippines. It operates four campuses all of which are located within the vicinity of Malate. The college is a member institution of De La Salle Philippines (DLSP), a network of 16 Lasallian institutions. DLS–CSB is also a member of a 350-year-old global network of over 1,100 Lasallian educational institutions in 80 countries.
The college was established in 1980 during the administration of Br. Andrew Gonzalez FSC as the College of Career Development, a night school for working students at De La Salle University. In 1988, it was renamed the De La Salle University–College of Saint Benilde after the Vatican's Patron Saint of Vocations – Saint Bénilde Romançon, a Christian Brother who taught in France during the 19th century. In 1994, the college became autonomous. In 2004, along with a restated vision and mission, the college received its present name, dropping "University" to become De La Salle–College of Saint Benilde.
The college uses "learner-centered instruction" to offer degree and non-degree programs in the arts, design, management, service industries, computer applications in business, and special fields of study. It is the first in the Philippines to offer degrees in animation, consular and diplomatic affairs, digital filmmaking, export management, fashion design and merchandising, multimedia arts, music production, photography and information technology major in game design and development.
The college's sports teams, known as the Blazers, compete in the National Collegiate Athletic Association, with La Salle Green Hills representing the junior division. Since joining the league in 1998, the college has won five general championships: the first in the 2005 season, back-to-back titles in the 2007 and 2008 seasons, and another pair of back-to-back titles in the 2013 and 2014 seasons.
History
Early history (1980–1987)
In 1980, De La Salle University-Manila opened an academic unit known as the College of Career Development, an evening school for working students. It was Saint Benilde Romancon FSC who had pioneered evening classes for the continuing education of adult working students in the 19th century. In 1984, the Preparatory Studies Department (PSD) was established to help students cope with the requirements of subsequent degree-oriented courses in the regular undergraduate colleges. In 1985, the college was renamed the Community College. In May 1987, the PSD was phased out and replaced by the Arts and Business Studies Area (ABSA). The ABSA offered two courses: a Bachelor of Arts degree in management with emphasis on human resources management, and a Bachelor of Science degree in business administration, major in computer applications.
Under De La Salle University (1988–1994)
The Community College was officially renamed De La Salle University–College of Saint Benilde in 1988, after the establishment of the De La Salle University System. Saint Bénilde Romançon was selected as the namesake to symbolize its objective of providing innovative education for the verbally but not numerically gifted, late bloomers, handicapped, as well as artists. Bénilde made room for his students in Clermont-Ferrand, regardless of their age or their mental capabilities. He also learnt sign language to instruct a deaf-mute boy for his first Holy Communion.
The ABSA was renamed as the Arts and Business Studies Department (ABSD) and became the day program of the college, while the Career Development Department (CDD) remained as the college's evening program. Because of the need for more space, the college moved to its own campus at 2544 Taft Avenue in 1989. A third major program, a Bachelor of Arts degree in Interdisciplinary Studies, was offered, undertaken in consortium with the College of Liberal Arts. In 1991, the college offered certificate programs in Accounting and Bookkeeping for the deaf, and a Bachelor of Science degree in Industrial Design.
As an autonomous college (1994–2005)
The College of Saint Benilde became an autonomous member of the De La Salle University System in April 1994. It ratified its proposed Constitution and By-Laws and identified Benildean core values in November 1994. The Night College, a scholarship program, was transferred from De La Salle University-Manila to the college in 1995. In the same year, the School of Design and Arts (SDA) was established, and the following degrees were offered: Bachelor of Science in Interior Design in consortium with the Philippine School of Interior Design, Bachelor of Arts, Major in Production Design, Bachelor of Arts, Major in Technical Theater, Bachelor of Arts, Major in Arts Management, and Bachelor of Performing Arts, Major in Dance.
In 1996, the School of Hotel, Restaurant, and Institution Management was formed, and groundbreaking ceremonies for the Angelo King International Center building were held. The following degrees were first offered in the same year: Bachelor of Science in Business Administration, Major in Export Management; Bachelor of Science in Hotel, Restaurant and Institution Management; Bachelor of Arts, Major in Fashion Design and Merchandising; Bachelor of Arts, Major in Consular and Diplomatic Affairs, and the Bachelor of Arts in Applied Deaf Studies. The college also established the Certificate Program Center (CPC) which offered short courses, and the Grants-in-Aid Program to provide financial assistance to students in need.
In 1997, the administration of the vocational programs of the Night College of the De La Salle University was passed on to the college in June and was renamed as the Blessed Arnould Study Assistance Program in September. In October 1997, the college held its first graduation rites independent from De La Salle University. In the same year, the college established the School of Special Studies for deaf students. In March 1998, the NCAA accepted the college's application for membership to the sports league along with La Salle Green Hills athletes as its high school representatives. In 1999, the School of Design and Arts offered the Bachelor of Arts in Multimedia Arts degree, the first of its kind in the country. Later that year, the construction of the Angelo King International Center was completed, which then housed the School of Hotel, Restaurant, and Institution Management.
In 2000, the college won its first Men's Basketball Championship title in the NCAA, marking the fastest win of any new school in the league since World War II. The college offered a BSBA degree in Information Management and a Bachelor of Arts in Music Production degree, a first of its kind. In the same year, the college held bidding for the architect of the proposed School of Design and Arts building. In 2001, the School of Special Studies was renamed as the School of Deaf Education and Applied Studies (SDEAS). A year later, the SDEAS was invited to become a member of the Post-Secondary Education Network-International.
In 2004, the non-university members of the DLSU System — Canlubang, and Medical and Health Sciences Campus — removed the term "University" from their names. The college then restated its mission and vision and was renamed De La Salle–College of Saint Benilde. Construction for the 14-storey School of Design and Arts campus was started in this year. The Certificate Program Center was expanded and renamed into the School of Professional and Continuing Education. In 2005, the college became over-all champions for the first time in NCAA Season 81.
Recent history (2006–present)
In 2006, the college became a district school of De La Salle Philippines, the successor of the DLSU System. Br. Edmundo Fernandez FSC, Brother Visitor of the De La Salle Brothers Philippine District, became the college's interim president. The college became the host for the NCAA Season 82, and landed in second place for the General Championship rankings.
In 2007, the School of Design and Arts opened four new degree programs: the Bachelor of Arts degrees in Animation, Digital Filmmaking, and Photography, the first of their kind in the country, and Architecture. That same year, the new 14-storey School of Design and Arts Campus opened in May, in time for the start of school year 2007–2008. The college inaugurated its first Brother President, Br. Victor Franco FSC, in September.
In 2008, the School of Management and Information Technology (SMIT), in partnership with the School of Design and Arts, announced intention to offer a new degree on game design and development, pending approval of the Commission on Higher Education. At the end of NCAA Season 83, the college again became the overall champions, winning their second title after two years.
On August 12, 2008, East Timor President, José Ramos-Horta visited the college and gave a talk entitled United in Faith, Partners in Nation-Building held at the School of Design and Arts Campus during a four-day state visit to the Philippines, which marked the first time that a foreign head of state visited the college.
In 2009, the college opened three new degree programs, the Bachelor of Science in International Hospitality Management for SHRIM, which partnered with Vatel International Hospitality School in France, Bachelor of Science in Information Technology, Major in Game Design and Development for SMIT, the first IT program anchored in game design and development in the Philippines, and Bachelor of Science in Architecture for the SDA. The college won its third championship title in the NCAA, and became back-to-back general champions for NCAA Season 84.
Groundbreaking of a new building commenced in 2017. The building was to serve as a sports center for the Benilde Blazers and a dormitory for scholars. Inaugurated as the Benilde Sports and Dorm Building, it opened in 2020. The five-storey complex stands on a lot at the corner of San Isidro Drive and Dominga Street, with the Taft and SDA Campuses as neighbors.
Campuses
The college has four campuses: the Taft Campus, the Angelo King International Center, the School of Design and Arts, and the Atrium, all in Malate, Manila. The Taft Campus is a block from De La Salle University, beside St. Scholastica's College and the LRT-1 Vito Cruz Station. The college is surrounded by dormitories, condominiums, and restaurants. To travel between campuses, students may walk, ride the cycle rickshaws stationed near the campuses, or take the electric jeepney shuttle service provided by the college.
Other properties include the Blessed Hilario Hall on Dominga Street which functions as the college's retreat house. Beside it is the Blessed Scubilion Hall, a residence for student-athletes. The Solomon Guest House on C. Ayala Street is a restaurant and meeting area used as a hands-on workplace for selected SHRIM students where they handle the operation of the establishment.
In 2018, Benilde Antipolo opened its new building in the Antipolo city proper. The campus is the new home of the tertiary programs from La Salle College Antipolo. The school offers a bachelor's degree in Marketing Management, Accountancy, Hospitality Management, Tourism Management, Psychology, Communication Arts, and Education.
Taft Campus
The Taft Campus stands on a 6,380-square-meter lot that stretches from Taft Avenue to the next parallel street, Leon Guinto. The land was acquired from LBP Leasing Corporation, a subsidiary of the Land Bank of the Philippines. The campus occupies a square lot with four interconnected buildings: the St. Benilde Hall, Duerr Hall, Blessed Solomon Hall, and the St. Mutien Marie Hall. The Duerr Hall is aligned differently from the rest of the buildings, requiring stairs and a ramp at its intersections with the Blessed Solomon Hall.
The Plaza Villarosa, named after architect Rogelio Villarosa, is on the campus's second level. It is decorated with lush plants and palm trees and has a basketball court, an elevated platform, and several cabañas with stone benches. The plaza is used as a study area and as a venue for events and activities such as those of the student organizations. Bazaars and food establishments also temporarily set up stalls in the plaza during such events. The statue of Saint Benilde, originally located at the campus's old front gate, was moved to the plaza after its completion. Behind the statue is an 18-bell carillon, built as a memorial to the Lasallian Christian Brothers who were massacred at the De La Salle College Taft campus during World War II by a group of some 20 Japanese soldiers. The names of the brothers are inscribed on the bells of the carillon. Together, the carillon and the statue stand as the visual representation of the college.
St. Benilde Hall
The first building of the college, named after Saint Bénilde Romançon, was opened on August 11, 1989. It is located at the back of the campus and was designed by Gines Rivera. The building has four levels, holding numerous lecture rooms and computer laboratories, a cafeteria, a clinic, and the office of the Information Technology Center. It also houses the offices of the School of Deaf Education and Applied Studies and the School of Management and Information Technology, as well as the Student Grants Unit and the Center for Counseling Services.
Duerr Hall
Br. Crescentius Richard Duerr FSC, president of De La Salle University from 1961 to 1966, was a visionary teacher and administrator of La Salle schools in Manila, Bacolod and Iligan City, doing missionary work for 31 years before returning to New York. He was instrumental in the transformation of De La Salle University-Manila in becoming a pillar of Philippine education.
The second building of the campus, originally called "South Wing" because of its location at the southern side of the campus, was blessed on August 10, 1992, and cost 30 million pesos. It houses the Accounting Office, Faculty and Administrative offices of the School of Multidisciplinary Studies, several offices of the programs of the School of Management and Information Technology, and laboratories of the School of Deaf Education and Applied Studies. It has several classrooms and computer laboratories, and an auditorium. It also has a badminton court located on the fifth floor. The on-campus bookstore is located on the first level of the hall near the Career and Placement Office. The Duerr Hall formerly held the Multimedia and Fashion Design laboratories of the School of Design and Arts, prior to the completion of the SDA campus.
The Chapel of the Resurrection is located on the second floor intersection of the Duerr and Solomon Hall. It features glass doors, stenciled drawing of the praying hands, a sacristy, confessional room, and an altar showing Napoleon Abueva's "Lord of the Resurrection."
St. Mutien Marie Hall
Saint Mutien Marie Wiaux was a devoutly religious Brother who made a tremendous influence on the students under his charge through his patience and piety. He taught in Malonne for 58 years, teaching music and arts alongside Catholic dogma. He was canonized in 1989.
Construction of the third and fourth wings of the campus was approved by the Board of Trustees on January 6, 1993. Groundbreaking ceremonies were held in March 1994, while actual construction began on April 16 of the same year. The Mutien Marie Hall and the Blessed Solomon Hall were blessed on October 29, 1996. Both buildings were designed by Rogelio Villarosa, and construction cost 120 million pesos.
The General Administrative Services Office occupies the first floor while the Br. Fidelis Leddy Learning Resource Center occupies the whole second level of the building. The third floor up to the fifth consists of lecture rooms. There is also a case room for thesis defense located on the third floor. The gymnasium is located on the topmost level of the building. Most of the classrooms in this building are equipped with LCD and OHP projectors, television sets with VHS players, and computers. The Mutien-Marie Hall formerly held the drafting rooms, Industrial Design laboratory and the head office of the School of Design and Arts.
Blessed Solomon Hall
Blessed Solomon Leclerq was martyred in 1792 after refusing to swear an oath that forced the French clergy of the time to support the state. Before that, he was a teacher, director, and bursar, and was known for his love for people and for his work. He was beatified in 1926, the first Lasallian brother to be given that honor.
The main entrance of the campus is located at the first level of the Blessed Solomon Hall facing Taft Avenue. The Admissions Office and the Office of Student Behavior can be found at the ground floor, and near the vehicle entrance is the waiting lounge, popularly known as The Airport because the fixed seats resemble an airport departure lounge. The Office of the Registrar, as well as other Executive offices, is housed in the second level of the building while the Office of Student Affairs, Office of Culture and Arts, Social Action Office, Sports Development Office, Student Publications Office, and the Student Involvement Office are all located on the third level. On the fourth level are the Center for Learning and Performance Assessment, and a dance room and multipurpose room for Physical education classes. On the top level of the Solomon Hall is the Augusto-Rosario Gonzalez Theater, named after the parents of the late Br. Andrew Gonzalez FSC.
Angelo King International Center
The Angelo King International Center (AKIC, or the CSB Hotel-International Conference Center) is a fully operational four-star hotel on a 2,100-square-meter lot at the corner of Estrada Street and Arellano Avenue, two blocks from the main campus. It was envisioned as the first operational hotel-school in the Philippines, where students would be able to learn in a real-world environment. Groundbreaking rites for the building were held in 1996, but actual construction began in 1998 and was finished a year later. It was formally opened in August and named after De La Salle alumnus Dr. Angelo King, who gave financial assistance for the construction of the building.
Sharing the space at the building is the Hotel Benilde, which has 46 guest rooms and two dormitory type rooms, a conference hall, fine dining rooftop restaurant and lobby lounge, cafeteria, library, transport services office, parking space for 126 vehicles, and two guest elevators. The first, second, eleventh and twelfth floors are used by the CSB Hotel, and the third to fifth floors are for interior parking while the SHRIM occupies the fifth to ninth floors.
The School of Hotel, Restaurant, and Institution Management occupies four floors with 14 classrooms, a tiered demonstration kitchen, demonstration bar, institutional hot, cold, and baking/pastry kitchens with adequate cold and dry storage areas, two basic food laboratories, two computer laboratories, a nutrition laboratory, conference rooms, a clinic, and a chapel. The School of HRIM is served by two passenger elevators and one service elevator. Occupying the roof deck is Vatel Restaurant Manila, a fine dining restaurant operated by selected SHRIM students.
Near the AKIC building is the Solomon Guest House, which is operated by selected SHRIM students, where they are involved in marketing to meal preparation and service. The SGH also has three rooms and a suite which could be used as venues for private meetings and gatherings.
School of Design and Arts – SDA Campus
The School of Design and Arts Campus (SDA Campus) is a 14-story academic complex designed by Lor Calma Design and Associates, with Eduardo Calma as the design principal. It was built on a 4,560 m2 lot formerly used as parking space for the college, located at 950 Pablo Ocampo Street, about 500 meters from the Taft Campus. It was originally planned to open in January 2006, but due to construction delays the opening was moved to May 2007. It is the college's third, largest, and most advanced campus, and houses its largest and busiest school, the School of Design and Arts. While the exact budget for the building is classified, an estimated 1.2 billion pesos was said to be allotted for the whole building project.
The building was dubbed by then De La Salle University System president Br. Armin Luistro FSC the "jewel in the crown of the De La Salle University System schools," as well as one of De La Salle's most ambitious projects. The building features a novel architectural design, with a sophisticated façade and an all-glass rear, arranged so that only the floors from the tenth level upward are visible. Calma relates that the building's louvers, when illuminated at night, appear like lanterns, and that given the location, the lighting effects set the building apart from its surroundings.
After the January 2006 target was missed, the opening was reset to September 2006; further construction delays made that date impossible as well, and the administration opted for May 2007 instead. The delays stemmed from the intricacy of the architectural design, the difficulty of implementing the complicated plans, and problems with the project manager and contractor: the plans presented design issues that were hard to execute at a steady rate, and the construction management clashed with the onsite technical team over approach and principles. Construction gained a steady pace after October 2006, and the building was completed and inaugurated in April 2007.
The building has four floors of above-grade parking and ten floors of workspace, served by two service and five passenger elevators and five sets of stairs. It features a Building Management System with intelligent controls for air conditioning; smoke detection and fire alarms; CCTV surveillance security systems; and its own sewage management plant. The building is also fully Wi-Fi enabled and was the first building in the Philippines to be equipped with 10 Gigabit Ethernet. Among its notable facilities are a three-storey, 558-seat theater cantilevered four storeys above the ground, and the Museum of Contemporary Art and Design, a contemporary art museum envisioned to be the first of its kind in the Philippines. Inside, the corridors can double as exhibition spaces. Every classroom is air-conditioned and configured for better acoustics. The building also has a cafeteria, a chapel, and a two-floor library in addition to lecture, computer, and seminar rooms. There are also video, animation, and sound production laboratories, as well as a photography studio, a greenscreen TV and film production studio with motion capture equipment, and a 105-seat cinema.
Atrium
The Atrium is the newest building of the college. A 10-story building, it was designed by architect Daniel Lichauco, the principal and managing partner of Archion Architects. It was constructed by D.M. Consunji Inc., a pioneer of advanced engineering technology. The new campus will house The School of Diplomacy and Governance, School of Management and Information Technology and School of Professional and Continuing Education. Features of the building include an open-air cafeteria located at the topmost floor, and escalators every two floors that are designed to lessen the load of the elevators. The building also houses the Learning Resource Center dedicated to the students of the Benilde Deaf School as well as various departments, offices, conference rooms, initiative learning studios, and classrooms.
Academics
The college uses Howard Gardner's theory of multiple intelligences, in which each person is said to possess varying levels of the different intelligences, which together determine his or her cognitive profile. The theory is implemented through learner-centered instruction, in which classes are taught according to the students' understanding of the subject and the uniqueness of each learner is recognized. Learner-centered instruction also refers to a learning environment that pays attention to the knowledge, skills, attitudes, and beliefs that learners bring to the educational setting.
The college has six schools that offer degree and non-degree programs designed for the development of professionals in the arts, design, management, service industries, computer applications in business, and special fields of study.
School of Deaf Education and Applied Studies
The School of Deaf Education and Applied Studies (SDEAS) was first established in 1991 as a vocational program offering courses in accounting and bookkeeping for the Deaf. The vocational program became the School of Special Studies with the addition of the Bachelor in Applied Deaf Studies (BAPDST) degree five years later. The school was restructured and renamed the School of Deaf Education and Applied Studies in 2000. The BAPDST course was refined and began offering specialization tracks in Multimedia Arts and Business Entrepreneurship. The SDEAS is one of only six institutions in the Philippines that offer postsecondary education to the deaf.
In 2001, the SDEAS partnered with the Postsecondary Education Network-International, a global partnership of colleges and universities funded by the Nippon Foundation of Japan that aims to give deaf students the appropriate postsecondary education to achieve their full potential. Two learning centers have been established since the partnership began: the PEN-Multimedia Learning Center (2003) and the PEN-Learning Center (2006), both at Duerr Hall.
School of Design and Arts
The School of Design and Arts (SDA) was established in 1995 and is one of the largest schools of the college, with thirteen degree programs and a student population of about 2,000. It has approximately 145 faculty members per trimester, 90 percent of whom are part-timers because they are also active industry practitioners. The SDA seeks to develop the creative and business skills of students adept in the arts. Because of the increasing number of students, a new building was constructed to accommodate the growing student population.
The SDA offers Bachelor of Arts degrees in Animation (ABANI), Arts Management (ABAM), Digital Filmmaking (ABFILM), Multimedia Arts (ABMMA), Music Production (ABMP), Photography (ABPHOTO), Production Design (AB-PROD), Technical Theater (ABTHA), and Fashion Design and Merchandising (AB-FDM); Bachelor of Science degrees in Architecture (BS-ARCH), Industrial Design (BS-ID) and Interior Design (BS-IND); and a Bachelor of Performing Arts degree in Dance (BPAD). Two of its programs are offered in consortium with other schools and organizations, the Interior Design program with the Philippine School of Interior Design and the Dance program with the Ballet Philippines Dance School of the Cultural Center of the Philippines.
The Multimedia Arts and Technical Theater degrees are the first of their kind in the Philippines. The Technical Theater program teaches the technical aspects of production on stage, film, and television. It also provides in-depth coverage on the applications of various technical equipment used in set production, while the Multimedia Arts program incorporates various art forms with the latest in multimedia technology. Areas of study include graphic design, photography, 2D and 3D animation, web design and development, and video production. It is also one of the three most popular SDA programs, along with Fashion Design and Merchandising and Industrial Design.
In June 2002, Team St. Benilde, under the Multimedia Arts program of the School of Design and Arts, won the championship in the First Philippine Animation Competition with its entry "Fiesta Karera", a fully 3D-animated short depicting a futuristic rendition of the carabao races commonly held at festivals in the Philippines.
School of Hotel, Restaurant and Institution Management
The School of Hotel, Restaurant and Institution Management (SHRIM) was established in 1996 and aims to provide the hotel and restaurant industry with graduates who possess the requisite knowledge, skills, and values to become successful entrepreneurs, and to train students to become "industry-ready" for hotels and restaurants in the country and abroad. It offers the Bachelor of Science degree in Hotel, Restaurant and Institution Management (BS-HRIM), which integrates theory and practice to provide students with a strong management and service orientation as well as a global perspective of hotel and restaurant operations. It has three tracks: the Culinary Arts track, the Hospitality Management track, and the Tourism Management track.
The school is housed at the Angelo King International Center, a four-star hotel school at the corner of Arellano Avenue and Estrada Street. Students are given their first on-the-job training at the Hotel Benilde regardless of their tracks. Students are deployed at either of its subsidiaries: the Solomon Guest House, a fully student-managed and student-operated restaurant and lodge, and the Chefs' Station, a food stall at the cafeterias of the college.
In 2009, the SHRIM partnered with the Vatel International Hospitality School in Paris. With this educational cooperation, the school is concurrently known as Vatel Manila and is included in the Vatel international network. Under the Vatel partnership, the school offers the Bachelor of Science in International Hospitality Management (BS-IHM).
School of Management and Information Technology
The School of Management and Information Technology (SMIT) offers degrees based on emerging profitable disciplines. It offers Bachelor of Science in Business Administration (BSBA) degrees majoring in Computer Applications (BSBA-CA), Export Management (BSBA-EM), Human Resource Management (BSBA-HRM) and Bachelor of Science degrees in Information Systems (BS-IS), Real Estate Management (BS-REM) and Interactive Entertainment and Multimedia Computing majoring in Game Development and Game Art (BSIEMC). The SMIT continues the college's Career Development Program by offering BSBA degrees in Business Management (BSBA-BM) and Marketing Management (BSBA-MM) as night programs for working students.
In 2005, the SMIT, along with the SHRIM, was given accreditation by the Philippine Accrediting Association of Schools, Colleges and Universities. In 2009, the SMIT partnered with the Singapore-based online graduate school Universitas 21 Global. Part of the agreement includes an elective in Electronic Business for the Computer Applications and Information Systems programs. Students enrolled in the elective will have access to Universitas 21 Global's resources, and will be trained by its staff.
School of Diplomacy and Governance
The School of Diplomacy and Governance (SDG) handles the general education curriculum of all programs offered by the college. It provides the students a strong foundation in the languages, social and natural sciences, theology, and philosophy. Until 2020, it offered the Bachelor of Arts in Consular and Diplomatic Affairs (CDA) degree to develop practitioners in international relations. In January 2020, SDG Dean Gary Ador Dionisio announced that the CDA degree would be replaced with the Bachelor of Arts in Diplomacy and International Affairs (AB-DIA) degree, while a new program entitled the Bachelor of Arts in Governance and Public Affairs (AB-GPA) would be offered, starting August that year.
The Consular and Diplomatic Affairs program has networked with various government (e.g. Department of Foreign Affairs, international and Philippine embassies and consulates abroad) and non-government organizations to provide the relevant exposure to students as well as to provide job opportunities to graduates.
The Consular and Diplomatic Affairs program has also entered into agreements with non-profit institutions like Alliance Française de Manille and Instituto Cervantes de Manila to provide the needed foreign language learning and cultural exposure to students.
Most of the professors in the CDA program were former diplomats, namely Rosario Manalo (former Philippine ambassador to Belgium, Sweden, France, and Special Envoy of the Philippines to the ASEAN Intergovernmental Commission on Human Rights), Minerva Jean Falcon (former Philippine Consul General to Toronto, former Philippine Ambassador to Turkey, Switzerland, and Germany), Antonio Rodriguez (former DFA Undersecretary, former Philippine Ambassador to Thailand), Franklin Ebdalin (former DFA Undersecretary and Philippine ambassador to Hungary), José del Rosario Jr. (former Philippine ambassador to India and Jordan), Monina Estrella Callangan-Rueca (former Philippine ambassador to Hungary), Luz Palacios (former DFA Assistant Secretary for European Affairs), and Marilyn Alarilla (former Philippine ambassador to Laos and Turkey).
Linkages were also created with different international institutions handling the Model United Nations Assembly (MUNA) abroad (e.g. MUNA USA, China, Switzerland, Germany, Hong Kong, Czech Republic, France, and Canada). Through this, students are able to attend the MUNA as official delegates representing assigned countries.
School of Professional and Continuing Education
The School of Professional and Continuing Education (SPaCE) provides post-baccalaureate diploma programs for graduates seeking continuing education in various business-related fields. Formerly handled by the SMIT, the Career Development Program (CDP), which offers BSBA degrees in Business Management and Marketing Management, is now run by the SPaCE. The program gives adult students the opportunity to earn a degree while working, through a streamlined program and format that caters to their busy lifestyles.
Athletics
The College of Saint Benilde Blazers are the NCAA senior varsity team of De La Salle–College of Saint Benilde.
The Blazers were formerly a member of the National Capital Region Athletic Association (NCRAA) before they applied and were admitted to the NCAA in 1998. They then went on to win their first NCAA seniors basketball title in 2000, the fastest an expansion squad had done so.
The other senior varsity teams may also be referred to as the Blazers. The juniors team are the CSB–LSGH Junior Blazers (officially the CSB–LSGH Greenies) of La Salle Green Hills, while the women's teams (volleyball and taekwondo) are the Saint Benilde Lady Blazers.
Student life
The college uses the trimestral calendar, where the school year usually begins in the last week of August. Freshman students are required to attend the freshman orientation program of the Department of Student Life, which is held a week before the start of classes. Freshmen are oriented by upperclassmen about the school's policies and the facilities of the campus, as well as what to expect during their stay in the college. In September, the Student Involvement Unit organizes SI Week (Student Involvement Week), when the student organizations can recruit new members from the freshmen. College Week is held during August, when the feast day of Saint Benilde is celebrated through various activities and several masses. Every Friday, a vacant period from 11 a.m. to 2:30 p.m., known as C-Break (College Break), can be used by organizations to hold seminars, workshops, and special events, or as a training period for the performing groups. The Plaza Villarosa is usually used for activities: the basketball court for training sessions or sports activities, the performing stage for concerts, and the cabañas for bazaars.
Central Student Government
The De La Salle–College of Saint Benilde Central Student Government is the official student government of the college. It is composed of 35 officers, with all enrolled students as members. It is divided into the executive board (EB) and the School Student Governments (SSGs). The EB is composed of six officers, namely the President, the Vice President for Academics, the Vice President for Internal Affairs, the Vice President for External Affairs, the Vice President for Operations, and the Vice President for Finance, all of whom are elected by the entire student body. Each School Student Government is composed of six officers, namely the President, the Secretary, the Public Relations Officer, two Batch Representatives (one from the higher batches and one from the lower), and the Frosh Representative (except for SDEAS), all of whom are elected only by the students of their respective schools.
Student organizations
The college has organizations under the Student Involvement Office of the Department of Student Life. All recognized student organizations are members of the Council of Presidents, the mother organization which oversees operations and project handling.
Professional organizations cater to specific degree programs. There are sixteen professional organizations, which include:
AIESEC is an international non-governmental not-for-profit organization in Benilde that provides young people with leadership development and cross-cultural global internship and volunteer exchange experiences. Currently, AIESEC in Benilde is a Specialized Unit. (AIESEC SU DLS-CSB)
Animotion is the professional organization of the AB-Animation Program.
Association of Information Management (AIM) is the professional organization that protects, uplifts, and promotes the Information Systems program of the college. It establishes linkages with other social organizations and key departments of the college, and other organizations outside of the college.
Association of Music Production Students (AMPS) is an organization that is dedicated to fostering the growth of the musical aptitude and the appreciation of all types of music among the students of the Music Production Program.
Benildean Industrial Designers (BInD)
Benilde Red Cross Youth Council (BRCYC)
Benildean Deaf Association (BDA)
Computer Business Association (CBA) is an information technology and business student organization.
Export Management Society (EMS) is a professional organization that aims to develop future exporters and entrepreneurs who are Filipino in ideals, professionally competent, and world-class.
Game Developers Union for Innovation and Leadership Development (GUILD) is the professional organization representing the Game Design and Development Program of the college. The organization provides venues for students to appreciate the many facets of video games, from their creation to their consumption, as well as to showcase burgeoning local game development talent.
Guild of Rising Interior Designers (GRID) is the professional organization for the BS Interior Design program.
Human Resource Management Society (HRMS)
Hoteliers in Progress (HIP) is the professional organization of the students majoring in Hospitality Management under the School of Hotel, Restaurant and Institution Management.
Benilde Business Management Society is the professional organization of the Business Management Students under the Career Development Program of DLS-CSB.
Junior Marketing Association (JMA) is the professional organization of the Marketing Management students under the Career Development Program of DLS-CSB.
Media Max (MMX)
Students Collaborating and Reaching Out in Events and Arts Management (SCREAM)
Travelers in Progress (TRIP)
Vateliens in Progress (VIP)
Chefs in Progress (CHIP)
Special Interest organizations cater to non-academic and special interests. There are twelve special interest organizations which include:
Computer Link (COMLINK) is a special interest organization organizing student-centered programs and projects.
Coro San Benildo is the resident chorale group of De La Salle–College of Saint Benilde.
Debate Society (DebSoc) is the official debate team and organization of the college. The organization represents the college in nationals and international tournaments.
Dulaang Filipino (DF) is the resident theater company of the De La Salle–College of Saint Benilde.
Greenergy (GNY)
International Student Association (ISA) is a special interest organization of De La Salle–College of Saint Benilde that aims to promote intercultural awareness, understanding, and close association between foreign and Filipino students.
Kino Eye (KE)
Optic View (OV) is a special interest organization.
Silent Steps
Societe Et' Cultura (SEC)
Stage Production and Operations Team (SPOT)
St. Benilde Romancon Dance Company (SBRDC) is the resident dance company of the De La Salle – College of Saint Benilde.
The electoral body (a student government body) is the:
DLS-CSB Commission on Elections (COMELEC)
Varsity organizations include the:
CSB Green Pepper Spirit Team
CSB Fencing Team
CSB Samahang Kali Arnis ng Benilde
CSB Women's Football Team
Volunteer groups
Students may also opt to join in the five volunteer groups directly tied to an office under the Department of Student Life. These include the volunteer groups:
Benildean Student Envoys (BSE) are the student ambassadors of Benilde. They are professionally trained to represent the school and handle tours for guests, parents, students, and VIPs around the campuses of Benilde. They also serve as ushers at events like seminars, conferences, theater plays, and the like.
Student Trainers (STRAINS) is the volunteer arm of the Student Involvement Office and helps the office implement its year-long training program. The group is part of the planning, implementation, and evaluation stages of the unit's programs and projects. It assists in the implementation of the Frosh Orientation Program (Interaktiv), the Frosh Solidarity Night (UNITE), team-building activities of the different organizations, and student-development activities.
Social Action Volunteers for the Center for Social Action
Student Ministers for the Center for Lasallian Ministry
Kaagapay for the Center for Counseling Services.
Learning Resource Center
The Br. Fidelis Leddy Learning Resource Center (LRC) is the multimedia resource center and library of the college. It provides access to conventional printed materials, such as books and periodicals, and other forms of storage media, such as transparencies, videotapes, compact discs, and other electronic/digital materials. The LRC's audio-visual equipment can be borrowed. The LRC has facilities on each campus. Each facility has separate audio-visual and reading areas.
Members of the De La Salle Brothers' community, De La Salle University-Manila alumni, as well as students and employees of De La Salle Philippines member schools, are authorized to use the LRC. Non-Lasallian users can be given access as long as they have a recommendation or referral letters from their respective librarians.
The LRC was first located at the Benilde Hall in a three-classroom setup. It housed a small collection of books and some audio-visual equipment. After the completion of the Mutien-Marie Hall in 1996, the LRC was moved to its present location on the second floor of the new building. It was named the Br. Fidelis Leddy Learning Resource Center in honor of Br. Leander Fidelis Leddy FSC, the longest-living Lasallian Brother in the Philippines at that time, who celebrated 50 years of service in the country and his 60th year as a Lasallian Brother that year.
Facilities
Taft Campus
At the Taft Campus, the LRC is divided into two areas: the LRC-Main occupying the second floor of the St. Mutien-Marie Hall, up the stairway from the main entrance, and the LRC-Extension located underneath the Plaza Villarosa, which was formerly used as parking space. The LRC-Main holds the audio-visual equipment and multimedia resource collections, periodicals, as well as the memorabilia and thesis collections of the college. It has an audio-video listening and viewing area for the LRC's VHS collection. The LRC-Extension is an additional reading area where students can browse, borrow, and bring home books from the LRC's general book collections except for the Lasalliana collection which is for room use only.
AKIC Campus
The LRC in the AKIC Campus provides the learning resource needs of the School of HRIM, holding book collections and relevant periodicals for its students and faculty. The reading area can be found on the sixth floor of the AKIC campus. It has a floor area of 224 square meters, and a seating capacity of 100. The Audio-Visual Service Section can be found on the seventh floor and has a floor area of 105 square meters.
SDA Campus
The LRC in the SDA Campus occupies a part of the seventh and eighth floors of the building, housing the LRC's design and art book collection.
Collections
As of summer 2006, the LRC has a total collection of about 80,000 book titles (90,000 volumes), 4,657 volumes of undergraduate theses, more than 1,000 periodical titles (in print, electronic and microfilm formats), 139 titles of transparency-based library materials, more than 4,013 CD-ROM volumes, more than 2,562 commercial VHS tapes, 113 slide titles, 253 maps, 594 audio cassette tapes, 159 VCD titles, 107 DVD titles, 440 volumes of audio CDs, 7 titles of selected newspapers in microfilm format, and 5,000 volumes of in-house VHS tapes on campus activities.
Books found at the LRC-Extension and LRC-AKIC are grouped by collection: Reference, Reference Filipiniana, Filipiniana and General Collection. Each book is arranged by the Library of Congress Classification System. The LRC follows the revised Anglo-American Cataloging Rules 2 and the LC Classification System for cataloging and classifying books. The LRC and its extensions have Online Public Access Catalog stations for quick searching of books needed by the students.
The college subscribes to several online databases and electronic journals. Among them are ProQuest 5000 International, Thomson Gale, Global Market Information Database, Ovid PsycArticles Full-Text Journals, Emerald Database, and the Journal of Deaf Studies and Deaf Education. The database and journals can be accessed from computer units within the campus or at home through the online library facility at the college website.
Official Publications
The Benildean is the official student publication of De La Salle–College of Saint Benilde – Manila.
BLiP (Benildean Lifestyle, Interests and People) is the official features magazine of the college, which showcases the life and interests of Benildeans. First published in 2004, it tackles fashion, travel, and other topics.
Karilyon is a magazine discussing Filipino lifestyles and issues. It aims to promote Filipino culture, language and ideals. It is published only in the Filipino language.
Shades of Gray is a literary folio that showcases the literary talents of students. It is published once a year.
Ablaze is a sports magazine released twice a year that provides an in-depth look into the personalities and perspectives behind Benildean sports and its athletes.
Horizons is a design folio that showcases the work of students adept in the visual arts. It presents representations and images that are sometimes serious, sometimes light-hearted, but always thought-provoking.
Dekunstrukt is a photo folio that showcases the works of students skilled in photography. It provides a venue for the college's student photographers to express and present their view of the people and the world around them.
Ad Astra is the annual yearbook. It was first published as the Benildean Yearbook in 2000. Students are encouraged to subscribe to it one year before their graduation.
Notable alumni
Notable alumni from the De La Salle–College of Saint Benilde include:
Mimiyuuuh (AB-FDM) – internet personality, fashion designer
Say Alonzo (BS-HRIM, 2005) – television personality (Pinoy Big Brother: Season 1)
Phoemela Baranda (2001) – model and actress
Zild Benitez (ABMP) – musician (IV of Spades)
Justin De Dios (ABMMA, 2018) – singer-performer (SB19)
Albie Casiño (BSBA-EM, 2016) – actor
Ken Chan (BS-HRIM) – actor, model and television personality
Yam Concepcion (ABMMA, 2010) – actress
Serena Dalrymple (BSBA, 2011) – actress
John Vic De Guzman (BSBA-HRM, 2017) – volleyball player (silver medalist, 2019 Southeast Asian Games)
Rita De Guzman (ABFILM) – actress and singer
Moira dela Torre – singer-songwriter
Karen delos Reyes (ABPHOTO, 2008) – actress
Andi Eigenmann (AB-FDM, 2014) – actress
Dino Imperial (ABMMA, 2010) – actor, model and radio personality
Elisse Joson (AB-FDM) – actress
Kian Kazemi (BS-HRIM, 2006) – television personality and model
Bianca King (ABFILM, 2012) – actress, model and television host
Carlo Lastimosa (BS-HRIM) – basketball player, former Benilde Blazer
Champ Lui Pio (BSBA-HRM, 2004) – musician (Hale)
Elmo Magalona (BS-HRIM) – actor and singer
Luis Manzano (BS-HRIM, 2003) – television host and actor
Maxine Medina (BS-IND) – actress and beauty queen (Binibining Pilipinas 2016)
Maine Mendoza (BS-HRIM, 2015) – actress and television personality (Yaya Dub)
Valeen Montenegro (AB-FDM, 2013) – actress and model
Robin Nievera (ABMP) – singer-songwriter and record producer
Sam Pinto (AB-FDM) – actress
Dominic Roque (BS-HRIM, 2011) – actor and model
Jondan Salvador – basketball player, former Benilde Blazer
Shalani Soledad (BSBA-HRM, 2002) – member of the Valenzuela City Council (2004–2013)
Paolo Taha – basketball player
Nyoy Volante (ABTHA, 1999) – singer-songwriter and actor
Lauren Young (BS-HRIM, 2019) – actress and model
Megan Young (ABFILM) – actress and beauty queen (Miss World 2013)
David Licauco – basketball player, former Benilde Blazer
References
College of Saint Benilde
Catholic universities and colleges in Manila
Education in Malate, Manila
Educational institutions established in 1988
Universities and colleges in Manila
Art schools in the Philippines
Film schools in the Philippines
Cooking schools in the Philippines
Design schools
Hospitality schools in the Philippines
Schools of international relations
Schools of the performing arts
Schools for the deaf in the Philippines
National Collegiate Athletic Association (Philippines)
Deaf universities and colleges
1988 establishments in the Philippines |
4420697 | https://en.wikipedia.org/wiki/Kuali%20Foundation | Kuali Foundation | The Kuali Foundation is a non-profit, 501(c)(3) corporation that develops open source enterprise resource planning software for higher education institutions. Kuali modules include Student, Financial, Human Resources, Research Administration, and Library.
Founding partners are Indiana University, The University of Arizona, the University of Hawaii, Michigan State University, San Joaquin Delta Community College, Cornell University, NACUBO, and the rSmart Group.
History
Around 2003, Indiana University administrators were considering alternatives for replacing the existing financial information system. They looked at retooling the current financial system or buying vendor software. In 2004, Indiana University chief information officer Brad Wheeler wrote a paper about the state of open and community source software development in education. This paper helped coalesce a movement among higher ed institutions to create a community source enterprise resource planning software suite.
Wheeler's preliminary work assessed higher education's readiness for a community source financial system project and its applicability across colleges and universities through a planning grant from the Andrew W. Mellon Foundation to National Association of College and University Business Officers (NACUBO) in 2004. In March 2005, after more than a year of evaluation, partner coalescing, and preparatory work, the Kuali Financial System (KFS) received a $2.5 million grant from the Mellon Foundation to help complete the software development. Colorado State University and San Joaquin Delta College became the first to host large-scale installations of the full KFS in 2009. Kuali modules now include Student, Financial, Research, and a no-code forms and workflow tool called Kuali Build.
Over the next ten years, usage of Kuali increased substantially, and by 2014 the Kuali Foundation had 74 member institutions.
In 2014, the foundation invested in Kuali, Inc., which is now responsible for the development of Kuali software and offers the software in the cloud for higher ed institutions.
See also
History of free and open-source software
References
External links
Kuali tries to compete
Free software project foundations in the United States
Organizations established in 2005
501(c)(3) organizations
Free ERP software |
6882267 | https://en.wikipedia.org/wiki/Tesseract%20%28software%29 | Tesseract (software) | Tesseract is an optical character recognition engine for various operating systems. It is free software, released under the Apache License. Originally developed by Hewlett-Packard as proprietary software in the 1980s, it was released as open source in 2005 and development has been sponsored by Google since 2006.
In 2006, Tesseract was considered one of the most accurate open-source OCR engines available.
History
The Tesseract engine was originally developed as proprietary software at Hewlett Packard labs in Bristol, England, and Greeley, Colorado, between 1985 and 1994, with more changes made in 1996 to port it to Windows, and some migration from C to C++ in 1998. Much of the code was written in C, with some later additions in C++; since then, all of the code has been converted to at least compile with a C++ compiler. Very little work was done in the following decade. It was then released as open source in 2005 by Hewlett Packard and the University of Nevada, Las Vegas (UNLV). Tesseract development has been sponsored by Google since 2006.
Features
Tesseract was in the top three OCR engines in terms of character accuracy in 1995. It is available for Linux, Windows and Mac OS X. However, due to limited resources it is only rigorously tested by developers under Windows and Ubuntu.
Tesseract up to and including version 2 could only accept TIFF images of simple one-column text as inputs. These early versions did not include layout analysis, and so inputting multi-columned text, images, or equations produced garbled output. Since version 3.00 Tesseract has supported output text formatting, hOCR positional information and page-layout analysis. Support for a number of new image formats was added using the Leptonica library. Tesseract can detect whether text is monospaced or proportionally spaced.
The initial versions of Tesseract could only recognize English-language text. Tesseract v2 added six additional Western languages (French, Italian, German, Spanish, Brazilian Portuguese, Dutch). Version 3 extended language support significantly to include ideographic (Chinese & Japanese) and right-to-left (e.g. Arabic, Hebrew) languages, as well as many more scripts. New languages included Arabic, Bulgarian, Catalan, Chinese (Simplified and Traditional), Croatian, Czech, Danish, German (Fraktur script), Greek, Finnish, Hebrew, Hindi, Hungarian, Indonesian, Japanese, Korean, Latvian, Lithuanian, Norwegian, Polish, Portuguese, Romanian, Russian, Serbian, Slovak (standard and Fraktur script), Slovenian, Swedish, Tagalog, Tamil, Thai, Turkish, Ukrainian and Vietnamese. V3.04, released in July 2015, added an additional 39 language/script combinations, bringing the total count of supported languages to over 100. New language codes included: amh (Amharic), asm (Assamese), aze_cyrl (Azerbaijani in Cyrillic script), bod (Tibetan), bos (Bosnian), ceb (Cebuano), cym (Welsh), dzo (Dzongkha), fas (Persian), gle (Irish), guj (Gujarati), hat (Haitian and Haitian Creole), iku (Inuktitut), jav (Javanese), kat (Georgian), kat_old (Old Georgian), kaz (Kazakh), khm (Central Khmer), kir (Kyrgyz), kur (Kurdish), lao (Lao), lat (Latin), mar (Marathi), mya (Burmese), nep (Nepali), ori (Oriya), pan (Punjabi), pus (Pashto), san (Sanskrit), sin (Sinhala), srp_latn (Serbian in Latin script), syr (Syriac), tgk (Tajik), tir (Tigrinya), uig (Uyghur), urd (Urdu), uzb (Uzbek), uzb_cyrl (Uzbek in Cyrillic script), yid (Yiddish).
In addition, Tesseract can be trained to work in other languages.
Tesseract can process right-to-left text such as Arabic or Hebrew, many Indic scripts as well as CJK quite well. Accuracy rates are shown in this presentation for Tesseract tutorial at DAS 2016, Santorini by Ray Smith.
Tesseract is suitable for use as a backend and can be used for more complicated OCR tasks including layout analysis by using a frontend such as OCRopus.
Tesseract's output will have very poor quality if the input images are not preprocessed to suit it: images (especially screenshots) must be scaled up such that the text x-height is at least 20 pixels; any rotation or skew must be corrected, or no text will be recognized; low-frequency changes in brightness must be high-pass filtered, or Tesseract's binarization stage will destroy much of the page; and dark borders must be manually removed, or they will be misinterpreted as characters.
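As a rough illustration of the scaling requirement, the helper below (a hypothetical sketch, not part of Tesseract) computes the smallest integer upscale factor needed to bring a measured x-height up to the 20-pixel minimum; the actual resampling, deskewing, and border removal would then be done with an image library.

```python
import math

def upscale_factor(xheight_px: int, minimum: int = 20) -> int:
    """Smallest integer factor that brings the measured text x-height
    up to the suggested 20 px minimum for Tesseract input images."""
    if xheight_px <= 0:
        raise ValueError("x-height must be positive")
    return max(1, math.ceil(minimum / xheight_px))

# A screenshot whose lowercase letters are 8 px tall should be scaled 3x;
# an image with a 24 px x-height needs no scaling at all.
print(upscale_factor(8))   # -> 3
print(upscale_factor(24))  # -> 1
```

Scaling alone does not recover detail lost in a low-resolution source, but it keeps small text above the size at which Tesseract's recognition degrades sharply.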
Version 4
Version 4 adds an LSTM-based OCR engine and models for many additional languages and scripts, bringing the total to 116 languages.
Additionally, 37 scripts are supported. It is thus possible, for example, to recognize text with a mix of Western and Central European languages by using the model for the Latin script they are written in.
Version 5
Version 5 was released in 2021, after more than two years of testing and development.
User interfaces
Tesseract is executed from the command-line interface. While Tesseract is not supplied with a GUI, there are many separate projects which provide a GUI for it. One common example is OCRFeeder.
Reception
In a July 2007 article on Tesseract, Anthony Kay of Linux Journal termed it "a quirky command-line tool that does an outstanding job". At that time he noted "Tesseract is a bare-bones OCR engine. The build process is a little quirky, and the engine needs some additional features (such as layout detection), but the core feature, text recognition, is drastically better than anything else I've tried from the Open Source community. It is reasonably easy to get excellent recognition rates using nothing more than a scanner and some image tools, such as The GIMP and Netpbm."
In November 2020, Brewster Kahle from the Internet Archive praised Tesseract, saying:
See also
Libtiff
References
External links
Free software programmed in C
Free software programmed in C++
Optical character recognition software
HP software
Google software
Formerly proprietary software
Software using the Apache license |
46265020 | https://en.wikipedia.org/wiki/Pepper%20%28cryptography%29 | Pepper (cryptography) | In cryptography, a pepper is a secret added to an input such as a password during hashing with a cryptographic hash function. This value differs from a salt in that it is not stored alongside a password hash, but rather the pepper is kept separate in some other medium, such as a Hardware Security Module. Note that the National Institute of Standards and Technology never refers to this value as a pepper but rather as a secret salt. A pepper is similar in concept to a salt or an encryption key. It is like a salt in that it is a randomized value that is added to a password hash, and it is similar to an encryption key in that it should be kept secret.
A pepper performs a comparable role to a salt or an encryption key, but while a salt is not secret (merely unique) and can be stored alongside the hashed output, a pepper is secret and must not be stored with the output. The hash and salt are usually stored in a database, but a pepper must be stored separately to prevent it from being obtained by the attacker in case of a database breach. Where the salt only has to be long enough to be unique per user, a pepper should be long enough to remain secret from brute force attempts to discover it (NIST recommends at least 112 bits).
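One common construction (an illustrative sketch, not mandated by any standard) runs the password through a salted, slow key-derivation function and then keys the result with the pepper via HMAC; the per-user salt is stored next to the hash, while the pepper is kept in a separate medium such as an HSM or an environment secret:

```python
import hashlib
import hmac
import os

def hash_password(password: bytes, salt: bytes, pepper: bytes) -> str:
    # Salted, deliberately slow hash; the salt is stored with the result.
    inner = hashlib.pbkdf2_hmac("sha256", password, salt, 100_000)
    # Key the result with the pepper, which is stored elsewhere
    # (e.g. a hardware security module), never in the database.
    return hmac.new(pepper, inner, hashlib.sha256).hexdigest()

salt = os.urandom(16)    # unique per user, saved alongside the hash
pepper = os.urandom(16)  # one application-wide secret, saved elsewhere
digest = hash_password(b"hunter2", salt, pepper)
```

With this layout, a database dump yields only `(salt, digest)` pairs; without the pepper, an attacker cannot even begin verifying password guesses against them.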
History
The idea of a site- or service-specific salt (in addition to a per-user salt) has a long history, with Steven M. Bellovin proposing a local parameter in a Bugtraq post in 1995. In 1996 Udi Manber also described the advantages of such a scheme, terming it a secret salt. The term pepper has been used, by analogy to salt, but with a variety of meanings. For example, when discussing a challenge-response scheme, pepper has been used for a salt-like quantity, though not used for password storage; it has been used for a data transmission technique where a pepper must be guessed; and even as a part of jokes.
The term pepper was proposed for a secret or local parameter stored separately from the password in a discussion of protecting passwords from rainbow table attacks. This usage did not immediately catch on: for example, Fred Wenzel added support to Django password hashing for storage based on a combination of bcrypt and HMAC with separately stored nonces, without using the term. Usage has since become more common.
Types
There are multiple different types of pepper:
A secret unique to each user.
A shared secret that is common to all users.
A randomly-selected number that must be re-discovered on every password input.
Shared Secret Pepper
In the case of a shared-secret pepper, a single compromised password (via password reuse or other attack) along with a user's salt can lead to an attack to discover the pepper, rendering it ineffective. If an attacker knows a plaintext password and a user's salt, as well as the algorithm used to hash the password, then discovering the pepper can be a matter of brute forcing the values of the pepper. This is why NIST recommends the secret value be at least 112 bits, so that discovering it by exhaustive search is intractable. The pepper must be generated anew for every application it is deployed in, otherwise a breach of one application would result in lowered security of another application. Without knowledge of the pepper, other passwords in the database will be far more difficult to extract from their hashed values, as the attacker would need to guess the password as well as the pepper.
A pepper adds security to a database of salts and hashes because unless the attacker is able to obtain the pepper, cracking even a single hash is intractable, no matter how weak the original password. Even with a list of (salt, hash) pairs, an attacker must also guess the secret pepper in order to find the password which produces the hash. The NIST specification for a secret salt suggests using a Password-Based Key Derivation Function (PBKDF) with an approved Pseudorandom Function such as HMAC with SHA-3 as the hash function of the HMAC. The NIST recommendation is also to perform at least 1000 iterations of the PBKDF, and a further minimum 1000 iterations using the secret salt in place of the non-secret salt.
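A minimal sketch of such a scheme in Python, simplified from the NIST-style construction described above: PBKDF2 over the salted password, followed by an HMAC keyed with the pepper. The function names, pepper size, and iteration count here are illustrative choices, not part of any specification.

```python
import hashlib
import hmac
import os

# Site-wide secret pepper: generated once and kept OUTSIDE the credential
# database (e.g. in an HSM or separate configuration store). 16 bytes =
# 128 bits, above the 112-bit minimum NIST recommends. Illustrative only.
PEPPER = os.urandom(16)

def hash_password(password: bytes, salt: bytes) -> bytes:
    # Conventional salted stretching first; the salt is stored with the hash.
    stretched = hashlib.pbkdf2_hmac("sha256", password, salt, 100_000)
    # Then an HMAC keyed with the secret pepper. Without PEPPER, an attacker
    # holding (salt, hash) pairs cannot even test candidate passwords.
    return hmac.new(PEPPER, stretched, hashlib.sha256).digest()

def verify(password: bytes, salt: bytes, stored: bytes) -> bool:
    return hmac.compare_digest(hash_password(password, salt), stored)

salt = os.urandom(16)
stored = hash_password(b"correct horse battery staple", salt)
assert verify(b"correct horse battery staple", salt, stored)
assert not verify(b"wrong guess", salt, stored)
```

Note the use of hmac.compare_digest rather than ==, which avoids leaking timing information during comparison.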
Unique Pepper Per User
In the case of a pepper which is unique to each user, the tradeoff is gaining extra security at the cost of storing more information securely. Compromising one password hash and revealing its secret pepper will have no effect on other password hashes and their secret pepper, so each pepper must be individually discovered, which greatly increases the time taken to attack the password hashes.
Randomly Selected Pepper
In the case of a randomly-selected pepper which is not saved at all, it must be rediscovered every time it is needed. This means that an algorithm to verify a password would effectively need to brute-force the pepper every time. For this reason, algorithms implementing this would not want to use a large value for the pepper, as verification should be reasonably fast.
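A minimal sketch of this verify-by-rediscovery approach. The one-byte pepper and low iteration count are deliberately tiny illustrative values so the brute-force loop stays fast; they are assumptions of the sketch, not recommendations.

```python
import hashlib
import os

PEPPER_BYTES = 1      # deliberately tiny so the example runs quickly
ITERATIONS = 1_000    # illustrative; real systems tune this carefully

def hash_with_random_pepper(password: bytes, salt: bytes) -> bytes:
    pepper = os.urandom(PEPPER_BYTES)  # used once, then thrown away
    return hashlib.pbkdf2_hmac("sha256", password + pepper, salt, ITERATIONS)

def verify(password: bytes, salt: bytes, stored: bytes) -> bool:
    # The pepper was never stored, so verification must rediscover it
    # by trying every possible value (256 candidates for one byte).
    for i in range(256 ** PEPPER_BYTES):
        pepper = i.to_bytes(PEPPER_BYTES, "big")
        candidate = hashlib.pbkdf2_hmac("sha256", password + pepper,
                                        salt, ITERATIONS)
        if candidate == stored:
            return True
    return False

salt = os.urandom(16)
stored = hash_with_random_pepper(b"secret", salt)
assert verify(b"secret", salt, stored)
```

An attacker faces the same multiplier: every password guess must be tried against every possible pepper value, which is also why a larger pepper makes legitimate verification proportionally slower.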
See also
Salt (cryptography)
HMAC
passwd
References
Cryptography
Password authentication |
40496287 | https://en.wikipedia.org/wiki/Touch%20ID | Touch ID | Touch ID is an electronic fingerprint recognition feature designed and released by Apple Inc. that allows users to unlock devices, make purchases in the various Apple digital media stores (iTunes Store, App Store, and Apple Books Store), and authenticate Apple Pay online or in apps. It can also be used to lock and unlock password-protected notes on iPhone and iPad. Touch ID was first introduced in iPhones with 2013's iPhone 5S. In 2015, Apple introduced a faster second-generation Touch ID in the iPhone 6S; a year later in 2016, it made its laptop debut in the MacBook Pro, integrated on the right side of the Touch Bar. Touch ID has been used on all iPads since the iPad Air 2 was introduced in 2014. In MacBooks, each user account can have up to three fingerprints, and a total of five fingerprints across the system. Fingerprint information is stored locally in a secure enclave on the Apple A7 and later chips, not in the cloud, a design choice intended to make it impossible for users or malicious attackers to externally access the fingerprint information.
Apple retained Touch ID on the iPhone 8, 2nd-generation iPhone SE, and the base model iPads, while all iPhones since the iPhone X in 2017, and the higher-end iPad Pro, adopted Face ID facial recognition. The 4th-generation iPad Air and the 6th-generation iPad Mini incorporate a Touch ID sensor on the sleep/wake button. In 2021, Apple unveiled a new line of iMacs that can be configured with Touch ID on the Magic Keyboard.
History
In 2012, Apple acquired AuthenTec, a company focused on fingerprint reading and identification management software, for $356 million. The acquisition led commentators to expect a fingerprint reading feature. Following leaks and speculation in early September, the iPhone 5S was unveiled on September 10, 2013, and was the first phone on a major US carrier to feature the technology. Apple's Vice President of Marketing, Phil Schiller, announced the feature at Apple's iPhone media event and spent several minutes (the major portion of the conference) discussing the feature.
Wells Fargo analyst Maynard Um predicted on September 4, 2013, that a fingerprint sensor in the iPhone 5S would help mobile commerce and boost adoption in the corporate environment. "As consumers increasingly rely on mobile devices to transact and store personal data, a reliable device-side authentication solution may become a necessity," Um said.
With the unveiling of the iPhone 6 and 6 Plus at a keynote event on September 9, 2014, Touch ID was expanded from being used to unlock the device and authenticating App Store purchases to also authenticating Apple Pay. The iPhone 6S incorporates a second-generation Touch ID sensor that is up to twice as fast as the first-generation sensor found in the 5S, 6, and SE (1st generation) phones. As of September 2020, the iPhone 6S, 6S Plus, 7, 7 Plus, 8, 8 Plus, SE (2nd generation), 2016-2020 MacBook Pro, 2018-2020 MacBook Air, iPad Pro 10.5" and 12.9" (2nd generation), iPad Air (2020), and iPad Mini (2021) are the Apple devices which use the second generation sensor. The new Touch ID unlocks almost instantly and posed an issue as it unlocks too fast to read notifications on the lock screen. This is remedied with the iOS 10 update in which a user must press the home button to have the home screen appear. This, however, can be changed in iOS settings so that users can go directly to the home screen after resting their finger on the sensor, similar to previous versions of iOS. Solely placing a finger on the sensor will only unlock the iPhone unless said setting is enabled, and no notifications are currently being displayed on the lock screen.
Generations
Hardware
Touch ID is built into the home button, which is made of laser-cut sapphire crystal and does not scratch easily (scratching would prevent Touch ID from working). It features a stainless steel detection ring to detect the user's finger without pressing it. There is no longer a rounded square icon in the home button, nor is it concave.
The sensor uses capacitive touch to detect the user's fingerprint. The sensor has a thickness of 170 µm, with 500 pixels per inch resolution. The user's finger can be oriented in any direction and it will still be read. Apple says it can read sub-epidermal skin layers, and it will be easy to set up and will improve with every use. The sensor passes a small current through one's finger to create a "fingerprint map" of the user's dermis. Up to 5 fingerprint maps can be stored in the Secure Enclave.
Security and privacy
Touch ID can be bypassed using passcodes set up by the user.
Fingerprint data is stored on the secure enclave inside the Apple A7, A8, A8X, A9, A9X, A10, A10X, A11, A12, A13, A14 processors of an iOS device, or the T1, T2 and M1 in the case of Macs, and not on Apple servers, nor on iCloud. From the Efficient Texture Comparison patent covering Apple's Touch ID technology: In order to overcome potential security drawbacks, Apple's invention includes a process of collapsing the full maps into a sort of checksum, hash function, or histogram. For example, each encrypted ridge map template can have some lower resolution pattern computed and associated with the ridge map. One exemplary pattern could be a histogram of, e.g., the most common angles (e.g., a 2 dimensional (2D) array of common angles). The exemplary pattern could include in each slot an average value over a respective vector of the map. The exemplary pattern could include in each slot a sum of the values over a respective vector of the map. The exemplary pattern could include the smallest or largest value within a respective vector of the map or could be a difference between a largest and a smallest value within the respective vector of the map. Numerous other exemplary embodiments are also possible, and any other exemplary pattern calculation can be used, where the exemplary pattern includes enough associated information to narrow the candidate list, while omitting enough associated information that the unsecured pattern cannot or cannot easily be reverse engineered into a matching texture.
If the user's phone has been rebooted, has not been unlocked for 48 hours, has had its SIM card removed or has Emergency SOS activated, only the passcode a user has created, not their fingerprint, can be used to unlock the device or during other specific use cases.
In September 2013, the German Chaos Computer Club announced that it had bypassed Apple's Touch ID security. A spokesman for the group stated: "We hope that this finally puts to rest the illusions people have about fingerprint biometrics. It is plain pity to use something that you can't change and that you leave everywhere every day as a security token." Similar results have been achieved by using PVA Glue to take a cast of the finger. Others have also used Chaos Computer Club's method but concluded that it is not an easy process in either time or effort, given that the user has to use a high resolution photocopy of a complete fingerprint, special chemicals, and expensive equipment and because the spoofing process takes some time to achieve.
Impact
In a 2013 New York magazine opinion piece, Kevin Roose argued that consumers are generally not interested in fingerprint recognition, preferring to use passcodes instead. Traditionally, he wrote, only businesspeople used biometric recognition, although they believe Touch ID may help bring fingerprint recognition to the masses. Roose stated the feature will also allow application developers to experiment, should Apple open up access to Touch ID later on (which they have done), but that those wary of surveillance agencies such as the US National Security Agency may still choose not to use Touch ID.
Roose also noted that fingerprint technology still has some issues, such as the potential to be hacked, or of the device's not recognizing the fingerprint (for example, when the finger has been injured).
Adrian Kingsley-Hughes, writing for ZDNet, said Touch ID could be useful in bring your own device situations. He said the biometric protection adds another layer of security, removing the ability of people to look over others' shoulders and read their passcode/password. He added that Touch ID would prevent children from racking up thousands of dollars in unwanted purchases when using iPhones owned by adults. He observed that Touch ID was Apple's response to the large number of iPhone crimes, and that the new feature would deter would-be iPhone thieves.
Moreover, he notes that the feature is one of the few that distinguish the iPhone 5S from the 5C. Roose also stated the feature is intended to deter theft. However, Brent Kennedy, a vulnerability analyst at the United States Computer Emergency Readiness Team, expressed concern that Touch ID could be hacked and suggested that people not rely on it right away. Forbes noted a history of fingerprints being spoofed in the past, and cautioned that the fingerprints on a stolen iPhone might be used to gain unauthorized access. However, the article did say that biometrics technology had improved since tests on spoofing fingerprint readers had been conducted.
Kingsley-Hughes suggested the Touch ID as a form of two-factor authentication, combining something one knows (the password) with "something you are" (the fingerprint). Forbes said that, if two-factor authentication is available, it will be an overall improvement for security.
Forbes columnist Andy Greenberg said the fact that fingerprint data was stored on the local device and not in a centralized database was a win for security.
See also
Face ID
References
External links
– official site
Authentication methods
Fingerprints
IOS |
57675884 | https://en.wikipedia.org/wiki/Michael%20J.%20Carey%20%28computer%20scientist%29 | Michael J. Carey (computer scientist) | Michael J. Carey is an American computer scientist. He currently serves as Bren Professor of Information and Computer Science in the Donald Bren School at the University of California, Irvine.
Education
Carey earned his Ph.D. in Computer Science from the University of California at Berkeley in 1983. He also holds an M.S. in Electrical Engineering (Computer Engineering) from Carnegie-Mellon University (earned 1981) and a B.S. (University Honors) in Electrical Engineering and Mathematics from Carnegie-Mellon University (earned 1979).
Life and career
From 1983 to 1995, Carey taught in the Computer Sciences Department at the University of Wisconsin-Madison. Afterwards, he worked as a Research Staff Member and Manager at the IBM Almaden Research Center in San Jose, California.
Carey was elected a member of the National Academy of Engineering in 2002 for contributions to the design, implementation, and evaluation of database systems.
He has been a Donald Bren Professor of Computer and Information Sciences in the Department of Computer Science at the University of California, Irvine since 2008.
Since 2015 Carey has served as a Consulting Chief Architect at Couchbase, Inc.
Carey has published over 200 research papers, journal articles, book chapters and other publications that primarily focus on Big Data management, database management systems, information integration, middleware, parallel and distributed systems, and computer system performance evaluation.
Awards and honors
Fellow, Institute for Electrical and Electronic Engineers (IEEE), 2017
IEEE TCDE Computer Science, Engineering, and Education (CSEE) Impact Award, 2016.
Chancellor's Award for Excellence in Fostering Undergraduate Research, UC Irvine, 2010.
ACM SIGMOD Edgar F. Codd Innovations Award, 2005.
Test of Time Paper Award, ACM SIGMOD Conference, 2004.
Member, National Academy of Engineering, 2002.
Distinguished Alumnus Award, EECS Department, UC Berkeley, 2002.
Fellow, Association for Computing Machinery (ACM), 2000.
Patents
Carey holds 11 patents in the United States.
References
Year of birth missing (living people)
Living people
University of California, Irvine faculty
American computer scientists
UC Berkeley College of Engineering alumni
Carnegie Mellon University alumni
University of Wisconsin–Madison faculty
Members of the United States National Academy of Engineering
Fellow Members of the IEEE
Fellows of the Association for Computing Machinery
American inventors |
8405629 | https://en.wikipedia.org/wiki/De%20bello%20Troiano | De bello Troiano | Daretis Phrygii Ilias De bello Troiano ("The Iliad of Dares the Phrygian: On the Trojan War") is an epic poem in Latin, written around 1183 by the English poet Joseph of Exeter. It tells the story of the ten-year Trojan War as it was known in medieval western Europe. The ancient Greek epic on the subject, the Iliad, was inaccessible; instead, the sources available included the fictional "diaries" of Dictys of Crete and Dares of Phrygia. When Joseph's text was printed for the first time in 1541, it was erroneously attributed to Dares of Phrygia, announced as the long-lost verse version of his story (quibus multis seculis caruimus – which we lacked for many centuries) supposedly put into Latin hexameters by Nepos.
Notes
References
Mortimer, Richard Angevin England 1154-1258 Oxford: Blackwell 1994
External links
English translation by A. G. Rigg available
1541 editio princeps in original Latin (Bavarian State Library)
12th-century Latin books
12th-century poems
Epic poems in Latin
Trojan War literature
1183 works |
32288 | https://en.wikipedia.org/wiki/Usability%20testing | Usability testing | Usability testing is a technique used in user-centered interaction design to evaluate a product by testing it on users. This can be seen as an irreplaceable usability practice, since it gives direct input on how real users use the system. It is more concerned with the design intuitiveness of the product and tested with users who have no prior exposure to it. Such testing is paramount to the success of an end product as a fully functioning application that creates confusion amongst its users will not last for long. This is in contrast with usability inspection methods where experts use different methods to evaluate a user interface without involving users.
Usability testing focuses on measuring a human-made product's capacity to meet its intended purpose/s. Examples of products that commonly benefit from usability testing are food, consumer products, websites or web applications, computer interfaces, documents, and devices. Usability testing measures the usability, or ease of use, of a specific object or set of objects, whereas general human–computer interaction studies attempt to formulate universal principles.
What it is not
Simply gathering opinions on an object or a document is market research or qualitative research rather than usability testing. Usability testing usually involves systematic observation under controlled conditions to determine how well people can use the product. However, often both qualitative research and usability testing are used in combination, to better understand users' motivations/perceptions, in addition to their actions.
Rather than showing users a rough draft and asking, "Do you understand this?", usability testing involves watching people trying to use something for its intended purpose. For example, when testing instructions for assembling a toy, the test subjects should be given the instructions and a box of parts and, rather than being asked to comment on the parts and materials, they should be asked to put the toy together. Instruction phrasing, illustration quality, and the toy's design all affect the assembly process.
Methods
Setting up a usability test involves carefully creating a scenario, or a realistic situation, wherein the person performs a list of tasks using the product being tested while observers watch and take notes (dynamic verification). Several other test instruments such as scripted instructions, paper prototypes, and pre- and post-test questionnaires are also used to gather feedback on the product being tested (static verification). For example, to test the attachment function of an e-mail program, a scenario would describe a situation where a person needs to send an e-mail attachment, and asking him or her to undertake this task. The aim is to observe how people function in a realistic manner, so that developers can identify the problem areas and fix them. Techniques popularly used to gather data during a usability test include think aloud protocol, co-discovery learning and eye tracking.
Hallway testing
Hallway testing, also known as guerrilla usability testing, is a quick and cheap method of usability testing in which people (e.g., those passing by in the hallway) are asked to try using the product or service. This can help designers identify "brick walls", problems so serious that users simply cannot advance, in the early stages of a new design. Anyone but project designers and engineers can be used (they tend to act as "expert reviewers" because they are too close to the project).
Because this type of testing is an example of convenience sampling, the results tend to have a strong bias.
Remote usability testing
In a scenario where usability evaluators, developers and prospective users are located in different countries and time zones, conducting a traditional lab usability evaluation creates challenges both from the cost and logistical perspectives. These concerns led to research on remote usability evaluation, with the user and the evaluators separated over space and time. Remote testing, which facilitates evaluations being done in the context of the user's other tasks and technology, can be either synchronous or asynchronous. The former involves real time one-on-one communication between the evaluator and the user, while the latter involves the evaluator and user working separately. Numerous tools are available to address the needs of both these approaches.
Synchronous usability testing methodologies involve video conferencing or employ remote application sharing tools such as WebEx. WebEx and GoToMeeting are the most commonly used technologies to conduct a synchronous remote usability test. However, synchronous remote testing may lack the immediacy and sense of "presence" desired to support a collaborative testing process. Moreover, managing inter-personal dynamics across cultural and linguistic barriers may require approaches sensitive to the cultures involved. Other disadvantages include having reduced control over the testing environment and the distractions and interruptions experienced by the participants in their native environment. One of the newer methods developed for conducting a synchronous remote usability test is by using virtual worlds.
Asynchronous methodologies include automatic collection of user's click streams, user logs of critical incidents that occur while interacting with the application and subjective feedback on the interface by users. Similar to an in-lab study, an asynchronous remote usability test is task-based and the platform allows researchers to capture clicks and task times. Hence, for many large companies, this allows researchers to better understand visitors' intents when visiting a website or mobile site. Additionally, this style of user testing also provides an opportunity to segment feedback by demographic, attitudinal and behavioral type. The tests are carried out in the user's own environment (rather than labs) helping further simulate real-life scenario testing. This approach also provides a vehicle to easily solicit feedback from users in remote areas quickly and with lower organizational overheads. In recent years, conducting usability testing asynchronously has also become prevalent and allows testers to provide feedback in their free time and from the comfort of their own home.
Expert review
Expert review is another general method of usability testing. As the name suggests, this method relies on bringing in experts with experience in the field (possibly from companies that specialize in usability testing) to evaluate the usability of a product.
A heuristic evaluation or usability audit is an evaluation of an interface by one or more human factors experts. Evaluators measure the usability, efficiency, and effectiveness of the interface based on usability principles, such as the 10 usability heuristics originally defined by Jakob Nielsen in 1994.
Nielsen's usability heuristics, which have continued to evolve in response to user research and new devices, include:
Visibility of system status
Match between system and the real world
User control and freedom
Consistency and standards
Error prevention
Recognition rather than recall
Flexibility and efficiency of use
Aesthetic and minimalist design
Help users recognize, diagnose, and recover from errors
Help and documentation
Automated expert review
Similar to expert reviews, automated expert reviews provide usability testing but through the use of programs given rules for good design and heuristics. Though an automated review might not provide as much detail and insight as reviews from people, they can be finished more quickly and consistently. The idea of creating surrogate users for usability testing is an ambitious direction for the artificial intelligence community.
A/B testing
In web development and marketing, A/B testing or split testing is an experimental approach to web design (especially user experience design), which aims to identify changes to web pages that increase or maximize an outcome of interest (e.g., click-through rate for a banner advertisement). As the name implies, two versions (A and B) are compared, which are identical except for one variation that might impact a user's behavior. Version A might be the one currently used, while version B is modified in some respect. For instance, on an e-commerce website the purchase funnel is typically a good candidate for A/B testing, as even marginal improvements in drop-off rates can represent a significant gain in sales. Significant improvements can be seen through testing elements like copy text, layouts, images and colors.
Multivariate testing or bucket testing is similar to A/B testing but tests more than two versions at the same time.
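The article does not prescribe how to decide whether the difference between versions A and B is real rather than noise; one common choice is a two-proportion z-test, sketched here with made-up traffic figures.

```python
from math import erf, sqrt

def ab_test_p_value(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    # Two-sided p-value for the hypothesis "A and B convert at the same rate".
    rate_a, rate_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (rate_b - rate_a) / se
    # Two-sided tail probability of the standard normal, via the error
    # function, so no external statistics library is needed.
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Hypothetical campaign: 2.0% vs 2.6% click-through over
# 10,000 impressions per variant.
p = ab_test_p_value(200, 10_000, 260, 10_000)
assert p < 0.05  # the improvement is unlikely to be random noise
```

Multivariate tests generalize this to more than two variants, at which point a correction for multiple comparisons becomes necessary.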
Number of test subjects
In the early 1990s, Jakob Nielsen, at that time a researcher at Sun Microsystems, popularized the concept of using numerous small usability tests—typically with only five test subjects each—at various stages of the development process. His argument is that, once it is found that two or three people are totally confused by the home page, little is gained by watching more people suffer through the same flawed design. "Elaborate usability tests are a waste of resources. The best results come from testing no more than five users and running as many small tests as you can afford."
The claim of "Five users is enough" was later described by a mathematical model which states the proportion of uncovered problems U as

U = 1 − (1 − p)^n

where p is the probability of one subject identifying a specific problem and n the number of subjects (or test sessions). This model shows up as an asymptotic graph towards the number of real existing problems (see figure below).
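The model U = 1 − (1 − p)^n implies rapidly diminishing returns, which are easy to tabulate. The value p = 0.31 below is the average per-session discovery rate Nielsen and Landauer reported in their study; it is an assumption imported for illustration, not a universal constant.

```python
def problems_found(p: float, n: int) -> float:
    # Expected share of existing problems uncovered after n test sessions,
    # assuming each session independently finds any problem with probability p.
    return 1 - (1 - p) ** n

# With p = 0.31, about 84% of problems are expected after five sessions,
# which is the basis of the "five users is enough" claim.
for n in (1, 3, 5, 10, 15):
    print(f"{n:2d} users: {problems_found(0.31, n):.1%} of problems found")
```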
In later research Nielsen's claim has been questioned using both empirical evidence and more advanced mathematical models. Two key challenges to this assertion are:
Since usability is related to the specific set of users, such a small sample size is unlikely to be representative of the total population so the data from such a small sample is more likely to reflect the sample group than the population they may represent
Not every usability problem is equally easy-to-detect. Intractable problems happen to decelerate the overall process. Under these circumstances the progress of the process is much shallower than predicted by the Nielsen/Landauer formula.
It is worth noting that Nielsen does not advocate stopping after a single test with five users; his point is that testing with five users, fixing the problems they uncover, and then testing the revised site with five different users is a better use of limited resources than running a single usability test with 10 users. In practice, the tests are run once or twice per week during the entire development cycle, using three to five test subjects per round, and with the results delivered within 24 hours to the designers. The number of users actually tested over the course of the project can thus easily reach 50 to 100 people. Research shows that user testing conducted by organisations most commonly involves the recruitment of 5-10 participants.
In the early stage, when users are most likely to immediately encounter problems that stop them in their tracks, almost anyone of normal intelligence can be used as a test subject. In stage two, testers will recruit test subjects across a broad spectrum of abilities. For example, in one study, experienced users showed no problem using any design, from the first to the last, while naive users and self-identified power users both failed repeatedly. Later on, as the design smooths out, users should be recruited from the target population.
When the method is applied to a sufficient number of people over the course of a project, the objections raised above become addressed: The sample size ceases to be small and usability problems that arise with only occasional users are found. The value of the method lies in the fact that specific design problems, once encountered, are never seen again because they are immediately eliminated, while the parts that appear successful are tested over and over. While it's true that the initial problems in the design may be tested by only five users, when the method is properly applied, the parts of the design that worked in that initial test will go on to be tested by 50 to 100 people.
Example
A 1982 Apple Computer manual for developers advised on usability testing:
"Select the target audience. Begin your human interface design by identifying your target audience. Are you writing for businesspeople or children?"
Determine how much target users know about Apple computers, and the subject matter of the software.
Steps 1 and 2 permit designing the user interface to suit the target audience's needs. Tax-preparation software written for accountants might assume that its users know nothing about computers but are expert on the tax code, while such software written for consumers might assume that its users know nothing about taxes but are familiar with the basics of Apple computers.
Apple advised developers, "You should begin testing as soon as possible, using drafted friends, relatives, and new employees":
Designers must watch people use the program in person, because
Education
Usability testing has been a formal subject of academic instruction in different disciplines. Usability testing is important to composition studies and online writing instruction (OWI). Scholar Collin Bjork argues that usability testing is "necessary but insufficient for developing effective OWI, unless it is also coupled with the theories of digital rhetoric."
See also
ISO 9241
Software testing
Educational technology
Universal usability
Commercial eye tracking
Don't Make Me Think
Software performance testing
System usability scale (SUS)
Test method
Tree testing
RITE Method
Component-based usability testing
Crowdsourced testing
Usability goals
Heuristic evaluation
Diary studies
Usability of web authentication systems
References
External links
Usability.gov
Usability
Software testing
Educational technology
Product testing |
2246588 | https://en.wikipedia.org/wiki/Rkhunter | Rkhunter | rkhunter (Rootkit Hunter) is a Unix-based tool that scans for rootkits, backdoors and possible local exploits. It does this by comparing SHA-1 hashes of important files with known good ones in online databases, searching for default directories (of rootkits), wrong permissions, hidden files, and suspicious strings in kernel modules, and running special tests for Linux and FreeBSD. rkhunter is notable for its inclusion in popular operating systems (Fedora, Debian, etc.).
The tool has been written in Bourne shell, to allow for portability. It can run on almost all UNIX-derived systems.
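The hash-comparison idea can be sketched as follows. The known-good table is a placeholder, not a real rkhunter database, and a real scanner checks far more properties than a single digest per file.

```python
import hashlib

# Hypothetical known-good table; rkhunter consults real online databases,
# and these entries are placeholders for illustration only.
KNOWN_GOOD = {
    "/bin/ls": "3f786850e387550fdab836ed7e6dc881de23001b",
}

def sha1_of(path: str) -> str:
    # Stream the file in chunks so large binaries don't need to fit in memory.
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def looks_tampered(path: str) -> bool:
    # A digest that no longer matches its known-good value suggests the
    # binary may have been replaced, e.g. by a rootkit's trojaned copy.
    expected = KNOWN_GOOD.get(path)
    return expected is not None and sha1_of(path) != expected
```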
Development
In 2003, developer Michael Boelen released the first version of Rootkit Hunter. After several years of development, in early 2006, he agreed to hand over development to a development team. Since that time, eight people have been working to set up the project properly and work towards the much-needed maintenance release. The project has since been moved to SourceForge.
See also
chkrootkit
Lynis
OSSEC
Samhain (software)
Host-based intrusion detection system comparison
Hardening (computing)
Linux malware
MalwareMustDie
Rootkit
References
External links
Old rkhunter web page
Computer security software
Unix security-related software
Rootkit detection software |
29272231 | https://en.wikipedia.org/wiki/A.%20Mendelson%20and%20Son%20Company%20Building | A. Mendelson and Son Company Building | The A. Mendelson and Son Company Building is located on Broadway in Albany, New York, United States. It is a brick industrial building erected in the early 20th century. In 2003 it was listed on the National Register of Historic Places. It is one of the few intact examples of the early 20th century industrial architecture of Albany.
Originally built to replace an earlier building damaged by fire in 1904, it was used to manufacture lye and potash. It is one of the few remaining industrial buildings from that period on Albany's waterfront, making a distinct contrast to an adjacent building erected just a decade later. Several subsequent owners have left it largely intact, and it remains in industrial use, home for the past four decades to a tape and label manufacturer, as well as some other small businesses.
Building
The Mendelson Building occupies most of the block on the west side of Broadway between Fourth Avenue and Vine Street. To the east, between Broadway and the Hudson River, are parking lots, vacant land and another large building. Two other industrial buildings, of a similar size and shape but not as old, are to the north. A modern warehouse is on the south. On the west are railroad tracks used by CSX and the elevated roadways of Interstate 787, dividing the waterfront from the residential neighborhoods of the South End.
The building itself is a three-story 10-by-13-bay brick structure on a foundation of cut bluestone and topped by a flat roof. The facade bricks are laid in common bond. Most windows are 12-over-12 light wood double-hung sash windows with mild segmental-arched lintels. Smaller, narrower windows are interspersed between them at irregular intervals, most significantly on all three stories between the fourth and fifth bays from the east on the north side. A small round window is on the third bay south on the east side.
A belt course connects the window lintels. These are the building's only decoration besides "Port Business Center" painted in large black letters on a yellow background just below the eastern roofline. The bricks rise several feet above the roofline to form a parapet capped with terra cotta barrel tiles. At the southwest corner the roof is pierced by the elevator's motor housing, and a small chimney rises from the northeast.
In the fourth bay west from Broadway along the north side is the building's main business entrance. A small set of steps leads up to a heavy wooden door with transom. A similar secondary entrance is in the fourth bay from the east façade, just west of a large loading door similar to that in the middle of the east facade. Along the south is a large metal canopy, and another loading door is in the center. The west of the building has loading doors on the first and second floors.
Inside, the floor layout reflects both the temporary nature of industrial space and the cage structural system. The building's structural load is carried by the steel beams and iron columns inside. The exterior walls are self-supporting; corbeled pilasters on their interiors support the beams, and the iron columns are within the masonry of the building's two interior north-south firewalls. Secondary support is provided by eight-inch-thick (20 cm) iron columns throughout.
The first floor is concrete. On the upper two stories, timber joists rest across the steel beams, supporting wood planking several inches thick. This is covered by steel plate in many areas to make moving barrels around easier. The firewalls have two entries with relieving arches and iron sliding doors on tracks. The elevator in the southwest corner is the original traction elevator, with its motor on top to free up space on the first floor. Most partitions within the firewalled areas are made of materials that can easily be removed without seriously damaging the building.
History
The Erie Canal and the development of railroads made Albany a bustling inland port by the late 19th century. Industry in the city found the area around the port an ideal business location, and by the end of the century there were 16 such buildings along Broadway similar to the Mendelson building, with the river and port facilities on one side and the Delaware and Hudson Railroad on the other. The site at 40 Broadway is first known to have been used by the John A. Goewey Company, which built a five-story brick building to make holloware, in particular ham boilers and tea kettles. The complex included a foundry and shops on the west side.
In 1882 A. Winterburn started the Capital City Malleable Iron Company in the building, primarily making agricultural implements and carriage irons. The interior was used for both pattern shops and offices. Thirteen years later, in 1895, it was transferred to the A. Mendelson and Son Company, makers of lye and potash.
A 1904 fire destroyed the upper two floors and, along with the water used to extinguish the fire, severely damaged the lower stories. The Times Union, Albany's major daily newspaper then as now, reported that 150 people had been left jobless and a considerable amount of inventory destroyed. The owners promised the building would be rebuilt as soon as possible.
It has not been determined exactly when the current building was constructed. Albany did not start issuing building permits until 1909, and there are none for 40 Broadway. Based on the owners' promise, it has been assumed that the site was cleared and the current, smaller building was in place around 1905 at the earliest, and definitely before 1909. It is possible that some elements of the original building's foundation were reused in the current building.
Aesthetically, the current building is typical of factories from the turn-of-the-century period. Brick is used extensively both inside and out, with minimal decoration. The large windows allowed natural light into interior spaces that were difficult to light fully with electricity. Elevators were also standard for the time in new industrial construction. A new feature in the building resulting from the destruction of the old was an automatic fire sprinkler system.
In 1909, Mendelson bought the building that existed at that time to its north, which had been a grain wholesaler and distributor. A bridge was built over Fourth Avenue to connect them. Seven years later, in 1916, the company demolished that building and replaced it with the current building. While similar in size and form to its neighbor, the newer building showed a distinct change had taken place in industrial aesthetics from the era in which 40 Broadway had been constructed. It used steel framing and was faced in concrete, with rows of small steel sash windows letting in light rather than the larger wooden ones.
Mendelson sold both buildings in 1919 to the B.T. Babbitt Company, another lye maker, which had been doing business since the 1830s. Babbitt expanded its product line to include soaps and soap powders, but made no changes to the building itself. They in turn sold the buildings in 1965.
In 1969, 40 Broadway was purchased by Greenbush Tape & Label; the family-owned business moved there, where it has been ever since. At some point before that, the steel canopy had been added over the shipping and receiving area on the south facade. Greenbush removed the bridges to the 1916 building and added a glass door on the second floor of the south facade to allow access to the canopy. In 2000 it replaced the roof. It has worked hard to preserve the building's architectural integrity. Other businesses in the building include an art gallery, recently restored after a 2009 fire, and an antique pool table dealership.
See also
National Register of Historic Places listings in Albany, New York
References
External links
Greenbush Tape & Label
Buildings and structures completed in 1909
Industrial buildings completed in 1909
Industrial buildings and structures on the National Register of Historic Places in New York (state)
National Register of Historic Places in Albany, New York |
2511353 | https://en.wikipedia.org/wiki/Pushover%20%28video%20game%29 | Pushover (video game) | Pushover is a puzzle-platform game developed by Red Rat Software and published by Ocean Software in 1992 for the Amiga, Atari ST, MS-DOS and Super NES. The game was sponsored by Smiths' British snack Quavers (now owned by Walkers), and the plot revolves around the then Quavers mascot Colin Curly losing his Quavers packets down a giant ant hill. The player is tasked with controlling G.I. Ant, a large soldier ant, to recover the Quavers by solving a series of puzzles. The SNES version lacks the Quavers branding, and instead the aim is to recover bundles of cash dropped down the ant hill by Captain Rat.
Gameplay
The game consists of 100 levels of increasing complexity over nine different themed worlds. Each level features several interconnected platforms holding a number of "dominoes". The aim is to rearrange the dominoes, such that with a single push, all of the dominoes are toppled, thus opening the exit to the next level. There are 11 different types of domino, identified by red and yellow patterns, each with different actions. The player controls G.I. Ant, who can move certain dominoes by carrying them one at a time.
Various factors can result in failure to complete a level. As well as toppling all of the dominoes, the player must be able to access the exit door once the dominoes have fallen. For instance, the player will be unable to reach the exit if a ledge leading to the exit has been destroyed, or if a gap leading to the exit has not been bridged, or if a line of dominoes lie across the exit. G.I. Ant may die by falling from a large height, by falling off the bottom of the screen, or by being crushed under a falling domino. The player is then greeted with the message "You Failed, You Died" and has to restart the level. Also, the level will be failed if any dominoes are destroyed by landing one domino on top of another.
Each level has a time limit during which it must be completed. However, if the time runs out the player is still able to continue with the puzzle if they wish. By pausing the game once the time has run out, a small hint will be displayed, giving advice on how to complete the level. As a side note, the hint for level 98 informs the player that the game's designer cannot remember how to complete the level without trickery ("Use a drop! There is a way to make it work with a push, but I can't find it!").
The themed worlds, in order, are an industrial complex, an Aztec world, a space station, an electronic world, a Greek temple, a Medieval castle, a Meccano-inspired world, a dungeon and a Japanese temple. Each world has 11 levels, making a total of 99 regular levels. A packet of Colin's Quavers is retrieved after each world, with nine packets in all to be collected. Many of the early levels are tutorials demonstrating how each type of domino will act. Often there is only a single solution to each level, though some levels have multiple solutions. The final level, level 100, must be completed using dominoes with hidden markings.
A password system allows the player to continue an earlier game, without having to restart from the first level. Additionally, upon completing a level the player gains a token, which once a level has been failed, allows the player to return to the point before the domino push, rather than having to return to the initial state of the level.
Development
Creative differences between Red Rat Software and Ocean Software over branding and graphical changes overshadowed the title, and relations between the two parties broke down once Red Rat took legal action to regain creative control and claim a breach of contract. The legal battle led to the downfall of Red Rat Software, which was unable to fund ongoing legal costs against the much deeper pockets of Ocean Software.
Reception
The game was reviewed in 1993 in Dragon #193 by Hartley, Patricia, and Kirk Lesser in the "Role of Computers" column. The reviewers gave the game 5 out of 5 stars.
Entertainment Weekly gave the game a B- and wrote that "The theme of Pushover (Ocean of America, for Super NES) is ingenious — players have to line up 10 kinds of dominoes, then get them all to fall with a single push — but the static execution will have small kids dozing off way before their bedtime."
References
External links
Pushover at Amiga Hall of Light
1992 video games
Advergames
Amiga games
Atari ST games
DOS games
Ocean Software games
Piko Interactive games
Puzzle-platform games
Super Nintendo Entertainment System games
Video games about insects
Video games developed in the United Kingdom |
4245884 | https://en.wikipedia.org/wiki/List%20of%20spreadsheet%20software | List of spreadsheet software | The following is a list of spreadsheets.
Free and open-source software
Cloud and on-line spreadsheets
Collabora Online Calc — Enterprise-ready LibreOffice.
Sheetster – "Community Edition" is available under the Affero GPL
Simple Spreadsheet
Tiki Wiki CMS Groupware includes a spreadsheet since 2004 and migrated to jQuery.sheet in 2010.
Spreadsheets that are parts of suites
Collabora Online Calc — Enterprise-ready LibreOffice, included with Online, Mobile and Desktop apps
Gnumeric — for Linux. Started as the GNOME desktop spreadsheet. Reasonably lightweight but has very advanced features.
KSpread — following the fork of the Calligra Suite from KOffice in mid-2010, superseded by KCells in KOffice and Sheets in the Calligra Suite.
LibreOffice Calc — developed for MS Windows, Linux, BSD and Apple Macintosh (Mac) operating systems by The Document Foundation. The Document Foundation was formed in mid-2010 by several large organisations such as Google, Red Hat, Canonical (Ubuntu) and Novell along with the OpenOffice.org community (developed by Sun) and various OpenOffice.org forks, notably Go-oo. Go-oo had been the "OpenOffice" used in Ubuntu and elsewhere. Started as StarOffice in the late 1990s, it became OpenOffice under Sun and then LibreOffice in mid-2010. The Document Foundation works with external organisations such as NeoOffice and Apache Foundation to help drive all three products forward.
NeoOffice Calc — for Mac. Started as an OpenOffice.org port to Mac, but by using the Mac-specific Aqua user interface, instead of the more widely used X11 windowing server, it aimed to be far more stable than the normal ports of other suites.
OpenOffice.org Calc — for MS Windows, Linux and the Apple Macintosh. Started as StarOffice. Sun changed the name to OpenOffice.org and developed a community of developers (and others) between the late 1990s and mid-2010. Oracle gave it to the Apache Foundation in 2011. IBM contributed their fork of OpenOffice.org, IBM Lotus Symphony, to Apache a few weeks later.
Siag — for Linux, OpenBSD and Apple Mac OS X. A simple old spreadsheet, part of Siag Office.
Sheets — for MS Windows, Linux, FreeBSD, Apple Mac OS X and Haiku. Part of the extensive Calligra Suite. Possibly still mainly for Linux, but ports have been developed for other operating systems.
Standalone spreadsheets
sc
GNU Oleo
Pyspread
Proprietary software
Online spreadsheets
EditGrid – access, collaborate and share spreadsheets online, with API support; discontinued since 2014
Google Sheets – as part of Google Docs
Zoho Sheet – Spreadsheet on the cloud that allows real-time collaboration and more, for free
iRows – closed since 31 December 2006
JotSpot Tracker – acquired by Google Inc.
Smartsheet – Online spreadsheet for project management, interactive Gantt, file sharing, integrated with Google Apps
ThinkFree Online Calc – as part of the ThinkFree Office online office suite, using Java
Airtable – a spreadsheet-database hybrid, with the features of a database but applied to a spreadsheet.
Spreadsheets that are parts of suites
Ability Office Spreadsheet – for MS Windows.
Apple iWork Numbers, included with Apple's iWork '08 suite exclusively for Mac OS X v10.4 or higher.
AppleWorks – for MS Windows and Macintosh. This is a further development of the historical Claris Works Office suite.
WordPerfect Office Quattro Pro – for MS Windows. Was one of the big three spreadsheets (the others being Lotus 1-2-3 and Excel).
EasyOffice EasySpreadsheet – for MS Windows. No longer freeware, this suite aims to be more user friendly than competitors.
Framework – for MS Windows. Historical office suite still available and supported. It includes a spreadsheet.
IBM Lotus Symphony – freeware for MS Windows, Apple Mac OS X and Linux.
Kingsoft Office Spreadsheets 2012 – For MS Windows. Both free and paid versions are available. It can handle Microsoft Excel .xls and .xlsx files, and also produce other file formats such as .et, .txt, .csv, .pdf, and .dbf. It supports multiple tabs, VBA macro and PDF converting.
Lotus SmartSuite Lotus 123 – for MS Windows. In its MS-DOS (character cell) version, widely considered to be responsible for the explosion of popularity of spreadsheets during the 80s and early 90s.
Microsoft Office Excel – for MS Windows and Apple Macintosh. The proprietary spreadsheet leader.
Microsoft Works Spreadsheet – for MS Windows (previously MS-DOS and Apple Macintosh). Only allows one sheet at a time.
PlanMaker – for MS Windows, Linux, MS Windows Mobile and CE; part of SoftMaker Office
Quattro Pro – part of WordPerfect Office
StarOffice Calc – Cross-platform. StarOffice was originally developed by the German company Star Division which was purchased by Sun in 1998. The code was made open source and became OpenOffice.org. Sun continues developing the commercial version which periodically integrates the open source code with their own and third party code to make new low price versions.
Standalone spreadsheets
As-Easy-As – from Trius, Inc.; unsupported; last MS-DOS and Windows versions available with free full license key.
Multi-dimensional spreadsheets
Javelin
Lotus Improv
Quantrix Financial Modeler
Spreadsheets on different paradigms
DADiSP – Combines the numerical capability of MATLAB with a spreadsheet like interface.
Javelin
Lotus Improv
Resolver One – a business application development tool that represents spreadsheets as IronPython programs, created and executed in real time and allowing the spreadsheet flow to be fully programmed
Spreadsheet 2000
Spreadsheet-related developmental software
ExtenXLS – Java Spreadsheet Toolkit.
Specifications
* 32-bit addressable memory on Microsoft Windows, i.e. ~2.5 GB.
Historical
VisiCalc – The first widely used spreadsheet, with A1 notation etc.
Lotus 1-2-3 – Took the market from VisiCalc in the early 1980s.
Lotus Improv – Novel design that went beyond A1 notation.
Lotus Symphony – for DOS
Multiplan – Microsoft's early spreadsheet, predecessor of Excel.
20/20 – Multiplatform competitor to 1-2-3 with database integration and real-time data updating.
3D-Calc – multi-dimensional spreadsheet for the Atari ST
SuperCalc – CP/M-80 Included with early Osborne computers. It also was ported to MS-DOS and to Microsoft Windows.
Dynacalc — from Computer Systems Center, similar to VisiCalc. It was designed to run on Microware's OS-9, a Unix-like operating system.
VP Planner – Similar in look and feel to Lotus 1-2-3, but included 5 level multi-dimensional database
Wingz – multi-dimensional spreadsheet from Informix (1988)
Boeing Calc – was a spreadsheet package written by subsidiary of aviation manufacturer Boeing (1985).
See also
Comparison of spreadsheets
Logical spreadsheet
References
Spreadsheets |
27974706 | https://en.wikipedia.org/wiki/Harman%20Connected%20Services | Harman Connected Services | Harman Connected Services, often abbreviated to HCS, is an American subsidiary of Samsung Electronics through Harman International Industries. The Connected Services division supplies software services to the mobile communications industry, spanning cloud, mobile, analytics, design, and software engineering. Harman has a workforce of approximately 30,000 people across the Americas, Europe, and Asia.
On January 22, 2015, Harman acquired Symphony Teleca from the Symphony Technology Group. The deal was valued at US$780 million. Symphony Teleca was subsequently integrated and rebranded as Harman Connected Services and, in March 2017, Harman became a wholly owned subsidiary of Samsung Electronics.
History
Early history
Teleca was founded in 1992 by a team of software engineers, based in Manchester. Over the coming years, Teleca's telecom software was utilized by Motorola, Racal, Digital, GEC, Hewlett-Packard, Psion, and Siemens. The presence in the market led to a number of major mobile device partnerships, which led them to expand their offering into the mobile phone market. These vendors included Nokia, Motorola and SonyEricsson.
In early 2000, it was acquired by Sigma AB, a leading Swedish engineering services business, and became its UK subsidiary. Following the takeover, Teleca expanded its workforce of 2,500 across Europe into a number of new locations, including Łódź, Poland, and Seoul, South Korea.
Teleca then acquired Telma Soft, a Russian software company, in 2006. Over the next two years, Teleca opened a number of locations in both India and China.
Symphony Technology Group announced in 2008 that they had acquired Teleca and delisted it from the Stockholm stock exchange. Later that year, Telma Soft was then rebranded to become the Russian Teleca operation.
In 2013, the company had 9000+ employees in 35 countries with the largest number of employees in India spread across many locations, including Bangalore, Pune, Gurgaon and Chennai.
Symphony Ownership
Symphony Services was founded in 2002 with initial financial support from Romesh T. Wadhwani, Chairman, CEO and founder of Symphony Technology Group. In 2003, Symphony raised growth capital from TH Lee Putnam Ventures. In 2004, Symphony Services purchased Stonehouse Technologies for $6.7 million.
Following the purchase of Teleca, it became Symphony Teleca and part of the Symphony Services division. The new services division focused on software product engineering outsourcing services and was headquartered in Palo Alto, California, with major global operations centers in the U.S., India and China. The company was assessed as CMMI level 3.
In 2010, Symphony acquired Proteans Software Solutions, an Indian company providing software engineering services to the small and medium ISV space, followed by CoreObjects Software Inc, a Los Angeles-based company specializing in embedded product development for technology companies.
In 2011, Symphony Services acquired JPC Software, an Argentina entity that offers IT solutions, consulting and support services based in Buenos Aires.
In 2012, Symphony Services Corporation merged with Teleca, creating Symphony Teleca Corporation with a focus to help clients manage the global convergence of software, the cloud, and mobility. Symphony Teleca announced on 10 April 2014 that they would be acquiring Aditi Technologies for an undisclosed amount. Following the acquisition, Sanjay Dhawan took over as CEO and Pradeep Rathinam was appointed as Aditi's president. Aditi subsequently became an independent business unit of Symphony Teleca.
Harman Connected Services
In 2015, it was announced that Harman International Industries were interested in acquiring Symphony Teleca. Harman and Symphony Technology Group agreed on a deal worth US$780 million. Teleca was rebranded Harman Connected Services, with a focus on producing software for all Harman-related products.
As well as Symphony Teleca, Harman also acquired Red Bend Software. The total price for the acquisition was $200 million, with $170 million in shares and $30 million in cash once certain milestones were reached. Red Bend's software allows cellular operators to remotely revise software installed in cellular devices, and it is used on more than 2 billion mobile phones globally. TowerSec was another company acquired by Harman in 2016. The Israel-based cyber security company focused on security for the automotive industry.
Collectively, the acquired companies were merged to form Harman Connected Services. Its parent company was acquired in 2016 for US$8 billion by Samsung Electronics.
Global locations
Certifications
ISO 9001:2008
References
Further reading
The Hindu Business Line : Symphony Service, Aldata in pact
External links
Symphony SMS - Telecom Expense Management - TEMS.
EFYTimes.com, May 27, 2010 - Inauguration of Teleca India's new premises
February 15, 2010 - Teleca's collaboration with Imagination Technologies to bring optimized technology for Adobe Flash and Flash Lite
February 10, 2010 - Teleca's partner SVOX and what they exhibited in Teleca's stand at Mobile World Congress
TMCnet.com, February 10, 2010 - Teleca's partnership agreement with Antix
Dr. Dobb's, December 19, 2009 - Article by Teleca's Andrew Till about avoiding open source pitfalls
February 13, 2009 - Article about Teleca's partnership with TAT in Swedish daily
SOA World Magazine, February 10, 2009 - Article about Teleca's work to enable Android for CDMA phone market
Outsourcing companies
Software companies based in California
Harman International
Software companies of the United States
2001 establishments in California |
64020 | https://en.wikipedia.org/wiki/Multiprocessing | Multiprocessing | Multiprocessing is the use of two or more central processing units (CPUs) within a single computer system. The term also refers to the ability of a system to support more than one processor or the ability to allocate tasks between them. There are many variations on this basic theme, and the definition of multiprocessing can vary with context, mostly as a function of how CPUs are defined (multiple cores on one die, multiple dies in one package, multiple packages in one system unit, etc.).
According to some on-line dictionaries, a multiprocessor is a computer system having two or more processing units (multiple processors) each sharing main memory and peripherals, in order to simultaneously process programs. A 2009 textbook defined multiprocessor system similarly, but noting that the processors may share "some or all of the system’s memory and I/O facilities"; it also gave tightly coupled system as a synonymous term.
At the operating system level, multiprocessing is sometimes used to refer to the execution of multiple concurrent processes in a system, with each process running on a separate CPU or core, as opposed to a single process at any one instant. When used with this definition, multiprocessing is sometimes contrasted with multitasking, which may use just a single processor but switch it in time slices between tasks (i.e. a time-sharing system). Multiprocessing, however, means true parallel execution of multiple processes using more than one processor. Multiprocessing does not necessarily mean that a single process or task uses more than one processor simultaneously; the term parallel processing is generally used to denote that scenario. Other authors prefer to refer to the operating system techniques as multiprogramming and reserve the term multiprocessing for the hardware aspect of having more than one processor. The remainder of this article discusses multiprocessing only in this hardware sense.
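The process-level sense of the term can be made concrete with Python's standard multiprocessing module, which starts separate operating-system processes that the kernel can schedule on different CPUs at the same time. This is a minimal sketch: process start-up has real overhead, so spreading work this way only pays off for genuinely CPU-bound tasks.

```python
import multiprocessing as mp

def cpu_bound(n):
    """A CPU-bound task: the sum of squares below n."""
    return sum(i * i for i in range(n))

def run_parallel(inputs):
    """Distribute independent tasks across worker processes.

    Each worker is a separate OS process, so on a machine with more
    than one CPU these tasks can execute truly in parallel, unlike
    time-sliced multitasking on a single processor.
    """
    with mp.Pool(processes=min(len(inputs), mp.cpu_count())) as pool:
        return pool.map(cpu_bound, inputs)

if __name__ == "__main__":
    print(run_parallel([10, 100, 1000]))
```

Whether the processes actually run simultaneously is decided by the hardware and the scheduler; on a single-CPU machine the same code degenerates to time-sliced multitasking, which is exactly the distinction drawn above.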
In Flynn's taxonomy, multiprocessors as defined above are MIMD machines. As the term "multiprocessor" normally refers to tightly coupled systems in which all processors share memory, multiprocessors are not the entire class of MIMD machines, which also contains message passing multicomputer systems.
Pre-history
Possibly the first expression of the idea of multiprocessing was written by Luigi Federico Menabrea in 1842, about Charles Babbage's analytical engine (as translated by Ada Lovelace): "the machine can be brought into play so as to give several results at the same time, which will greatly abridge the whole amount of the processes."
Key topics
Processor symmetry
In a multiprocessing system, all CPUs may be equal, or some may be reserved for special purposes. A combination of hardware and operating system software design considerations determine the symmetry (or lack thereof) in a given system. For example, hardware or software considerations may require that only one particular CPU respond to all hardware interrupts, whereas all other work in the system may be distributed equally among CPUs; or execution of kernel-mode code may be restricted to only one particular CPU, whereas user-mode code may be executed in any combination of processors. Multiprocessing systems are often easier to design if such restrictions are imposed, but they tend to be less efficient than systems in which all CPUs are utilized.
Systems that treat all CPUs equally are called symmetric multiprocessing (SMP) systems. In systems where all CPUs are not equal, system resources may be divided in a number of ways, including asymmetric multiprocessing (ASMP), non-uniform memory access (NUMA) multiprocessing, and clustered multiprocessing.
Master/slave multiprocessor system
In a master/slave multiprocessor system, the master CPU is in control of the computer and the slave CPU(s) performs assigned tasks. The CPUs can be completely different in terms of speed and architecture. Some (or all) of the CPUs can share a common bus, each can also have a private bus (for private resources), or they may be isolated except for a common communications pathway. Likewise, the CPUs can share common RAM and/or have private RAM that the other processor(s) cannot access. The roles of master and slave can change from one CPU to another.
An early example of a master/slave multiprocessor system is the Tandy/Radio Shack TRS-80 Model 16 desktop computer, which came out in February 1982 and ran the multi-user/multi-tasking Xenix operating system, Microsoft's version of UNIX (called TRS-XENIX). The Model 16 has three microprocessors: an 8-bit Zilog Z80 CPU running at 4 MHz, a 16-bit Motorola 68000 CPU running at 6 MHz, and an Intel 8021 in the keyboard. When the system was booted, the Z80 was the master and the Xenix boot process initialized the slave 68000, then transferred control to the 68000, whereupon the CPUs changed roles and the Z80 became a slave processor responsible for all I/O operations including disk, communications, printer and network, as well as the keyboard and integrated monitor, while the operating system and applications ran on the 68000 CPU. The Z80 could also be used to do other tasks.
The earlier TRS-80 Model II, which was released in 1979, could also be considered a multiprocessor system as it had both a Z80 CPU and an Intel 8021 microprocessor in the keyboard. The 8021 made the Model II the first desktop computer system with a separate detachable lightweight keyboard connected by a single thin flexible wire, and likely the first keyboard to use a dedicated microprocessor, both attributes that would be copied years later by Apple and IBM.
Instruction and data streams
In multiprocessing, the processors can be used to execute a single sequence of instructions in multiple contexts (single instruction, multiple data or SIMD, often used in vector processing), multiple sequences of instructions in a single context (multiple instruction, single data or MISD, used for redundancy in fail-safe systems and sometimes applied to describe pipelined processors or hyper-threading), or multiple sequences of instructions in multiple contexts (multiple instruction, multiple data or MIMD).
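As a loose analogy in plain Python — not real vector hardware, just a way to make the instruction-stream/data-stream distinction concrete — SIMD applies one operation across many data elements, while MIMD runs independent operations on independent data:

```python
def simd(op, data):
    """SIMD analogy: one instruction stream (op) applied to many
    data elements at once."""
    return [op(x) for x in data]

def mimd(tasks):
    """MIMD analogy: tasks is a list of (operation, datum) pairs,
    conceptually one independent instruction/data stream per
    processor."""
    return [op(x) for op, x in tasks]

print(simd(lambda x: x * x, [1, 2, 3]))  # → [1, 4, 9]
print(mimd([(abs, -2), (len, "abc")]))   # → [2, 3]
```

In real hardware the SIMD case is a single vector instruction operating on lanes of a register, and the MIMD case is separate processors each fetching their own instructions, but the shape of the distinction is the same.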
Processor coupling
Tightly coupled multiprocessor system
Tightly coupled multiprocessor systems contain multiple CPUs that are connected at the bus level. These CPUs may have access to a central shared memory (SMP or UMA), or may participate in a memory hierarchy with both local and shared memory (NUMA). The IBM p690 Regatta is an example of a high end SMP system. Intel Xeon processors dominated the multiprocessor market for business PCs and were the only major x86 option until the release of AMD's Opteron range of processors in 2004. Both ranges of processors had their own onboard cache but provided access to shared memory; the Xeon processors via a common pipe and the Opteron processors via independent pathways to the system RAM.
Chip multiprocessing, also known as multi-core computing, involves more than one processor placed on a single chip and can be thought of as the most extreme form of tightly coupled multiprocessing. Mainframe systems with multiple processors are often tightly coupled.
Loosely coupled multiprocessor system
Loosely coupled multiprocessor systems (often referred to as clusters) are based on multiple standalone single or dual processor commodity computers interconnected via a high speed communication system (Gigabit Ethernet is common). A Linux Beowulf cluster is an example of a loosely coupled system.
Tightly coupled systems perform better and are physically smaller than loosely coupled systems, but have historically required greater initial investments and may depreciate rapidly; nodes in a loosely coupled system are usually inexpensive commodity computers and can be recycled as independent machines upon retirement from the cluster.
Power consumption is also a consideration. Tightly coupled systems tend to be much more energy efficient than clusters. This is because considerable economy can be realized by designing components to work together from the beginning in tightly coupled systems, whereas loosely coupled systems use components that were not necessarily intended specifically for use in such systems.
Loosely coupled systems have the ability to run different operating systems or OS versions on different systems.
See also
Multiprocessor system architecture
Symmetric multiprocessing
Asymmetric multiprocessing
Multi-core processor
BMDFM – Binary Modular Dataflow Machine, an SMP MIMD runtime environment
Software lockout
OpenHMPP
References
Parallel computing
Classes of computers
Computing terminology |
29504396 | https://en.wikipedia.org/wiki/Video%20games%20in%20Japan | Video games in Japan | Video games are a major industry in Japan. Japanese game development is often identified with the golden age of video games, including Nintendo under Shigeru Miyamoto and Hiroshi Yamauchi, Sega during the same time period, Sony Computer Entertainment when it was based in Tokyo, and other companies such as Taito, Namco, Capcom, Square Enix, Konami, NEC, and SNK, among others.
The space is known for the catalogs of several major publishers, all of whom have competed in the video game console and video arcade markets at various points. Released in 1966, Periscope was a major arcade hit in Japan, preceding several decades of success in the arcade industry there. Nintendo, a former hanafuda playing card vendor, rose to prominence during the 1980s with the release of a home video game console, the Famicom or "Family Computer", which became a major hit internationally as the Nintendo Entertainment System or "NES". Sony, already one of the world's largest electronics manufacturers, entered the market in 1994 with the Sony PlayStation, one of the first home consoles to feature 3D graphics, almost immediately establishing itself as a major publisher in the space. Shigeru Miyamoto remains internationally renowned as a "father of videogaming" and is the only game developer so far to receive Japan's highest civilian honor for artists, the 文化功労者 bunka kōrōsha or Person of Cultural Merit.
Arcade culture is a major influence among young Japanese, with Akihabara Electric Town being a major nexus of so-called otaku culture in Japan, which overlaps heavily with videogaming. A good number of Japanese video game franchises, such as Super Smash Bros., Pokemon, Super Mario, The Legend of Zelda, Animal Crossing, Shin Megami Tensei: Persona, Resident Evil, Souls and Monster Hunter, have gained critical acclaim and continue to garner a large international following. The Japanese role-playing game is a major game genre innovated by Japan that remains popular both domestically and internationally, with titles like Final Fantasy and Dragon Quest selling millions. The country had an estimated 67.6 million players in 2018.
History
Background
In 1966, Sega introduced an electro-mechanical game called Periscope, a submarine simulator which used lights and plastic waves to simulate sinking ships from a submarine. It became an instant success in Japan, Europe, and North America, where it was the first arcade game to cost a quarter per play, a price that would remain the standard for arcade games for many years to come.
Sega later produced gun games that used rear image projection in a manner similar to the ancient zoetrope to produce moving animations on a screen. The first of these, the light-gun game Duck Hunt, appeared in 1969; it featured animated moving targets on a screen, printed out the player's score on a ticket, and had volume-controllable sound-effects. Another Sega 1969 release, Missile, a shooter, featured electronic sound and a moving film strip to represent the targets on a projection screen.
1970s to early 1980s
Atari's Pong, which debuted in the United States in 1972, was the first commercially successful arcade video game, and it led a number of new American manufacturers to create their own arcade games to capitalize on the rising fad. Several of these companies had Japanese partners and kept their overseas counterparts abreast of the new technology, leading several Japanese coin-operated electronic game makers to step into the arcade game market as well. Taito and Namco were some of the early adopters of arcade games in Japan, first distributing American games before developing their own. Nintendo, which at this time was primarily manufacturing traditional and electronic toys, also entered the arcade game market in the latter part of the 1970s.
As in the United States, many of the early Japanese arcade games were based on the principle of cloning the gameplay established by popular titles to make new ones. However, several new concepts came out of these Japanese-developed games, which performed well both in Japan and in re-licensed versions in the United States, such as Taito's Speed Race and Gun Fight in 1975. Notably, Gun Fight, when released by Midway Games in the U.S., was the first arcade game to use a microprocessor rather than discrete electronic components. Sega's black-and-white boxing game Heavyweight Champ was released in 1976 as the first video game to feature fist fighting. The first stealth games were Hiroshi Suzuki's Manbiki Shounen (1979) and Manbiki Shoujo (1980), Taito's Lupin III (1980), and Sega's 005 (1981).
Separately, the first home video game console, the Magnavox Odyssey, had been released in the U.S. in 1972; Nintendo partnered with Magnavox to manufacture the light gun accessory for the console, while Atari began releasing home console versions of Pong in 1975. Japan's first home video game console was Epoch's TV Tennis Electrotennis, released in 1975. It was followed by the first successful Japanese console, Nintendo's Color TV Game, in 1977, which was made in partnership with Mitsubishi Electric. Numerous other dedicated home consoles were made, mostly by television manufacturers, leading these systems to be called TV geemu or terebi geemu in Japan.
Eventually, the 1978 arcade release of Space Invaders would mark the first major mainstream breakthrough for video games in Japan. Created by Tomohiro Nishikado at Japan's Taito Corporation, Space Invaders pitted the player against multiple enemies descending from the top of the screen at a constantly increasing speed. The game used alien creatures inspired by The War of the Worlds (by H. G. Wells) because the developers were unable to render the movement of aircraft; in turn, the aliens replaced human enemies because of moral concerns (regarding the portrayal of killing humans) on the part of Taito Corporation. As with subsequent shoot 'em ups of the time, the game was set in space as the available technology only permitted a black background. The game also introduced the idea of giving the player a number of "lives". It popularised a more interactive style of gameplay with the enemies responding to the player-controlled cannon's movement, and it was the first video game to popularise the concept of achieving a high score, being the first to save the player's score. The aliens of Space Invaders return fire at the protagonist, making them the first arcade game targets to do so. It set the template for the shoot 'em up genre, and has influenced most shooting games released since then.
Taito's Space Invaders, in 1978, proved to be the first blockbuster arcade video game. Its success marked the beginning of the golden age of arcade video games. Video game arcades sprang up in shopping malls, and small "corner arcades" appeared in restaurants, grocery stores, bars and movie theaters all over Japan and other countries during the late 1970s and early 1980s. Space Invaders (1978), Galaxian (1979), Pac-Man (1980) and Bosconian (1981) were especially popular. By 1981, the arcade video game industry was worth $8 billion. Some games of this era were so popular that they entered popular culture. The first to do so was Space Invaders. The game was so popular upon its release in 1978 that an urban legend blamed it for a national shortage of 100 yen coins in Japan, leading to a production increase of coins to meet demand for the game (although 100 yen coin production was lower in 1978 and 1979 than in previous or subsequent years, and the claim does not withstand logical scrutiny: arcade operators would have emptied out their machines and taken the money to the bank, thus keeping the coins in circulation). Japanese arcade games during the golden age also had hardware unit sales at least in the tens of thousands, including Ms. Pac-Man with over 115,000 units, Donkey Kong with over 60,000, Galaxian with 40,000, Donkey Kong Jr. with 35,000, and Mr. Do! with 30,000.
Other Japanese arcade games established new concepts that would become fundamentals in video games. Use of color graphics and individualized antagonists were considered "strong evolutionary concepts" among space ship games. Namco's Galaxian in 1979 introduced multi-colored animated sprites. That same year saw the release of SNK's debut shoot 'em up Ozma Wars, notable for being the first action game to feature a supply of energy, resembling a life bar, a mechanic that has since become common in the majority of modern action games. It also featured vertically scrolling backgrounds and enemies.
1980s to early 2000s
From 1980 to 1991, Nintendo produced a line of handheld electronic games called Game & Watch. Created by game designer Gunpei Yokoi, each Game & Watch features a single game to be played on an LCD screen. It was the earliest Nintendo product to gain major success.
Mega Man, known as Rockman (ロックマン, Rokkuman) in Japan, is a Japanese science fiction video game franchise created by Capcom, starring a series of robot characters each known by the moniker "Mega Man". Mega Man, released for the Nintendo Entertainment System in 1987, was the first in a series that expanded to over 50 games on multiple systems. As of March 31, 2021, the game series has sold 36 million units worldwide.[1]
Konami's Scramble, released in 1981, is a side-scrolling shooter with forced scrolling. It was the first scrolling shooter to offer multiple, distinct levels. Vertical scrolling shooters emerged around the same time. Namco's Xevious, released in 1982, is frequently cited as the first vertical scrolling shooter and, although it was in fact preceded by several other games of that type, it is considered one of the most influential.
The first platform game to use scrolling graphics was Jump Bug (1981), a simple platform-shooter game developed by Alpha Denshi.
The North American video game industry was devastated by the 1983 video game crash, but in Japan the crash, typically known there as the "Atari Shock", came more as a surprise to developers. After the crash, analysts doubted the long-term viability of the video game industry. At the same time, following a series of arcade game successes in the early 1980s, Nintendo made plans to create a cartridge-based console designed by Masayuki Uemura. The console was released on July 15, 1983 as the Family Computer (or Famicom for short) alongside three ports of Nintendo's successful arcade games Donkey Kong, Donkey Kong Jr. and Popeye. The Famicom was slow to gather momentum; a bad chip set caused the initial release of the system to crash. Following a product recall and a reissue with a new motherboard, the Famicom's popularity soared, and it became the best-selling game console in Japan by the end of 1984. By 1988, industry observers stated that the NES's popularity had grown so quickly that the market for Nintendo cartridges was larger than that for all home computer software. By mid-1986, 19% (6.5 million) of Japanese households owned a Famicom, and one third did by mid-1988. In June 1989, Nintendo of America's vice president of marketing, Peter Main, said that the Famicom was present in 37% of Japan's households. By the end of its run, over 60 million NES units had been sold throughout the world. In 1990, Nintendo surpassed Toyota as Japan's most successful corporation.
Because the NES was released after the "video game crash" of the early 1980s, many retailers and adults regarded electronic games as a passing fad, so many believed at first that the NES would soon fade. Before the NES/Famicom, Nintendo was known as a moderately successful Japanese toy and playing card manufacturer, but the popularity of the NES/Famicom helped the company grow into an internationally recognized name almost synonymous with video games as Atari had been, and set the stage for Japanese dominance of the video game industry. With the NES, Nintendo also changed the relationship between console manufacturers and third-party software developers by restricting developers from publishing and distributing software without licensed approval. This led to higher quality software titles, which helped change the attitude of a public that had grown weary from poorly produced titles for earlier game systems. The system's hardware limitations led to design principles that still influence the development of modern video games. Many prominent game franchises originated on the NES, including Nintendo's own Super Mario Bros., The Legend of Zelda and Metroid, Capcom's Mega Man franchise, Konami's Castlevania franchise, Square's Final Fantasy, and Enix's Dragon Quest franchises.
Following the release of the Famicom / Nintendo Entertainment System, the global video game industry began recovering, with annual sales exceeding $2.3 billion by 1988, with 70% of the market dominated by Nintendo. In 1986 Nintendo president Hiroshi Yamauchi noted that "Atari collapsed because they gave too much freedom to third-party developers and the market was swamped with rubbish games". In response, Nintendo limited the number of titles that third-party developers could release for their system each year, and promoted its "Seal of Quality", which it allowed to be used on games and peripherals by publishers that met Nintendo's quality standards.
Japan's first personal computers for gaming soon appeared: the Sord M200 in 1977 and the Sharp MZ-80K in 1978. In Japan, both consoles and computers became major industries, with the console market dominated by Nintendo and the computer market dominated by NEC's PC-88 (1981) and PC-98 (1982). A key difference between Western and Japanese computers at the time was the display resolution, with Japanese systems using a higher resolution of 640×400 to accommodate Japanese text, which in turn affected video game design and allowed more detailed graphics. Japanese computers also used Yamaha's FM synthesis sound boards from the early 1980s. During the 16-bit era, the PC-98, Sharp X68000 and FM Towns became popular in Japan. The X68000 and FM Towns were capable of producing near arcade-quality hardware sprite graphics and sound quality when they were first released in the mid-to-late 1980s.
The Wizardry series (translated by ASCII Entertainment) became popular and influential in Japan, even more so than in its home market. Japanese developers created the action RPG subgenre in the early 1980s, combining RPG elements with arcade-style action and action-adventure elements. The trend of combining role-playing elements with arcade-style action mechanics was popularized by The Tower of Druaga, an arcade game released by Namco in June 1984. While the RPG elements in Druaga were very subtle, its success in Japan inspired the near-simultaneous development of three early action role-playing games, combining Druaga's real-time hack-and-slash gameplay with stronger RPG mechanics, all released in late 1984: Dragon Slayer, Courageous Perseus, and Hydlide. A rivalry developed between the three games, with Dragon Slayer and Hydlide continuing their rivalry through subsequent sequels. The Tower of Druaga, Dragon Slayer and Hydlide were influential in Japan, where they laid the foundations for the action RPG genre, influencing titles such as Ys and The Legend of Zelda.
The action role-playing game Hydlide (1984) was an early open world game, rewarding exploration in an open world environment. It influenced The Legend of Zelda (1986), whose expansive, coherent open world design inspired many later games to adopt a similar approach.
Bokosuka Wars (1983) is considered an early prototype real-time strategy game. TechnoSoft's Herzog (1988) is regarded as a precursor to the real-time strategy genre, being the predecessor to Herzog Zwei and somewhat similar in nature. Herzog Zwei, released for the Sega Mega Drive/Genesis home console in 1989, is the earliest example of a game with a feature set that falls under the contemporary definition of modern real-time strategy.
Data East's Karate Champ from 1984 is credited with establishing and popularizing the one-on-one fighting game genre, and went on to influence Konami's Yie Ar Kung-Fu from 1985. Capcom's Street Fighter (1987) introduced the use of special moves that could only be discovered by experimenting with the game controls. Street Fighter II (1991) established the conventions of the fighting game genre and allowed players to play against each other.
In 1985, Sega AM2's Hang-On, designed by Yu Suzuki and running on the Sega Space Harrier hardware, was the first of Sega's "Super Scaler" arcade system boards that allowed pseudo-3D sprite-scaling at high frame rates. The pseudo-3D sprite/tile scaling was handled in a similar manner to textures in later texture-mapped polygonal 3D games of the 1990s. Suzuki stated that his "designs were always 3D from the beginning. All the calculations in the system were 3D, even from Hang-On. I calculated the position, scale, and zoom rate in 3D and converted it backwards to 2D. So I was always thinking in 3D." The game was controlled using an arcade cabinet resembling a motorbike, which the player moves with their body. This began the "Taikan" trend, the use of motion-controlled hydraulic arcade cabinets in many arcade games of the late 1980s, two decades before motion controls became popular on video game consoles.
Sega's Space Harrier, a rail shooter released in 1985, broke new ground graphically, and its wide variety of settings across multiple levels gave players more to aim for than high scores. 1985 also saw the release of Konami's Gradius, which gave the player greater control over the choice of weaponry, thus introducing another element of strategy. The game also introduced the need for the player to memorise levels in order to achieve any measure of success. Gradius, with its iconic protagonist, defined the side-scrolling shoot 'em up and spawned a series spanning several sequels. The following year saw the emergence of one of Sega's flagship series with its game Fantasy Zone. The game received acclaim for its surreal graphics and setting, and the protagonist, Opa-Opa, was for a time considered Sega's mascot. The game borrowed Defender's device of allowing the player to control the direction of flight and, along with the earlier TwinBee (1985), is an early archetype of the "cute 'em up" subgenre.
Hydlide II: Shine of Darkness in 1985 featured an early morality meter, where the player can be aligned with justice, normal, or evil, which is affected by whether the player kills evil monsters, good monsters, or humans, and in turn affects the reactions of the townsfolk towards the player. In the same year, Yuji Horii and his team at Chunsoft began production on Dragon Quest (Dragon Warrior). After Enix published the game in early 1986, it became the template for future console RPGs. Horii's intention behind Dragon Quest was to create an RPG that would appeal to a wider audience unfamiliar with the genre or video games in general. This required the creation of a new kind of RPG, one that didn't rely on previous D&D experience, didn't require hundreds of hours of rote fighting, and could appeal to any kind of gamer. The streamlined gameplay of Dragon Quest thus made the game more accessible to a wider audience than previous computer RPGs. The game also placed a greater emphasis on storytelling and emotional involvement, building on Horii's previous work Portopia Serial Murder Case, but this time introducing a coming-of-age tale that audiences could relate to, making use of the RPG level-building gameplay as a way to represent this. It also featured elements still found in most console RPGs, like major quests interwoven with minor subquests, an incremental spell system, the damsel-in-distress storyline that many RPGs follow, and a romance element that remains a staple of the genre, alongside anime-style art by Akira Toriyama and a classical score by Koichi Sugiyama that was considered revolutionary for console video game music. With Dragon Quest becoming widely popular in Japan, such that local municipalities were forced to place restrictions on where and when the game could be sold, the Dragon Quest series is still considered a bellwether for the Japanese video game market.
Shoot 'em ups featuring characters on foot, rather than spacecraft, became popular in the mid-1980s in the wake of action movies such as Rambo: First Blood Part II. The origins of this type go back to Sheriff by Nintendo, released in 1979. Taito's Front Line (1982) established the upwards-scrolling formula later popularized by Capcom's Commando, in 1985, and SNK's Ikari Warriors (1986). Commando also drew comparisons to Rambo and indeed contemporary critics considered military themes and protagonists similar to Rambo or Schwarzenegger prerequisites for a shoot 'em up, as opposed to an action-adventure game. In 1986, Arsys Software released WiBArm, a shooter that switched between a 2D side-scrolling view in outdoor areas to a fully 3D polygonal third-person perspective inside buildings, while bosses were fought in an arena-style 2D battle, with the game featuring a variety of weapons and equipment.
The late 1980s to early 1990s is considered the golden age of Japanese computer gaming, which would flourish until its decline around the mid-1990s, as consoles eventually came to dominate the Japanese market. The aforementioned WiBArm, ported to MS-DOS for Western release by Brøderbund, was the earliest known RPG to feature 3D polygonal graphics; it also featured an automap, and the player could upgrade equipment and earn experience to raise stats. Unlike first-person RPGs of the time that were restricted to 90-degree movements, WiBArm's use of 3D polygons allowed full 360-degree movement.
On October 30, 1987, the PC Engine made its debut in the Japanese market and was a tremendous success. The console had an elegant, "eye-catching" design, and it was very small compared to its rivals. The PC Engine, known as the TurboGrafx-16 in the rest of the world, was a collaborative effort between Hudson Soft, who created video game software, and NEC, a major company which was dominant in the Japanese personal computer market with their PC-88 and PC-98 platforms.
R-Type, an acclaimed side-scrolling shoot 'em up, was released in 1987 by Irem, employing slower paced scrolling than usual, with difficult levels calling for methodical strategies. 1990's Raiden was the beginning of another acclaimed and enduring series to emerge from this period. In 1987, Square's 3-D WorldRunner was an early stereoscopic 3-D shooter played from a third-person perspective, followed later that year by its sequel JJ, and the following year by Space Harrier 3-D which used the SegaScope 3-D shutter glasses. Also in 1987, Konami created Contra as a coin-op arcade game that was particularly acclaimed for its multi-directional aiming and two player cooperative gameplay.
Digital Devil Story: Megami Tensei by Atlus for the Nintendo Famicom abandoned the common medieval fantasy setting and sword and sorcery theme in favour of a modern science-fiction setting and horror theme. It also introduced the monster-catching mechanic with its demon-summoning system, which allowed the player to recruit enemies into their party, through a conversation system that gives the player a choice of whether to kill or spare an enemy and allows them to engage any opponent in conversation. Sega's original Phantasy Star for the Master System combined a science fiction and fantasy setting that set it apart from the D&D staple. It was also one of the first games to feature a female protagonist and animated monster encounters, and allowed inter-planetary travel between three planets. Another 1987 title, Miracle Warriors: Seal of the Dark Lord, was a third-person RPG that featured a wide open world and a mini-map on the corner of the screen.
According to Wizardry developer Roe R. Adams, early action-adventure games "were basically arcade games done in a fantasy setting," citing Castlevania (1986) and Trojan (1986) as examples. IGN UK argues that The Legend of Zelda (1986) "helped to establish a new subgenre of action-adventure", becoming a success due to how it combined elements from different genres to create a compelling hybrid, including exploration, adventure-style inventory puzzles, an action component, a monetary system, and simplified RPG-style level building without the experience points. The Legend of Zelda was the most prolific action-adventure game series through to the 2000s.
The first Nintendo Space World show was held on July 28, 1989. It was a video game trade show that was hosted by Nintendo until 2001. That same year, Phantasy Star II for the Genesis established many conventions of the RPG genre, including an epic, dramatic, character-driven storyline dealing with serious themes and subject matter, and a strategy-based battle system. The game's science fiction story was also unique, reversing the common alien invasion scenario by instead presenting Earthlings as the invading antagonists rather than the defending protagonists. Capcom's Sweet Home for the NES introduced a modern Japanese horror theme and laid the foundations for the survival horror genre, later serving as the main inspiration for Resident Evil (1996). Tengai Makyo: Ziria, released for the PC Engine CD that same year, was the first RPG released on CD-ROM and the first in the genre to feature animated cut scenes and voice acting. The game's plot was also unusual for its feudal Japan setting and its emphasis on humour; the plot and characters were inspired by the Japanese folk tale Jiraiya Goketsu Monogatari. The music for the game was also composed by noted musician Ryuichi Sakamoto.
The "golden age" of console RPGs is often dated to the 1990s. Console RPGs distinguished themselves from computer RPGs to a greater degree in the early 1990s. As console RPGs became more heavily story-based than their computer counterparts, one of the major differences that emerged during this time was in the portrayal of the characters. Most American computer RPGs at the time had characters devoid of personality or background, as their purpose was to serve as avatars through which the player interacts with the world. Japanese console RPGs, in contrast, depicted pre-defined characters with distinctive personalities, traits, and relationships, such as those of Final Fantasy and Lufia, with players assuming the roles of people who cared about each other, fell in love or even had families. Romance in particular was a theme common in most console RPGs but alien to most computer RPGs at the time. Japanese console RPGs were also generally faster-paced and more action-adventure-oriented than their American computer counterparts. During the 1990s, console RPGs became increasingly dominant.
In 1990, Dragon Quest IV introduced a new method of storytelling: dividing the plot into self-contained chapters. The game also introduced an AI system called "Tactics" which allowed the player to modify the strategies used by the allied party members while maintaining full control of the hero. Final Fantasy III introduced the classic "job system", a character progression engine allowing the player to change the character classes, as well as acquire new and advanced classes and combine class abilities, during the course of the game. That same year also saw the release of Nintendo's Fire Emblem: Ankoku Ryu to Hikari no Tsurugi, a game that set the template for the tactical role-playing game genre and was the first entry in the Fire Emblem series. Another notable strategy RPG that year was Koei's Bandit Kings of Ancient China, which was successful in combining the strategy RPG and management simulation genres, building on its own Nobunaga's Ambition series that began in 1983. Several early RPGs set in a post-apocalyptic future were also released that year, including Digital Devil Story: Megami Tensei II, and Crystalis, which was inspired by Hayao Miyazaki's Nausicaa of the Valley of the Wind. Crystalis also made advances to the action role-playing game subgenre, being a true action RPG that combined the real-time action-adventure combat and open world of The Legend of Zelda with the level-building and spell-casting of traditional RPGs like Final Fantasy. That year also saw the release of Phantasy Star III: Generations of Doom, which featured an innovative and original branching storyline, which spans three generations of characters and can be altered depending on which character the protagonist of each generation marries, leading to four possible endings.
In 1991, Final Fantasy IV was one of the first role-playing games to feature a complex, involving plot, placing a much greater emphasis on character development, personal relationships, and dramatic storytelling. It also introduced a new battle system: the "Active Time Battle" system, developed by Hiroyuki Ito, where the time-keeping system does not stop. The fact that enemies can attack or be attacked at any time is credited with injecting urgency and excitement into the combat system. The ATB combat system was considered revolutionary for being a hybrid between turn-based and real-time combat, with its requirement of faster reactions from players appealing to those who were more used to action games.
Nintendo executives were initially reluctant to design a new system, but as the market transitioned to the newer hardware, Nintendo saw the erosion of the commanding market share it had built up with the Nintendo Entertainment System. Nintendo's fourth-generation console, the Super Famicom, was released in Japan on November 21, 1990; Nintendo's initial shipment of 300,000 units sold out within hours. Despite stiff competition from the Mega Drive/Genesis console, the Super NES eventually took the top selling position, selling 49.10 million units worldwide, and would remain popular well into the fifth generation of consoles. Nintendo's market position was defined by their machine's increased video and sound capabilities, as well as exclusive first-party franchise titles such as Super Mario World, The Legend of Zelda: A Link to the Past and Super Metroid.
In the early 1990s, the arcades experienced a major resurgence with the 1991 release of Capcom's Street Fighter II, which popularized competitive fighting games and revived the arcade industry to a level of popularity not seen since the days of Pac-Man, setting off a renaissance for the arcade game industry in the early 1990s. Its success led to a wave of other popular games which mostly were in the fighting genre, such as Fatal Fury: King of Fighters (1992) by SNK, Virtua Fighter (1993) by SEGA, and The King of Fighters (1994–2005) by SNK. In 1993, Electronic Games noted that when "historians look back at the world of coin-op during the early 1990s, one of the defining highlights of the video game art form will undoubtedly focus on fighting/martial arts themes" which it described as "the backbone of the industry" at the time.
A new type of shoot 'em up emerged in the early 1990s: variously termed "bullet hell", "manic shooters", "maniac shooters", and danmaku ("barrage"), these games required the player to dodge overwhelming numbers of enemy projectiles and called for still faster reactions from players. Bullet hell games arose from the need for 2D shoot 'em up developers to compete with the emerging popularity of 3D games: huge numbers of missiles on screen were intended to impress players. Toaplan's Batsugun (1993) provided the prototypical template for this new breed, with Cave (formed by former employees of Toaplan, including Batsugun's main creator Tsuneki Ikeda, after the latter company collapsed) inventing the type proper with 1995's DonPachi. Bullet hell games marked another point where the shoot 'em up genre began to cater to more dedicated players. Games such as Gradius had been more difficult than Space Invaders or Xevious, but bullet hell games were yet more inward-looking and aimed at dedicated fans of the genre looking for greater challenges. While shooter games featuring protagonists on foot largely moved to 3D-based genres, popular, long-running series such as Contra and Metal Slug continued to receive new sequels. Rail shooters have rarely been released in the new millennium, with only Rez and Panzer Dragoon Orta achieving cult recognition.
1992 saw the release of Dragon Quest V, a game that has been praised for its involving, emotional family-themed narrative divided by different periods of time, something that has appeared in very few video games before or since. It has also been credited as the first known video game to feature a playable pregnancy, a concept that has since appeared in later games such as Story of Seasons. Dragon Quest V's monster-collecting mechanic, where monsters can be defeated, captured, added to the party, and gain their own experience levels, also influenced many later franchises such as Pokémon, Digimon and Dokapon. In turn, the concept of collecting everything in a game, in the form of achievements or similar rewards, has since become a common trend in video games. Shin Megami Tensei, released in 1992 for the SNES, introduced an early moral alignment system that influences the direction and outcome of the storyline, leading to different possible paths and multiple endings. This has since become a hallmark of the Megami Tensei series. Another non-linear RPG released that year was Romancing Saga, an open-world RPG by Square that offered many choices and allowed players to complete quests in any order, with the decision of whether or not to participate in any particular quest affecting the outcome of the storyline. The game also allowed players to choose from eight different characters, each with their own stories that start in different places and offer different outcomes. Data East's Heracles no Eikō III, written by Kazushige Nojima, introduced the plot element of a nameless immortal suffering from amnesia, and Nojima would later revisit the amnesia theme in Final Fantasy VII and Glory of Heracles. The TurboGrafx-CD port of Dragon Knight II released that year was also notable for introducing erotic adult content to consoles, though such content had often appeared in Japanese computer RPGs since the early 1980s. 
That same year, Game Arts began the Lunar series on the Sega CD with Lunar: The Silver Star, one of the first successful CD-ROM RPGs, featuring both voice and text, and considered one of the best RPGs in its time. The game was praised for its soundtrack, emotionally engaging storyline, and strong characterization. It also introduced an early form of level-scaling where the bosses would get stronger depending on the protagonist's level, a mechanic that was later used in Enix's The 7th Saga and extended to normal enemies in Square's Romancing Saga 3 and later Final Fantasy VIII.
3D polygon graphics were popularized by the Sega Model 1 games Virtua Racing (1992) and Virtua Fighter (1993), followed by racing games like the Namco System 22 title Ridge Racer (1993) and Sega Model 2 title Daytona USA, and light gun shooters like Sega's Virtua Cop (1994), gaining considerable popularity in the arcades.
In 1993, Square's Secret of Mana, the second in the Mana series, further advanced the action RPG subgenre with its introduction of cooperative multiplayer into the genre. The game was created by a team previously responsible for the first three Final Fantasy titles: Nasir Gebelli, Koichi Ishii, and Hiromichi Tanaka. The game received considerable acclaim for its innovative pausable real-time battle system, the "Ring Command" menu system, its innovative cooperative multiplayer gameplay, where the second or third players could drop in and out of the game at any time rather than players having to join the game at the same time, and the customizable AI settings for computer-controlled allies. The game has influenced a number of later action RPGs. That same year also saw the release of Phantasy Star IV: The End of the Millennium, which introduced the use of pre-programmable combat manoeuvres called 'macros', a means of setting up the player's party AI to deliver custom attack combos. That year also saw the release of Romancing Saga 2, which further expanded the non-linear gameplay of its predecessor. While in the original Romancing Saga, scenarios were changed according to dialogue choices during conversations, Romancing Saga 2 further expanded on this by having unique storylines for each character that can change depending on the player's actions, including who is chosen, what is said in conversation, what events have occurred, and who is present in the party. PCGamesN credits Romancing SaGa 2 for having laid the foundations for modern Japanese RPGs with its progressive, non-linear, open world design and subversive themes.
In 1994, Final Fantasy VI moved away from the medieval setting of its predecessors, instead being set in a steampunk environment. The game received considerable acclaim, and is seen as one of the greatest RPGs of all time, for improvements such as its broadened thematic scope, plotlines, characters, multiple-choice scenarios, and variation of play. Final Fantasy VI dealt with mature themes such as suicide, war crimes, child abandonment, teen pregnancy, and coping with the deaths of loved ones. Square's Live A Live, released for the Super Famicom in Japan, featured eight different characters and stories, with the first seven unfolding in any order the player chooses, as well as four different endings. The game's ninja chapter in particular was an early example of stealth game elements in an RPG, requiring the player to infiltrate a castle and rewarding the player if the entire chapter can be completed without engaging in combat. Other chapters had similar innovations, such as Akira's chapter, where the character uses telepathic powers to discover information. Robotrek by Quintet and Ancient was a predecessor to Pokémon in the sense that the protagonist does not himself fight, but sends out his robots to do so. Like Pokémon, Robotrek was designed to appeal to a younger audience, allowed team customization, and each robot was kept in a ball.
FromSoftware released their first video game, titled King's Field, as a launch title for the PlayStation in 1994. The game was later called the brainchild of company founder Naotoshi Zin, who was considered a key creative figure in the series. The eventual success of the first King's Field prompted the development of sequels, establishing the King's Field series. The design of King's Field would influence later titles by FromSoftware including Shadow Tower, which used similar mechanics to King's Field; and Demon's Souls, described by its staff as a spiritual successor to King's Field, and inspired multiple follow-up titles which form part of the Souls series and propelled FromSoftware to international fame.
In 1995, Square's Romancing Saga 3 featured a storyline that could be told differently from the perspectives of up to eight different characters and introduced a level-scaling system where the enemies get stronger as the characters do, a mechanic that was later used in a number of later RPGs, including Final Fantasy VIII. Sega's Sakura Wars for the Saturn combined tactical RPG combat with dating sim and visual novel elements, introducing a real-time branching choice system where, during an event or conversation, the player must choose an action or dialogue choice within a time limit, or not to respond at all within that time; the player's choice, or lack thereof, affects the player character's relationship with other characters and in turn the characters' performance in battle, the direction of the storyline, and the ending. Later games in the series added several variations, including an action gauge that can be raised up or down depending on the situation, and a gauge that the player can manipulate using the analog stick depending on the situation. The success of Sakura Wars led to a wave of games that combine the RPG and dating sim genres, including Thousand Arms in 1998, Riviera: The Promised Land in 2002, and Luminous Arc in 2007.
The survival horror video game genre began with Capcom's Resident Evil (1996), which coined the term "survival horror" and defined the genre. The game was inspired by Capcom's Sweet Home (1989), retroactively described as survival horror.
The first Tokyo Game Show was held in 1996. From 1996 to 2002, the show was held twice a year: once in the spring and once in the autumn (at the Tokyo Big Sight); since 2002, it has been held once a year, and it was never canceled during this period. Attendance has generally grown every year: the 2011 show hosted over 200,000 attendees, and the 2012 show brought in 223,753. The busiest TGS was in 2016, the show's 20th-anniversary year, when 271,224 people attended and 614 companies exhibited.
The Fujitsu FM Towns Marty is considered the world's first 32-bit console (predating the Amiga CD32 and 3DO); it was released only in Japan, on February 20, 1993, by Fujitsu. However, it failed to make an impact in the marketplace due to its expense relative to other consoles and its inability to compete with home computers. Around the mid-1990s, the fifth-generation home consoles, the Sega Saturn, PlayStation, and Nintendo 64, began offering true 3D graphics, improved sound, and better 2D graphics than the previous generation. By 1995, personal computers followed, with 3D accelerator cards. Arcade systems such as the Sega Model 3 nevertheless remained considerably more advanced than home systems in the late 1990s.
The next major revolution came in the mid-to-late 1990s, which saw the rise of 3D computer graphics and optical discs in fifth generation consoles. The implications for RPGs were enormous—longer, more involved quests, better audio, and full-motion video. This was clearly demonstrated in 1997 by the phenomenal success of Final Fantasy VII, which is considered one of the most influential games of all time, akin to that of Star Wars in the movie industry. With a record-breaking production budget of around $45 million, the ambitious scope of Final Fantasy VII raised the possibilities for the genre, with its more expansive world to explore, much longer quest, more numerous sidequests, dozens of minigames, and much higher production values. The latter includes innovations such as the use of 3D characters on pre-rendered backgrounds, battles viewed from multiple different angles rather than a single angle, and for the first time full-motion CGI video seamlessly blended into the gameplay, effectively integrated throughout the game. Gameplay innovations included the materia system, which allowed a considerable amount of customization and flexibility through materia that can be combined in many different ways and exchanged between characters at any time, and the limit breaks, special attacks that can be performed after a character's limit meter fills up by taking hits from opponents. Final Fantasy VII continues to be listed among the best games of all time, for its highly polished gameplay, high playability, lavish production, well-developed characters, intricate storyline, and an emotionally engaging narrative that is much darker and sophisticated than most other RPGs. The game's storytelling and character development was considered a major narrative jump forward for video games and was often compared to films and novels at the time.
One of the earliest Japanese RPGs, Koei's The Dragon and Princess (1982), featured a tactical turn-based combat system. Koji Sumii's Bokosuka Wars, originally released for the Sharp X1 computer in 1983 and later ported to the NES in 1985, is credited with laying the foundations for the tactical RPG genre, or "simulation RPG" genre as it is known in Japan, with its blend of basic RPG and strategy game elements. The genre became established with Fire Emblem: Ankoku Ryū to Hikari no Tsurugi (1990), the game that set the template for tactical RPGs.
Treasure's shoot 'em up, Radiant Silvergun (1998), introduced an element of narrative to the genre. It was critically acclaimed for its refined design, though it was not released outside Japan and remains a much sought after collector's item. Its successor Ikaruga (2001) featured improved graphics and was again acclaimed as one of the best games in the genre. Both Radiant Silvergun and Ikaruga were later released on Xbox Live Arcade. The Touhou Project series spans 22 years and 27 games as of 2018 and was listed in the Guinness World Records in October 2010 for being the "most prolific fan-made shooter series". The genre has undergone something of a resurgence with the release of the Xbox 360, PlayStation 3 and Wii online services, while in Japan arcade shoot 'em ups retain a deep-rooted niche popularity. Geometry Wars: Retro Evolved was released on Xbox Live Arcade in 2005 and in particular stood out from the various re-releases and casual games available on the service. The PC has also seen its share of dōjin shoot 'em ups like Crimzon Clover, Jamestown: Legend of the Lost Colony, Xenoslaive Overdrive, and the eXceed series. However, despite the genre's continued appeal to an enthusiastic niche of players, shoot 'em up developers are increasingly embattled financially by the power of home consoles and their attendant genres.
2005–2015
In 2002, the Japanese video game industry made up about 50% of the global market; that share had shrunk to around 10% by 2010. The shrinkage in market share has been attributed to a difference of taste between Japanese and Western audiences and to the country's economic recession.
Nintendo had seen record revenues, net sales and profits in 2009 as a result of the release of the Nintendo DS and Wii in 2004 and 2006, respectively, but in Nintendo's subsequent years, its revenues had declined.
In 2007 Tokkun Studio released Marie: BabySitter for PC.
In 2009, FromSoftware released Demon's Souls for the PlayStation 3, which brought them international exposure. Its spiritual successor, Dark Souls, was released in 2011. In March 2014, Dark Souls II, was released, while Dark Souls III was released in 2016. A title inspired by the Souls series, Bloodborne, was released in March 2015. The Souls series, along with Bloodborne, received widespread critical acclaim, as well as strong sales domestically and internationally. They have also received a number of awards, primarily those for the role-playing genre, including multiple "RPG of the Year" and Game of the Year awards. Since release, Dark Souls and Bloodborne have been cited by many publications to be among the greatest games of all time.
The decline of the Japanese video game development industry during this period was partially attributed to the traditional development process. Japanese companies were criticized for long development times and slow release dates on home video game consoles, their lack of third-party game engines, and for being too insular to appeal to a global market. Yoichi Wada stated in the Financial Times on April 27, 2009 that the Japanese video game development industry has become a "closed environment" and "almost xenophobic." He also stated: "The lag with the US is very clear. The US games industry was not good in the past but it has now attracted people from the computer industry and from Hollywood, which has led to strong growth." At the 2010 Tokyo Game Show, Keiji Inafune stated that "Everyone's making awful games - Japan is at least five years behind", and that "Japan is isolated in the gaming world. If something doesn't change, we're doomed.", stressing the need for Japanese developers to bring in Western approaches to game development to make a comeback.
Related to this isolationism, games developed in Western countries did not perform well in Japan, whereas Japanese games were readily played by Western market consumers. Foreign games often sell more poorly in Japanese markets due to differences in what consumers in each culture expect from escapism. Microsoft had attempted to push both the Xbox and Xbox 360 consoles in Japan with poor success, as they struggled to compete against Sony and Nintendo there.
However, as detailed above, Japanese console games had become less successful, even in their own country, by 2013.
In the Japanese gaming industry, arcades have remained popular through to the present day. As of 2009, out of Japan's $20 billion gaming market, $6 billion was generated by arcades, which represented the largest sector of the Japanese video game market, followed by home console games and mobile games at $3.5 billion and $2 billion, respectively. In 2005, for example, arcade ownership and operation accounted for a majority of Namco's revenue. With considerable withdrawal from the arcade market by companies such as Capcom, Sega became the strongest player in the arcade market, with a 60% market share in 2006. Despite the global decline of arcades, Japanese companies hit record revenue for three consecutive years during this period. However, due to the country's economic recession, the Japanese arcade industry has also been steadily declining, from ¥702.9 billion (US$8.7 billion) in 2007 to ¥504.3 billion ($6.2 billion) in 2010, with revenue estimated at ¥470 billion in 2013.
In the 2010s, Japanese RPGs experienced a resurgence on PC, with a significant increase in the number of Japanese RPGs released on the Steam platform. This began with the 2010 Steam release of the doujin/indie game Recettear (2007), which sold over 500,000 units on the platform and led to many more Japanese doujin/indie games releasing on Steam in subsequent years.
Beyond doujin/indie titles, 2012 was a breakthrough year, with the debut of Nihon Falcom's Ys series on Steam and then the Steam release of FromSoftware's Dark Souls, which sold millions on the platform. Other Japanese RPGs were subsequently ported to Steam, such as the previously niche Valkyria Chronicles, which became a million-seller on the platform, and other titles that sold hundreds of thousands of units, such as the 2014 localization of The Legend of Heroes: Trails in the Sky and ports of numerous Final Fantasy titles. Japanese developers have increasingly come to consider Steam a viable platform for the genre, with many Japanese RPGs available on the platform.
By 2015, Japan had become the world's fourth largest PC game market, behind only China, the United States, and South Korea. The Japanese game development engine RPG Maker has also gained significant popularity on Steam; as of 2017, hundreds of commercial games built with it are released on the platform every year.
In the present day, Japan is the world's largest market for mobile games. The Japanese market today is becoming increasingly dominated by mobile games, which generated $5.1 billion in 2013, more than traditional console games in the country.
Former rivals in the Japanese arcade industry, Konami, Taito, Bandai Namco Entertainment and Sega, are now working together to keep the arcade industry vibrant. This is evidenced in the sharing of arcade networks, and venues having games from all major companies rather than only games from their own company.
2016–present
The eighth generation of video game consoles primarily includes the home video game consoles of the Wii U released in 2012 and the PlayStation 4 family in 2013; the handheld game consoles of the Nintendo 3DS in 2011, Nintendo 2DS in 2013, and the PlayStation Vita in 2011; as well as the first hybrid game console, the Nintendo Switch in 2017, which played as a handheld but could be docked to play like a home console. Unlike in most prior generations, there were few new innovative hardware capabilities to mark this generation as distinct from prior ones. Sony continued to produce new systems with similar designs and capabilities to their predecessors, but with improved performance (processing speed, higher-resolution graphics, and increased storage capacity) that further moved consoles into confluence with personal computers, and furthered support for digital distribution and games as a service. Motion-controlled games of the seventh generation had waned in popularity, but consoles were preparing for the advancement of virtual reality (VR); Sony introduced the PlayStation VR in 2016.
Though prior console generations have normally occurred in five to six-year cycles, the transition from seventh to eighth generation lasted approximately eight years. The transition is also unusual in that the prior generation's best-selling unit, the Wii, was the first to be replaced in the eighth generation. In 2011, Sony considered themselves only halfway through a ten-year lifecycle for their seventh-generation offerings. Nintendo president Satoru Iwata had stated that his company would be releasing the Wii U due to declining sales of seventh generation home consoles and that "the market is now waiting for a new proposal for home consoles". Sony considered making its next console a digital download only machine, but decided against it due to concerns about the inconsistency of internet speeds available globally, especially in developing countries.
On September 13, 2012, Nintendo announced that the Wii U would launch in Japan on December 8, 2012. The PlayStation 4 and Wii U use AMD GPUs, and the PS4 also uses an AMD CPU on an x86-64 architecture, similar to common personal computers (as opposed to the IBM PowerPC architecture used in the previous generation). Nintendo and Sony were not aware that they were both using AMD hardware until their consoles were announced. This shift was considered to be beneficial for multi-platform development, due to the increased similarities between PC hardware and console hardware.
In October 2013, online retailer Play.com announced that its Wii U sales had increased by 75%. The company also predicted that the Wii U would be more popular than its competition, the PlayStation 4 and Xbox One, among children during the holiday season. Following the release of Wii Party U on October 31 in Japan, weekly Wii U sales spiked to 38,802 units sold. During the first two weeks of December, the Wii U was the top performing home console in Japan, with 123,665 units sold. In fiscal year (FY) 2013 (ending early 2013), Nintendo sold 23.7 million consoles. By February 26, 2014, Wii U sales had surpassed those of the Xbox 360 in Japan. However, by June 2015, the basic Wii U was discontinued in Japan, and replaced by a 32 GB "Premium" set that includes white hardware and a Wii Remote Plus. In mid-November 2016, Nintendo announced that Japanese production of the Wii U would be ending "in the near future".
The PS4 was released in Japan at ¥39,980 on February 22, 2014. In September 2015, Sony reduced the price of the PS4 in Japan to ¥34,980, with similar price drops in other Southeast Asian markets. Within the first two days of release in Japan during the weekend of February 22, 2014, 322,083 consoles were sold. PS4 software unit sales surpassed 20.5 million on April 13, 2014. During Japan's 2013 fiscal year, heightened demand for the PS4 helped Sony top global console sales, beating Nintendo for the first time in eight years.
Since 2016, Japanese video games have been experiencing a resurgence, as part of a renaissance for the Japanese video game industry. In 2017, Japanese video games gained further commercial success and greater critical acclaim. In 2016, the global success of Pokémon Go helped Pokémon Sun and Moon set sales records around the world. Final Fantasy XV was also a major success, selling millions. There were also other Japanese RPGs that earned commercial success and/or critical acclaim that year, including Dragon Quest VII: Fragments of the Forgotten Past, Shin Megami Tensei IV: Apocalypse, Bravely Second, Fire Emblem Fates, Dragon Quest Builders, World of Final Fantasy, Exist Archive: The Other Side of the Sky and I Am Setsuna.
Anticipating the release of the console's successor, the Nintendo Switch, a hybrid video game console, Nintendo had planned to diminish production of the Wii U. It formally announced the end of its production on January 31, 2017. The company had posted its first loss as a video game company in 2012 prior to the Wii U's introduction that year, and had similar losses in the following years due to the console's poor uptake. The New York Times attributed Nintendo lowering financial forecasts in 2014 to weak hardware sales against mobile gaming. Previously, the company had been hesitant about this market, with then-president Satoru Iwata considering that they would "cease to be Nintendo" and lose their identity if they attempted to enter it. About three years prior to the Switch's announcement, Iwata, Tatsumi Kimishima, Genyo Takeda, and Shigeru Miyamoto crafted a strategy for revitalizing Nintendo's business model, which included approaching the mobile market, creating new hardware, and "maximizing [their] intellectual property". Prior to his death, Iwata was able to secure a business alliance with Japanese mobile provider DeNA to develop mobile titles based on Nintendo's first-party franchises, believing this approach would not compromise their integrity. Following Iwata's death in July 2015, Kimishima was named as president of Nintendo, while Miyamoto was promoted to the title of "Creative Fellow".
The Switch was officially released in Japan on March 3, 2017. The design of the Switch was aimed at bridging the polarization of the gaming market at the time, creating a device that could play "leisurely" video games along with games that are aimed to be played "deeply", according to Shinya Takahashi and Yoshiaki Koizumi, general manager and deputy general manager of Nintendo's Entertainment Planning & Development Division (EPD), respectively. This approach would also apply to the cultural lifestyle and gaming differences between Japanese and Western players; Japanese players tend to play on the go and with social groups, while Western players tend to play at home by themselves. The design of the Switch would meet both cultures, and certain games, like 1-2-Switch, could potentially make social gaming more acceptable in Western culture. Two key elements set to address this mixed market were the ability for the unit to play both on a television screen and as a portable, and the use of detachable controllers. In Japan, first-weekend sales exceeded 330,000 units, on par with the PlayStation 4 during its launch period. Media Create estimated that more than 500,000 Switch units were sold in Japan within its first month, beating the PlayStation 4 to this figure.
Console sales in Japan, which had been languishing due to the strength of the mobile game market, grew 14.8% in 2017 due to the release of the Switch, the market's first annual growth in eleven years. Based on its first-year sales, the Switch was considered to be the fastest-selling game console in history in many regions. With 2017 year-end Japanese sales data from Media Create, the Switch became the fastest-selling home console in Japan in first-year sales, with its total of 3.2 million units exceeding the 3.0 million units of the PlayStation 2 during its first year of release, while Famitsu reported that these sales had eclipsed the lifetime sales of the Wii U in the country. By May 2019, the Switch had overtaken the PS4's lifetime sales in Japan.
In 2017, Japanese RPGs gained further commercial success and greater critical acclaim. The year started strong with Gravity Rush 2, followed by Yakuza 0, which some critics consider the best in the Yakuza series; Nioh, which is considered to have one of the eighth generation's best RPG combat systems; and then Nier: Automata, whose gameplay and storytelling are thought to be some of the best in recent years. Persona 5 won the Best Role Playing Game award at The Game Awards 2017. Some Japanese RPGs that were previously considered niche became mainstream million-sellers in 2017, including Persona 5, Nier: Automata, Nioh, and Xenoblade Chronicles 2 on the Nintendo Switch. 2017 was considered a strong year for Japanese RPGs, with other notable releases including Dragon Quest VIII on the Nintendo 3DS, Tales of Berseria, Valkyria Revolution, Ever Oasis, Final Fantasy XII: The Zodiac Age, Ys VIII, Etrian Odyssey V, Dragon Quest Heroes II, The Legend of Heroes: Trails in the Sky the 3rd, Fire Emblem Echoes: Shadows of Valentia, Final Fantasy XIV: Stormblood, and Tokyo Xanadu. In 2018, Monster Hunter: World sold over 10 million units, becoming Capcom's best-selling single software title, and Square Enix's Octopath Traveler sold over 1 million units.
Sony released the PlayStation 5 in 2020 and has emphasized that it wants this to be a soft transition, allowing PlayStation 4 games to be directly backwards compatible on the new system. Sony has stated that the "overwhelming majority" of PlayStation 4 games will play on the PlayStation 5, with many running at higher frame rates and resolutions.
See also
Joypolis
Sega World
Warehouse Kawasaki
References
Sources
External links
Mass media in Japan
Teredo tunneling

In computer networking, Teredo is a transition technology that gives full IPv6 connectivity for IPv6-capable hosts that are on the IPv4 Internet but have no native connection to an IPv6 network. Unlike similar protocols such as 6to4, it can perform its function even from behind network address translation (NAT) devices such as home routers.
Teredo operates using a platform independent tunneling protocol that provides IPv6 (Internet Protocol version 6) connectivity by encapsulating IPv6 datagram packets within IPv4 User Datagram Protocol (UDP) packets. Teredo routes these datagrams on the IPv4 Internet and through NAT devices. Teredo nodes elsewhere on the IPv6 network (called Teredo relays) receive the packets, un-encapsulate them, and pass them on.
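The encapsulation described above amounts to placing a complete IPv6 datagram, header and all, into the payload of an ordinary UDP/IPv4 packet. The Python sketch below illustrates only that framing step, under stated assumptions: the addresses are arbitrary examples, next-header 59 ("no next header") is used to keep the packet self-contained, and a real Teredo client would additionally perform the qualification procedure defined in RFC 4380 before exchanging such packets.

```python
import socket
import struct

def teredo_udp_payload(ipv6_src: str, ipv6_dst: str, inner: bytes) -> bytes:
    """Frame an IPv6 datagram as it would ride inside a UDP/IPv4 packet.

    Builds a minimal 40-byte IPv6 header (version 6, zero traffic class
    and flow label, next header 59, hop limit 64) followed by the inner
    payload; the resulting byte string is exactly what a Teredo node
    would hand to sendto() on a UDP socket.
    """
    src = socket.inet_pton(socket.AF_INET6, ipv6_src)
    dst = socket.inet_pton(socket.AF_INET6, ipv6_dst)
    # !IHBB = version/traffic-class/flow-label word, payload length,
    # next header, hop limit (8 bytes), then 16-byte src and dst.
    header = struct.pack("!IHBB", 6 << 28, len(inner), 59, 64) + src + dst
    return header + inner

payload = teredo_udp_payload("2001::1", "2001::2", b"")
assert len(payload) == 40 and payload[0] >> 4 == 6
```

A client would then transmit this byte string with a plain `sock.sendto(payload, (server_ipv4, 3544))`, requiring nothing from the intervening IPv4 network beyond ordinary UDP forwarding.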
Teredo is a temporary measure. In the long term, all IPv6 hosts should use native IPv6 connectivity. Teredo should be disabled when native IPv6 connectivity becomes available. Christian Huitema developed Teredo at Microsoft, and the IETF standardized it as RFC 4380. The Teredo server listens on UDP port 3544.
Purpose
6to4, the most common IPv6-over-IPv4 tunneling protocol, requires that the tunnel endpoint have a public IPv4 address. However, many hosts currently attach to the IPv4 Internet through one or several NAT devices, usually because of IPv4 address shortage. In such a situation, the only available public IPv4 address is assigned to the NAT device, and the 6to4 tunnel endpoint must be implemented on the NAT device itself. Many NAT devices currently deployed, however, cannot be upgraded to implement 6to4, for technical or economic reasons.
Teredo alleviates this problem by encapsulating IPv6 packets within UDP/IPv4 datagrams, which most NATs can forward properly. Thus, IPv6-aware hosts behind NATs can serve as Teredo tunnel endpoints even when they don't have a dedicated public IPv4 address. In effect, a host that implements Teredo can gain IPv6 connectivity with no cooperation from the local network environment.
In the long term, all IPv6 hosts should use native IPv6 connectivity. The temporary Teredo protocol includes provisions for a sunset procedure: Teredo implementation should provide a way to stop using Teredo connectivity when IPv6 matures and connectivity becomes available using a less brittle mechanism. As of IETF89, Microsoft plans to deactivate their Teredo servers for Windows clients in the first half of 2014 (exact date TBD), and encourage the deactivation of publicly operated Teredo relays.
Overview
The Teredo protocol performs several functions:
Diagnoses UDP over IPv4 (UDPv4) connectivity and discovers the kind of NAT present (using a simplified replacement to the STUN protocol)
Assigns a globally routable unique IPv6 address to each host using it
Encapsulates IPv6 packets inside UDPv4 datagrams for transmission over an IPv4 network (this includes NAT traversal)
Routes traffic between Teredo hosts and native (or otherwise non-Teredo) IPv6 hosts
Node types
Teredo defines several different kinds of nodes:
Teredo client A host that has IPv4 connectivity to the Internet from behind a NAT and uses the Teredo tunneling protocol to access the IPv6 Internet. Teredo clients are assigned an IPv6 address that starts with the Teredo prefix (2001::/32).
Teredo server A well-known host used for initial configuration of a Teredo tunnel. A Teredo server never forwards any traffic for the client (apart from IPv6 pings), and has therefore modest bandwidth requirements (a few hundred bits per second per client at most), which means a single server can support many clients. Additionally, a Teredo server can be implemented in a fully stateless manner, thus using the same amount of memory regardless of how many clients it supports.
Teredo relay The remote end of a Teredo tunnel. A Teredo relay must forward all of the data on behalf of the Teredo clients it serves, with the exception of direct Teredo client to Teredo client exchanges. Therefore, a relay requires a lot of bandwidth and can only support a limited number of simultaneous clients. Each Teredo relay serves a range of IPv6 hosts (e.g. a single campus or company, an ISP or a whole operator network, or even the whole IPv6 Internet); it forwards traffic between any Teredo clients and any host within said range.
Teredo host-specific relay A Teredo relay whose range of service is limited to the very host it runs on. As such, it has no particular bandwidth or routing requirements. A computer with a host-specific relay uses Teredo to communicate with Teredo clients, but sticks to its main IPv6 connectivity provider to reach the rest of the IPv6 Internet.
IPv6 addressing
Each Teredo client is assigned a public IPv6 address, which is constructed as follows (the higher order bit is numbered 0):
Bits 0 to 31 hold the Teredo prefix (2001::/32).
Bits 32 to 63 embed the primary IPv4 address of the Teredo server that is used.
Bits 64 to 79 hold some flags and other bits; the format for these 16 bits, MSB first, is "CRAAAAUG AAAAAAAA". The "C" bit was set to 1 if the Teredo client is located behind a cone NAT, 0 otherwise, but RFC 5991 changed it to always be 0 to avoid revealing this fact to strangers. The "R" bit is currently unassigned and should be sent as 0. The "U" and "G" bits are set to 0 to emulate the "Universal/local" and "Group/individual" bits in MAC addresses. The 12 "A" bits were 0 in the original RFC 4380 specification, but were changed to random bits chosen by the Teredo client in RFC 5991 to provide the Teredo node with additional protection against IPv6-based scanning attacks.
Bits 80 to 95 contain the obfuscated UDP port number. This is the port number that the NAT maps to the Teredo client, with all bits inverted.
Bits 96 to 127 contain the obfuscated IPv4 address. This is the public IPv4 address of the NAT with all bits inverted.
As an example, the IPv6 address 2001:0000:4136:e378:8000:63bf:3fff:fdd2 refers to a Teredo client that:
Uses Teredo server at address 65.54.227.120 (4136e378 in hexadecimal)
Is behind a cone NAT, and the client is not fully compliant with RFC 5991 (bit 64 is set)
Is probably (99.98%) not compliant with RFC 5991 (the 12 random bits are all 0, which happens less than 0.025% of the time)
Uses mapped UDP port 40000 on its NAT (the bitwise NOT of hexadecimal 63bf is 9c40, which is 40000 in decimal)
Has a NAT public IPv4 address of 192.0.2.45 (the bitwise NOT of hexadecimal 3ffffdd2 is c000022d, which is to say, 192.0.2.45)
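The field layout above can be unpacked mechanically. The following Python sketch (the function name is illustrative; it uses only the standard ipaddress module) decodes the example address:

```python
import ipaddress

def parse_teredo(addr: str):
    """Decode the server IPv4 address, flag bits, NAT-mapped UDP port, and
    client public IPv4 address embedded in a Teredo IPv6 address."""
    n = int(ipaddress.IPv6Address(addr))
    if (n >> 96) != 0x20010000:                      # bits 0-31: 2001::/32 prefix
        raise ValueError("not a Teredo address")
    server = ipaddress.IPv4Address((n >> 64) & 0xFFFFFFFF)         # bits 32-63
    flags = (n >> 48) & 0xFFFF                                     # bits 64-79
    port = ((n >> 32) & 0xFFFF) ^ 0xFFFF                           # bits 80-95, inverted
    client = ipaddress.IPv4Address((n & 0xFFFFFFFF) ^ 0xFFFFFFFF)  # bits 96-127, inverted
    return str(server), flags, port, str(client)

print(parse_teredo("2001:0000:4136:e378:8000:63bf:3fff:fdd2"))
# → ('65.54.227.120', 32768, 40000, '192.0.2.45')
```

Python's ipaddress.IPv6Address also exposes a .teredo property that returns the embedded (server, client) address pair directly.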
Servers
Teredo clients use Teredo servers to autodetect the kind of NAT they are behind (if any), through a simplified STUN-like qualification procedure. Teredo clients also maintain a binding on their NAT toward their Teredo server by sending a UDP packet at regular intervals. That ensures that the server can always contact any of its clients—which is required for NAT hole punching to work properly.
If a Teredo relay (or another Teredo client) must send an IPv6 packet to a Teredo client, it first sends a Teredo bubble packet to the client's Teredo server, whose IP address it infers from the Teredo IPv6 address of the Teredo client. The server then forwards the bubble to the client, so the Teredo client software knows it must do hole punching toward the Teredo relay.
Teredo servers can also transmit ICMPv6 packets from Teredo clients toward the IPv6 Internet. In practice, when a Teredo client wants to contact a native IPv6 node, it must locate the corresponding Teredo relay, i.e., learn which public IPv4 address and UDP port number to send encapsulated IPv6 packets to. To do that, the client crafts an ICMPv6 Echo Request (ping) toward the IPv6 node, and sends it through its configured Teredo server. The Teredo server de-capsulates the ping onto the IPv6 Internet, so that the ping should eventually reach the IPv6 node. The IPv6 node should then reply with an ICMPv6 Echo Reply, as mandated by RFC 2460. This reply packet is routed to the closest Teredo relay, which then tries to contact the Teredo client.
Maintaining a Teredo server requires little bandwidth, because they are not involved in actual transmission and reception of IPv6 traffic packets. Also, it does not involve any access to the Internet routing protocols. The only requirements for a Teredo server are:
The ability to emit ICMPv6 packets with a source address belonging to the Teredo prefix
Two distinct public IPv4 addresses. Though not written down in the official specification, Microsoft Windows clients expect both addresses to be consecutive — the second IPv4 address is for NAT detection
Public Teredo servers:
teredo.remlab.net / teredo-debian.remlab.net (Germany)
teredo.trex.fi (Finland)
Relays
A Teredo relay potentially requires much network bandwidth. Also, it must export (advertise) a route toward the Teredo IPv6 prefix (2001::/32) to other IPv6 hosts. That way, the Teredo relay receives traffic from the IPv6 hosts addressed to any Teredo client, and forwards it over UDP/IPv4. Symmetrically, it receives packets from Teredo clients addressed to native IPv6 hosts over UDP/IPv4 and injects those into the native IPv6 network.
In practice, network administrators can set up a private Teredo relay for their company or campus. This provides a short path between their IPv6 network and any Teredo client. However, setting up a Teredo relay on a scale beyond that of a single network requires the ability to export BGP IPv6 routes to the other autonomous systems (AS's).
Unlike 6to4, where the two halves of a connection can use different relays, traffic between a native IPv6 host and a Teredo client uses the same Teredo relay, namely the one closest to the native IPv6 host network-wise. The Teredo client cannot localize a relay by itself (since it cannot send IPv6 packets by itself). If it needs to initiate a connection to a native IPv6 host, it sends the first packet through the Teredo server, which sends a packet to the native IPv6 host using the client's Teredo IPv6 address. The native IPv6 host then responds as usual to the client's Teredo IPv6 address, which eventually causes the packet to find a Teredo relay, which initiates a connection to the client (possibly using the Teredo server for NAT piercing). The Teredo Client and native IPv6 host then use the relay for communication as long as they need to. This design means that neither the Teredo server nor client needs to know the IPv4 address of any Teredo relays. They find a suitable one automatically via the global IPv6 routing table, since all Teredo relays advertise the network 2001::/32.
On March 30, 2006, Italian ISP ITGate was the first AS to start advertising a route toward 2001::/32 on the IPv6 Internet, so that RFC 4380-compliant Teredo implementations would be fully usable. As of 16 February 2007, it is no longer functional.
In Q1 2009, IPv6 backbone Hurricane Electric enabled 14 Teredo relays in an anycast implementation and advertising 2001::/32 globally. The relays were located in Seattle, Fremont, Los Angeles, Chicago, Dallas, Toronto, New York, Ashburn, Miami, London, Paris, Amsterdam, Frankfurt, and Hong Kong.
It is expected that large network operators will maintain Teredo relays. As with 6to4, it remains unclear how well the Teredo service will scale up if a large proportion of Internet hosts start using IPv6 through Teredo in addition to IPv4. While Microsoft has operated a set of Teredo servers since they released the first Teredo pseudo-tunnel for Windows XP, they have never provided a Teredo relay service for the IPv6 Internet as a whole.
Limitations
Teredo is not compatible with all NAT devices. Using the terminology of RFC 3489, it supports full cone, restricted, and port-restricted NAT devices, but does not support symmetric NATs. The original Shipworm specification that led to the final Teredo protocol also supported symmetric NATs, but that support was dropped due to security concerns.
People at the National Chiao Tung University in Taiwan later proposed SymTeredo, which enhanced the original Teredo protocol to support symmetric NATs, and the Microsoft and Miredo implementations implement certain unspecified non-standard extensions to improve support for symmetric NATs. However, connectivity between a Teredo client behind a symmetric NAT, and a Teredo client behind a port-restricted or symmetric NAT remains seemingly impossible.
Indeed, Teredo assumes that when two clients exchange encapsulated IPv6 packets, the mapped/external UDP port numbers used will be the same as those that were used to contact the Teredo server (and building the Teredo IPv6 address). Without this assumption, it would not be possible to establish a direct communication between the two clients, and a costly relay would have to be used to perform triangle routing. A Teredo implementation tries to detect the type of NAT at startup, and will refuse to operate if the NAT appears to be symmetric. (This limitation can sometimes be worked around by manually configuring a port forwarding rule on the NAT box, which requires administrative access to the device).
Teredo can only provide a single IPv6 address per tunnel endpoint. As such, it is not possible to use a single Teredo tunnel to connect multiple hosts, unlike 6to4 and some point-to-point IPv6 tunnels. The bandwidth available to all Teredo clients toward the IPv6 Internet is limited by the availability of Teredo relays, which are no different than 6to4 relays in that respect.
Alternatives
6to4 requires a public IPv4 address, but provides a large 48-bit IPv6 prefix for each tunnel endpoint, and has a lower encapsulation overhead. Point-to-point tunnels can be more reliable and are more accountable than Teredo, and typically provide permanent IPv6 addresses that do not depend on the IPv4 address of the tunnel endpoint. Some point-to-point tunnel brokers also support UDP encapsulation to traverse NATs (for instance, the AYIYA protocol can do this). On the other hand, point-to-point tunnels normally require registration. Automated tools (for instance AICCU) make it easy to use point-to-point tunnels.
Security considerations
Exposure
Teredo increases the attack surface by assigning globally routable IPv6 addresses to network hosts behind NAT devices, which would otherwise be unreachable from the Internet. By doing so, Teredo potentially exposes any IPv6-enabled application with an open port to the outside. Teredo tunnel encapsulation can also cause the contents of the IPv6 data traffic to become invisible to packet inspection software, facilitating the spread of malware. Finally, Teredo exposes the IPv6 stack and the tunneling software to attacks should they have any remotely exploitable vulnerability.
In order to reduce the attack surface, the Microsoft IPv6 stack has a "protection level" socket option. This allows applications to specify from which sources they are willing to accept IPv6 traffic: from the Teredo tunnel, from anywhere except Teredo (the default), or only from the local intranet.
The Teredo protocol also encapsulates detailed information about the tunnel's endpoint in its data packets. This information can help potential attackers by increasing the feasibility of an attack, and/or by reducing the effort required.
Firewalling, filtering, and blocking
For a Teredo pseudo-tunnel to operate properly, outgoing UDP packets to port 3544 must be unfiltered. Moreover, replies to these packets (i.e., "solicited traffic") must also be unfiltered. This corresponds to the typical setup of a NAT and its stateful firewall functionality. Teredo tunneling software reports a fatal error and stops if outgoing IPv4 UDP traffic is blocked.
DoS via routing loops
In 2010, new methods to create denial of service attacks via routing loops that use Teredo tunnels were uncovered. They are relatively easy to prevent.
Default use in MS-Windows
Windows 10 version 1803 and later disable Teredo by default. If needed, this transitional technology can be enabled via a CLI command or Group Policy.
Implementations
Several implementations of Teredo are currently available:
Windows XP SP2 includes a client and host-specific relay (also in the Advanced Networking Pack for Service Pack 1).
Windows Server 2003 has a relay and server provided under the Microsoft Beta program.
Windows Vista and Windows 7 have built-in support for Teredo with an unspecified extension for symmetric NAT traversal. However, if only a link-local and Teredo address are present, these operating systems don't try to resolve IPv6 DNS AAAA records if a DNS A record is present, in which case they use IPv4. Therefore, only literal IPv6 URLs typically use Teredo. This behavior can be modified in the registry.
Windows 10 version 1803 and later disable Teredo by default. If needed, this transitional technology can be enabled via a CLI command or Group Policy.
Miredo is a client, relay, and server for Linux, *BSD, and Mac OS X.
ng_teredo is a relay and server based on netgraph for FreeBSD from the LIP6 University and 6WIND.
NICI-Teredo is a relay for the Linux kernel and a userland Teredo server, developed at the National Chiao Tung University.
Choice of the name
The initial nickname of the Teredo tunneling protocol was Shipworm. The idea was that the protocol would pierce through NAT devices, much as the shipworm (a kind of marine wood-boring clam) bores tunnels through wood. Shipworms have been responsible for the loss of many wooden hulls. Christian Huitema, in the original draft, noted that the shipworm "only survives in relatively clean and unpolluted water; its recent comeback in several Northern American harbors is a testimony to their newly retrieved cleanliness. The Shipworm service should, in turn, contributes to a newly retrieved transparency of the Internet."
To avoid confusion with computer worms, Huitema later changed the protocol's name from Shipworm to Teredo, after the genus name of the shipworm Teredo navalis.
References
External links
Teredo Overview on Microsoft TechNet
Current anycast Teredo BGP routes
Teredo: Tunneling IPv6 over UDP through Network Address Translations (NATs). RFC 4380, C. Huitema. February 2006.
JavaScript Teredo-IP address calculator
Internet architecture
IPv6 transition technologies
Tunneling protocols |
18839442 | https://en.wikipedia.org/wiki/Namco%20System%20N2 | Namco System N2 | The Namco System N2 arcade platform runs on an nForce2-based motherboard that NVIDIA developed. It is based on a NVIDIA GeForce graphics card, using the OpenGL API.
Both the Namco System N2 and Namco System ES1 use a Debian-based Linux operating system.
The Namco System ES2 PLUS and Namco System ES3 run Windows Embedded 7 as their operating system. Both run in arcade game cabinets designed by Bandai Namco Games.
The Namco System BNA1 is a relatively new arcade board that runs Windows 10 IoT. A less powerful version of System BNA1, known as System BNA1 LITE has also been created for less demanding games.
Development
Because the N2, ES1(A2), ES2 Plus and ES3 are based on PC architecture, development for them and porting from them are relatively easy and inexpensive.
Specifications
Namco System N2
Motherboard: MSI K7N2GM-IL (NVIDIA nForce2 Chipset, Custom BIOS) (Japan/Asia) / ASUS M2N-MX (Export, Standard BIOS)
CPU: AMD K7 Mobile Athlon XP 2800+ at 2.13 GHz (Socket A/462) (Japan/Asia) / AMD Athlon 64 3500+ at 2.2 GHz (Socket 939) (Export)
RAM: 1×1GB / 2×1GB DDR 400 MHz 3200 MB/s
GPU: NVIDIA GeForce 4 Series / GeForce 7600 GS AGP with 256/512MB GDDR2 memory (GeForce 7800 GS AGP for some Japan-region Maximum Tune N2 units)
Output: 1 DVI port, 1 VGA port, 1 S-Video port
Storage: Seagate 80GB (Japan/Asia) / WD 80GB PATA IDE HDD (Export)
Operating System: Linux 32-bit (Debian 2.6 based)
Sound: stereo RCA output from front panel audio with external AMP PCB (Audio duplicated to rear speakers by amp)
Protection: HASP HL Max/RTC USB dongle (v0.06)
Namco System ES1
Motherboard: Supermicro C2SBM-Q (Intel Q35 + ICH9DO Chipset)
CPU: Intel Core 2 Duo E8400 at 3.00 GHz
RAM: 2×512 MB DDR2 800 MHz 1.8V
GPU: NVIDIA GeForce 9600 GT PCIe 2.0 x16 with 512 MB GDDR3 memory
Output: 2 DVI ports / 1 DVI-I port, 1 VGA port, 1 HDMI port
Storage: Seagate Barracuda 7200.12 160 GB (ST3160318AS) / Hitachi Deskstar 7K1000.C 160 GB (HDS721016CLA382) SATA HDD
Operating System: arcadelinux 32-bit (Debian 4.0 based)
Sound: 5.1 channel HD Audio
Protection: TPM 1.2, HDD copy protection, HASP HL Max USB dongle
Namco System ES2 PLUS
Operating System: Windows Embedded 7
Namco System ES3
CPU: Intel Core i5-3550S at 3.00 GHz
RAM: 8 GB DDR3 2400 MHz (Revision B) / 16 GB DDR3 2400 MHz (Revision X)
GPU: NVIDIA GeForce GTX 650 Ti (Revision B) / GTX 680 (Revision X) PCIe 3.0 x16
Output: 1 DVI-I port, 1 DVI-D port, 1 HDMI port
Storage: HGST 250 GB 5400 RPM (HTS545025A7E680) SATA III HDD
Operating System: Windows Embedded Standard 7
Sound: Integrated HD Audio
Protection: HASP HL Max USB dongle, Windows BitLocker
Namco System ES4
Operating System: Windows Embedded 7
Namco System BNA1
CPU: Intel Core i5-6500 at 3.2 GHz
GPU: NVIDIA GeForce GTX 1050Ti PCIe 3.0 x16
Output: 1 Dual-Link DVI-I port, 1 DisplayPort 1.2 port, 1 HDMI 2.0b port
Storage: innodisk 3ME4 SATA SSD 256 GB
Operating System: Windows 10 IoT Enterprise 2016 LTSB
Sound: Integrated HD Audio
Protection: Thales Group Sentinel HL Max USB dongle, Windows BitLocker
List of System N2 games
Animal Kaiser: The King of Animal (2008)
Counter-Strike Neo (2005)
Mobile Suit Gundam: Bonds of the Battlefield (2007)
Mobile Suit Gundam: Bonds of the Battlefield Rev.2.00 (2008)
MotoGP DX (2007)
New Space Order (2007)
Wangan Midnight: Maximum Tune 3 (2007)
Wangan Midnight: Maximum Tune 3DX (2008)
Wangan Midnight: Maximum Tune 3DX Plus (2010)
List of System ES1 games
Dead Heat Riders (2013)
Dead Heat Street Racing / Maximum Heat (2011)
Nirin (2009)
Sailor Zombie AKB48 Arcade Edition (2014)
Tank! Tank! Tank! (2009)
Wangan Midnight Maximum Tune 4 (2011)
Wangan Midnight Maximum Tune 5 (North American version) (2017)
List of System ES1(A2) games
Mobile Suit Gundam: Bonds of the Battlefield Rev.3.00 (2011)
Wangan Midnight Maximum Tune 4 (Asia(Others)/Indonesia version) (2012)
Wangan Midnight Maximum Tune 5 (Asia(Others)/Indonesia version) (2015)
Wangan Midnight Maximum Tune 5 DX (Asia(Others)/Indonesia version) (2016)
Wangan Midnight Maximum Tune 5 DX Plus (Asia(Others)/Indonesia version) (2017)
List of System ES2 Plus games
Aikatsu! (2012)
Hyakujuu Taisen Great Animal Kaiser (2012)
List of System ES3 games
Lost Land Adventure (2014)
Mach Storm (2013)
Mario Kart Arcade GP DX (2013)
Mobile Suit Gundam: Bonds of the Battlefield Rev.4.00 (2016)
Mobile Suit Gundam U.C: Card Builder (2016)
Pokken Tournament (2015)
Star Wars Battle Pod (2015)
Synchronica (2015)
Tekken 7 (2015)
Tekken 7 Fated Retribution (2016)
Tekken 7 Fated Retribution ROUND 2 (2019)
Time Crisis 5 (2015)
Wangan Midnight Maximum Tune 5 (2014)
Wangan Midnight Maximum Tune 5 DX (2015)
Wangan Midnight Maximum Tune 5 DX Plus (2016)
Wangan Midnight Maximum Tune 6 (2018)
Wangan Midnight Maximum Tune 6R (2020)
List of System ES4 games
Point Blank X (2015)
List of System BNA1 games
Mobile Suit Gundam: Bonds of the Battlefield II (2020)
Mobile Suit Gundam Extreme Vs. 2 (2018)
JoJo's Bizarre Adventure: Last Survivor (2019)
Poker Station (2020)
Sword Art Online Arcade (2019)
Wangan Midnight Maximum Tune 6 (Export Version) (2019)
List of System BNA1 LITE games
Taiko no Tatsujin (2020)
Similar Hardware
The Sega Lindbergh, Taito's Taito Type X and Taito Type X+ operate in a similar way to the N2 platform, except that they use other operating systems.
References
Namco arcade system boards |
3115123 | https://en.wikipedia.org/wiki/Email%20tracking | Email tracking | Email tracking is a method for monitoring the delivery of email messages to the intended recipient. Most tracking technologies use some form of digitally time-stamped record to reveal the exact time and date that an email was received or opened, as well as the IP address of the recipient.
Email tracking is useful when the sender wants to know whether the intended recipient actually received the email or clicked the links. However, due to the nature of the technology, email tracking cannot be considered an absolutely accurate indicator that a message was opened or read by the recipient.
Most email marketing software provides tracking features, sometimes in aggregate (e.g., click-through rate), and sometimes on an individual basis.
Read-receipts
Some email applications, such as Microsoft Office Outlook and Mozilla Thunderbird, employ a read-receipt tracking mechanism. The sender selects the receipt request option prior to sending the message, and then upon sending, each recipient has the option of notifying the sender that the message was received or read by the recipient.
However, requesting a receipt does not guarantee that one will be received, for several reasons. Not all email applications or services support sending read receipts, and users can usually disable the functionality if they so wish. Those that do support it are not necessarily compatible with or capable of recognizing requests from a different email service or application. Generally, read receipts are only useful within an organization where all mail users are using the same email service and application.
Depending on the recipient's mail client and settings, they may be forced to click a notification button before they can move on with their work. Even though it is an opt-in process, a recipient might consider it inconvenient, discourteous, or invasive.
Read receipts are sent back to the sender's "inbox" as email messages, but the location may be changed depending on the software used and its configuration. Additional technical information, such as who it is from, the email software they use, the IP addresses of the sender, and their email server is commonly available inside the headers of the read receipt.
The technical term for these is "MDN - Message Disposition Notifications", and they are requested by inserting one or more of the following lines into the email headers: "X-Confirm-Reading-To:"; "Disposition-Notification-To:"; or "Return-Receipt-To:"
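As a sketch of how a sender requests an MDN, one of the headers above can be added when composing a message with Python's standard email library (the addresses here are placeholders):

```python
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "sender@example.com"
msg["To"] = "recipient@example.com"
msg["Subject"] = "Contract draft"
# Ask the recipient's mail client to send a read receipt (MDN) back to us;
# the recipient's software may ignore the request or prompt before honoring it.
msg["Disposition-Notification-To"] = "sender@example.com"
msg.set_content("Please confirm you have received this.")
```

Whether a receipt actually comes back depends entirely on the recipient's client and settings, as described above.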
Several email tracking services also feature real-time notifications, producing an on-screen pop-up whenever the sender's email has been opened.
Return-receipts
Another kind of receipt can be requested, which is called a DSN (delivery status notification), which is a request to the recipient's email server to send the sender a notification about the delivery of an email that the sender has just sent. The notification takes the form of an email, and will indicate whether the delivery succeeded, failed, or got delayed, and it will warn the sender if any email server involved was unable to give the sender a receipt. DSNs are requested at the time of sending by the sending application or server software (not inside the email or headers itself), and the sender can request to "Never" get any, or to "Always" get one, or (which most software does by default) only to get DSN if delivery fails (i.e.: not for success, delay, or relay DSNs). These failure DSNs are normally referred to as a "Bounce". Additionally, the sender can specify in their DSN request whether the sender wants their receipt to contain a full copy of their original email, or just a summary of what happened. In the SMTP protocol, DSNs are requested at the end of the RCPT TO: command (e.g.: RCPT TO:<> NOTIFY=SUCCESS, DELAY) and the MAIL FROM: command (e.g.: MAIL FROM:<> RET=HDRS).
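The SMTP syntax quoted above can be illustrated with two small helpers that build the command lines (the helper functions are illustrative, not part of any standard library):

```python
def mail_with_dsn(sender: str, ret: str = "HDRS") -> str:
    """MAIL FROM line asking that DSNs contain only the original headers
    (RET=HDRS) or the full original message (RET=FULL)."""
    return f"MAIL FROM:<{sender}> RET={ret}"

def rcpt_with_dsn(recipient: str, notify=("SUCCESS", "DELAY")) -> str:
    """RCPT TO line requesting delivery status notifications for the listed
    events (SUCCESS, FAILURE, DELAY, or NEVER)."""
    return f"RCPT TO:<{recipient}> NOTIFY={','.join(notify)}"

print(mail_with_dsn("alice@example.com"))
# → MAIL FROM:<alice@example.com> RET=HDRS
print(rcpt_with_dsn("bob@example.net"))
# → RCPT TO:<bob@example.net> NOTIFY=SUCCESS,DELAY
```

In practice, Python's smtplib passes such extension parameters through the mail_options and rcpt_options arguments of sendmail(), rather than requiring the raw command lines to be built by hand.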
Email marketing and tracking
Some email marketing tools include tracking as a feature. Such email tracking is usually accomplished using standard web tracking devices known as cookies and web beacons. When an email message is sent, if it is a graphical HTML message (not a plain text message) the email marketing system may embed a tiny, invisible tracking image (a single-pixel gif, sometimes called a web beacon) within the content of the message. When the recipient opens the message, the tracking image is referenced. When they click a link or open an attachment, another tracking code is activated. In each case a separate tracking event is recorded by the system. These response events accumulate over time in a database, enabling the email marketing software to report metrics such as open-rate and click-through rate. Email marketing users can view reports on both aggregate response statistics and individual response over time.
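A minimal sketch of the web beacon technique described above (the tracking host and query parameter names are invented for illustration):

```python
from urllib.parse import urlencode

TRACKER = "https://track.example.com/open.gif"  # hypothetical tracking endpoint

def beacon_tag(campaign_id: str, recipient: str) -> str:
    """Return an invisible 1x1 image tag keyed to a single recipient, so the
    tracking server can log each fetch of the image as an 'open' event."""
    query = urlencode({"c": campaign_id, "r": recipient})
    return f'<img src="{TRACKER}?{query}" width="1" height="1" alt="">'

# The tag is embedded in the HTML body of the outgoing message:
html_body = f"<html><body><p>Spring sale!</p>{beacon_tag('s42', 'bob@example.net')}</body></html>"
```

Because the image URL is unique per recipient, the server-side request log doubles as the per-recipient open database that aggregate metrics are computed from.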
Such email tracking services are used by many companies, but are also available for individuals as subscription services, either web-based or integrated into email clients such as Microsoft Outlook or Gmail.
Email tracking services may also offer collations of tracked data, allowing users to analyze the statistics of their email performance.
Privacy issues
Email tracking is used by individuals including email marketers, spammers and phishers to verify that emails are actually read by recipients, that email addresses are valid, and that the content of emails has made it past spam filters. Such tracking can also reveal whether emails are forwarded, but the recipients of forwarded emails are usually not identified. About 24.7% of all emails track their recipients, but no more than half of users are aware of being tracked. When used maliciously, it can be used to collect confidential information about businesses and individuals and to create more effective phishing schemes.
Common data that can be accessed from email tracking includes, but is not limited to, the IP address, client device properties (desktop or mobile, browser type and version), and a date/time stamp of when the email was read. The tracking mechanisms employed are typically first-party cookies and web beacons.
HP email tracking scandal
In the U.S. Congressional inquiry investigating the HP pretexting scandal, it was revealed that HP security used an email tracking service called ReadNotify.com to investigate boardroom leaks. The California attorney general's office has said that this practice was not part of the pretexting charges. HP said they consider email tracking to be legitimate and will continue using it.
See also
Email privacy
Spy pixel
Document automation in supply chain management & logistics
Bounce message
References
Email
Tracking |
41255133 | https://en.wikipedia.org/wiki/Peter%20Van%20Zandt%20Lane | Peter Van Zandt Lane | Peter Van Zandt Lane (born Port Jefferson, New York on May 13, 1985) is an American composer of acoustic and electroacoustic music.
Biography
Peter Van Zandt Lane is a recipient of a 2018 Charles Ives Fellowship from the American Academy of Arts and Letters, a 2017 Aaron Copland House Award, and a 2015 Composers Now residency at the Pocantico Center. He was named the 2020 Music Teachers National Association (MTNA) Distinguished Composer of the Year. Other residencies include MacDowell Colony, Yaddo, Virginia Center for the Creative Arts, and the Atlantic Center for the Arts. He has been commissioned twice by the Barlow Endowment for Music Composition (2011 and 2014), as well as by the Atlanta Chamber Players, American Chamber Winds (for a concerto for trombonist Joseph Alessi), the Composers Conference and Chamber Music Center at Wellesley College, the Sydney Conservatorium Wind Ensemble, Juventas New Music Ensemble, Emory Wind Ensemble, and Dinosaur Annex Music Ensemble. His music has been played by International Contemporary Ensemble, New York Virtuoso Singers, the Cleveland Orchestra, Ensemble Signal, Talea Ensemble, Freon Ensemble (Rome), and Triton Brass. His compositions for wind ensemble, namely Hivemind and Astrarium, are widely programmed by college and university wind ensembles in the United States.
Lane holds degrees from Brandeis University (M.A., Ph.D.) and the University of Miami Frost School of Music (B.M.). His composition teachers include Melinda Wagner, David Rakowski, Eric Chasalow, and Lansing McLoskey. He has held positions at Wellesley College, Harvard University, MIT, the University of Florida, and is currently composition faculty at the University of Georgia.
Works
Lane's 2017 concerto for trombone and wind ensemble, Radix Tyrannis, was commissioned by American Chamber Winds for trombonist Joseph Alessi, and was premiered at the 2017 World Association for Symphonic Bands and Ensembles conference in Utrecht, Netherlands.
Peter Van Zandt Lane's 2013 ballet, "HackPolitik", was composed for and premiered by Boston-based Juventas New Music Ensemble and Brooklyn-based contemporary dance company The People Movers. Based on a series of cyber-attacks between 2010 and 2012 linked to the hacker groups Anonymous and LulzSec, the ballet depicts the rise and fall of Topiary (hacktivist) and Sabu (hacktivist) through a combination of "electroacoustic music, modern dance, and video projection" and "examine[s] how the Internet . . . blurs the lines between activism and anarchy." The music and choreography (by People Movers Artistic Director Kate Ladenheim) aim to "translat[e] cyberspace into music and motion." In an interview with the Clyde Fitch Report, Lane cited the wider cultural implications of social networking as a motivation for composing the piece, stating that "whether or not we are engaged in cyber-activism… we are constantly thinking about 'what do I write here? How do I portray myself to the rest of the world?'… We spend an enormous amount of effort into shaping our online personalities." Described as "angular, jarring, and sophisticated . . . very compelling," the piece received positive critical reviews; the Boston Musical Intelligencer stated "Lane's score was friendly to listeners, emotionally and texturally varied . . . Ballet needs live music and this one offered it on the highest level." Noting the poignancy of the premiere, Forbes writer Parmy Olson (whose book We Are Anonymous served as a primary resource for the ballet) noted that "the same day that hacker Jeremy Hammond was sentenced to 10 years in prison for his role in the vigilante attacks of Anonymous, an altogether more artistic outcome for the online network took place. The hour-long premier of HackPolitik . . . reflect[s] the story of the Anonymous . . . and the rise and fall of its hacker splinter group LulzSec."
The ballet was premiered in Boston, and subsequently at Here Space in Manhattan, where it was dubbed a New York Times Critic's Pick.
Selected works
Orchestral and Wind Symphony
Echo Chambers (2019)
Radix Tyrannis (trombone concerto) (2017)
Beacons (2016)
Astrarium (2015)
Hivemind (2014)
Slant Apparatus (2010)
Solo/Chamber Ensemble (with electronics)
/chatter/ (2017)
Persistent Tracings (2017)
/ping/ (2016)
Studies in Momentum (2014)
Anonymous dances (suite from HackPolitik) (2013)
Impulse Control (2012)
String Quartet #1 (2011)
Triptiek (2009)
Manteia (Aeromancer, Hydromancer, Pyromancer, Chronomancer) (2008-2013) for bassoon and electronics
Solo/Chamber Ensemble (without electronics)
Piano Quartet: The Longitude Problem (2018)
Chamber Symphony (2015)
Caecilia's Iris (2012) for organ
Busker Fantasy (2011)
Seven Rants (2011)
Beacons (2010)
Danzas Mecánicas (2010)
Poa Pratensis (2010)
Fugue State (2008)
Piano Trio #1 (Taijitu) (2007)
Pace (2006)
Vocal
To The Sun, To The Risen (2010) (text by Franz Wright)
References
External links
Artist's website
Brandeis University Faculty Profile
"Exploring activism and anarchy through ballet" in the Monadnock Ledger-Transcript
Interview with Peter VZ Lane and Mario Davidovsky on the Composers Conference
Cultivating Culture on "HackPolitik"
SCI Compilation on All Music including "Triptiek" for tenor sax, bassoon, and electronics
1985 births
Living people
American male composers
21st-century American composers
21st-century American male musicians

Ubuntu User

Ubuntu User is a paper magazine that was launched by Linux New Media AG in May 2009.
The publication is aimed at users of the Ubuntu operating system and focuses on reviews, community news, how to articles and troubleshooting tips. It also includes a Discovery Guide aimed at beginners.
Background
Ubuntu User is published quarterly. The paper magazine is supported by a website that includes a selection of articles from the magazine available to the public as PDFs, Ubuntu news and free computer wallpaper downloads.
Issue number one consisted of 100 pages (including covers) and in its North American edition had a cover price of US$15.99 and Cdn$17.99. Each issue also includes an Ubuntu live CD in the form of a DVD that new users can use to try out Ubuntu or to install it.
Linux New Media is headquartered in Munich, Germany and has offices of its US subsidiary, Linux New Media USA, LLC, in Lawrence, Kansas. The company also publishes Linux Magazine, LinuxUser, EasyLinux in German, and Linux Community.
Reception
In announcing the launch of the magazine, the company said:
DistroWatch questioned the wisdom of launching a new paper magazine at this point in history:
See also
Full Circle Magazine
References
External links
Computer magazines published in Germany
Linux magazines
Magazines established in 2009
Magazines published in Munich
Ubuntu
2009 establishments in Germany
Quarterly magazines published in Germany

Ang Cui

Ang Cui is an American cybersecurity researcher and entrepreneur. He is the founder and CEO of Red Balloon Security in New York City, a cybersecurity firm that develops new technologies to defend embedded systems against exploitation.
Career
Cui was formerly a researcher with Columbia University's Intrusion Detection Systems Lab, where he worked while pursuing his Ph.D. in computer science. His doctoral dissertation, entitled “Embedded System Security: A Software-Based Approach,” focused on scientific inquiries concerning the exploitation and defense of embedded systems. Cui received his Ph.D. in 2015, and founded Red Balloon Security to commercialize his firmware defense technology, now known as Symbiote.
Cui has publicly demonstrated security vulnerabilities in widely used commercial and consumer products, including Cisco and Avaya VoIP phones, Cisco routers and HP LaserJet printers. He has presented his research at industry events including Black Hat Briefings, the DEF CON conference, the RSA Conference, the REcon security conference and the Auto-ISAC 2018 Summit. Cui's security research earned him the 2011 Kaspersky Labs American Cup, the 2012 Symantec Research Labs Graduate Fellowship and a 2015 DARPA Riser designation.
In 2017, the United States Department of Homeland Security cited his company with the “Crossing the Valley of Death” distinction for the development of a commercially available cyber defense system for critical infrastructure facilities, which was produced following a 12-month DHS funded pilot study to evaluate cyber sabotage risks to the building systems of a DHS Biosafety Level 3 facility.
Dukedom
In 2020, Cui received the noble title of duke from the Principality of Sealand. Cui's royal title grants him an official territory, or duchy, of one square foot within the micronation, which he has named SPACE. As a Duke of the Principality of Sealand, Cui joins the ranks of notable figures who have also received nobility titles from the micronation, including English cricketer Ben Stokes and musician Ed Sheeran.
Security Research
Symbiote
Cui is best known for his role in the development of Symbiote, a host-based firmware defense technology for embedded devices.
Symbiote is injected into the firmware of a legacy embedded device where it provides intrusion detection functionality. It does so by constantly checking the integrity of static code and data at the firmware level, in order to prevent unauthorized code or commands from executing. Symbiote is operating system agnostic and is compatible with most embedded devices. Red Balloon Security has already released Symbiote for commercial printer brands like HP and other devices.
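Red Balloon has not published Symbiote's internals, but the integrity-checking idea described above (hashing a device's static code and data regions against a known-good baseline and flagging any drift) can be sketched in a few lines of Python. The region layout and names below are hypothetical, purely for illustration:

```python
import hashlib

# Hypothetical static regions of a firmware image: (name, start offset, length).
REGIONS = [("bootloader", 0x0000, 0x1000), ("app_code", 0x1000, 0x3000)]

def region_digests(firmware: bytes) -> dict:
    """Hash each static region; computed once over a known-good image."""
    return {name: hashlib.sha256(firmware[start:start + size]).hexdigest()
            for name, start, size in REGIONS}

def check_integrity(firmware: bytes, baseline: dict) -> list:
    """Return names of regions whose contents no longer match the baseline."""
    current = region_digests(firmware)
    return [name for name in baseline if current[name] != baseline[name]]

good = bytes(0x4000)                  # stand-in for a known-good image (all zeros)
baseline = region_digests(good)
tampered = bytearray(good)
tampered[0x1200] ^= 0xFF              # simulate injected code inside app_code
print(check_integrity(bytes(tampered), baseline))  # ['app_code']
```

A real defense of this kind would run inside the device's firmware itself and check continuously at run time, as the article describes, rather than over a static image in memory.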
On June 21, 2017, Red Balloon Security announced the launch of Symbiote for Automotive Defense, an automotive version of the standard Symbiote technology, at the Escar USA Conference in Detroit.
In 2016, Popular Science named Symbiote one of the “9 Most Important Security Innovations of the Year.”
HP LaserJet Printers
In 2011, Cui was part of a research effort at Columbia University, directed by Professor Salvatore Stolfo, to examine security vulnerabilities in HP LaserJet printers. The researchers announced significant security flaws in these devices which could allow for a range of remote attacks, including triggering a fire hazard by forcing the printer's fuser to continually heat up.
HP released a firmware update soon after these findings were released. However, the team claimed they found 201 vulnerable HP LaserJet printers in the U.S. Department of Defense's network and two at HP's headquarters months after the security patch was released. In 2015, HP licensed Cui's Symbiote technology to use as a firmware defense against cyber attacks for its LaserJet Enterprise printers and multifunction printers.
Cisco IP Phones
At the 29th Chaos Communication Congress in December 2012, Cui and Stolfo presented the findings of their DARPA-funded research study, which exposed a vulnerability in Cisco IP phones (Cisco Unified IP Phone 7900 series) that could allow an attacker to turn them into bugging devices. The exploit gained root access to the device's firmware, which could enable the interception of phone calls. It would also allow an attacker to remotely activate the phone's microphone in order to eavesdrop on nearby conversations.
Funtenna
At the 2015 Black Hat Briefings cybersecurity conference, Cui unveiled a firmware exploit called “Funtenna” which manipulates the electronic processes within common devices like printers, phones, and washing machines in order to create radio signals which could secretly transmit data outside of a secure facility. The attack could even work with devices within an air-gapped system.
News outlets such as Ars Technica and Motherboard noted Funtenna's potential for turning infected devices into covert spying tools.
Monitor Darkly
At the DEF CON 24 security conference in 2016, Cui, along with his principal scientist Jatin Kataria and security researcher Francois Charbonneau, demonstrated previously unknown vulnerabilities in the firmware of widely used computer monitors, which an attacker could exploit to both spy on the user's screen activity and to manipulate what the user sees and engages with on the screen.
Called “Monitor Darkly,” the firmware vulnerability was reported to affect Dell, HP, Samsung and Acer computer monitors.
The vulnerability was specific to the monitors’ on-screen-display (OSD) controllers, which are used to control and adjust viewing options on the screen, such as brightness, contrast or horizontal/vertical positioning. However, as Cui, Kataria and Charbonneau noted in their talk abstract for the 2016 REcon security conference, with the Monitor Darkly exploit, the OSD can also be used to “read the content of the screen, change arbitrary pixel values, and execute arbitrary code supplied through numerous control channels.”
The security news site CSO Online said about the vulnerability, “By exploiting a hacked monitor, they could manipulate the pixels and add a secure-lock icon by a URL. They could make a $0 PayPal account balance appear to be a $1 billion balance. They could change ‘the status-alert light on a power plant's control interface from green to red.’”
The exploit was later used in a Season 3 episode of the Mr. Robot show, in which the FBI uses it to take screenshots of Elliot Alderson’s computer.
BadFET
At the 2017 REcon security conference, Cui and security researcher Rick Housley demonstrated a new method for hacking processors through the use of an electromagnetic pulse, or EMP.
Known as electromagnetic fault injection (EMFI), this class of attacks has been investigated before, but Cui and Housley’s new technique, known as “BadFET," is adapted to exploit modern computers and embedded devices, by impacting multiple components within these devices at the same time. By using a 300 volt EMP pulse from 3 millimeters away, the BadFET attack bypasses the Secure Boot protection that keeps processors from running untrusted code.
Cui and Housley also introduced an open source EMFI platform that makes BadFET available to other security researchers, for further analysis, testing and development.
Thrangrycat
On May 13, 2019, Cui and his research team (composed of Jatin Kataria, Richard Housley and James Chambers) jointly announced with Cisco a critical vulnerability in Cisco's secure boot process identified as CVE-2019-1649, and referred to as “Thrangrycat” by Red Balloon Security.
The vulnerability affects a key hardware security component developed by Cisco known as the Trust Anchor module (TAm). The vulnerability is considered significant, as TAm underpins the secure boot process in numerous Cisco devices, including routers and switches. As WIRED Magazine explained in its reporting on the Thrangrycat vulnerability: "Known as the Trust Anchor, this Cisco security feature has been implemented in almost all of the company’s enterprise devices since 2013. The fact that the researchers have demonstrated a way to bypass it in one device indicates that it may be possible, with device-specific modifications, to defeat the Trust Anchor on hundreds of millions of Cisco units around the world. That includes everything from enterprise routers to network switches to firewalls.”
Cisco describes the TAm as a “proprietary, tamper-resistant chip” that is “found in many Cisco products” and “helps verify that Cisco hardware is authentic.”
The vulnerability could enable an attacker to modify the firmware of this module to gain persistent access on a network and carry out many different types of malicious activity, including data theft, importing malware and physical destruction of equipment.
The New York Times called Thrangrycat “super alarming,” with WIRED Magazine warning it has “massive global implications.”
Thrangrycat is believed to be the first security vulnerability to be named with emoji symbols.
References
Living people
American computer scientists
Place of birth missing (living people)
Columbia University alumni
Columbia University faculty
American technology company founders
1983 births

1863 in literature

This article contains information about the literary events and publications of 1863.
Events
January 1 – The essayist and poet Ralph Waldo Emerson commemorates the day's Emancipation Proclamation in the United States by composing "Boston Hymn" and surprising a crowd of 3,000 with a debut reading of it at Boston Music Hall.
January 31 – Jules Verne's novel Five Weeks in a Balloon, or, Journeys and Discoveries in Africa by Three Englishmen (Cinq semaines en ballon) is published by Pierre-Jules Hetzel in Paris. It will be the first of Verne's Voyages Extraordinaires.
February 3 – Samuel Langhorne Clemens, in signing a humorous letter to the Territorial Enterprise newspaper in Virginia City, Nevada, first uses the pen name Mark Twain.
February 28 – Flaubert and Turgenev meet for the first time, in Paris.
June 12 – The Arts Club is founded by Charles Dickens, Anthony Trollope, Frederic Leighton and others in London's Mayfair, as a social meeting place for those involved or interested in the creative arts.
June 13 – Samuel Butler's dystopian article "Darwin among the Machines" is published (as by "Cellarius") in The Press newspaper in Christchurch, New Zealand; it will be incorporated into his novel Erewhon (1872).
November – Mendele Mocher Sforim's first Yiddish-language story, "Dos Kleine Menshele" (The Little Man), is published in the Odessa weekly Kol Mevasser.
December 29 – An estimated 7000 people attend the funeral of William Makepeace Thackeray at Kensington Gardens and nearly 2000 his burial in London's Kensal Green Cemetery.
unknown dates
The Romanian Junimea literary society is established in Iași. It will exercise a major influence on Romanian culture until the 1910s.
Elvira, or the Love of a Tyrant, a novel by the Neapolitan author Giuseppe Folliero de Luna, becomes the first published in the Maltese language, as Elvira Jew Imħabba ta’ Tirann.
Publication begins in the U.K. of a seminal edition of The Works of William Shakespeare (the "Cambridge Shakespeare"), edited by William George Clark and William Aldis Wright, published by Macmillan and printed by Cambridge University Press.
New books
Fiction
Mary Elizabeth Braddon
Aurora Floyd
Eleanor's Victory
John Marchmont's Legacy
Nikolai Chernyshevsky – What Is to Be Done? (Что делать?, Shto delat'?)
George Eliot – Romola
"Charles Felix" (probably Charles Warren Adams) – The Notting Hill Mystery (serialization completed, book form; considered first full-length detective novel in English)
Elizabeth Gaskell
A Dark Night's Work
Sylvia's Lovers
Théophile Gautier – Captain Fracasse
Edward Everett Hale – The Man Without a Country
Mary Jane Holmes – Marian Grey
Jean Ingelow – "The Prince's Dream" (short story)
Julia Kavanagh – Queen Mab
Sheridan Le Fanu – The House by the Churchyard
John Neal – The White-Faced Pacer, or, Before and After the Battle
Margaret Oliphant – Salem Chapel, first of The Chronicles of Carlingford (in book form)
Ouida – Held in Bondage
Charles Reade – Very Hard Cash (later Hard Cash)
Miguel Riofrío – La Emancipada (the first Ecuadorian novel)
Anne Thackeray Ritchie – The Story of Elizabeth
Leo Tolstoy – The Cossacks (Казаки, Kazaki)
Anthony Trollope – Rachel Ray
John Townsend Trowbridge – Cudjo's Cave
Giovanni Verga – Sulle Lagune (In the Lagoons)
Children and young people
Charles Kingsley – The Water-Babies, A Fairy Tale for a Land Baby (complete in book form)
Jules Verne – Five Weeks in a Balloon
Drama
W. S. Gilbert – Uncle Baby
Tom Taylor – The Ticket-of-Leave Man
Poetry
Rosalía de Castro – Cantares gallegos
Henry Wadsworth Longfellow – Tales of a Wayside Inn, including "Paul Revere's Ride"
Non-fiction
John Austin (posthumously, compiled by Sarah Austin) – Lectures on Jurisprudence
Samuel Bache – Miracles the Credentials of the Christ
William Barnes – Glossary of Dorset Dialect
Henry Walter Bates – The Naturalist on the River Amazons
William Wells Brown – The Black Man: His Antecedents, His Genius and His Achievements
Francis James Child – Observations on the Language of Chaucer's Canterbury Tales
Gustav Freytag – Die Technik des Dramas
Alexander Gilchrist (posthumously, edited by Anne Gilchrist) – Life of William Blake, "Pictor Ignotus"; with selections from his poems and other writings
William Howitt – History of the Supernatural
Fanny Kemble – Journal of a Residence on a Georgian Plantation in 1838–1839
Charles Lyell – Geological Evidences of the Antiquity of Man
Ernest Renan – The Life of Jesus (Vie de Jésus)
Births
February 9 – Anthony Hope (Anthony Hope Hawkins), English novelist and playwright (died 1933)
March 3 – Arthur Machen (Arthur Llewellyn Jones), Welsh novelist and short story writer (died 1947)
March 12 – Gabriele D'Annunzio, Italian poet (died 1938)
March 17 – Olivia Shakespear (née Tucker), British novelist, playwright and patron of the arts (died 1938)
April 9 – Henry De Vere Stacpoole, Irish novelist (died 1951)
April 26 – Arno Holz, German Naturalist poet and dramatist (died 1929)
April 29 – Constantine Cavafy, Greek Alexandrine poet (died 1933)
June 10 – Louis Couperus, Dutch fiction writer (died 1923)
June 20 – Florence White, English food writer (died 1940)
July 13 – Margaret Murray, Indian-born English archeologist and historian (died 1963)
August 7 – Gene Stratton Porter, American novelist and naturalist (died 1924)
September 1 – Violet Jacob (Violet Kennedy-Erskine), Scottish historical novelist and poet (died 1946)
September 8 – W. W. Jacobs, English short story writer (died 1943)
September 22 – Ferenc Herczeg (Franz Herzog), Hungarian dramatist (died 1954)
November 1
Charlotte O'Conor Eccles, Irish-born London writer, translator and journalist (died 1911)
Arthur Morrison, English writer (died 1945)
November 18 – Richard Dehmel, German poet (died 1920)
November 21 – Sir Arthur Quiller-Couch (Q.), English novelist and anthologist (died 1944)
December 16 – George Santayana, American novelist and poet (died 1952)
Deaths
May 13 – August Hahn, German Protestant theologian (born 1792)
July 3 – William Barksdale, American journalist and Confederate general (killed in action; born 1821)
July 10 – Clement Clarke Moore, American classicist and poet (born 1779)
September 17 – Alfred de Vigny, French poet, dramatist and novelist (born 1797)
September 20 – Jacob Grimm, German philologist and fairy-tale author (born 1785)
October 6 – Frances Trollope, English novelist and writer (born 1779)
October 8 – Richard Whately, English theologian and archbishop (born 1787)
December 13 – Christian Friedrich Hebbel, German poet and dramatist (born 1813)
December 17 – Émile Saisset, French philosopher (born 1814)
December 24 – William Makepeace Thackeray, Indian-born English novelist and travel writer (stroke, born 1811)
Awards
Newdigate Prize – Thomas Llewellyn Thomas
References
Years of the 19th century in literature

No Time to Explain

No Time to Explain is a platform action video game developed and published by tinyBuild. Designed by Tom Brien and Alex Nichiporchik, it is the successor to Brien's browser game, released on January 6, 2011. No Time to Explain has been released on Linux, Microsoft Windows, and OS X. A remastered version of the game, No Time to Explain Remastered, was released for Linux, Microsoft Windows, OS X, PlayStation 4, and Xbox One. A Wii U version was planned, but never released.
Plot
The game follows an unnamed male protagonist as he chases his future self, who has been captured, through time for an unknown reason. Central to the game is a powerful laser gun received from the Future Protagonist that can be used as both a weapon and a means of propulsion. A running gag is that, just as a character is about to explain what's going on, they are interrupted, keeping the protagonist - and the player - in the dark.
The plot contains elements of time travel, the time paradox effect, and alternate time-lines. As the game continues, the characters and worlds get increasingly more absurd, including a world made entirely of desserts and a blank world that must be painted with "ink" (via the gun) to traverse. The player will also control several alternate versions of the protagonists, such as the football helmet wearing "Most Popular Guy in the World," who uses his shotgun to propel himself over large distances.
Eventually, the culprit behind the attacks is discovered: the protagonist's evil twin, who was released by accident while the protagonist was chasing his abducted selves. Once all the levels are completed, a round table of protagonists convenes to come up with a plan. The group misunderstands a brainstorm from the original protagonist and graft him and The Most Popular Guy in the World together into a composite being, giving the player the ability to use both types of guns. The plan somehow works, but moments before defeat, the evil twin goes back to the beginning of the game with Composite Guy in pursuit. The original protagonist is killed after the twin steals the laser gun, the twin is then pushed into the attacking monster he released, and after a few awkward moments of silence, the composite protagonist finally announces, "Video Games!" Then, the credits roll.
Gameplay
The game is a 2D side-scroller, with most of the levels involving various means of propulsion. The main method is a laser gun worn as a jetpack that shoots a laser beam with a time limit. Many variations on this, such as a shotgun that launches the player at a great distance, or a slingshot effect that flings the player from wall to wall, are used at different levels.
Development
No Time to Explain was a browser game created by Tom Brien and released on Newgrounds on January 6, 2011. It has garnered over 405,000 views on Newgrounds. After the success of No Time to Explain, Brien teamed up with Alex Nichiporchik to start work on a full version of the game.
Development on No Time to Explain began in February 2011. The game was initially announced for PC and Mac, and was released for Windows, Mac and Linux in August 2011, and on Steam in January 2013. The game differs from the original Flash game in that instead of being drawn, the levels are built out of blocks. On April 11, 2011, tinyBuild announced that they opened a Kickstarter account to collect funds to help support the project. In less than 24 hours, the $7,000 goal was met. The Kickstarter page helped raise over $26,000 for the project, with a notable contribution of $2,000 from Minecraft creator Markus Persson.
References
External links
2011 video games
2015 video games
Crowdfunded video games
Indie video games
Kickstarter-funded video games
Linux games
MacOS games
Platform games
PlayStation 4 games
Side-scrolling video games
Steam Greenlight games
Video games with Steam Workshop support
Video games developed in the Netherlands
Windows games
Xbox One games
TinyBuild games
Single-player video games

Circuit Switched Data

In communications, Circuit Switched Data (CSD) is the original form of data transmission developed for the time-division multiple access (TDMA)-based mobile phone systems like Global System for Mobile Communications (GSM). After 2010 many telecommunication carriers dropped support for CSD, and CSD has been superseded by GPRS and EDGE (E-GPRS).
Technical
CSD uses a single radio time slot to deliver 9.6 kbit/s data transmission to the GSM network switching subsystem where it could be connected through the equivalent of a normal modem to the Public Switched Telephone Network (PSTN), allowing direct calls to any dial-up service. For backwards compatibility, the IS-95 standard also supports CDMA Circuit Switched Data. However, unlike TDMA, there are no time slots, and all CDMA radios can be active all the time to deliver up to 14.4 kbit/s data transmission speeds. With the evolution of CDMA to CDMA2000 and 1xRTT, the use of IS-95 CDMA Circuit Switched Data declined in favour of the faster data transmission speeds available with the newer technologies.
Prior to CSD, data transmission over mobile phone systems was done by using a modem, either built into the phone or attached to it. Such systems were limited by the quality of the audio signal to 2.4 kbit/s or less. With the introduction of digital transmission in TDMA-based systems like GSM, CSD provided almost direct access to the underlying digital signal, allowing for higher speeds. At the same time, the speech-oriented audio compression used in GSM actually meant that data rates using a traditional modem connected to the phone would have been even lower than with older analog systems.
A CSD call functions in a very similar way to a normal voice call in a GSM network. A single dedicated radio time slot is allocated between the phone and the base station. A dedicated "sub-time slot" (16 kbit/s) is allocated from the base station to the transcoder, and finally, another time slot (64 kbit/s) is allocated from the transcoder to the Mobile Switching Centre (MSC).
At the MSC, it is possible to use a modem to convert to an "analog" signal, though this will typically actually be encoded as a digital pulse-code modulation (PCM) signal when sent from the MSC. It is also possible to directly use the digital signal as an Integrated Services Digital Network (ISDN) data signal and feed it into the equivalent of a remote access server.
High Speed Circuit Switched Data (HSCSD)
High Speed Circuit Switched Data (HSCSD) is an enhancement to CSD designed to provide higher data rates by means of more efficient channel coding and/or multiple (up to 4) time slots. It requires the time slots being used to be fully reserved to a single user. A transfer rate of up to 57.6 kbit/s (i.e., 4 × 14.4 kbit/s) can be reached, or even 115 kbit/s if a network allows combining 8 slots instead of just 4. It is possible that either at the beginning of the call, or at some point during a call, it will not be possible for the user's full request to be satisfied since the network is often configured to allow normal voice calls to take precedence over additional time slots for HSCSD users.
An innovation in HSCSD is to allow different error correction methods to be used for data transfer. The original error correction used in GSM was designed to work at the limits of coverage and in the worst case that GSM will handle. This means that a large part of the GSM transmission capacity is taken up with error correction codes. HSCSD provides different levels of possible error correction which can be used according to the quality of the radio link. This means that in the best conditions 14.4 kbit/s can be put through a single time slot that under CSD would only carry 9.6 kbit/s, i.e. a 50% improvement in throughput.
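The rates quoted above reduce to simple arithmetic: the aggregate rate is the number of reserved time slots multiplied by the per-slot rate, which in turn depends on the error-correction scheme in use. A quick sketch using only the figures from the text:

```python
# Per-slot data rates (kbit/s) under the two coding schemes described above.
CSD_RATE = 9.6     # original GSM channel coding, worst-case error correction
HSCSD_RATE = 14.4  # lighter error correction, usable under good radio conditions

def aggregate_kbps(slots: int, per_slot_rate: float) -> float:
    """Aggregate rate: number of reserved time slots times per-slot rate."""
    return slots * per_slot_rate

print(aggregate_kbps(1, CSD_RATE))    # 9.6   -> plain single-slot CSD
print(aggregate_kbps(4, HSCSD_RATE))  # 57.6  -> typical HSCSD maximum (4 slots)
print(aggregate_kbps(8, HSCSD_RATE))  # 115.2 -> networks combining 8 slots
# Coding gain alone: 14.4 / 9.6 = 1.5, the 50% per-slot improvement noted above.
```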
The user is typically charged for HSCSD at a rate higher than a normal phone call (e.g., by the number of time slots allocated) for the total period of time that the user has a connection active. This makes HSCSD relatively expensive in many GSM networks and is one of the reasons that packet-switched General Packet Radio Service (GPRS), which typically has lower pricing (based on amount of data transferred rather than the duration of the connection), has become more common than HSCSD.
Apart from the fact that the full allocated bandwidth of the connection is available to the HSCSD user, HSCSD also has an advantage in GSM systems in terms of lower average radio interface latency than GPRS. This is because the user of an HSCSD connection does not have to wait for permission from the network to send a packet.
HSCSD is also an option in Enhanced Data Rates for GSM Evolution (EDGE) and Universal Mobile Telecommunications System (UMTS) systems where packet data transmission rates are much higher. In the UMTS system, the advantages of HSCSD over packet data are even lower since the UMTS radio interface has been specifically designed to support high bandwidth, low latency packet connections. This means that the primary reason to use HSCSD in this environment would be access to legacy dial up systems.
Related
GSM data transmission has advanced since the introduction of CSD:
General Packet Radio Service (GPRS) provides more efficient packet-based data transmission directly from the mobile phone at speeds roughly twice those of HSCSD.
Enhanced Data Rates for GSM Evolution (EDGE) (E-GPRS) and Universal Mobile Telecommunications System (UMTS) provide improved radio interfaces with higher data rates, while still being backward compatible with the GSM core network.
See also
Circuit switching
Global System for Mobile Communications (GSM)
General Packet Radio Service (GPRS)
Mobile phone
Time-division multiple access (TDMA)
List of device bandwidths
References
GSM standard

Lifehouse Chronicles

Lifehouse Chronicles is a box set released in 2000 by Pete Townshend, focused on his formerly "abandoned" Lifehouse rock opera. The set contains song demos by Pete Townshend, including solo versions of "Baba O'Riley", "Won't Get Fooled Again", and "Who Are You", and the Lifehouse Radio Program. The box set release was followed by two Sadler's Wells Lifehouse concerts and the release of a live CD and video/DVD titled, respectively, Pete Townshend Live: Sadler's Wells 2000 and Pete Townshend – Music from Lifehouse.
Concept
The set collects songs and other compositions relating to Lifehouse, a musical concept developed by Townshend in 1970 as a follow-up to The Who's highly successful rock opera, Tommy. Rooted heavily in the teachings of Townshend's spiritual mentor Meher Baba as well as in science fiction literature, Lifehouse was meant to explore the idea that music is the fundamental basis of all life – that every human being on Earth has a unique musical melody that "describes" them, and only them, perfectly. When the unique songs of enough people are played in unison, the result would be a single harmonic note – the One Note – akin to the quintessence sought by ancient alchemists. Lifehouse was to be a true multimedia project: a double LP rock opera, a motion picture, and an interactive concert experience.
The story was to take place in 21st century Britain, in an age where pollution has become such a drastic problem that most people never set foot outdoors in their life. This populace spends most of their time in "experience suits". These suits provide the people with artificial lives superior to any they could eke out in the real world, yet devoid somehow of spiritual fulfilment. One discontented soul, known only as "The Hacker", rediscovers 20th century rock and roll music, and breaks into the computer network controlling the suits to invite people to leave their suits and come together for a concert. Despite the best efforts of the fascist government, thousands of people gather at the Hacker's concert, with millions more watching through their suits, as the musicians and audience perform experimental songs like those described above. Just as the police storm in and shoot the Hacker, the audience and band manage simultaneously to produce the perfect universal tone, The One Note, and everyone participating in and watching the concert simply vanishes, presumably having departed for a higher plane of existence. The story is seen through the eyes of Ray, a middle-aged farmer from a remote unpolluted corner of Scotland, who travels south looking for his daughter, who has run away to the concert.
History
1970–1971
In September 1970, Townshend penned a song called "Pure and Easy", about the One Note, the first song written specifically for Lifehouse. In the following two months he wrote approximately 20 additional songs, recording intricate home demos of each. Rather than attempting to tell the story through the lyrics, as he had done with Tommy, the songs were stand-alone pieces, meant to be elucidated by the movie and detailed sleeve notes to be included with the album. Most of those songs were recorded by the Who in two sessions in the winter of 1970/1971, as well as several "rehearsals" accompanied by guitarist Leslie West of the band Mountain and an impromptu live concert at the Young Vic Theatre in London in April 1971.
While Townshend had high hopes for the project, others were sceptical. Universal Studios, which had recently inked a two-film deal with the Who for the rights to a film version of Tommy, was not impressed by the screenplay Townshend offered them. A series of spontaneous concerts the Who had held in London failed to produce usable material, and it soon became apparent that the project was doomed to failure. Though many of the songs written for Lifehouse came to be released on the Who album Who's Next, Lifehouse was to remain unfinished for nearly thirty years.
1971–1998
Townshend never abandoned hope that Lifehouse might someday become a reality. He continued to write songs for the project throughout the '70s, and in 1980 worked together with bandmate John Entwistle to produce a new screenplay with a new story. Negotiations to produce this film, however, fell apart when Townshend found himself infatuated with the wife of the film's director (a story recounted in the song "Athena", to be found on the Who album It's Hard).
It was not until 1992 that Townshend again began work on the project. In that year, Townshend recorded the solo album Psychoderelict, a semi-autobiographical story told in the style of a radio play. The hero of this piece, like Townshend, is an aging rock star labouring tirelessly on a 20-year-old rock opera, called "Gridlife Chronicles" in the story, who finds himself embroiled in a sex scandal that jeopardises the future of the project. Several of the synthesizer pieces Townshend recorded in 1970 make their first official appearance on this album.
In 1998, Townshend's dream of bringing Lifehouse to a wide audience finally came true, when BBC Radio approached him with the idea of developing a radio play based on Lifehouse and incorporating the original music written for the project. The play, just under two hours in length, was transmitted on BBC Radio 3 on 5 December 1999.
The box set
Following the broadcast of the play, Townshend assembled and released the Lifehouse Chronicles box set in 2000 as a formal culmination of his work on the project. The set, made available exclusively through his website and at concerts, consists of six CDs. The first two CDs collect the original demos he recorded of the Lifehouse songs, several of which were never recorded by The Who. The third disc consists of several of Townshend's experimental synthesizer pieces, live recordings of Lifehouse songs, and new studio recordings of those songs produced especially for the set. The fourth disc features classical music by the London Chamber Orchestra which was used in the radio play, with compositions by Townshend as well as selections by Baroque composers Henry Purcell, Domenico Scarlatti and Michel Corrette. The fifth and sixth discs contain the radio play itself. Included with the set is a booklet featuring an introduction by Townshend, a history of the project written by Townshend webmaster/publicist Matt Kent, lyrics for most of the Lifehouse songs, and a script of the play. Townshend stated in his introduction that he eventually hoped to release an expanded version of the set, to be titled "The Lifehouse Method", which would include software for producing a synthesizer track based on the user's vital statistics. Instead, The Lifehouse Method debuted in early 2007 as a website. After generating some 10,000 new pieces of music for users, the project closed. Graphic design was by Laurence Sutherland.
Track listing
All songs written and composed by Pete Townshend, except where noted
Related recordings
By The Who
The Who's versions of most of the above-listed songs can be found on the following albums:
1971: Who's Next
1974: Odds & Sods
1975: The Who By Numbers
1978: Who Are You
1981: Hooligans
1982: It's Hard
By Pete Townshend
1972: Who Came First (the album contains Pete Townshend's Lifehouse demos of "Pure and Easy", "Let's See Action", and "Time Is Passing". "Pure And Easy" was shortened by three minutes and received additional overdubs.)
2000: Lifehouse Chronicles
2000: Lifehouse Elements
2002: Music from Lifehouse DVD (the 100-minute video was directed by Hugo Currie and Toby Leslie, and was issued in color as a Region 1 NTSC DVD, ASIN: B00005UQ86. The performances included are: "Fantasia Upon One Note", "Teenage Wasteland", "Time Is Passing", "Love Ain't For Keeping", "Greyhound Girl", "Mary", "I Don't Know Myself", "Bargain", "Pure and Easy", "Baba O'Riley", "Behind Blue Eyes", "Let's See Action", "Getting in Tune", "Relay", "Join Together", "Won't Get Fooled Again", "Song Is Over", "Can You Help the One You Really Love?".)
By Lawrence Ball
2012: Method Music
Notes
The single-disc sampler of the box set, entitled Lifehouse Elements, is available through most record stores.
The album Pete Townshend Live: Sadler's Wells 2000 features much of the Lifehouse material performed live in concert, and like the box set is exclusively available from Townshend's Eelpie.com website.
References
External links
The Lifehouse Method
Eelpie
Rock operas
The Who
2000 compilation albums
Pete Townshend compilation albums |
205620 | https://en.wikipedia.org/wiki/Squid%20%28software%29 | Squid (software) | Squid is a caching and forwarding HTTP web proxy. It has a wide variety of uses, including speeding up a web server by caching repeated requests, caching web, DNS and other computer network lookups for a group of people sharing network resources, and aiding security by filtering traffic. Although primarily used for HTTP and FTP, Squid includes limited support for several other protocols including Internet Gopher, SSL, TLS and HTTPS. Squid does not support the SOCKS protocol, unlike Privoxy, with which Squid can be used in order to provide SOCKS support.
Squid was originally designed to run as a daemon on Unix-like systems. A Windows port was maintained up to version 2.7. New versions available on Windows use the Cygwin environment. Squid is free software released under the GNU General Public License.
History
Squid was originally developed as the Harvest object cache, part of the Harvest project at the University of Colorado Boulder. Further work on the program was completed at the University of California, San Diego and funded via two grants from the National Science Foundation. Duane Wessels forked the "last pre-commercial version of Harvest" and renamed it to Squid to avoid confusion with the commercial fork called Cached 2.0, which became NetCache. Squid version 1.0.0 was released in July 1996.
Squid is now developed almost exclusively through volunteer efforts.
Basic functionality
After a Squid proxy server is installed, web browsers can be configured to use it as a proxy HTTP server, allowing Squid to retain copies of the documents returned, which, on repeated requests for the same documents, can reduce access time as well as bandwidth consumption. This is often useful for Internet service providers to increase speed to their customers, and LANs that share an Internet connection. Because the caching servers are controlled by the web service operator, caching proxies do not anonymize the user and should not be confused with anonymizing proxies.
A client program (e.g. browser) either has to specify explicitly the proxy server it wants to use (typical for ISP customers), or it could be using a proxy without any extra configuration: "transparent caching", in which case all outgoing HTTP requests are intercepted by Squid and all responses are cached. The latter is typically a corporate set-up (all clients are on the same LAN) and often introduces the privacy concerns mentioned above.
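The explicit-configuration case can be sketched with Python's standard library; the proxy hostname here is hypothetical, and 3128 is merely Squid's conventional default listening port:

```python
import urllib.request

# Point a client at a (hypothetical) Squid instance listening on the
# conventional default port 3128. Requests made through this opener are
# sent to the proxy, which may answer from its cache instead of
# contacting the origin server.
proxy = urllib.request.ProxyHandler({
    "http": "http://squid.example.com:3128",
})
opener = urllib.request.build_opener(proxy)
# opener.open("http://www.example.com/") would now go via the proxy.
```

In the transparent-caching case no such client-side step exists: the interception is done in the network, and the client is unaware of the proxy.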
Squid has some features that can help anonymize connections, such as disabling or changing specific header fields in a client's HTTP requests. Whether these are set, and what they are set to do, is up to the person who controls the computer running Squid. People requesting pages through a network which transparently uses Squid may not know whether this information is being logged. Within UK organisations at least, users should be informed if computers or internet connections are being monitored.
Reverse proxy
The above setup—caching the contents of an unlimited number of webservers for a limited number of clients—is the classical one. Another setup is "reverse proxy" or "webserver acceleration". In this mode, the cache serves an unlimited number of clients for a limited number of—or just one—web servers.
As an example, if slow.example.com is a "real" web server, and www.example.com is the Squid cache server that "accelerates" it, the first time any page is requested from www.example.com, the cache server would get the actual page from slow.example.com, but later requests would get the stored copy directly from the accelerator (for a configurable period, after which the stored copy would be discarded). The end result, without any action by the clients, is less traffic to the source server, meaning less CPU and memory usage, and less need for bandwidth. This does, however, mean that the source server cannot accurately report on its traffic numbers without additional configuration, as all requests would seem to have come from the reverse proxy. A way to adapt the reporting on the source server is to use the X-Forwarded-For HTTP header reported by the reverse proxy, to get the real client's IP address.
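An accelerator setup along these lines can be sketched in squid.conf, using the same hostnames as the example above; directive syntax varies between Squid versions, so treat this as an illustrative fragment rather than a drop-in configuration:

```
# Listen on the standard HTTP port in accelerator mode,
# answering for www.example.com.
http_port 80 accel defaultsite=www.example.com

# The "real" origin server that the cache front-ends.
cache_peer slow.example.com parent 80 0 no-query originserver name=origin

# Only accelerate requests for our own site.
acl our_site dstdomain www.example.com
http_access allow our_site
cache_peer_access origin allow our_site
```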
It is possible for a single Squid server to serve both as a normal and a reverse proxy simultaneously. For example, a business might host its own website on a web server, with a Squid server acting as a reverse proxy between clients (customers accessing the website from outside the business) and the web server. The same Squid server could act as a classical web cache, caching HTTP requests from clients within the business (i.e., employees accessing the internet from their workstations), so accelerating web access and reducing bandwidth demands.
Media-range limitations
For example, a feature of the HTTP protocol is to limit a request to the range of data in the resource being referenced. This feature is used extensively by video streaming websites such as YouTube, so that if a user clicks to the middle of the video progress bar, the server can begin to send data from the middle of the file, rather than sending the entire file from the beginning and the user waiting for the preceding data to finish loading.
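The mechanics can be illustrated with a toy origin-server routine. This is a simplified sketch: it handles only the single-range `bytes=start-end` and open-ended `bytes=start-` forms, not the other shapes a real Range header may take:

```python
def serve_range(resource: bytes, range_header: str):
    """Answer an HTTP Range request against an in-memory resource.

    Returns (status_code, payload): 206 Partial Content with the
    requested slice, or 416 Range Not Satisfiable.
    """
    unit, _, spec = range_header.partition("=")
    if unit != "bytes":
        raise ValueError("unsupported range unit")
    start_text, _, end_text = spec.partition("-")
    start = int(start_text)
    end = int(end_text) if end_text else len(resource) - 1
    if start >= len(resource):
        return 416, b""
    return 206, resource[start:min(end, len(resource) - 1) + 1]

# Seeking to the middle of a "video" fetches only the later bytes:
video = bytes(range(256)) * 4
status, chunk = serve_range(video, "bytes=512-")
# status == 206 and chunk holds only the second half of the file
```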
Partial downloads are also extensively used by Microsoft Windows Update so that extremely large update packages can download in the background and pause halfway through the download if the user turns off their computer or disconnects from the Internet.
The Metalink download format enables clients to do segmented downloads by issuing partial requests and spreading these over a number of mirrors.
Squid can relay partial requests to the origin web server. For a partial request to be satisfied quickly from cache, Squid requires that a full copy of the same object already exist in its storage.
If a proxy video user is watching a video stream and browses to a different page before the video completely downloads, Squid cannot keep the partial download for reuse and simply discards the data. Special configuration is required to force such downloads to continue and be cached.
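The "special configuration" referred to here centres on a pair of squid.conf directives; the values shown are illustrative, and exact semantics should be checked against the documentation for the Squid version in use:

```
# Fetch the whole object from the origin even when the client only
# asked for a range, so a complete copy can be cached (-1 = no limit).
range_offset_limit -1

# Never abort a transfer when the client goes away; finish the
# download so the object lands in the cache.
quick_abort_min -1 KB
```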
Supported operating systems
Squid can run on the following operating systems:
AIX
BSDI
Digital Unix
FreeBSD
HP-UX
IRIX
Linux
macOS
NetBSD
NeXTStep
OpenBSD
OS/2 (including ArcaOS and eComStation)
SCO OpenServer
Solaris
UnixWare
Windows
See also
Web accelerator which discusses host-based HTTP acceleration
Proxy server which discusses client-side proxies
Reverse proxy which discusses origin-side proxies
Comparison of web servers
References
Further reading
External links
Squid Blog
Squid User's Guide
Squid Transparent Proxy For DD-WRT
Squid reverse proxy — Create a reverse proxy with Squid
Configuration Manual — ViSolve Squid Configuration Manual Guide
Configuration Manual — Authoritative Squid Configuration Options
Setup Squid on Solaris
SQUID – Installation on CentOS, Fedora and Red Hat
Free proxy servers
Reverse proxy
Proxy server software for Linux
Unix network-related software
Gopher clients
Cross-platform free software |
39928510 | https://en.wikipedia.org/wiki/LinguaSys | LinguaSys | LinguaSys, Inc. was a company headquartered in Boca Raton, Florida. LinguaSys provided multilingual human language software and services to financial, banking, hospitality, customer relationship management, technology, forensics and telecommunications blue chip enterprises, and to the government and military.
History
LinguaSys was co-founded in 2010 by Chief Executive Officer Brian Garr in Boca Raton, Florida, USA; Chief Technology Officer Vadim Berman in Melbourne, Australia; and Vice President of Development and Architecture Can Unal in Darmstadt, Germany.
CEO Brian Garr was formerly CTO of Globalink from 1995 to 1998 and is a recipient of the Smithsonian Institution's "Heroes in Technology" award for his work in Machine Translation.
Billionaire Mark Cuban began investing in LinguaSys, Inc., in 2012.
Also in 2012, LinguaSys partnered with Salesforce.com, adding multilingual text analytics abilities to the company's social marketing services.
In 2014, LinguaSys made their technology available in a public cloud.
In 2015, LinguaSys added NLUI Server, which enables building Siri-like natural language applications rapidly in a variety of languages, to the products available in the public cloud.
In August 2015, LinguaSys was acquired by Aspect Software.
Products and services
LinguaSys uses interlingual natural language processing software to provide multilingual text, sentiment, relevance and conceptual understanding and analysis. LinguaSys trademarked its proprietary interlingual technology called Carabao Linguistic Virtual Machine. LinguaSys' multilingual software solutions are customized by clients and used via SaaS and behind the firewall. LinguaSys is an IBM Business Partner.
LinguaSys' multilingual technology is used on enterprise servers and consumer smartphones.
LinguaSys developed an app, TGPhoto, which allows the user to snap a photo of text and view a translation in one of fifty languages. The software works on Android and BlackBerry smartphones.
References
External links
2010 establishments in Florida
Companies based in Boca Raton, Florida
Language software
Software companies based in Florida
Technology companies established in 2010
Software companies of the United States |
50010351 | https://en.wikipedia.org/wiki/2016%20Troy%20Trojans%20football%20team | 2016 Troy Trojans football team | The 2016 Troy Trojans football team represented Troy University in the 2016 NCAA Division I FBS football season. They were led by second-year head coach Neal Brown and played their home games at Veterans Memorial Stadium in Troy, Alabama. The Trojans were members of the Sun Belt Conference. They finished the season 10–3 overall and 6–2 in Sun Belt play, tying for third place. They were invited to the Dollar General Bowl, where they defeated Ohio. This was Troy's first 10-win season since joining the FBS in 2001, and the first season since the move that Troy received a Top 25 ranking.
Schedule
Troy announced their 2016 football schedule on March 3, 2016. The 2016 schedule consisted of six home and seven away games in the regular season. The Trojans hosted Sun Belt foes Appalachian State, Arkansas State, Georgia State, and New Mexico State, and traveled to Georgia Southern, Idaho, South Alabama, and Texas State.
Schedule source:
Rankings
Game summaries
Austin Peay
at Clemson
at Southern Miss
New Mexico State
at Idaho
Georgia State
at South Alabama
Massachusetts
Appalachian State
Arkansas State
at Texas State
at Georgia Southern
vs. Ohio–Dollar General Bowl
References
Troy
Troy Trojans football seasons
LendingTree Bowl champion seasons
Troy Trojans football |
2580877 | https://en.wikipedia.org/wiki/1st%20Fighter%20Wing | 1st Fighter Wing | The 1st Fighter Wing (1 FW) is a United States Air Force unit assigned to the Air Combat Command Ninth Air Force. It is stationed at Langley Air Force Base, Virginia, where it is a tenant unit supported by the 633d Air Base Wing.
Its 1st Operations Group (1 OG) is a successor organization of the 1st Fighter Group, one of the 15 original combat air groups formed by the Army before World War II. The 1 OG is the oldest major air combat unit in the United States Air Force, its origins formed on 5 May 1918.
The wing was initially formed at March Field, California, in 1947 as part of Tactical Air Command, and was one of the first wings to be equipped with the North American F-86 Sabre in February 1949. Briefly a part of Strategic Air Command in 1949, it was reassigned to Air Defense Command in 1950 and provided air defense of the Upper Midwest of the United States until being reassigned to Tactical Air Command in 1970. The 1 FW was the first operational wing equipped with the F-15A/B Eagle, in 1976, and in 2005 became the first operational wing equipped with the Lockheed Martin F-22A Raptor air superiority fighter.
History
For additional lineage and history, see 1st Fighter Group
Origins
The 1st Fighter Wing was activated at March Field California on 15 August 1947. It was assigned to Twelfth Air Force, Tactical Air Command (TAC).
In December 1948 Twelfth Air Force was assigned from Tactical Air Command to Continental Air Command (ConAC), established on 1 December 1948. ConAC assumed jurisdiction over both TAC and the Air Defense Command (ADC). This move reflected an effort to concentrate all fighter forces deployed within the continental United States to strengthen the air defense of the North American continent. The move was largely an administrative convenience: the units assigned to ConAC were dual-trained and expected to revert to their primary strategic or tactical roles after the air defense battle was won. The 1st Fighter Wing was subsequently transferred from Twelfth Air Force/TAC to Fourth Air Force/ConAC on 20 December 1948. Organizational and equipment changes continued throughout 1949. The first F-86 Sabre, assigned to the 94th Fighter Squadron, arrived on 15 February. By the end of June the wing had received seventy-nine of its eighty-three authorized F-86s. On 1 May the wing transferred from ConAC to Strategic Air Command (SAC) and the Fifteenth Air Force. The wing was subsequently attached to the 22d Bombardment Wing on 1 July.
At March, the wing trained in large formation flying and competed to establish various formation records. The 71st Fighter Squadron struck first in September 1949, when it launched a twelve- and later an eighteen-aircraft formation. The 27th and the 94th countered on 21 October. On that day the 94th launched three thirteen-plane formations, but the 27th topped this with two twenty-one plane formations. The purpose of this exercise became clear in early January 1950, when the wing deployed a sizable contingent of aircraft to participate in the filming of the RKO film Jet Pilot. The group claimed a final formation record on 4 January when it passed a twenty-four plane formation (consisting of eight aircraft from each squadron) before the cameras. The group formed its own aerial demonstration team in January 1950. The team, dubbed the "Sabre Dancers", was composed of five members of the 27th Fighter Squadron. The Sabre Dancers made what was probably their most widely viewed flight on 22 April 1950, when they performed before an Armed Forces Day audience at Eglin AFB, Florida, that included President Harry S. Truman, most of his Cabinet, and numerous other political leaders.
Air Defense Command
Korean War era
Effective 16 April 1950 the 1st Fighter Wing was redesignated the 1st Fighter-Interceptor Wing, the same designation that was simultaneously applied to the group and its three squadrons. The wing had, some days previously, been relieved from its attachment to the 22d Bombardment Wing. The organizational changes the wing had experienced since 1947 paled in comparison to the multitude of changes the unit underwent during the last six months of 1950. As of 30 June 1950, the 1st Fighter-Interceptor Group was assigned to the 1st Fighter-Interceptor Wing, which was itself assigned to Fifteenth Air Force and SAC. On 1 July the wing was relieved from assignment to Fifteenth Air Force and SAC and assigned to the Fourth Air Force and ConAC.
Two days later the wing issued orders establishing advanced parties of its headquarters and component organizations at Victorville (later George) AFB, California. On 22 July an advanced party of personnel from Headquarters, 1st Fighter-Interceptor Group and the 27th and 71st Fighter-Interceptor Squadrons departed for Griffiss AFB, New York. A letter directing the wing to send the group headquarters and the 27th and the 71st to Griffiss for attachment to the Eastern Air Defense Force (EADF), ConAC, arrived on 30 July. Headquarters, 1st Fighter-Interceptor Wing and the 94th Fighter-Interceptor Squadron were assigned to the Western Air Defense Force, ConAC, on 1 August, while the group headquarters and the 27th and 71st were attached to the EADF on 15 August. The wing was attached to the 27th Air Division, WADF, on 20 September. Finally, one month later, the 71st Fighter-Interceptor Squadron moved from Griffiss AFB to Pittsburgh International Airport, Pennsylvania.
As of 31 December 1950 Headquarters, 1st Fighter-Interceptor Wing and the 94th were stationed at George AFB, assigned to the WADF, and attached to the 27th Air Division. Headquarters, 1st Fighter-Interceptor Group, while still assigned to the wing, was stationed at Griffiss AFB with the 27th. The 71st was at Pittsburgh. The units on the East Coast were attached to the EADF.
Air Defense Command was reestablished as a major command on 1 January 1951, and the wing was assigned to ADC. In May, the 27th and the 71st were attached to the Connecticut Air National Guard 103d Fighter Interceptor Group, which provided administrative and logistical support and operational control, although the squadrons remained assigned to the 1st Fighter Group. Headquarters, 1st Fighter Group was relieved from attachment to the Eastern Air Defense Force and moved from Griffiss back to George without personnel or equipment. Meanwhile, at George AFB, the New Mexico Air National Guard 188th Fighter-Interceptor Squadron was attached to the 1st Fighter-Interceptor Wing, which provided administrative support and operational control.
These constant moves and reassignments were aggravated by the fact that a wing headquarters stationed in California could provide only limited control and virtually no support to a group headquarters and squadrons deployed on the East Coast. While the policy of attaching units to higher headquarters established an ad hoc means of supplying the needed support, it was a cumbersome procedure that blurred organizational lines and did nothing for morale or unit cohesion above the squadron level. With the exception of the Headquarters and Headquarters Squadron, and the three fighter-interceptor squadrons, all 1st Fighter-Interceptor Wing organizations and the group headquarters were reduced to a strength of one officer and one enlisted man on 30 November 1951, at which time the wing moved from George Air Force Base, California, to Norton Air Force Base, California. The squadrons were reassigned to newly organized "defense wings": the 27th to the 4711th Air Defense Wing (ADW), Eastern Air Defense Force, the 71st to the 4708th Air Defense Wing, EADF, and the 94th to the 4705th Defense Wing, WADF. Headquarters, Air Defense Command inactivated the 1st Fighter-Interceptor Wing on 6 February 1952.
Selfridge AFB
The organizational instability of the early 1950s was rooted in the demands of the Korean War. With the end of the war in Korea the Air Defense Command found itself in a position to return to a more traditional command structure. The 1st Fighter-Interceptor Wing was redesignated the 1st Fighter Wing (Air Defense) on 14 September 1956 and activated on 18 October 1956 at Selfridge AFB, Michigan. It was assigned to the Eastern Air Defense Force. After enduring a six-year period of frequent organizational changes, the wing began a period of stability. For approximately the next thirteen years it remained at Selfridge. Both the 71st and the 94th FIS traded their F-86s for F-102 Delta Dagger interceptors between 1958 and 1960. While the wing and its units operated from Selfridge AFB the 27th Fighter-Interceptor Squadron remained on the east coast. As of 31 December 1961 it was stationed at Dow AFB, Maine, and assigned to the Bangor Air Defense Sector, 26th Air Division. At that time the squadron was equipped with F-106 Delta Darts, and was not part of the 1st Fighter Wing.
In October 1962 the wing responded to the Cuban Missile Crisis by deploying aircraft, support personnel, equipment and supplies to Patrick Air Force Base, Florida, and Volk Field, Wisconsin. From 19 October through 27 November wing aircraft flew 620 sorties and 1,274 hours, most from Patrick AFB, while maintaining a mission-ready rate of approximately eighty percent. Wing life reverted to more normal training routines at year's end, and the pattern continued through 1963 and 1964.
On 15 March 1963 two Soviet bombers overflew Alaska and Alaskan Air Command F-102s were unable to intercept them. The response to this intrusion was to deploy ten F-106s from the 325th Fighter Wing to Alaska in what was called Operation White Shoes. While the 325th wing upgraded its F-106s, the 1st Fighter Wing relieved it from March to June 1964. While deployed in Alaska, two of the wing's F-106s were damaged in the Good Friday earthquake.
Beginning in about 1965 the wing began to transfer pilots to other units in or en route to South Vietnam. While the wing itself did not participate in the Vietnam War, its units were soon manned by personnel who had completed tours in Southeast Asia, with the 1st serving as a transition unit for many pilots en route to or returning from Southeast Asia.
Organizational changes continued to whittle away at the wing's strength in 1966 and 1967. The wing was assigned to the 34th Air Division, First Air Force, on 1 April 1966. This organization changed again on 16 January 1967, when the 71st Fighter-Interceptor Squadron, which had won top prize in the F-106 category at the 1965 William Tell weapons competition at Tyndall AFB, Florida, was transferred to the 328th Fighter Wing (Air Defense), Tenth Air Force, at Richards-Gebaur Air Force Base, Missouri. This reorganization left the 1st Fighter Wing with only one fighter squadron, the 94th. However, the reduced wing stayed busy. From 24 July through 4 August 1967 Selfridge became the hub of federal activities mobilized during the 1967 Detroit riots. Elements of the 3d Brigade, 82d Airborne Division and the 2d Brigade, 101st Airborne Division, a total of some 12,000 combat and support personnel, eventually passed through the base. From 1500 on 24 July to 1500 the next day, the base received 4,700 troops and 1,008 tons of cargo. On 1 August the base handled 363 C-130 Hercules sorties, 6,036 troops, and 2,492 tons of cargo. By the time the tactical command post at Selfridge was closed at 1130 on 4 August, the base had processed 1,389 C-130 sorties, 12,058 troops, and 4,735 tons of cargo.
In September 1968 the detached 71st Fighter-Interceptor Squadron was relieved from assignment to the 328th Fighter Wing, and transferred to the 28th Air Division, Tenth Air Force, at Malmstrom AFB, Montana, where it became a self-contained unit operating on the SAC base. Between 20 May and 5 November 1969, the 94th FIS deployed to Osan Air Base, Korea, for exercise College Cadence.
It was to be the 1st Fighter Wing's last major air defense effort. On 1 December 1969 the 94th was transferred to Wurtsmith AFB, Michigan, pending the inactivation of the 1st Fighter Wing, which was assigned to the 23d Air Division on that date. On 31 December 1969 the wing, with no units under its control, transferred to Hamilton AFB, California, and was assigned to the 26th Air Division. The wing's personnel and equipment were transferred to the 4708th Air Base Group, 23d Air Division, at Duluth International Airport, Minnesota, on 1 January 1970.
Tactical Air Command
On 1 August 1968, General William W. Momyer became commander of Tactical Air Command. While he devoted most of his attention to the pressing problems the command faced during the war in Vietnam, General Momyer also concerned himself with the designation of the units under his command. The movement of units to and from Vietnam left TAC with a mixed force. Some of its organizations had long and honorable tactical traditions. Others used provisional, four-digit, command-controlled designations that gave them no history or traditions. General Momyer therefore directed the TAC planning staff to replace the provisional four-digit designations with those of units that had a combat record dating from either World War II or Korea. He also directed the staff to "retain illustrious AFCON designators for the active tactical forces." This policy, plus the training demands caused by the war in Vietnam, led to the 1st Fighter Wing's return to Tactical Air Command in October 1970.
MacDill AFB
Headquarters, United States Air Force authorized the reassignment of the 1st Fighter Wing (Air Defense) from Aerospace Defense Command to Tactical Air Command on 30 July 1970. Three days later, HQ ADC directed the commander of the 26th Air Division to move Headquarters, 1st Fighter Wing (Air Defense) from Hamilton AFB, California, to MacDill AFB, Florida. All units moved without personnel or equipment. The personnel and equipment formerly of the 15th Tactical Fighter Wing were reassigned to the 1 TFW. The squadrons of the 15 TFW were assigned to the historic wing: the 45th, 46th, and 47th Tactical Fighter Squadrons. Another organizational change effective 1 July 1971 transferred the wing from the 836th Air Division, inactivated on that date, to Ninth Air Force.
Completing the Wing's historic preservation, the commanders of the three squadrons participated in a shoot-out at the Avon Park Air Force Range to determine which squadrons would receive the designations of the 27th, 71st, and 94th. The commander of the 47 TFS marked the highest score, and chose the 94 TFS; the 46 TFS placed second, choosing the 27 TFS, leaving the 45 TFS with the squadron having the shortest history, the 71 TFS.
The wing spent the next four years providing advanced tactical training to F-4 Phantom II and B-57 Canberra aircrews, most of whom later saw service in the Vietnam War. On 1 October 1971, HQ TAC inactivated the 4530th Tactical Training Squadron, which, in addition to other duties, had trained Australian F-4 aircrew members and maintenance personnel during project Peace Reef. The 4501st Tactical Fighter Replacement Squadron, equipped with F-4s, assumed the 4530th's place in the wing's structure on the same date. The command inactivated the 4424th Combat Crew Training Squadron, the wing's B-57 training unit, on 30 June 1972, leaving the wing with four flying squadrons. All conducted advanced F-4 tactical training.
On 14 March 1974, the Air Force publicly announced plans to station the Air Force's first operational F-15 wing at Langley Air Force Base, Virginia. Langley was chosen due to its heritage and ideal location for TAC's secondary air defense mission. After studying the heritage of its wings, TAC selected the 1st Fighter Wing as the unit to receive the first Eagle. On 6 June 1975, Tactical Air Command directed Ninth Air Force to move the 1st Fighter Wing from MacDill to Langley AFB. Although the designation of the unit moved, the majority of MacDill personnel remained in place, and served under the newly designated 56th Tactical Fighter Wing which continued to conduct F-4 training.
Langley AFB
1st Tactical Fighter Wing personnel spent six months preparing for the arrival of the F-15. By the end of 1975, the Wing was ready for its new air superiority weapon, and on 18 December 1975, Lt Col John Britt, Operations Officer, flew the Wing's first F-15 (a two-seat trainer) into Langley. Official welcoming ceremonies were held on 9 January 1976, when Lt Col Richard L. Craft, 27th Fighter Squadron Commander, landed with the Wing's first single seat F-15. In recognition of its accomplishment of introducing the F-15 into the Air Force's operational inventory, the 1st Tactical Fighter Wing received its first Air Force Outstanding Unit Award, for the period 1 July 1975 – 31 October 1976.
After achieving operational ready status, the wing applied the experience it had earned to a program nicknamed "Ready Eagle," helping prepare the 36th Tactical Fighter Wing at Bitburg Air Base, Germany, to receive the F-15. The 1st assisted in training maintenance personnel and pilots. By 23 September 1977, the wing had provided Bitburg with 88 operationally ready pilots and 522 maintenance specialists, and it later trained an additional 1,100 maintenance personnel at Bitburg.
On 15 April 1977, the 1 TFW acquired a new mission, assuming responsibility for the 6th Airborne Command and Control Squadron's EC-135 aircraft and crews, previously assigned to the 4500th Air Base Wing at Langley. The 6 ACCS flew EC-135 airborne command posts in support of the U.S. Commander in Chief, Atlantic Command (USCINCLANT), with deployments throughout the Atlantic region until early 1992. The 1st Fighter Wing's participation in worldwide deployments and training exercises continued through the 1980s, with service in countries throughout Europe, Asia, the Middle East, Africa, and Central America.
The final F-15s left the 1st Fighter Wing on 3 September 2010, after operating the weapon system for nearly 35 years.
Southwest Asia operations
The training and experience gained were called upon in the summer of 1990, when Iraqi forces invaded Kuwait. On 7 August 1990, the 27th and 71st Tactical Fighter Squadrons began deploying to Saudi Arabia as the first American combat units on the ground there, supporting the defense of the Arabian Peninsula against further Iraqi aggression in an operation dubbed Operation Desert Shield. In all, the 1 TFW deployed 48 aircraft to the Persian Gulf. By 16 January 1991, when Desert Shield came to a close, the wing had amassed 4,207 sorties patrolling the Kuwait and Iraq border areas.
At 0115 local Saudi Arabia time, on 17 January 1991, sixteen 1st Tactical Fighter Wing F-15s departed King Abdul-Aziz Air Base and flew toward Iraq to participate in Operation Desert Storm, the liberation of Kuwait from the Iraqis.
During the first night of the operation, Captain Steven W. Tate of the 71st Tactical Fighter Squadron shot down an Iraqi Mirage F1, which turned out to be the wing's only kill during the war. It was also the first combat credit awarded to the wing under the command of the U.S. Air Force. By its return on 8 March 1991, the 1st Tactical Fighter Wing had amassed a total of 2,564 sorties during Operation Desert Storm.
The end of the First Gulf War did not bring an end to the wing's support in Southwest Asia. The 1st provided six months of coverage each year monitoring the Iraqi no-fly zones under Operation Southern Watch and Operation Northern Watch. In October 1994, when Saddam Hussein again placed forces near the Kuwaiti border, the wing participated in a short-notice deployment, Operation Vigilant Warrior.
Operation Vigilant Warrior demonstrated the need for an Air Force capability to provide combat air power globally at short notice. This requirement resulted in the concept of the Air Expeditionary Force (AEF). During AEF II, the 1st Fighter Wing deployed 12 F-15s and over 600 personnel to Shaheed Mwaffaq Air Base, Jordan, from 12 April to 28 June 1996. Wing members built and operated from the bare base and provided support to Operation Southern Watch, supporting UN sanctions and enforcing the no-fly zones in Iraq.
On 25 June 1996, a fuel truck loaded with explosives detonated outside the Khobar Towers Housing area, in Dhahran, Saudi Arabia. The bomb killed 19 Air Force members, including five airmen of the 71st Rescue Squadron, and consequently the 1st Fighter Wing relocated its Southwest Asia operations from Dhahran to Prince Sultan Air Base, Al Kharj.
From 1991
On 1 October 1991, the 1st Tactical Fighter Wing was redesignated 1st Fighter Wing; the 1st Fighter Group was redesignated as the 1st Operations Group and reactivated as part of the wing. The 1st Fighter Wing assumed responsibility of three additional missions—air control, airlift, and search and rescue:
—On 15 March 1992, the 74th Air Control Squadron was transferred to the 1st Fighter Wing. The 74th provided command and control of air operations worldwide.
—On 1 February 1993, the 41st and 71st Rescue Squadrons, and the 741st Maintenance Squadron, were assigned to the 1st Fighter Wing. Stationed at Patrick AFB, Florida, the units provided search and rescue for NASA's space shuttle missions and support of combat search and rescue operations in Southwest Asia.
—On 1 April 1993, C-21 operational support aircraft were assigned to the wing with the establishment of Detachment 1, 1 OG. On 1 May, the detachment inactivated and the 12th Airlift Flight, with the same mission, activated.
The 1st Rescue Group was activated as part of the 1st Fighter Wing on 14 June 1995, to provide operational control of the Search and Rescue mission.
Two realignments ordered by Air Combat Command took effect on the same day, 1 April 1997. The most substantial was the 1st Rescue Group's reassignment to the 347th Wing at Moody Air Force Base. This move meant the loss of two types of aircraft, the HC-130P "Hercules" rescue tanker and the HH-60G "Pave Hawk" helicopter. When the Air Force decided to transfer the 12th Airlift Flight to Air Mobility Command, another type of aircraft, the C-21, was removed from the 1st Fighter Wing's possession exactly four years after it had been assigned.
More than 150 personnel from 11 units within the 1st Fighter Wing deployed to the European theater in direct support of Operation Allied Force and associated operations such as Noble Anvil and Shining Hope. What made the wing's valued participation in this contingency unique is that it sent no aircraft in support of it, exemplifying the diversity of the 1st Fighter Wing's comprehensive mission.
Responsible for a worldwide mobility commitment to execute command and control operations, the 74th Air Control Squadron provided the largest contingent of 1st Fighter Wing personnel and equipment to Operation Noble Anvil. The 74th ACS set up its equipment outside Budapest, Hungary, to provide joint forces and theater commanders with an accurate air picture for conducting offensive and defensive missions in the European Theater of Operations.
After September 11 terrorist attacks
After the September 11 attacks in 2001, the 1st Fighter Wing took to the skies to simultaneously defend the east and west coasts of the United States against further terrorist attacks. The wing's F-15s were among the first fighters on scene over Washington, D.C., and remained on station continuously for the next six months. The 1st Fighter Wing simultaneously participated in the homeland defense mission of Operation Noble Eagle; maintained its lead-wing status in the USAF's Air Expeditionary Force rotations to Southwest Asia and Turkey, enforcing no-fly zones in Operations Southern Watch and Northern Watch until 2003; and deployed fighters to Keflavík, Iceland, to fulfill NATO treaty obligations.
During the first stages of the Iraq War in 2003, the 71st Fighter Squadron deployed again to Southwest Asia. In 2005, the 27th and 94th Fighter Squadrons became the first squadrons in the world to achieve operational status flying the F-22 Raptor.
Joint basing
The 1st Fighter Wing served as the host unit of Langley AFB from 1975 until 7 January 2010, when it relinquished two of its four groups to the newly reactivated 633d Air Base Wing, which assumed host duties for the base. The change of command was also a pivotal step in the realignment and consolidation of Langley AFB and Fort Eustis into Joint Base Langley-Eustis, which stood up in January 2010.
Lineage, assignments
Designated as: 1st Fighter Wing on 28 July 1947
Organized on: 15 August 1947
Redesignated as: 1st Fighter-Interceptor Wing on 16 April 1950
Inactivated on: 6 February 1952
Redesignated as: 1st Fighter Wing (Air Defense), 14 September 1956
Activated on: 18 October 1956
Redesignated as: 1st Tactical Fighter Wing on 1 October 1970
Redesignated as: 1st Fighter Wing on 1 October 1991
Assignments
Twelfth Air Force, 15 August 1947
Fourth Air Force, 20 December 1948
Fifteenth Air Force, 1 May 1949
Attached to: 67th Tactical Reconnaissance Wing, 25 November 1947 – 28 March 1949
Attached to: 22d Bombardment Wing, 10 May 1949 – 1 April 1950
Fourth Air Force, 1 July 1950
Attached to: Western Air Defense Force, 1–31 July 1950
Western Air Defense Force, 1 August 1950 – 6 February 1952
Attached to: Southern California Air Defense Sector [Provisional], 7 August – 19 September 1950
Attached to: 27th Air Division, 20 September 1950 – c. 6 February 1952
30th Air Division, 18 October 1956
Detroit Air Defense Sector, 1 April 1959
34th Air Division, 1 April 1966
23rd Air Division, 1 December 1969
26th Air Division, 31 December 1969
836th Air Division, 1 October 1970
Ninth Air Force, 30 September 1971 – present
Flying components
Groups
1st Fighter Group (later, 1st Fighter-Interceptor Group; 1st Fighter; 1st Operations Group): 15 August 1947 – 6 February 1952 (detached 15 August 1950 – 3 June 1951); 18 October 1956 – 1 February 1961; 1 October 1991–.
1st Airdrome Group (later 1st Air Base Group, 1st Combat Support Group, 1st Support Group, 1st Mission Support Group): 15 August 1947 – 6 February 1952; 18 October 1956 – 30 June 1975; 15 April 1977 – 7 January 2010
1st Maintenance & Supply Group (later 1st Logistics Group, 1st Maintenance Group): 15 August 1947 – 6 February 1952; 18 October 1956 – 1 February 1961; 1 October 1991–.
1st Medical Group (earlier USAF Hospital, Langley, USAF Regional Hospital, Langley): 15 April 1977 – present
1st Rescue Group: 14 June 1995 – 1 April 1997.
1st Station Medical Group (later 1st Medical Group, 1st USAF Hospital, 1st Tactical Hospital): 15 August 1947 – 6 February 1952, 18 October 1956 – 1 May 1973, 1 February 1978 – 15 March 1987
67th Reconnaissance Group: 15 August – 25 November 1947.
Squadrons
6th Airborne Command and Control Squadron: 19 April 1976 – 1 October 1991.
7th Liaison Squadron: 1 September 1947 – 28 March 1949.
27th Tactical Fighter Squadron: 1 July 1971 – 1 October 1991 (detached 7 August 1990 – 8 March 1991).
45th Tactical Fighter Squadron: 1 October 1970 – 1 July 1971.
46th Tactical Fighter Squadron: 1 October 1970 – 1 July 1971.
47th Tactical Fighter Squadron: 1 October 1970 – 1 July 1971.
71st Fighter-Interceptor Squadron (later, 71st Tactical Fighter, then 71st Fighter Squadron): 1 February 1961 – 16 January 1967; 1 July 1971 – 1 October 1991 (detached 7 August 1990 – 8 March 1991).
84th Fighter-Interceptor Squadron: 31 December 1969 – 1 October 1970.
94th Fighter-Interceptor Squadron (later, 94th Tactical Fighter, then 94th Fighter Squadron): attached 15 August 1950 – 3 June 1951; assigned 1 February 1961 – 1 December 1969 (detached 24 May – 3 November 1969); assigned 1 July 1971 – 1 October 1991.
188th Fighter-Interceptor Squadron: attached 15 June 1951 – 6 February 1952.
4424th Combat Crew Training Squadron: 1 October 1970 – 30 June 1972.
4501st Tactical Fighter Replacement Squadron: 1 October 1971 – 30 June 1975.
Flights
4401st Helicopter Flight: 31 March 1987 – 1 October 1991.
Stations
March Field (later, AFB), California, 15 August 1947
George AFB, California, 18 July 1950
Norton AFB, California, 1 December 1951 – 6 February 1952
Selfridge AFB, Michigan, 18 October 1956
Hamilton AFB, California, 31 December 1969
MacDill AFB, Florida, 1 October 1970
Langley AFB, Virginia, 30 June 1975–.
Components of wing deployed to King Abdul Aziz Air Base, Saudi Arabia
1st Tactical Fighter Wing (Provisional)
(Operation Desert Storm/Desert Shield), August 1990 – March 1991
Aircraft
P-80 Shooting Star (later F-80) (1947–1949)
FA (later, RB)-26 (1947–1949)
Stinson L-13 (1947–1949)
B-26 Marauder (1948–1949)
Piper L-4 (1948–1949)
L-5 Sentinel (1948–1949)
B-29 Superfortress (1949)
F-86 Sabre (1949–1952, 1956–1960)
P-51 Mustang (1951–1952)
F-102 Delta Dagger (1958–1960)
F-106 Delta Dart (1960–1969, 1969–1970)
F-4 Phantom II (1970–1990s)
B-57 Canberra (1970–1972)
F-15 Eagle (1976–2010)
EC-135 (1976–1992)
HH-3 (1993–1994)
Lockheed HC-130 (1993–1997)
C-21 (1993–1997)
HH-60 Pave Hawk (1994–1997)
F-22 Raptor (2004–present)
Organization
The 1st Fighter Wing currently comprises the following major units:
Headquarters, 1st Fighter Wing
1st Operations Group
27th Fighter Squadron
71st Fighter Squadron
94th Fighter Squadron
1st Operations Support Squadron
1st Maintenance Group
1st Maintenance Squadron
1st Munitions Squadron
Commanders
Col Carl J. Crane, 15 August 1947
Col Elvin F. Maughn, 19 January 1948
Col Clifford H. Rees, 17 May 1948
Col Joseph H. Davidson, 13 January 1949
Col George McCoy Jr., 14 June 1949
Col William L. Lee, 19 August 1949
Col Wiley D. Ganey, 4 January 1950
Col George McCoy Jr., 17 February 1950
Brig Gen Donald R. Hutchinson, c. 17 October 1950
Col Dolf E. Muehleisen, 14 December 1950
Col Robert F. Worley, c. June 1951 – 6 February 1952
Col Glenn E. Duncan, 18 October 1956
Col Charles D. Sonnkalb, c. August 1959
Col George J. LaBreche, c. December 1960
Col Ralph G. Taylor Jr., 15 June 1962
Col Wallace B. Frank, 11 September 1963
Col Converse B. Kelly, 16 September 1963
Col Kenneth E. Rosebush, August 1966
Col Taras T. Popovich, 29 April 1968
Col Morris B. Pitts, c. 31 October 1969
Col Mervin M. Taylor, January 1970
Col Travis R. McNeil, 1 October 1970
Col Robert F. Titus, 1 March 1971
Col Howard W. Leaf, 6 May 1971
Col Walter D. Druen Jr., 1 November 1971
Col Sidney L. Davis, 18 April 1972
Col Gerald J. Carey Jr., 25 June 1973
Col Ernest A. Bedke, by June 1975
Lt Col George H. Miller, 1 July 1975
Brig Gen Larry D. Welch, 1 August 1975
Brig Gen John T. Chain, Jr., 1 August 1977
Col Neil L. Eddins, 27 March 1978
Col Donald L. Miller, 15 May 1979
Brig Gen William T. Tolbert, 11 August 1980
Brig Gen Eugene H. Fischer, 29 January 1982
Brig Gen Henry Viccellio Jr., 6 April 1983
Brig Gen Billy G. McCoy, 31 May 1985
Col Buster C. Glosson, 10 July 1986
Col Richard B. Myers, 11 June 1987
Col John M. McBroom, 24 February 1989
Col David J. McCloud, 27 June 1991
Brig Gen Gregory S. Martin, 15 June 1993
Brig Gen William R. Looney III, 23 May 1995
Col Felix Dupre, 11 April 1996 (temporary)
Brig Gen William R. Looney III, 29 June 1996
Brig Gen Theodore W. Lay II, 10 July 1996
Col Gary R. Dylewski, 21 October 1997
Col Felix Dupre, 7 April 1999
Brig Gen Stephen M. Goldfein, 10 April 2000
Col Stephen J. Miller, 11 January 2002 – September 2003
Col Frank Gorenc, September 2003 – June 2005
Brig Gen Burton M. Field, June 2005 – April 2007
Brig Gen Mark Barrett, April 2007 – May 2009
Col Matthew H. Molloy, 8 May 2009 – 23 May 2011
Col Kevin J. Robbins, 23 May 2011 – July 2013
Col Kevin A. Huyck, July 2013 – 2015
Col Peter Fesler, 2015 – 2017
Col Jason T. Hinds, 2017–2019
Col David R. Lopez, 2019 – present
Honors
Authorized to display honors earned by the 1st Operations Group prior to 15 August 1947.
Service Streamers. None.
Campaign Streamers.
World War I: Champagne-Marne; Aisne-Marne; Oise-Aisne; St Mihiel; Meuse-Argonne; Lorraine Defensive Sector; Champagne Defensive Sector.
World War II: Air Offensive, Europe; Algeria-French Morocco; Tunisia; Sicily; Naples-Foggia; Anzio; Rome-Arno; Normandy; Northern France; Southern France; North Apennines; Rhineland; Central Europe; Po Valley; Air Combat, EAME Theater.
Southwest Asia: Defense of Saudi Arabia; Liberation and Defense of Kuwait.
Decorations.
Distinguished Unit Citations: Italy, 25 August 1943; Italy, 30 August 1943; Ploieşti, Romania, 18 May 1944.
Air Force Outstanding Unit Awards: 1 July 1975 – 31 October 1976; 15 June 1982 – 15 June 1984; 16 June 1984 – 15 June 1986; 1 June 1995 – 31 May 1997; 1 June 1998 – 31 May 2000; 1 June 2000 – 31 May 2001.
Emblem
Approved for 1st Operations Group on 10 February 1924 and for 1st Fighter Wing on 22 May 1957. The five stripes stand for the original five squadrons, and the crosses represent the group's five campaigns during World War I.
References
Notes
Citations
Bibliography
Alford, Major James S. History of the 1st Fighter Group: Volume II: The 1st Fighter Group in World War II. Privately Printed, 1960.
Gabler, Lt Col Clyde. What Did You Do in WW II Grandpa?. Baltimore, Maryland: Gateway Press, 1994.
Haiber, William P. Frank Luke – The September Rampage. Devel Press, 1999. The story of the 1st Pursuit Group's 17-Kill ace in World War I.
Hartney, Harold E. Up and At 'Em. Harrisburg, Stackpole Sons, 1940. (republished Garden City, New York: Doubleday & Co., Inc., 1971.; New York: Arno Press 1980). Memoirs of the Commander of the 1st Pursuit Group in World War I.
McMullen, Richard F. (1964) "The Fighter Interceptor Force 1962–1964" ADC Historical Study No. 27, Air Defense Command, Ent Air Force Base, CO (Confidential, declassified 22 March 2000)
Mullins, John D. An Escort of P-38s: The 1st Fighter Group in WW2. St. Paul, Minnesota: Phalanx Publishing Co., 1995. (Expanded and republished in 2004)
O'Connell, Charles. A History of First Fighter 1918–1983. Office of TAC History, 1987.
External links
Langley Air Force Base (official site)
AFHRA 1st Fighter Wing fact sheet
1st Fighter Wing Heritage Site
Military units and formations in Virginia
Military units and formations established in 1947
Military units and formations of the United States in the Gulf War
6th Space Operations Squadron

The 6th Space Operations Squadron is an Air Force Reserve satellite command and control squadron located at Schriever Space Force Base, Colorado. The squadron is a backup to NOAA for Defense Meteorological Satellite Program operations.
Mission
The 6th Space Operations Squadron provides a backup command and control center for the Defense Meteorological Satellite Program (DMSP), the longest-running production satellite program. The DMSP satellite constellation provides strategic and tactical weather prediction to aid military operations planning at sea, on land, and in the air. The satellites can image visible and infrared cloud cover and measure precipitation, surface temperature, and soil moisture. In addition, they collect specialized global meteorological, oceanographic, and solar-geophysical information in all weather conditions, and carry space-weather sensors whose data assist high-frequency communications, over-the-horizon radar, and spacecraft drag and reentry tasks. The information provided by the DMSP satellites is used to compile various worldwide weather products for numerous users, such as the Air Force Weather Agency and the Fleet Numerical Meteorology and Oceanography Center, as well as civilian authorities through the Department of Commerce. The 6th is located at Schriever Air Force Base, Colorado.
History
The 4000th Support Group was organized on 1 February 1963 as a component of Strategic Air Command. It was reassigned to the 1st Strategic Aerospace Division on 1 January 1966. On 1 January 1973, the organization was redesignated the 4000th Aerospace Application Group without change in assignment or location. It was redesignated the 4000th Satellite Operations Group on 3 April 1981.
On 1 May 1983, the 4000th Satellite Operations Group at Offutt Air Force Base was transferred to the newly formed Air Force Space Command under the 1st Space Wing and was given a new designation, the 1000th Satellite Operations Group ('One Grand'). The group was reassigned to the 2d Space Wing on 1 April 1986. In May 1989, Detachment 1 at Fairchild Air Force Base, Washington, was upgraded to squadron status, becoming the 5th Satellite Control Squadron. On 30 January 1992, the group was reassigned to the 50th Space Wing. On 31 July 1992, the 1000th Satellite Operations Group was redesignated as the 6th Space Operations Squadron and was reassigned to the 50th Operations Group. The unit was still a Regular Air Force unit and was still stationed at Offutt.
In 1994, President Bill Clinton signed a bill that merged federal weather programs, which had been deemed redundant. The merger saved the government money and placed national weather products under a single entity. It also moved weather operations to the National Oceanic and Atmospheric Administration in Suitland, Maryland.
On 30 September 1998, the 6th Space Operations Squadron was inactivated. It activated in the Air Force reserve with assignment to the 310th Space Group at Schriever Air Force Base, Colorado on 1 October 1998.
Under its successive designations, the squadron has performed command and control of the Defense Meteorological Satellite Program (DMSP) satellites since 1 February 1963.
Lineage
Designated as the 4000th Support Group and organized on 1 February 1963
Redesignated 4000th Aerospace Application Group on 1 January 1973
Redesignated 4000th Satellite Operations Group on 3 April 1981
Redesignated 1000th Satellite Operations Group on 1 May 1983
Redesignated 6th Space Operations Squadron on 31 July 1992
Inactivated on 30 September 1998
Activated in the reserve on 1 October 1998
Assignments
Strategic Air Command, 1 February 1963
1st Strategic Aerospace Division, 1 January 1966
1st Space Wing, 1 May 1983
2d Space Wing, 1 April 1986
50th Space Wing, 30 January 1992
50th Operations Group, 31 July 1992 – 30 September 1998
310th Space Group, 1 October 1998
310th Operations Group, 7 March 2008
Stations
Offutt Air Force Base, Nebraska, 1 February 1963 – 30 September 1998
Schriever Air Force Base, Colorado, 1 October 1998 – present
Spacecraft controlled
Defense Meteorological Satellite Program (1963–1998; 1998–present)
References
Notes
Bibliography
Military units and formations in Colorado
Space Operations
GraphPad Software

GraphPad Software Inc. is a privately held software development corporation. It was founded in 1989 and operates in California. Its products include the 2D scientific graphing, biostatistics, and curve-fitting software GraphPad Prism and the free, web-based statistical calculation software GraphPad QuickCalcs.
GraphPad Prism
GraphPad Prism is a commercial scientific 2D graphing and statistics software for Windows and Mac OS desktop computers.
Software features include nonlinear regression, with functionalities including the removal of outliers, comparisons of models, comparisons of curves, and interpolation of standard curves. The software allows the automatic updating of results and graphs, and has functionality for displaying error bars.
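Prism's fitting engine is proprietary, but the kind of nonlinear regression it performs can be illustrated in miniature. The sketch below is a plain-Python Gauss-Newton fit of an exponential-decay model to synthetic data; the model, data, and starting values are all invented for the example, and real packages typically use more robust Marquardt-style iteration:

```python
import math

def model(x, a, b):
    """Exponential-decay model y = a * exp(-b * x)."""
    return a * math.exp(-b * x)

def fit_exp_decay(xs, ys, a=5.0, b=0.2, iters=50):
    """Gauss-Newton nonlinear least squares for the two parameters (a, b)."""
    for _ in range(iters):
        # Jacobian of the model w.r.t. (a, b) at each data point
        J = [(math.exp(-b * x), -a * x * math.exp(-b * x)) for x in xs]
        # Residuals between observed and predicted values
        r = [y - model(x, a, b) for x, y in zip(xs, ys)]
        # Solve the 2x2 normal equations (J^T J) * delta = J^T r
        s11 = sum(j1 * j1 for j1, _ in J)
        s12 = sum(j1 * j2 for j1, j2 in J)
        s22 = sum(j2 * j2 for _, j2 in J)
        t1 = sum(j1 * ri for (j1, _), ri in zip(J, r))
        t2 = sum(j2 * ri for (_, j2), ri in zip(J, r))
        det = s11 * s22 - s12 * s12
        a += (s22 * t1 - s12 * t2) / det
        b += (s11 * t2 - s12 * t1) / det
    return a, b

# Noise-free synthetic data generated from a = 10, b = 0.5
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [10.0 * math.exp(-0.5 * x) for x in xs]
a, b = fit_exp_decay(xs, ys)
print(round(a, 3), round(b, 3))
```

On noise-free data the iteration recovers the generating parameters; with real data, features such as Prism's outlier removal and model comparison address the cases where a single fit like this is not enough.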
Alternative graphing and statistical software packages to GraphPad Prism include SciDAVis, SigmaPlot, Origin, and interfaces to R.
References
Software companies based in California
Companies based in San Diego
Software companies of the United States
Cosmobot

CosmoBot is a child-friendly, interactive, remote-controlled telerehabilitation robot designed by AnthroTronix, Inc. CosmoBot is part of an overall assistive technology system that includes the CosmoBot robot, the Mission Control input device, and accompanying software. With the accompanying software, CosmoBot can be used as part of a play therapy program that promotes rehabilitation and development of disabled children. During therapy sessions, the CosmoBot system automatically collects data for therapist evaluation.
History
The concept of CosmoBot was created by Dr. Corinna Lathan, who cofounded AnthroTronix, Inc. with Jack Vice in 1999. The entire CosmoBot system is manufactured and marketed by AT KidSystems, Inc. Development of CosmoBot was sponsored by the National Institutes of Health and the National Science Foundation.
Theory
Why was it designed?
CosmoBot was designed as an assistive tool for therapists and educators working with developmentally and learning disabled children, including those with autism and cerebral palsy. Enjoyable interaction with CosmoBot provides motivation for children to develop new skills more quickly than in traditional therapy. CosmoBot is designed to target many educational goals, ranging from communication to developmental goals.
Design goals
The most important goal of CosmoBot is to provide long-term motivation for children to actively participate in therapy and to help children achieve goals set by therapists and educators. Since CosmoBot will be used by children with varying levels of mobility, motor skills, and language, it needs to be easy to use and adaptable to different users. CosmoBot is designed for an inclusive classroom setting and must allow children to interact with their environment. It must be safe to use, hygienic, and durable. The CosmoBot system must include the capability to collect data that the therapist needs to monitor different objectives.
Target Audience
CosmoBot is intended for use by developmentally disabled children ages 5–12 under the guidance of a therapist or educator. The CosmoBot system is expected to be used as part of an Individualized Education Program developed in accordance with the Individuals with Disabilities Education Act (IDEA); the most current version of the law is known as PL 108-446 or IDEA 2004. It is currently being marketed to therapists and educators, although AT KidSystems expects to produce a home version of the robot. A home version of the Mission Control input device and accompanying software, Cosmo's Play and Learn, is currently being marketed to parents of children ages 3–5 with and without disabilities.
The robot
CosmoBot is a 16-inch tall robot with nine degrees of freedom that is controlled by components of the CosmoBot system: a therapist can operate CosmoBot via computer-based software, and children can operate CosmoBot by using one of several input devices described in the next section.
Movement
Hidden wheels allow the robot to move forward and backward on flat surfaces and to rotate left and right. Each arm has two degrees of freedom, allowing the shoulders to flex and rotate to imitate human shoulder joint movement; the robot can raise and lower its arms, grab objects, and clap. The head moves in pitch (nodding yes) and yaw (shaking no), and the mouth opens and closes.
Modes of operation
The therapist selects which of three modes of operation is appropriate for each therapy goal and creates a lesson tailored to each child.
Live Play
CosmoBot can be programmed to immediately perform actions upon receipt of a command from the therapist or child through any of the input devices (below).
Simon Says
The therapist can make CosmoBot perform an activity, such as lifting its arms, and ask the child to mimic the motions that CosmoBot makes; this activity is similar to the game Simon Says. The therapist can also use a microphone to talk through CosmoBot and ask the child to perform an activity or issue a voice command.
Playback
The therapist or the child can make CosmoBot perform a series of activities while the system records the sequence. The therapist can then play back the sequence while the child performs the activities at the same time. The therapist or child can also tell and record a story or a song through CosmoBot and interact as it is repeated.
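The CosmoBot software itself is not publicly documented, so purely as an illustration of the record-and-replay idea behind Playback mode, a minimal sketch might look like the following (the class, the command strings, and the timing scheme are all hypothetical):

```python
import time

class RobotRecorder:
    """Record commands as they are issued, then replay the whole sequence.
    Illustrative only -- not the real CosmoBot API."""

    def __init__(self, send_command):
        self.send_command = send_command  # callable that drives the robot
        self.tape = []                    # list of (delay_since_previous, command)
        self._last = None

    def issue(self, command):
        """Send a command now and record it with its inter-command delay."""
        now = time.monotonic()
        delay = 0.0 if self._last is None else now - self._last
        self._last = now
        self.tape.append((delay, command))
        self.send_command(command)

    def playback(self, speed=1.0):
        """Replay the recorded sequence, preserving (scaled) timing."""
        for delay, command in self.tape:
            time.sleep(delay / speed)
            self.send_command(command)

# Usage: record a short session, then replay it for the child to follow along
log = []
rec = RobotRecorder(log.append)
rec.issue("raise_arms")
rec.issue("clap")
rec.playback(speed=10.0)  # same two commands are sent again, faster
```

The same structure would also cover recorded speech: each "command" could carry an audio clip that the robot plays back while its mouth moves.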
Interface and input devices
Software
The software includes a graphical user interface (GUI) that allows the therapist to control the movement of the robot. The software also allows the therapist to set up and monitor the interaction between the child and CosmoBot, and to evaluate their interaction via automatic data collection. The software also stores individual data on the input actions of the children and resulting robotic movement.
Mission Control
Mission Control is CosmoBot's child-friendly version of a keyboard. It contains four large, pressure-sensitive buttons, called aFFx Activators, and incorporates a microphone. It also includes two USB ports for connection of gestural sensors. The therapist uses the GUI to assign a function to each button, such as indicating that depression of the red button will move CosmoBot forward. Activity labels can be placed in front of each button to remind the child of which activity is associated with which button. Four additional buttons can be connected to the back of Mission Control, allowing the therapist to maintain control of the lesson.
Voice input
A microphone is one of the components in Mission Control, allowing voice input to control the robot. The child can control CosmoBot's movement with speech, using commands such as “forward” and “back”. The therapist can also use the microphone to speak through CosmoBot and engage the child in conversation, or the child can speak through the microphone while CosmoBot's mouth moves.
Gestural sensors
The child can control CosmoBot's movement using additional sensors connected to Mission Control. The custom sensors include an adapted OEM joystick, a wearable leg sensor, a wearable arm sensor, a wearable head sensor, a wearable wrist extension glove, and a sensor that measures forearm pronation and supination, used with an arm restraint brace.
See also
Autism therapies
Autism friendly
Educational Psychology
Physical Therapy
Telerobotics
Voice command device
References
Brisben, A. J., Lockerd, A. D., & Lathan, C. (June 2004). Design evolution of an interactive robot for therapy. Telemedicine Journal and e-Health, 10, 252-259.
Lathan, C., Brisben, A., & Safos, C. (April 2005). CosmoBot levels the playing field for disabled children. Interactions -- Special Issue: Robots!. 12, 14-16.
Lathan, C.E., Tracey, M. R., Vice, J.M., Druin, A., & Plaisant, C. Robotic Apparatus and Wireless Communication System, US Patent Application 10/085, 821 filed February 27, 2002
External links
AnthroTronix, Inc.
AT Kid Systems
RERC on Telerehabilitation
Telerehabilitation
Therapeutic robots
1999 robots
Robots of the United States
Humanoid robots
Rolling robots
OREDA

The Offshore and Onshore Reliability Data (OREDA) project was established in 1981 in cooperation with the Norwegian Petroleum Directorate (now the Petroleum Safety Authority Norway). It is "one of the main reliability data sources for the oil and gas industry" and is considered "a unique data source on failure rates, failure mode distribution and repair times for equipment used in the offshore industr[y]". OREDA's original objective was the collection of petroleum industry safety equipment reliability data. The current organization, a cooperating group of several petroleum and natural gas companies, was established in 1983, and at the same time the scope of OREDA was extended to cover reliability data from a wide range of equipment used in oil and gas exploration and production (E&P). OREDA primarily covers offshore, subsea, and topside equipment, but it also includes some onshore E&P and some downstream equipment.
The main objective of the OREDA project is to contribute to an improved safety and cost-effectiveness in design and operation of oil and gas E&P facilities, through collection and analysis of maintenance and operational data, establishment of a high quality reliability database, and exchange of reliability, availability, maintenance and safety (RAMS) technology among the participating companies.
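Handbooks of this kind summarize each equipment class with estimated failure rates (commonly scaled per 10^6 operating hours) and repair times. As a rough sketch of the basic arithmetic behind such a point estimate, with invented numbers and without the confidence bounds that real reliability handbooks also report:

```python
def failure_rate_per_1e6_hours(failures, operating_hours):
    # Point estimate lambda = n / tau, scaled to failures per 10^6 hours
    return failures / operating_hours * 1_000_000

def mean_time_to_failure(failures, operating_hours):
    # MTTF in hours is the reciprocal of the raw failure rate
    return operating_hours / failures

# Hypothetical aggregate: 3 failures observed over 500,000 pump operating hours
print(failure_rate_per_1e6_hours(3, 500_000))  # 6.0 failures per 10^6 hours
print(mean_time_to_failure(3, 500_000))
```

Pooling the operating time and failure counts across many installations, as the OREDA database does, is what makes such estimates statistically meaningful for rarely failing equipment.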
History
Work on the OREDA project proceeds in phases spanning 2–3 years. Handbooks summarizing the data collected and other work completed are issued regularly.
Phase I (1983–1985)The primary activity during this phase was the collection and compilation of reliability data from offshore drilling installations, and the publication of these data in the first OREDA Handbook. This demonstrated the ability of the eight petroleum industry companies involved in the project to cooperate on safety issues. While data in this initial phase included a wide range of equipment types, the level of detail was not as complete as in later phases of the project. Data collected in this phase are published in the OREDA Handbook (1984 edition); Phase I data are not, however, included in the OREDA database.
Phase II (1987–1990)To improve data quality, the project's scope was altered to include collection of production-critical equipment data only. Data began to be stored in a Windows OS database. The development of a tailor-made data collection and analysis program, the OREDA software, was begun. Data collected in this phase are published in the OREDA Handbook (1992 edition), which also contains re-published data collected in phase I.
Phase III (1990–1992)The number of equipment categories included was increased, and more data on maintenance programs were collected. Data quality was improved following established "Guidelines for Data Collection" and via improved quality control. The user interface of the OREDA software was improved, and programming changes allowed it to be used as a broader-purpose tool for data collection. Data collected in this phase are published in the OREDA Handbook (1997 edition).
Phase IV (1993–1996)New software for data collection and analysis was developed, plus specific software and procedures for automatic data import and conversion. Data collected were mainly for the same equipment classes as in phase III, and the data collection was — to a greater extent than previously — carried out by the companies themselves. Data on planned maintenance were also included. Data collected in this phase are published in the OREDA Handbook (2002 edition).
Phase V (1997–2000)New classes of equipment were added to the project, coinciding with a greater emphasis on the collection of subsea data. Development of a new ISO standard, "Petroleum and natural gas industries — Collection and exchange of reliability and maintenance data for equipment" was begun; ISO standard 14224 was issued in July 1999. Data collected in this phase are published in the OREDA Handbook (2002 edition).
Phase VI (2000–2001)Data collection on subsea equipment and new equipment classes were prioritised. A forum for co-operation between major subsea equipment manufacturers was formed. Data collected in this phase are published in the OREDA Handbook (2009 edition).
Phase VII (2002–2003)Priority continued to be given to subsea equipment data collection. A revision of ISO 14224 was begun, with contribution from members of the OREDA project. Data collected in this phase are published in the OREDA Handbook (2009 edition).
Phase VIII (2004–2005)Phase VIII mainly continued the goals and activities of phase VII. OREDA members participated in the revision of ISO 14224, issued in December 2006. Data collected in this phase are published in the OREDA Handbook (2015 edition).
Phase IX (2006–2008)OREDA software and taxonomy were made consistent with ISO 14224. There was a continued focus on including worldwide safety data. In observance of OREDA's 25-year anniversary, a seminar was conducted. Data collected in this phase are published in the OREDA Handbook (2015 edition).
Phase X (2009–2011)The 5th OREDA Handbook (2009 edition) was released; new safety analysis software was developed; initial steps to SIL (safety integrity level) data based on OREDA were taken; and GDF Suez and Petrobras became associated members.
Phase XI (2012–2014)New data collection software was developed; the 6th OREDA Handbook (2015 edition) was planned; a quality assurance review of the database was conducted; a new logo was designed, as were new looks for both the Handbook and the website.
Phase XII (2015–2017)The OREDA project is in its 12th phase as of 2015. During this phase, the 6th OREDA Handbook (2015 edition) was published. A new webshop for the project has been established in collaboration with the European Safety Reliability & Data Association (ESReDA).
Phase XIII (2018–)From 2018 the OREDA project will enter its 13th phase. Digitalization and efficiency improvements are part of the industry's development, and there is a need for OREDA data as a decision-support tool and as support for equipment in operation. Cost-effective solutions are a focus area in the industry, and in line with this trend the OREDA project will provide more efficient procedures and digitalized solutions.
Participants
At times companies have left or joined the project, sometimes as the result of name changes or mergers. The following table lists which companies have contributed data to the OREDA project in phases VIII, IX and XII.
Organization
The OREDA project's Steering Committee consists of one member and one deputy member from each of the participating companies. From these members, a chairperson is elected, and appoints a Project Manager to coordinate activities approved by the steering committee. The Project Manager is also responsible for data quality assurance. Det Norske Veritas (DNV, now called DNV GL), an international certification body and classification society, served as Project Manager during phases I and II and SINTEF (Stiftelsen for INdustriell og TEknisk Forskning; "Foundation for Scientific and Industrial Research" in English) during phases III–IX, after which DNV GL again took over Project Manager duties. The OREDA Handbook releases have been prepared as separate projects, but in consultation with the OREDA Steering Committee; the current version, 2015's 6th Edition, was prepared by SINTEF and NTNU (Norges Teknisk-Naturvitenskapelige Universitet; "Norwegian University of Science and Technology" in English), and is marketed by DNV GL.
Need
Before the OREDA project began collecting data, "no authenticated source of failure information existed for offshore installations," and risk assessments had to be made using "generic data from onshore petroleum plants and data from other industries."
Data
By 1996, OREDA had collated data about 24,000 pieces of equipment in use in offshore installations, and documented 33,000 equipment failures.
The severity of failures documented in the database is categorized as critical, degradation, incipient, or unknown.
The database contains data from almost 300 installations, over 15,000 pieces of equipment, nearly 40,000 failure records, and close to 75,000 maintenance records. Access to this data, and to the search and analysis functions of the OREDA software, is restricted to the OREDA member companies, though contractors working with member companies may be granted temporary access.
Database structure
Data are entered by installation and by owner. Each piece of equipment (e.g. a gas turbine) constitutes a single database inventory record, which includes a technical description of the equipment, and of its environmental and operating conditions, along with all associated failure events. Every failure event is given a set of data including failure cause, date, effect, and mode. Corrective and preventive maintenance data are also included.
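The record layout described above can be sketched as a small data model. This is an illustrative sketch only: the class and field names below are assumptions for exposition, not OREDA's actual schema.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative sketch of the inventory/failure record structure described
# above. All names are assumptions for illustration, not OREDA's schema.

@dataclass
class FailureEvent:
    date: str      # failure date
    cause: str     # failure cause
    mode: str      # failure mode
    effect: str    # failure effect
    severity: str  # "critical", "degradation", "incipient", or "unknown"

@dataclass
class InventoryRecord:
    equipment: str              # one piece of equipment, e.g. a gas turbine
    technical_description: str  # technical description of the equipment
    operating_conditions: str   # environmental and operating conditions
    failures: List[FailureEvent] = field(default_factory=list)

record = InventoryRecord("gas turbine", "single-shaft", "offshore topside")
record.failures.append(
    FailureEvent("1996-03-01", "wear", "fail to start", "shutdown", "critical")
)
print(len(record.failures))  # 1
```

The key structural point is the one-to-many relationship: each inventory record carries all failure events associated with that piece of equipment.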
Software
The OREDA software handles data acquisition, analysis and collation. Features include advanced data search, automated data transfer, quality checking, reliability analyses, a tailor-made module for subsea data which includes an event-logging tool, and the option to configure user-defined applications. It can also be used to collect internal company data.
The most current version of the software, released in concert with the 6th edition of the OREDA Handbook, contains an expanded set of equipment classes, including common subsea components, subsea control systems, subsea power cables, subsea pumps, and subsea vessels.
Impact
Use of the OREDA database has "led to significant savings in the development and operation of platforms."
OREDA's example has inspired the creation of similar inter-company cooperation projects in related fields, such as the SPARTA (System Performance, Availability and Reliability Trend Analysis) database created by the wind farm industry in the UK.
References
Energy economics
International energy organizations
Organizations established in 1981
Petroleum politics |
48851729 | https://en.wikipedia.org/wiki/Monrobot%20XI | Monrobot XI | The Monroe Calculating Machine Mark XI (or Monrobot XI) was a general-purpose stored-program electronic digital computer introduced in 1960 by the Monroe Calculating Machine Division of Litton Industries. The system was marketed "primarily for billing and invoice writing", but could also be used for low-end scientific computing.
The computer had an unusual architecture, in that all data flowed through a central spinning drum magnetic memory. This enabled a low hardware cost, with the tradeoff of low-speed performance. The machine was marketed as an entry-level computer suitable for small businesses.
Pricing and applications
Upon introduction in May 1960, the Monrobot XI sold for US$24,500. In March 1961, the US Army reported that seven units had been made. In November 1961, the price remained unchanged and leasing ran US$700 monthly. By 1966, there were about 350 machines in the field, but by 2013 no machines were believed to remain in existence.
The manufacturer also marketed other computer systems in the family, such as the Monrobot IX and Monrobot MU, but the Monrobot XI appeared to be the most popular model.
In 2021, a collector in North Carolina revealed that he owns half a dozen complete Monrobots, along with ancillary items, manuals, and tape programs.
Development history
Physical appearance and operating environment
The bare-bones Monrobot XI resembled an ordinary steel desk in length, breadth, and height, surmounted by an ordinary typewriter and a breadbox-sized control panel with indicator lights and switches. A paper tape reader and punch were the only machine-readable data media peripherals on the base configuration. At a weight of , its purveyors pronounced it "portable". It could operate outside of an air-conditioned room (tolerating ±25% voltage margins at ambient temperature), using a conventional mains power line (15 A, 110 V, 60 c.p.s. service) and about half as much electrical power (850 W) as a toaster.
Architectural philosophy
Unlike virtually all electronic digital computers ever built, the Monrobot XI, as an early machine, belonged to the small family of computers that totally lacked random-access memory (RAM), an alternative technology that would have allowed it to access all memory words equally rapidly. Even at the time it was introduced, it was not rare for electronic digital computers to use magnetic-core memory for RAM, the price (per bit) of which would eventually fall from over US$1 in the early 1950s to about US$0.20 by the mid-1960s.
Instead, to keep the cost of the machine very low, the Monrobot XI used a form of memory in which words were accessible only periodically and in sequential order, via an electromechanical moving device called magnetic drum memory. Thus, physically it bore some resemblance to the theoretical Turing machine of computer science, albeit with the idealized data tape being of finite length and joined end-to-end, and then finally replicated 32 times in parallel. The long latency of memory access, which followed from exclusive reliance on such a macroscopic moving part, made the Monrobot XI operate very slowly, despite the use of non-mechanical electronics for logical functions.
The Monrobot XI might best be thought of as a modernized (solid-state), low-cost version of the IBM 650, which had been the world's first mass-produced computer, leased at US$3,250 per month; almost 800 were made between 1954 and 1958, and a total of 2,000 by 1962. Both the IBM 650 and Monrobot XI used a magnetic drum for primary memory, but the former used vacuum tubes and bi-quinary coding, rather than transistors and binary coding, for its electronics.
All input and output was performed one character at a time, under direct program control. Only one input device could be active at a time, but one to three output devices could operate simultaneously in synchronization.
Persistent electro-mechanical memory
The Monrobot XI's rewritable, persistent ("nonvolatile") memory consisted of a rotating magnetic drum storing 1,024 words of 32 bits, each of which could record either a single integer, or a pair of zero- or single-address instructions. The average access time of 6 milliseconds (ms) derived from the fact that the drum made a full rotation every 11.7 ms (spinning at 5,124 rpm). Even the 8 "high-speed" registers of the central processing unit (CPU) physically resided on the drum, in two dedicated tracks, but by being replicated 16 times (with 16 times as many read/write heads distributed around the drum periphery), they could be read or written 16 times as fast as the bulk of persistent memory.
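The quoted timing figures are mutually consistent; a quick back-of-the-envelope check, using only the numbers given above:

```python
# Cross-check of the drum timing figures quoted above.
RPM = 5124                       # drum rotation speed
rotation_ms = 60_000 / RPM       # duration of one full rotation, in ms
avg_access_ms = rotation_ms / 2  # on average, data is half a rotation away
reg_access_ms = avg_access_ms / 16  # registers replicated 16x on the track

print(f"{rotation_ms:.1f} ms per rotation")    # ~11.7 ms
print(f"{avg_access_ms:.1f} ms average wait")  # ~5.9 ms, the quoted "6 ms"
print(f"{reg_access_ms:.2f} ms register wait")
```

Halving the rotation period gives the average latency because a randomly requested word is, on average, half a revolution away from the read/write head.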
The whine of the drum could easily be heard, as it continuously spun for as long as the machine was powered up. A perforated metal screen at the side or back of the cabinet could be removed, affording a direct view of the reddish-brown iron-oxide-coated drum, surrounded by multiple stationary magnetic read/write heads. There was no special provision for protection from dust, as the magnetic heads were rigidly mounted at fixed distances from the magnetic surface, and did not use "flying head" technology. The diameter of the drum was approximately .
Electronics
Except for neon lamps in the control panel and 10 to 30 blue-green electroluminescent lamp vacuum tubes employed for output displays in later versions, the electronics used only discrete solid-state components, including 383 transistors (mostly 2N412) and 2,300 diodes (mostly 1N636). The arithmetic unit alone used 190 transistors and 1,675 diodes. This astoundingly small active component count (383) – little more than in the Manchester Baby (250), the world's first (1948) stored-program Turing-complete computer – contrasts starkly with the many billions of transistors present in modern microprocessors used in handheld cellphones. The low component count was a key benefit of its slow electromechanical memory, which exploited synchronization with a spinning drum's rotational angle, rather than adding electronic switches, to accomplish multiplexing of bits. For comparison, even Intel's first (1971) microprocessor, the four-bit Intel 4004, required about 2,300 transistors in its monolithic design.
Construction was via pluggable printed-circuit boards, allowing economical partial replacement of a defective module as the principal means of repair. This continued an electronics construction tradition pioneered when the relatively unreliable, short-lived vacuum tubes had been used as active components, prior to the advance to the more modern, highly reliable solid-state transistors used in the Monrobot XI. Unlike vacuum tubes, which were always plugged into sockets, discrete transistors were often permanently soldered into place.
System timing
The arithmetic unit performed computations using the binary number system, with machine-language programming using hexadecimal digits (called "sexadecimal" in the programmer's manual), and employing the unusual character set of {0,1,2,3,4,5,6,7,8,9,S,T,U,V,W,X}. Addition of 32-bit fixed-point integers required 3 to 9 milliseconds (ms), and multiplication required 28 ms to 34 ms. The longer durations reflected the mean latency (6 ms) of accessing a persistent memory location, rather than a register, to retrieve the second of the two operands.
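The manual's "sexadecimal" digit set uses S through X for the values 10 through 15, where modern hexadecimal uses A through F. A tiny illustrative converter (the digit set is transcribed from the text above; the function itself is a modern sketch, not period software):

```python
# The Monrobot XI's "sexadecimal" digits: 0-9, then S,T,U,V,W,X for 10-15
# (where modern hexadecimal would use A-F).
DIGITS = "0123456789STUVWX"

def sexa_to_int(s: str) -> int:
    """Convert a sexadecimal string to an integer (base 16)."""
    value = 0
    for ch in s:
        value = value * 16 + DIGITS.index(ch)
    return value

print(sexa_to_int("X"))   # 15
print(sexa_to_int("10"))  # 16
print(sexa_to_int("WX"))  # 14*16 + 15 = 239
```

The arithmetic is ordinary base-16 positional notation; only the glyphs for the six highest digit values differ from modern convention.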
Division (500 ms) and more-advanced floating-point functions were implemented in software. Advanced built-in mathematical functions included square root, logarithm, and antilogarithm (on both decimal and natural bases), plus trigonometric functions (in degrees or in radians). A total of 27 machine opcode instructions were defined. Addressing the 1,024-word memory was allocated 10 bits. An optional 2,048-word drum could be installed, and addressed via two extra address bits. The system was in many ways presented as an advanced programmable calculator, in keeping with the heritage of its manufacturer. Simple subroutine calls and returns were supported, as was autoincrement of operands.
The system clock and all timings were synchronized with the rotation of the storage drum, since all data flow passed onto or off of its central data store. Programs could be hand-optimized for maximum speed by carefully considering the timing of the drum rotation and the physical location of instructions and data.
Programming
The computer could be programmed using an assembly language system called QUIKOMP, but its simple machine language instruction set and slow operation speed encouraged many programmers to code directly in numeric opcodes. A reference card was available to help in remembering the numeric opcodes and data codes. Bits were idiosyncratically numbered on the control panel from 16 (MSB, leftmost) down to 1 (LSB, rightmost), although the programmer's manual numbered them from 15 to 0 in a more standard manner.
The minimal loader program had no provisions to support multiple users on a single machine. To accommodate multiple users economically, time-consuming manual data entry could be performed offline, by use of several separate key-to-punch papertape machines (called "add-punch" machines), whose numeric-only keyboards were slightly-modified versions of mechanical desk calculators. Because the mechanical calculator-style keyboards could only generate decimal (base 10) codes, the numerical opcodes were specified in decimal, even though the actual processing was in binary.
Editing and copying of punched tapes was also possible offline, and tapes could be spliced using special adhesive tape and alignment jigs. Experienced programmers soon learned to read the numeric codes visually from the punched paper tapes. When an "add-punch" tape had been proofread and corrected, it was ready for loading via a paper tape reader into the Monrobot XI for execution and debugging.
The console terminal typically was a modified IBM typewriter. An option was a heavy-duty Flexowriter, which rattled and shook the entire machine, especially when the heavy carriage forcefully returned to the beginning of a new print line. Output was via printed paper typed by the typeprinter, or punched oiled paper tape. An 80-column punched card reader/punch could optionally be added to the base configuration.
A single 16-bit register could be displayed on the control panel, primarily for troubleshooting or diagnostic purposes. The control panel could also be used to single-step, halt, or start the processor, for debugging or troubleshooting. There were also provisions for connection of an oscilloscope for more advanced technical troubleshooting.
Eight different control panel "sense switches" could be used to enter simple data into a running program, or to select different modes of program operation under control of software.
The Monrobot computer series in popular culture
An episode of the animated television series Futurama, originally airing in 2001, featured a humanoid robot resembling mid-20th century sex symbol Marilyn Monroe, named "Marilyn Monrobot", as a character within a film viewed by the episode cast.
References
External links
detailed technical specifications of the Monrobot XI
pg 75 (on the Monrobot XI) in Digitale Kleinrechner by Günter Schubert (Springer-Verlag, Mar 13, 2013)
cover page of A Brief History of the Monrobot XI Computer by Donald O. Caselli, May 15, 2011
illustrated memoir (March 15, 2007) by John Mann, about use of the Monrobot XI at Scotch College in Melbourne, Australia during the 1970s
photograph of Monrobot XI control panel
photograph of Monrobot XI QUIKOMP reference card
Computer History Museum catalog entries
Monrobot XI datasheet (promotional material)
Monrobot XI text (promotional material)
Materials related to the Monrobot XI (promotional material & manual)
Transistorized computers |
1121207 | https://en.wikipedia.org/wiki/BusinessObjects | BusinessObjects | SAP BusinessObjects (BO, BOBJ, or BObjects) is an enterprise software company, specializing in business intelligence (BI). BusinessObjects was acquired in 2007 by German company SAP AG. The company claimed more than 46,000 customers in its final earnings release prior to being acquired by SAP. Its flagship product is BusinessObjects XI (or BOXI), with components that provide performance management, planning, reporting, query and analysis, as well as enterprise information management. BusinessObjects also offers consulting and education services to help customers deploy its business intelligence projects. Other toolsets enable universes (the BusinessObjects name for a semantic layer between the physical data store and the front-end reporting tool) and ready-written reports to be stored centrally and made selectively available to communities of users.
History
Bernard Liautaud co-founded BusinessObjects in 1990 and served as its chief executive until September 2005, when he became chairman, a post he held until January 2008. The concept of BusinessObjects and its initial implementation came from Jean-Michel Cambot.
In 1990, the first customer, Coface, was signed. The company went public on NASDAQ in September 1994, making it the first European software company listed in the United States. In 2002, the company made Time magazine Europe's Digital Top 25 of 2002 and was named to BusinessWeek's "Stars of Europe".
On 7 October 2007, SAP AG announced that it would acquire BusinessObjects for $6.8B. As of 22 January 2008, the corporation is fully operated by SAP; this was seen as part of a growing consolidation trend in the business software industry, with Oracle acquiring Hyperion in 2007 and IBM acquiring Cognos in 2008.
BusinessObjects had two headquarters in San Jose, California, and Paris, France, but their biggest office was in Vancouver, British Columbia, Canada. The company's stock was traded on both the Nasdaq and Euronext Paris (BOB) stock exchanges.
Legal
On April 2, 2007, a lawsuit from Informatica (inherited by BusinessObjects from the purchase of Acta Technologies in 2002) resulted in an award of $25 million in damages to Informatica for patent infringement. The lawsuit related to embedded data flows with one input and one output. Informatica asserted that the ActaWorks product (now sold by BusinessObjects as part of Data Integrator), infringed several Informatica patents including US Patent Nos. 6,014,670 and 6,339,775, both titled "Apparatus and Method for Performing Data Transformations in Data Warehousing." BusinessObjects subsequently released a new version of Data Integrator (11.7.2) which removed the infringing product capability.
Timeline
1990: BusinessObjects launches Skipper SQL 2.0.x.
1994: Launches BusinessObjects v3.0 and goes public on the NASDAQ in September — the first French software company listed in the United States.
1996: Enters the OLAP market and launches BusinessObjects v4.0. Bernard Liautaud named one of BusinessWeek's "Hottest Entrepreneurs of the Year."
1997: Introduces WebI thin client, which enables shared information across an extranet.
1999: General Electric (GE) begins working with the company. BusinessObjects goes public in France on the Premier Marché. Acquires Next Action Technologies.
2000: Acquires OLAP@Work for approximately $15 million and announces MDX Connect from this acquisition.
2001: SAP signs an OEM and reseller agreement to bundle Crystal Reports. Acquires Blue Edge Software.
2001: Signs up its single largest global software licensing transaction with Three, formerly known as Hutchison 3G. Transaction was led by Edwin Moore Momife and Jon Stubbington of the UK Company.
2002: Acquires Acta Technologies. Bernard Liautaud named to Business Week's "Stars of Europe," and the company is named one of the "100 Fastest Growing Tech Companies" by Business 2.0. Informatica files a lawsuit against Acta, claiming patent rights infringement.
2003: Acquires Crystal Decisions for $820 million. BusinessObjects releases Dashboard Manager, BusinessObjects Enterprise 6, and BusinessObjects Performance Manager.
2004: Debuts new combined company with the slogan, "Our Future is Clear, Crystal Clear." Launches Crystal v10 and BusinessObjects v6.5.
2005: Launches BusinessObjects XI. Acquires SRC Software, Infommersion, and Medience. Launches BusinessObjects Enterprise XI Release 2.
2006: BusinessObjects acquires Firstlogic, Inc and Nsite Software, Inc.
2006: Acquires ALG Software (formerly Armstrong Laing Group). Launches Crystal Xcelsius, which allows users to transform Microsoft Excel spreadsheet data into interactive Flash media files.
2007: Continuing its string of acquisitions, BusinessObjects acquires Cartesis and Inxight.
2007: In October, SAP AG's Chief Executive Henning Kagermann announced a $6.8 billion deal to acquire BusinessObjects.
2008: In January, SAP absorbs all of BusinessObjects' offices, and renames the entity "BusinessObjects, an SAP company". Following the acquisition of BusinessObjects by SAP, the founder and CEO of BusinessObjects, Bernard Liautaud, announces his resignation.
2009: BusinessObjects becomes a division of SAP instead of a separate company. The portfolio brand "SAP BusinessObjects" was created. Some former BusinessObjects employees now officially work for SAP.
References
External links
SAP BusinessObjects portfolio
Software companies established in 1990
Software companies of France
SAP SE acquisitions
2008 mergers and acquisitions
Data visualization software
Business software companies
Big data companies
Extract, transform, load tools
Data analysis software
Business intelligence companies
Data companies
Data warehousing products |
22393126 | https://en.wikipedia.org/wiki/Inferno%20%28Dante%29 | Inferno (Dante) | Inferno (; Italian for "Hell") is the first part of Italian writer Dante Alighieri's 14th-century epic poem Divine Comedy. It is followed by Purgatorio and Paradiso. The Inferno describes Dante's journey through Hell, guided by the ancient Roman poet Virgil. In the poem, Hell is depicted as nine concentric circles of torment located within the Earth; it is the "realm ... of those who have rejected spiritual values by yielding to bestial appetites or violence, or by perverting their human intellect to fraud or malice against their fellowmen".
As an allegory, the Divine Comedy represents the journey of the soul toward God, with the Inferno describing the recognition and rejection of sin.
Prelude to Hell
Canto I
The poem begins on the night of Maundy Thursday on March 24 (or April 7), 1300, shortly before dawn of Good Friday. The narrator, Dante himself, is thirty-five years old, and thus "midway in the journey of our life" (Nel mezzo del cammin di nostra vita) – half of the biblical lifespan of seventy (Psalm 89:10, Vulgate; Psalm 90:10, KJV). The poet finds himself lost in a dark wood (selva oscura), astray from the "straight way" (diritta via, also translatable as "right way") of salvation. He sets out to climb directly up a small mountain, but his way is blocked by three beasts he cannot evade: a lonza (usually rendered as "leopard" or "leopon"), a leone (lion), and a lupa (she-wolf). The three beasts, taken from Jeremiah 5:6, are thought to symbolize the three kinds of sin that bring the unrepentant soul into one of the three major divisions of Hell. According to John Ciardi, these are incontinence (the she-wolf); violence and bestiality (the lion); and fraud and malice (the leopard); Dorothy L. Sayers assigns the leopard to incontinence and the she-wolf to fraud/malice. It is now dawn of Good Friday, April 8, with the sun rising in Aries. The beasts drive him back despairing into the darkness of error, a "lower place" (basso loco) where the sun is silent ('l sol tace). However, Dante is rescued by a figure who announces that he was born sub Iulio (i.e., in the time of Julius Caesar) and lived under Augustus: it is the shade of the Roman poet Virgil, author of the Aeneid, a Latin epic.
Canto II
On the evening of Good Friday, Dante hesitates as he follows Virgil; Virgil explains that he has been sent by Beatrice, the symbol of Divine Love. Beatrice had been moved to aid Dante by the Virgin Mary (symbolic of compassion) and Saint Lucia (symbolic of illuminating Grace). Rachel, symbolic of the contemplative life, also appears in the heavenly scene recounted by Virgil. The two of them then begin their journey to the underworld.
Canto III: Vestibule of Hell
Dante passes through the gate of Hell, which bears an inscription ending with the famous phrase "Lasciate ogne speranza, voi ch'intrate", most frequently translated as "Abandon all hope, ye who enter here." There are many English translations of this famous line. Some examples include:
All hope abandon, ye who enter here – Henry Francis Cary (1805–1814)
All hope abandon, ye who enter in! – Henry Wadsworth Longfellow (1882)
Leave every hope, ye who enter! – Charles Eliot Norton (1891)
Leave all hope, ye that enter – Carlyle-Okey-Wicksteed (1932)
Lay down all hope, you that go in by me. – Dorothy L. Sayers (1949)
Abandon all hope, ye who enter here – John Ciardi (1954)
Abandon every hope, you who enter. – Charles S. Singleton (1970)
No room for hope, when you enter this place – C. H. Sisson (1980)
Abandon every hope, who enter here. – Allen Mandelbaum (1982)
Abandon all hope, you who enter here. – Robert Pinsky (1993); Robert Hollander (2000)
Abandon every hope, all you who enter – Mark Musa (1995)
Abandon every hope, you who enter. – Robert M. Durling (1996)
Verbatim, the line translates as "Leave (lasciate) every (ogne) hope (speranza), ye (voi) that (ch') enter (intrate)." Dante and his guide hear the anguished screams of the Uncommitted. These are the souls of people who in life took no sides; the opportunists who were for neither good nor evil, but instead were merely concerned with themselves. Among these Dante recognizes a figure implied to be Pope Celestine V, whose "cowardice (in selfish terror for his own welfare) served as the door through which so much evil entered the Church". Mixed with them are outcasts who took no side in the Rebellion of Angels. These souls are forever unclassified; they are neither in Hell nor out of it, but reside on the shores of the Acheron. Naked and futile, they race around through the mist in eternal pursuit of an elusive, wavering banner (symbolic of their pursuit of ever-shifting self-interest) while relentlessly chased by swarms of wasps and hornets, who continually sting them. Loathsome maggots and worms at the sinners' feet drink the putrid mixture of blood, pus, and tears that flows down their bodies. This symbolizes the sting of their guilty conscience and the repugnance of sin. This may also be seen as a reflection of the spiritual stagnation in which they lived.
After passing through the vestibule, Dante and Virgil reach the ferry that will take them across the river Acheron and to Hell proper. The ferry is piloted by Charon, who does not want to let Dante enter, for he is a living being. Virgil forces Charon to take him by declaring, Vuolsi così colà dove si puote / ciò che si vuole ("It is so willed there where is power to do / That which is willed"), referring to the fact that Dante is on his journey on divine grounds. The wailing and blasphemy of the damned souls entering Charon's boat contrast with the joyful singing of the blessed souls arriving by ferry in the Purgatorio. The passage across the Acheron, however, is undescribed, since Dante faints and does not awaken until they reach the other side.
Nine circles of Hell
Overview
Virgil proceeds to guide Dante through the nine circles of Hell. The circles are concentric, representing a gradual increase in wickedness, and culminating at the centre of the earth, where Satan is held in bondage. The sinners of each circle are punished for eternity in a fashion fitting their crimes: each punishment is a contrapasso, a symbolic instance of poetic justice. For example, later in the poem, Dante and Virgil encounter fortune-tellers who must walk forward with their heads on backward, unable to see what is ahead, because they tried to see the future through forbidden means. Such a contrapasso "functions not merely as a form of divine revenge, but rather as the fulfilment of a destiny freely chosen by each soul during his or her life". People who sinned, but prayed for forgiveness before their deaths are found not in Hell but in Purgatory, where they labour to become free of their sins. Those in Hell are people who tried to justify their sins and are unrepentant.
Dante's Hell is structurally based on the ideas of Aristotle, but with "certain Christian symbolisms, exceptions, and misconstructions of Aristotle's text", and a further supplement from Cicero's De Officiis. Virgil reminds Dante (the character) of "Those pages where the Ethics tells of three / Conditions contrary to Heaven's will and rule / Incontinence, vice, and brute bestiality". Cicero, for his part, had divided sins between Violence and Fraud. By conflating Cicero's violence with Aristotle's bestiality, and his fraud with malice or vice, Dante the poet obtained three major categories of sin, as symbolized by the three beasts that Dante encounters in Canto I: these are Incontinence, Violence/Bestiality, and Fraud/Malice (Dorothy L. Sayers, Hell, notes on Canto XI, p. 139). Sinners punished for incontinence (also known as wantonness) – the lustful, the gluttonous, the hoarders and wasters, and the wrathful and sullen – all demonstrated weakness in controlling their appetites, desires, and natural urges; according to Aristotle's Ethics, incontinence is less condemnable than malice or bestiality, and therefore these sinners are located in four circles of Upper Hell (Circles 2–5). These sinners endure lesser torments than do those consigned to Lower Hell, located within the walls of the City of Dis, for committing acts of violence and fraud – the latter of which involves, as Dorothy L. Sayers writes, "abuse of the specifically human faculty of reason". The deeper levels are organized into one circle for violence (Circle 7) and two circles for fraud (Circles 8 and 9). As a Christian, Dante adds Circle 1 (Limbo) to Upper Hell and Circle 6 (Heresy) to Lower Hell, making 9 Circles in total; incorporating the Vestibule of the Futile, this leads to Hell containing 10 main divisions. This "9+1=10" structure is also found within the Purgatorio and Paradiso.
Lower Hell is further subdivided: Circle 7 (Violence) is divided into three rings, Circle 8 (Fraud) is divided into ten bolge, and Circle 9 (Treachery) is divided into four regions. Thus, Hell contains, in total, 24 divisions.
First Circle (Limbo)
Canto IV
Dante wakes up to find that he has crossed the Acheron, and Virgil leads him to the first circle of the abyss, Limbo, where Virgil himself resides. The first circle contains the unbaptized and the virtuous pagans, who, although not sinful enough to warrant damnation, did not accept Christ. Dorothy L. Sayers writes, "After those who refused choice come those without opportunity of choice. They could not, that is, choose Christ; they could, and did, choose human virtue, and for that they have their reward." Limbo shares many characteristics with the Asphodel Meadows, and thus, the guiltless damned are punished by living in a deficient form of Heaven. Without baptism ("the portal of the faith that you embrace") they lacked the hope for something greater than rational minds can conceive. When Dante asks if anyone has ever left Limbo, Virgil states that he saw Jesus ("a Mighty One") descend into Limbo and take Adam, Abel, Noah, Moses, Abraham, David, Rachel, and others (see Limbo of the Patriarchs) into his all-forgiving arms and transport them to Heaven as the first human souls to be saved. The event, known as the Harrowing of Hell, would have occurred in AD 33 or 34.
Dante encounters the poets Homer, Horace, Ovid, and Lucan, who include him in their number and make him "sixth in that high company". They reach the base of a great Castle – the dwelling place of the wisest men of antiquity – surrounded by seven gates, and a flowing brook. After passing through the seven gates, the group comes to an exquisite green meadow and Dante encounters the inhabitants of the Citadel. These include figures associated with the Trojans and their descendants (the Romans): Electra (mother of Troy's founder Dardanus), Hector, Aeneas, Julius Caesar in his role as Roman general ("in his armor, falcon-eyed"), Camilla, Penthesilea (Queen of the Amazons), King Latinus and his daughter, Lavinia, Lucius Junius Brutus (who overthrew Tarquin to found the Roman Republic), Lucretia, Julia, Marcia, and Cornelia Africana. Dante also sees Saladin, a Muslim military leader known for his battle against the Crusaders, as well as his generous, chivalrous, and merciful conduct.
Dante next encounters a group of philosophers, including Aristotle with Socrates and Plato at his side, as well as Democritus, "Diogenes" (either Diogenes the Cynic or Diogenes of Apollonia), Anaxagoras, Thales, Empedocles, Heraclitus, and "Zeno" (either Zeno of Elea or Zeno of Citium). He sees the scientist Dioscorides, the mythical Greek poets Orpheus and Linus, and Roman statesmen Marcus Tullius Cicero and Seneca. Dante sees the Alexandrian geometer Euclid and Ptolemy, the Alexandrian astronomer and geographer, as well as the physicians Hippocrates and Galen. He also encounters Avicenna, a Persian polymath, and Averroes, a medieval Andalusian polymath known for his commentaries on Aristotle's works. Dante and Virgil depart from the four other poets and continue their journey.
Although Dante implies that all virtuous non-Christians find themselves here, he later encounters two (Cato of Utica and Statius) in Purgatory and two (Trajan and Ripheus) in Heaven. In Purg. XXII, Virgil names several additional inhabitants of Limbo who were not mentioned in the Inferno.
Second Circle (Lust)
Canto V
Dante and Virgil leave Limbo and enter the Second Circle – the first of the circles of Incontinence – where the punishments of Hell proper begin. It is described as "a part where no thing gleams". They find their way hindered by the serpentine Minos, who judges all of those condemned for active, deliberately willed sin to one of the lower circles. At this point in Inferno, every soul must confess all of its sins to Minos, after which Minos sentences the soul to its torment by wrapping his tail around himself a number of times corresponding to the circle of Hell to which the soul must go. The role of Minos here is a combination of his classical role as condemner and unjust judge of the underworld and the role of classical Rhadamanthus, interrogator and confessor of the underworld. Through this mandatory confession, every soul verbalizes and sanctions its own ranking among the condemned, since these confessions are the sole grounds for placement in Hell. Dante is not forced to make this confession; instead, Virgil rebukes Minos, and he and Dante continue on.
In the second circle of Hell are those overcome by lust. These "carnal malefactors" are condemned for allowing their appetites to sway their reason. These souls are buffeted back and forth by the terrible winds of a violent storm, without rest. This symbolizes the power of lust to blow needlessly and aimlessly: "as the lovers drifted into self-indulgence and were carried away by their passions, so now they drift for ever. The bright, voluptuous sin is now seen as it is – a howling darkness of helpless discomfort." Since lust involves mutual indulgence and is not, therefore, completely self-centered, Dante deems it the least heinous of the sins and its punishment is the most benign within Hell proper (John Ciardi, Inferno, notes on Canto V, p. 51). The "ruined slope" in this circle is thought to be a reference to the earthquake that occurred after the death of Christ.
In this circle, Dante sees Semiramis, Dido, Cleopatra, Helen of Troy, Paris, Achilles, Tristan, and many others who were overcome by sexual love during their life. Due to the presence of so many rulers among the lustful, the fifth Canto of Inferno has been called the "canto of the queens". Dante comes across Francesca da Rimini, who married the deformed Giovanni Malatesta (also known as "Gianciotto") for political purposes but fell in love with his younger brother Paolo Malatesta; the two began to carry on an adulterous affair. Sometime between 1283 and 1286, Giovanni surprised them together in Francesca's bedroom and violently stabbed them both to death. Francesca explains:
"Love, which in gentlest hearts will soonest bloom
seized my lover with passion for that sweet body
from which I was torn unshriven to my doom.
Love, which permits no loved one not to love,
took me so strongly with delight in him
that we are one in Hell, as we were above.
Love led us to one death. In the depths of Hell
Caïna waits for him who took our lives."
This was the piteous tale they stopped to tell.
Francesca further reports that she and Paolo yielded to their love when reading the story of the adultery between Lancelot and Guinevere in the Old French romance Lancelot du Lac. Francesca says, "Galeotto fu 'l libro e chi lo scrisse". The word "Galeotto" means "pander" but is also the Italian term for Gallehaut, who acted as an intermediary between Lancelot and Guinevere, encouraging them on to love. John Ciardi renders line 137 as "That book, and he who wrote it, was a pander." Inspired by Dante, author Giovanni Boccaccio invoked the name Prencipe Galeotto in the alternative title to The Decameron, a 14th-century collection of novellas. Ultimately, Francesca never makes a full confession to Dante. Rather than admit to her and Paolo's sins, the very reasons they reside in this circle of hell, she consistently takes an erroneously passive role in the adulterous affair. The English poet John Keats, in his sonnet "On a Dream", imagines what Dante does not give us, the point of view of Paolo:
... But to that second circle of sad hell,
Where 'mid the gust, the whirlwind, and the flaw
Of rain and hail-stones, lovers need not tell
Their sorrows. Pale were the sweet lips I saw,
Pale were the lips I kiss'd, and fair the form
I floated with, about that melancholy storm.
As he did at the end of Canto III, Dante – overcome by pity and anguish – describes his swoon: "I fainted, as if I had met my death. / And then I fell as a dead body falls".
Third Circle (Gluttony)
Canto VI
In the third circle, the gluttonous wallow in a vile, putrid slush produced by a ceaseless, foul, icy rain – "a great storm of putrefaction" – as punishment for subjecting their reason to a voracious appetite. Cerberus (described as "il gran vermo", literally "the great worm", line 22), the monstrous three-headed beast of Hell, ravenously guards the gluttons lying in the freezing mire, mauling and flaying them with his claws as they howl like dogs. Virgil obtains safe passage past the monster by filling its three mouths with mud.
Dorothy L. Sayers writes that "the surrender to sin which began with mutual indulgence leads by an imperceptible degradation to solitary self-indulgence". The gluttons grovel in the mud by themselves, sightless and heedless of their neighbors, symbolizing the cold, selfish, and empty sensuality of their lives. Just as lust has revealed its true nature in the winds of the previous circle, here the slush reveals the true nature of sensuality – which includes not only overindulgence in food and drink, but also other kinds of addiction.
In this circle, Dante converses with a Florentine contemporary identified as Ciacco, which means "hog". A character with the same nickname later appears in The Decameron of Giovanni Boccaccio, where his gluttonous behaviour is clearly portrayed. Ciacco speaks to Dante regarding strife in Florence between the "White" and "Black" Guelphs, which developed after the Guelph/Ghibelline strife ended with the complete defeat of the Ghibellines. In the first of several political prophecies in the Inferno, Ciacco "predicts" the expulsion of the White Guelphs (Dante's party) from Florence by the Black Guelphs, aided by Pope Boniface VIII, which marked the start of Dante's long exile from the city. These events occurred in 1302, before the poem was written but after Eastertide of 1300, the time in which the poem is set.
Fourth Circle (Greed)
Canto VII
The Fourth Circle is guarded by a figure Dante names as Pluto: this is Plutus, the deity of wealth in classical mythology. Although the two are often conflated, he is a distinct figure from Pluto (Dis), the classical ruler of the underworld. At the start of Canto VII, he menaces Virgil and Dante with the cryptic phrase Pape Satàn, pape Satàn aleppe, but Virgil protects Dante from him.
Those whose attitude toward material goods deviated from the appropriate mean are punished in the fourth circle. They include the avaricious or miserly (including many "clergymen, and popes and cardinals"), who hoarded possessions, and the prodigal, who squandered them. The hoarders and spendthrifts joust, using great weights as weapons that they push with their chests:
Here, too, I saw a nation of lost souls,
far more than were above: they strained their chests
against enormous weights, and with mad howls
rolled them at one another. Then in haste
they rolled them back, one party shouting out:
"Why do you hoard?" and the other: "Why do you waste?"
Relating this sin of incontinence to the two that preceded it (lust and gluttony), Dorothy L. Sayers writes, "Mutual indulgence has already declined into selfish appetite; now, that appetite becomes aware of the incompatible and equally selfish appetites of other people. Indifference becomes mutual antagonism, imaged here by the antagonism between hoarding and squandering." The contrast between these two groups leads Virgil to discourse on the nature of Fortune, who raises nations to greatness and later plunges them into poverty, as she shifts, "those empty goods from nation unto nation, clan to clan". This speech fills what would otherwise be a gap in the poem, since both groups are so absorbed in their activity that Virgil tells Dante that it would be pointless to try to speak to them – indeed, they have lost their individuality and been rendered "unrecognizable".
Fifth Circle (Wrath)
In the swampy, stinking waters of the river Styx – the Fifth Circle – the actively wrathful fight each other viciously on the surface of the slime, while the sullen (the passively wrathful) lie beneath the water, withdrawn, "into a black sulkiness which can find no joy in God or man or the universe". At the surface of the foul Stygian marsh, Dorothy L. Sayers writes, "the active hatreds rend and snarl at one another; at the bottom, the sullen hatreds lie gurgling, unable even to express themselves for the rage that chokes them". As the last circle of Incontinence, the "savage self-frustration" of the Fifth Circle marks the end of "that which had its tender and romantic beginnings in the dalliance of indulged passion".
Canto VIII
Phlegyas reluctantly transports Dante and Virgil across the Styx in his skiff. On the way they are accosted by Filippo Argenti, a Black Guelph from the prominent Adimari family. Little is known about Argenti, although Giovanni Boccaccio describes an incident in which he lost his temper; early commentators state that Argenti's brother seized some of Dante's property after his exile from Florence. Just as Argenti enabled the seizing of Dante's property, he himself is "seized" by all the other wrathful souls.
When Dante responds "In weeping and in grieving, accursed spirit, may you long remain," Virgil blesses him with words used to describe Christ himself (Luke 11:27). Literally, this reflects the fact that souls in Hell are eternally fixed in the state they have chosen, but allegorically, it reflects Dante's beginning awareness of his own sin.
Entrance to Dis
In the distance, Dante perceives high towers that resemble fiery red mosques. Virgil informs him that they are approaching the City of Dis. Dis, itself surrounded by the Stygian marsh, contains Lower Hell within its walls. Dis is one of the names of Pluto, the classical king of the underworld, in addition to being the name of the realm. The walls of Dis are guarded by fallen angels. Virgil is unable to convince them to let Dante and him enter.
Canto IX
Dante is threatened by the Furies (consisting of Alecto, Megaera, and Tisiphone) and Medusa. An angel sent from Heaven secures entry for the poets, opening the gate by touching it with a wand, and rebukes those who opposed Dante. Allegorically, this reveals the fact that the poem is beginning to deal with sins that philosophy and humanism cannot fully understand. Virgil also mentions to Dante how Erichtho sent him down to the lowest circle of Hell to bring back a spirit from there.
Sixth Circle (Heresy)
Canto X
In the sixth circle, heretics, such as Epicurus and his followers (who say "the soul dies with the body"), are trapped in flaming tombs. Dante holds discourse with a pair of Epicurean Florentines in one of the tombs: Farinata degli Uberti, a famous Ghibelline leader (following the Battle of Montaperti in September 1260, Farinata strongly protested the proposed destruction of Florence at the meeting of the victorious Ghibellines; he died in 1264 and was posthumously condemned for heresy in 1283); and Cavalcante de' Cavalcanti, a Guelph who was the father of Dante's friend and fellow poet, Guido Cavalcanti. The political affiliation of these two men allows for a further discussion of Florentine politics. In response to a question from Dante about the "prophecy" he has received, Farinata explains that what the souls in Hell know of life on earth comes from seeing the future, not from any observation of the present. Consequently, when "the portal of the future has been shut", it will no longer be possible for them to know anything. Farinata explains that also crammed within the tomb are Emperor Frederick II, commonly reputed to be an Epicurean, and Ottaviano degli Ubaldini, whom Dante refers to as il Cardinale.
Canto XI
Dante reads an inscription on one of the tombs indicating it belongs to Pope Anastasius II – although some modern scholars hold that Dante erred in the verse mentioning Anastasius ("Anastasio papa guardo, / lo qual trasse Fotin de la via dritta", lines 8–9), confusing the pope with the Byzantine emperor of the time, Anastasius I. Pausing for a moment before the steep descent to the foul-smelling seventh circle, Virgil explains the geography and rationale of Lower Hell, in which the sins of violence (or bestiality) and fraud (or malice) are punished. In his explanation, Virgil refers to the Nicomachean Ethics and the Physics of Aristotle, with medieval interpretations.
Virgil asserts that there are only two legitimate sources of wealth: natural resources ("Nature") and human labor and activity ("Art"). Usury, to be punished in the next circle, is therefore an offence against both; it is a kind of blasphemy, since it is an act of violence against Art, which is the child of Nature, and Nature derives from God.
Virgil then indicates the time through his unexplained awareness of the stars' positions. The "Wain", the Great Bear, now lies in the northwest over Caurus (the northwest wind). The constellation Pisces (the Fish) is just appearing over the horizon: it is the zodiacal sign preceding Aries (the Ram). Canto I notes that the sun is in Aries, and since the twelve zodiac signs rise at two-hour intervals, it must now be about two hours prior to sunrise: 4:00 AM on Holy Saturday, April 9 (John Ciardi, Inferno, notes on Canto XI, p. 95).
Seventh Circle (Violence)
Canto XII
The Seventh Circle, divided into three rings, houses the Violent. Dante and Virgil descend a jumble of rocks that had once formed a cliff to reach the Seventh Circle from the Sixth Circle, having first to evade the Minotaur (L'infamia di Creti, "the infamy of Crete", line 12); at the sight of them, the Minotaur gnaws his flesh. Virgil assures the monster that Dante is not its hated enemy, Theseus. This causes the Minotaur to charge them as Dante and Virgil swiftly enter the seventh circle. Virgil explains the presence of shattered stones around them: they resulted from the great earthquake that shook the earth at the moment of Christ's death (Matt. 27:51), at the time of the Harrowing of Hell. Ruins resulting from the same shock were previously seen at the beginning of Upper Hell (the entrance of the Second Circle, Canto V).
Ring 1: Against Neighbors: In the first round of the seventh circle, the murderers, war-makers, plunderers, and tyrants are immersed in Phlegethon, a river of boiling blood and fire. Ciardi writes, "as they wallowed in blood during their lives, so they are immersed in the boiling blood forever, each according to the degree of his guilt". The Centaurs, commanded by Chiron and Pholus, patrol the ring, shooting arrows into any sinners who emerge higher out of the boiling blood than each is allowed. The centaur Nessus guides the poets along Phlegethon and points out Alexander the Great (disputed), "Dionysius" (either Dionysius I or Dionysius II, or both; they were bloodthirsty, unpopular tyrants of Sicily), Ezzelino III da Romano (the cruelest of the Ghibelline tyrants), Obizzo d'Este, and Guy de Montfort.
The river grows shallower until it reaches a ford, after which it comes full circle back to the deeper part where Dante and Virgil first approached it; immersed here are tyrants including Attila, King of the Huns (flagello in terra, "scourge on earth", line 134), "Pyrrhus" (either the bloodthirsty son of Achilles or King Pyrrhus of Epirus), Sextus, Rinier da Corneto, and Rinier Pazzo. After bringing Dante and Virgil to the shallow ford, Nessus leaves them to return to his post. This passage may have been influenced by the early medieval Visio Karoli Grossi.
Canto XIII
Ring 2: Against Self: The second round of the seventh circle is the Wood of the Suicides, in which the souls of the people who attempted or committed suicide are transformed into gnarled, thorny trees and then fed upon by Harpies, hideous clawed birds with the faces of women; the trees are only permitted to speak when broken and bleeding. Dante breaks a twig off one of the trees and from the bleeding trunk hears the tale of Pietro della Vigna, a powerful minister of Emperor Frederick II until he fell out of favor and was imprisoned and blinded. He subsequently committed suicide; his presence here, rather than in the Ninth Circle, indicates that Dante believes that the accusations made against him were false. The Harpies and the characteristics of the bleeding bushes are based on Book 3 of the Aeneid. According to Dorothy L. Sayers, the sin of suicide is an "insult to the body; so, here, the shades are deprived of even the semblance of the human form. As they refused life, they remain fixed in a dead and withered sterility. They are the image of the self-hatred which dries up the very sap of energy and makes all life infertile." The trees can also be interpreted as a metaphor for the state of mind in which suicide is committed.
Dante learns that these suicides, unique among the dead, will not be corporally resurrected after the Final Judgement since they threw their bodies away; instead, they will maintain their bushy form, with their own corpses hanging from the thorny limbs. After Pietro della Vigna finishes his story, Dante notices two shades (Lano da Siena and Jacopo Sant' Andrea) race through the wood, chased and savagely mauled by ferocious bitches – this is the punishment of the violently profligate who, "possessed by a depraved passion ... dissipated their goods for the sheer wanton lust of wreckage and disorder". The destruction wrought upon the wood by the profligates' flight and punishment as they crash through the undergrowth causes further suffering to the suicides, who cannot move out of the way.
Canto XIV
Ring 3: Against God, Art, and Nature: The third round of the seventh circle is a great Plain of Burning Sand scorched by great flakes of flame falling slowly down from the sky, an image derived from the fate of Sodom and Gomorrah (Gen. 19:24). The Blasphemers (the Violent against God) are stretched supine upon the burning sand, the Sodomites (the Violent against Nature) run in circles, while the Usurers (the Violent against Art, which is the Grandchild of God, as explained in Canto XI) crouch huddled and weeping. Ciardi writes, "Blasphemy, sodomy, and usury are all unnatural and sterile actions: thus the unbearing desert is the eternity of these sinners; and thus the rain, which in nature should be fertile and cool, descends as fire". Dante finds Capaneus stretched out on the sands; for blasphemy against Jove, he was struck down with a thunderbolt during the war of the Seven against Thebes; he is still scorning Jove in the afterlife. The overflow of Phlegethon, the river of blood from the first ring, flows boiling through the Wood of the Suicides (the second ring) and crosses the Burning Plain.
Virgil explains the origin of the rivers of Hell, which includes references to the Old Man of Crete.
Canto XV
Protected by the powers of the boiling rivulet, Dante and Virgil progress across the burning plain. They pass a roving group of Sodomites, and Dante, to his surprise, recognizes Brunetto Latini. Dante addresses Brunetto with deep and sorrowful affection, "paying him the highest tribute offered to any sinner in the Inferno", thus refuting suggestions that Dante only placed his enemies in Hell. Dante has great respect for Brunetto and feels spiritual indebtedness to him and his works ("you taught me how man makes himself eternal; / and while I live, my gratitude for that / must always be apparent in my words"); Brunetto prophesies Dante's bad treatment by the Florentines. He also identifies other sodomites, including Priscian, Francesco d'Accorso, and Bishop Andrea de' Mozzi.
Canto XVI
The Poets begin to hear the waterfall that plunges over the Great Cliff into the Eighth Circle when three shades break from their company and greet them. They are Iacopo Rusticucci, Guido Guerra, and Tegghiaio Aldobrandi – all Florentines much admired by Dante. Rusticucci blames his "savage wife" for his torments. The sinners ask for news of Florence, and Dante laments the current state of the city. At the top of the falls, at Virgil's order, Dante removes a cord from about his waist and Virgil drops it over the edge; as if in answer, a large, distorted shape swims up through the filthy air of the abyss.
Canto XVII
The creature is Geryon, the Monster of Fraud; Virgil announces that they must fly down from the cliff on the monster's back. Dante goes alone to examine the Usurers: he does not recognize them, but each has a heraldic device emblazoned on a leather purse around his neck ("On these their streaming eyes appeared to feast").
The coats of arms indicate that they came from prominent Florentine families; they indicate the presence of Catello di Rosso Gianfigliazzi, Ciappo Ubriachi, the Paduan Reginaldo degli Scrovegni (who predicts that his fellow Paduan Vitaliano di Iacopo Vitaliani will join him here), and Giovanni di Buiamonte. Dante then rejoins Virgil and, both mounted atop Geryon's back, the two begin their descent from the great cliff in the Eighth Circle: the Hell of the Fraudulent and Malicious.
Geryon, the winged monster who allows Dante and Virgil to descend a vast cliff to reach the Eighth Circle, was traditionally represented as a giant with three heads and three conjoined bodies. Dante's Geryon, meanwhile, is an image of fraud, combining human, bestial, and reptilian elements: Geryon is a "monster with the general shape of a wyvern but with the tail of a scorpion, hairy arms, a gaudily-marked reptilian body, and the face of a just and honest man". The pleasant human face on this grotesque body evokes the insincere fraudster whose intentions "behind the face" are all monstrous, cold-blooded, and stinging with poison.
Eighth Circle (Fraud)
Canto XVIII
Dante now finds himself in the Eighth Circle, called Malebolge ("Evil ditches"): the upper half of the Hell of the Fraudulent and Malicious. The Eighth Circle is a large funnel of stone shaped like an amphitheatre around which run a series of ten deep, narrow, concentric ditches or trenches called bolge (singular: bolgia). Within these ditches are punished those guilty of Simple Fraud. From the foot of the Great Cliff to the Well (which forms the neck of the funnel) are large spurs of rock, like umbrella ribs or spokes, which serve as bridges over the ten ditches. Dorothy L. Sayers writes that the Malebolge is "the image of the City in corruption: the progressive disintegration of every social relationship, personal and public. Sexuality, ecclesiastical and civil office, language, ownership, counsel, authority, psychic influence, and material interdependence – all the media of the community's interchange are perverted and falsified".
Bolgia 1 – Panderers and seducers: These sinners make two files, one along either bank of the ditch, and march quickly in opposite directions while being whipped by horned demons for eternity. Those who "deliberately exploited the passions of others, and so drove them to serve their own interests, are themselves driven and scourged". Dante makes reference to a recent traffic rule developed for the Jubilee year of 1300 in Rome. In the group of panderers, the poets notice Venedico Caccianemico, a Bolognese Guelph who sold his own sister Ghisola to the Marchese d'Este. In the group of seducers, Virgil points out Jason, the Greek hero who led the Argonauts to fetch the Golden Fleece from Aeëtes, King of Colchis. He gained the help of the king's daughter, Medea, by seducing and marrying her only to later desert her for Creusa.
Jason had previously seduced Hypsipyle when the Argonauts landed at Lemnos on their way to Colchis, but "abandoned her, alone and pregnant".
Bolgia 2 – Flatterers: These also exploited other people, this time abusing and corrupting language to play upon others' desires and fears. They are steeped in excrement (representative of the false flatteries they told on earth) as they howl and fight amongst themselves. Alessio Interminei of Lucca and Thaïs are seen here.
Canto XIX
Bolgia 3 – Simoniacs: Dante now forcefully expresses his condemnation of those who committed simony, or the sale of ecclesiastic favors and offices, and therefore made money for themselves out of what belongs to God: "Rapacious ones, who take the things of God, / that ought to be the brides of Righteousness, / and make them fornicate for gold and silver! / The time has come to let the trumpet sound / for you; ...". The sinners are placed head-downwards in round, tube-like holes within the rock (debased mockeries of baptismal fonts), with flames burning the soles of their feet. The heat of the fire is proportioned to their guilt. The simile of baptismal fonts gives Dante an incidental opportunity to clear his name of an accusation of malicious damage to the font at the Baptistery of San Giovanni. Simon Magus, who offered gold in exchange for holy power to Saint Peter and after whom the sin is named, is mentioned here (although Dante does not encounter him). One of the sinners, Pope Nicholas III, must serve in the hellish baptism by fire from his death in 1280 until 1303 – the arrival in Hell of Pope Boniface VIII – who will take his predecessor's place in the stone tube until 1314, when he will in turn be replaced by Pope Clement V, a puppet of King Philip IV of France who moved the Papal See to Avignon, ushering in the Avignon Papacy (1309–77).
Dante delivers a denunciation of simoniacal corruption of the Church.Canto XXBolgia 4 – Sorcerers: In the middle of the bridge of the Fourth Bolgia, Dante looks down at the souls of fortune tellers, diviners, astrologers, and other false prophets. The punishment of those who attempted to "usurp God's prerogative by prying into the future", is to have their heads twisted around on their bodies; in this horrible contortion of the human form, these sinners are compelled to walk backwards for eternity, blinded by their own tears. John Ciardi writes, "Thus, those who sought to penetrate the future cannot even see in front of themselves; they attempted to move themselves forward in time, so must they go backwards through all eternity; and as the arts of sorcery are a distortion of God's law, so are their bodies distorted in Hell." While referring primarily to attempts to see into the future by forbidden means, this also symbolises the twisted nature of magic in general. Dante weeps in pity, and Virgil rebukes him, saying, "Here pity only lives when it is dead; / for who can be more impious than he / who links God's judgment to passivity?" Virgil gives a lengthy explanation of the founding of his native city of Mantua. Among the sinners in this circle are King Amphiaraus (one of the Seven against Thebes; foreseeing his death in the war, he sought to avert it by hiding from battle but died in an earthquake trying to flee) and two Theban soothsayers: Tiresias (in Ovid's Metamorphoses III, 324–331, Tiresias was transformed into a woman upon striking two coupling serpents with his rod; seven years later, he was changed back to a man in an identical encounter) and his daughter Manto. 
Also in this bolgia are Aruns (an Etruscan soothsayer who predicted Caesar's victory in the Roman civil war in Lucan's Pharsalia I, 585–638), the Greek augur Eurypylus, astrologers Michael Scot (served at Frederick II's court at Palermo) and Guido Bonatti (served the court of Guido da Montefeltro), and Asdente (a shoemaker and soothsayer from Parma). Virgil implies that the moon is now setting over the Pillars of Hercules in the West: the time is just after 6:00 AM, the dawn of Holy Saturday.Canto XXIBolgia 5 – Barrators: Corrupt politicians, who made money by trafficking in public offices (the political analogue of the simoniacs), are immersed in a lake of boiling pitch, which represents the sticky fingers and dark secrets of their corrupt deals. They are guarded by demons called the Malebranche ("Evil Claws"), who tear them to pieces with claws and grappling hooks if they catch them above the surface of the pitch. The Poets observe a demon arrive with a grafting Senator of Lucca and throw him into the pitch where the demons set upon him. Virgil secures safe-conduct from the leader of the Malebranche, named Malacoda ("Evil Tail"). He informs them that the bridge across the Sixth Bolgia is shattered (as a result of the earthquake that shook Hell at the death of Christ in 34 AD) but that there is another bridge further on. He sends a squad of demons led by Barbariccia to escort them safely. Based on details in this Canto (and if Christ's death is taken to have occurred at exactly noon), the time is now 7:00 AM of Holy Saturday. The demons provide some satirical black comedy – in the last line of Canto XXI, the sign for their march is provided by a fart: "and he had made a trumpet of his ass".Canto XXIIOne of the grafters, an unidentified Navarrese (identified by early commentators as Ciampolo) is seized by the demons, and Virgil questions him.
The sinner speaks of his fellow grafters, Friar Gomita (a corrupt friar in Gallura eventually hanged by Nino Visconti (see Purg. VIII) for accepting bribes to let prisoners escape) and Michel Zanche (a corrupt Vicar of Logodoro under King Enzo of Sardinia). He offers to lure some of his fellow sufferers into the hands of the demons, and when his plan is accepted he escapes back into the pitch. Alichino and Calcabrina start a brawl in mid-air and fall into the pitch themselves, and Barbariccia organizes a rescue party. Dante and Virgil take advantage of the confusion to slip away.Canto XXIIIBolgia 6 – Hypocrites: The Poets escape the pursuing Malebranche by sliding down the sloping bank of the next pit. Here they find the hypocrites listlessly walking around a narrow track for eternity, weighted down by leaden robes. The robes are brilliantly gilded on the outside and are shaped like a monk's habit – the hypocrite's "outward appearance shines brightly and passes for holiness, but under that show lies the terrible weight of his deceit", a falsity that weighs them down and makes spiritual progress impossible for them. Dante speaks with Catalano dei Malavolti and Loderingo degli Andalò, two Bolognese brothers of the Jovial Friars, an order that had acquired a reputation for not living up to its vows and was eventually disbanded by Papal decree. Friar Catalano points out Caiaphas, the High Priest of Israel under Pontius Pilate, who counseled the Pharisees to crucify Jesus for the public good (John 11:49–50). He himself is crucified to the floor of Hell by three large stakes, and in such a position that every passing sinner must walk upon him: he "must suffer upon his body the weight of all the world's hypocrisy". 
The Jovial Friars explain to Virgil how he may climb from the pit; Virgil discovers that Malacoda lied to him about the bridges over the Sixth Bolgia.Canto XXIVBolgia 7 – Thieves: Dante and Virgil leave the bolgia of the Hypocrites by climbing the ruined rocks of a bridge destroyed by the great earthquake, after which they cross the bridge of the Seventh Bolgia to the far side to observe the next chasm. The pit is filled with monstrous reptiles: the shades of thieves are pursued and bitten by snakes and lizards, who curl themselves about the sinners and bind their hands behind their backs. The full horror of the thieves' punishment is revealed gradually: just as they stole other people's substance in life, their very identity becomes subject to theft here. One sinner, who reluctantly identifies himself as Vanni Fucci, is bitten by a serpent at the jugular vein, bursts into flames, and is re-formed from the ashes like a phoenix. Vanni tells a dark prophecy against Dante.Canto XXVVanni hurls an obscenity at God and the serpents swarm over him. The centaur Cacus arrives to punish him; he has a fire-breathing dragon on his shoulders and snakes covering his equine back. (In Roman mythology, Cacus, the monstrous, fire-breathing son of Vulcan, was killed by Hercules for raiding the hero's cattle; in Aeneid VIII, 193–267, Virgil did not describe him as a centaur). Dante then meets five noble thieves of Florence and observes their various transformations. Agnello Brunelleschi, in human form, is merged with the six-legged serpent that is Cianfa Donati. A figure named Buoso (perhaps either Buoso degli Abati or Buoso Donati, the latter of whom is mentioned in Inf. XXX.44) first appears as a man, but exchanges forms with Francesco de' Cavalcanti, who bites Buoso in the form of a four-footed serpent. Puccio Sciancato remains unchanged for the time being.Canto XXVIBolgia 8 – Counsellors of Fraud: Dante addresses a passionate lament to Florence before turning to the next bolgia. 
Here, fraudulent advisers or evil counsellors move about, hidden from view inside individual flames. These are not people who gave false advice, but people who used their position to advise others to engage in fraud. Ulysses and Diomedes are punished together within a great double-headed flame; they are condemned for the stratagem of the Trojan Horse (resulting in the Fall of Troy), for persuading Achilles to sail for Troy (causing Deidamia to die of grief), and for the theft of the sacred statue of Pallas, the Palladium (upon which, it was believed, the fate of Troy depended). Ulysses, the figure in the larger horn of the flame, narrates the tale of his last voyage and death, a creation of Dante's that illustrates the extent of his own pride despite his condemnation of this principal vice throughout the Divine Comedy. Ulysses tells how, after his detainment by Circe, his love for neither his son, his father, nor his wife could overpower his desire to set out on the open sea to "gain experience of the world / and of the vices and the worth of men". As they approach the Pillars of Hercules, Ulysses urges his crew:
Consider well the seed that gave you birth:
you were not made to live your lives as brutes,
but to be followers of worth and knowledge.
This passage exemplifies the danger of utilizing rhetoric without proper wisdom, a failing condemned by several of Dante's most prominent philosophical influences. Although Ulysses successfully convinces his crew to venture into the unknown, he lacks the wisdom to understand the danger this entails, leading to their death in a shipwreck after sighting Mount Purgatory in the Southern Hemisphere.Canto XXVIIDante is approached by Guido da Montefeltro, head of the Ghibellines of Romagna, asking for news of his country. Dante replies with a tragic summary of the current state of the cities of Romagna. Guido then recounts his life: he advised Pope Boniface VIII to offer a false amnesty to the Colonna family, who, in 1297, had walled themselves inside the castle of Palestrina in the Lateran. When the Colonna accepted the terms and left the castle, the Pope razed it to the ground and left them without a refuge. Guido describes how St. Francis, founder of the Franciscan order, came to take his soul to Heaven, only to have a devil assert prior claim. Although Boniface had absolved Guido in advance for his evil advice, the devil points out the invalidity: absolution requires contrition, and a man cannot be contrite for a sin at the same time that he is intending to commit it.Canto XXVIIIBolgia 9 – Sowers of Discord: In the Ninth Bolgia, the Sowers of Discord are hacked and mutilated for all eternity by a large demon wielding a bloody sword; their bodies are divided as, in life, their sin was to tear apart what God had intended to be united; these are the sinners who are "ready to rip up the whole fabric of society to gratify a sectional egotism". The souls must drag their ruined bodies around the ditch, their wounds healing in the course of the circuit, only to have the demon tear them apart anew. These are divided into three categories: (i) religious schism and discord, (ii) civil strife and political discord, and (iii) family disunion, or discord between kinsmen. 
Chief among the first category is Muhammad, the founder of Islam: his body is ripped from groin to chin, with his entrails hanging out. Dante apparently saw Muhammad as causing a schism within Christianity when he and his followers splintered off.Wallace Fowlie, A Reading of Dante's Inferno, University of Chicago Press, 1981, p. 178. Dante also condemns Muhammad's son-in-law, Ali, for schism between Sunni and Shiite: his face is cleft from top to bottom. Muhammad tells Dante to warn the schismatic and heretic Fra Dolcino. In the second category are Pier da Medicina (his throat slit, nose slashed off as far as the eyebrows, a wound where one of his ears had been), the Roman tribune Gaius Scribonius Curio (who advised Caesar to cross the Rubicon and thus begin the Civil War; his tongue is cut off), and Mosca dei Lamberti (who incited the Amidei family to kill Buondelmonte dei Buondelmonti, resulting in conflict between Guelphs and Ghibellines; his arms are hacked off). Finally, in the third category of sinner, Dante sees Bertrand de Born (1140–1215). The knight carries around his severed head by its own hair, swinging it like a lantern. Bertrand is said to have caused a quarrel between Henry II of England and his son Prince Henry the Young King; his punishment in Hell is decapitation, since dividing father and son is like severing the head from the body.Canto XXIXBolgia 10 – Falsifiers: The final bolgia of the Eighth Circle, is home to various sorts of falsifiers. A "disease" on society, they are themselves afflicted with different types of afflictions: horrible diseases, stench, thirst, filth, darkness, and screaming. Some lie prostrate while others run hungering through the pit, tearing others to pieces. Shortly before their arrival in this pit, Virgil indicates that it is approximately noon of Holy Saturday, and he and Dante discuss one of Dante's kinsmen (Geri de Bello) among the Sowers of Discord in the previous ditch. 
The first category of falsifiers Dante encounters are the Alchemists (Falsifiers of Things). He speaks with two spirits viciously scrubbing and clawing at their leprous scabs: Griffolino d'Arezzo (an alchemist who extracted money from the foolish Alberto da Siena on the promise of teaching him to fly; Alberto's reputed father the Bishop of Siena had Griffolino burned at the stake) and Capocchio (burned at the stake at Siena in 1293 for practicing alchemy).Canto XXXSuddenly, two spirits – Gianni Schicchi de' Cavalcanti and Myrrha, both punished as Imposters (Falsifiers of Persons) – run rabid through the pit. Schicchi sinks his teeth into the neck of an alchemist, Capocchio, and drags him away like prey. Griffolino explains how Myrrha disguised herself to commit incest with her father King Cinyras, while Schicchi impersonated the dead Buoso Donati to dictate a will giving himself several profitable bequests. Dante then encounters Master Adam of Brescia, one of the Counterfeiters (Falsifiers of Money): for manufacturing Florentine florins of twenty-one (rather than twenty-four) carat gold, he was burned at the stake in 1281. He is punished by a loathsome dropsy-like disease, which gives him a bloated stomach, prevents him from moving, and an eternal, unbearable thirst. Master Adam points out two sinners of the fourth class, the Perjurers (Falsifiers of Words). These are Potiphar's wife (punished for her false accusation of Joseph, Gen. 39:7–19) and Sinon, the Achaean spy who lied to the Trojans to convince them to take the Trojan Horse into their city (Aeneid II, 57–194); Sinon is here rather than in Bolgia 8 because his advice was false as well as evil. Both suffer from a burning fever. Master Adam and Sinon exchange abuse, which Dante watches until he is rebuked by Virgil. As a result of his shame and repentance, Dante is forgiven by his guide. 
Sayers remarks that the descent through Malebolge "began with the sale of the sexual relationship, and went on to the sale of Church and State; now, the very money is itself corrupted, every affirmation has become perjury, and every identity a lie" so that every aspect of social interaction has been progressively destroyed.
Central Well of MalebolgeCanto XXXIDante and Virgil approach the Central Well, at the bottom of which lies the Ninth and final Circle of Hell. The classical and biblical Giants – who perhaps symbolize pride and other spiritual flaws lying behind acts of treachery – stand perpetual guard inside the well-pit, their legs embedded in the banks of the Ninth Circle while their upper halves rise above the rim and can be visible from the Malebolge. Dante initially mistakes them for great towers of a city. Among the Giants, Virgil identifies Nimrod (who tried to build the Tower of Babel; he shouts out the unintelligible Raphèl mai amècche zabì almi); Ephialtes (who with his brother Otus tried to storm Olympus during the Gigantomachy; he has his arms chained up) and Briareus (who Dante claimed had challenged the gods); and Tityos and Typhon, who insulted Jupiter. Also here is Antaeus, who did not join in the rebellion against the Olympian gods and therefore is not chained. At Virgil's persuasion, Antaeus takes the poets in his large palm and lowers them gently to the final level of Hell.
Ninth Circle (Treachery)Canto XXXIIAt the base of the well, Dante finds himself within a large frozen lake: Cocytus, the Ninth Circle of Hell. Trapped in the ice, each according to his guilt, are punished sinners guilty of treachery against those with whom they had special relationships. The lake of ice is divided into four concentric rings (or "rounds") of traitors corresponding, in order of seriousness, to betrayal of family ties, betrayal of community ties, betrayal of guests, and betrayal of lords. This is in contrast to the popular image of Hell as fiery; as Ciardi writes, "The treacheries of these souls were denials of love (which is God) and of all human warmth. Only the remorseless dead center of the ice will serve to express their natures. As they denied God's love, so are they furthest removed from the light and warmth of His Sun. As they denied all human ties, so are they bound only by the unyielding ice." This final, deepest level of hell is reserved for traitors, betrayers and oathbreakers (its most famous inmate is Judas Iscariot).Round 1 – Caina: this round is named after Cain, who killed his own brother in the first act of murder (Gen. 4:8). This round houses the Traitors to their Kindred: they have their necks and heads out of the ice and are allowed to bow their heads, allowing some protection from the freezing wind. Here Dante sees the brothers Alessandro and Napoleone degli Alberti, who killed each other over their inheritance and their politics some time between 1282 and 1286. Camiscion de' Pazzi, a Ghibelline who murdered his kinsman Ubertino, identifies several other sinners: Mordred (traitorous son of King Arthur); Vanni de' Cancellieri, nicknamed Focaccia (a White Guelph of Pistoia who killed his cousin, Detto de' Cancellieri); and Sassol Mascheroni of the noble Toschi family of Florence (murdered a relative). 
Camiscion is aware that, in July 1302, his relative Carlino de' Pazzi would accept a bribe to surrender the Castle of Piantravigne to the Blacks, betraying the Whites. As a traitor to his party, Carlino belongs in Antenora, the next circle down – his greater sin will make Camiscion look virtuous by comparison.Round 2 – Antenora: the second round is named after Antenor, a Trojan soldier who betrayed his city to the Greeks. Here lie the Traitors to their Country: those who committed treason against political entities (parties, cities, or countries) have their heads above the ice, but they cannot bend their necks. Dante accidentally kicks the head of Bocca degli Abati, a traitorous Guelph of Florence, and then proceeds to treat him more savagely than any other soul he has thus far met. Also punished in this level are Buoso da Duera (Ghibelline leader bribed by the French to betray Manfred, King of Naples), Tesauro dei Beccheria (a Ghibelline of Pavia; beheaded by the Florentine Guelphs for treason in 1258), Gianni de' Soldanieri (noble Florentine Ghibelline who joined with the Guelphs after Manfred's death in 1266), Ganelon (betrayed the rear guard of Charlemagne to the Muslims at Roncesvalles, according to the French epic poem The Song of Roland), and Tebaldello de' Zambrasi of Faenza (a Ghibelline who turned his city over to the Bolognese Guelphs on Nov. 13, 1280). The Poets then see two heads frozen in one hole, one gnawing the nape of the other's neck.Canto XXXIIIThe gnawing sinner tells his story: he is Count Ugolino, and the head he gnaws belongs to Archbishop Ruggieri. In "the most pathetic and dramatic passage of the Inferno", Ugolino describes how he conspired with Ruggieri in 1288 to oust his nephew, Nino Visconti, and take control over the Guelphs of Pisa. However, as soon as Nino was gone, the Archbishop, sensing the Guelphs' weakened position, turned on Ugolino and imprisoned him with his sons and grandsons in the Torre dei Gualandi. 
In March 1289, the Archbishop condemned the prisoners to death by starvation in the tower.Round 3 – Ptolomaea: the third region of Cocytus is named after Ptolemy, who invited his father-in-law Simon Maccabaeus and his sons to a banquet and then killed them (1 Maccabees 16). Traitors to their Guests lie supine in the ice while their tears freeze in their eye sockets, sealing them with small visors of crystal – even the comfort of weeping is denied to them. Dante encounters Fra Alberigo, one of the Jovial Friars and a native of Faenza, who asks Dante to remove the visor of ice from his eyes. In 1285, Alberigo invited his opponents, Manfred (his brother) and Alberghetto (Manfred's son), to a banquet at which his men murdered the dinner guests. He explains that often a living person's soul falls to Ptolomea before he dies ("before dark Atropos has cut their thread"). Then, on earth, a demon inhabits the body until the body's natural death. Fra Alberigo's sin is identical in kind to that of Branca d'Oria, a Genoese Ghibelline who, in 1275, invited his father-in-law, Michel Zanche (seen in the Eighth Circle, Bolgia 5) and had him cut to pieces. Branca (that is, his earthly body) did not die until 1325, but his soul, together with that of his nephew who assisted in his treachery, fell to Ptolomaea before Michel Zanche's soul arrived at the bolgia of the Barrators. Dante leaves without keeping his promise to clear Fra Alberigo's eyes of ice ("And yet I did not open them for him; / and it was courtesy to show him rudeness").Canto XXXIVRound 4 – Judecca: the fourth division of Cocytus, named for Judas Iscariot, contains the Traitors to their Lords''' and benefactors. Upon entry into this round, Virgil says "Vexilla regis prodeunt inferni" ("The banners of the King of Hell draw closer"). Judecca is completely silent: all of the sinners are fully encapsulated in ice, distorted and twisted in every conceivable position. 
The sinners present an image of utter immobility: it is impossible to talk with any of them, so Dante and Virgil quickly move on to the centre of Hell.
Centre of Hell
In the very centre of Hell, condemned for committing the ultimate sin (personal treachery against God), is the Devil, referred to by Virgil as Dis (the Roman god of the underworld; the name "Dis" was often used for Pluto in antiquity, such as in Virgil's Aeneid). The arch-traitor, Lucifer, was once held by God to be the fairest of the angels before his pride led him to rebel against God, resulting in his expulsion from Heaven. Lucifer is a giant, terrifying beast trapped waist-deep in the ice, fixed and suffering. He has three faces, each a different color: one red (the middle), one a pale yellow (the right), and one black (the left):
... he had three faces: one in front bloodred;
and then another two that, just above
the midpoint of each shoulder, joined the first;
and at the crown, all three were reattached;
the right looked somewhat yellow, somewhat white;
the left in its appearance was like those
who come from where the Nile, descending, flows.
Dorothy L. Sayers notes that Satan's three faces are thought by some to suggest his control over the three human races: red for the Europeans (from Japheth), yellow for the Asiatic (from Shem), and black for the African (the race of Ham). All interpretations recognize that the three faces represent a fundamental perversion of the Trinity: Satan is impotent, ignorant, and full of hate, in contrast to the all-powerful, all-knowing, and all-loving nature of God. Lucifer retains his six wings (he originally belonged to the angelic order of Seraphim, described in Isaiah 6:2), but these are now dark, bat-like, and futile: the icy wind that emanates from the beating of Lucifer's wings only further ensures his own imprisonment in the frozen lake. He weeps from his six eyes, and his tears mix with bloody froth and pus as they pour down his three chins. Each face has a mouth that chews eternally on a prominent traitor. Marcus Junius Brutus and Gaius Cassius Longinus dangle with their feet in the left and right mouths, respectively, for their involvement in the assassination of Julius Caesar (March 15, 44 BC) – an act which, to Dante, represented the destruction of a unified Italy and the killing of the man who was divinely appointed to govern the world. In the central, most vicious mouth is Judas Iscariot, the apostle who betrayed Christ. Judas is receiving the most horrifying torture of the three traitors: his head is gnawed inside Lucifer's mouth while his back is forever flayed and shredded by Lucifer's claws. According to Dorothy L. Sayers, "just as Judas figures treason against God, so Brutus and Cassius figure treason against Man-in-Society; or we may say that we have here the images of treason against the Divine and the Secular government of the world".
At about 6:00 p.m. on Saturday evening, Virgil and Dante begin their escape from Hell by clambering down Satan's ragged fur, feet-first. When they reach Satan's genitalia, the poets pass through the center of the universe and of gravity from the Northern Hemisphere of land to the Southern Hemisphere of water. When Virgil changes direction and begins to climb "upward" towards the surface of the Earth at the antipodes, Dante, in his confusion, initially believes they are returning to Hell. Virgil indicates that the time is halfway between the canonical hours of Prime (6:00 a.m.) and Terce, that is, 7:30 a.m. of the same Holy Saturday which was just about to end. Dante is confused as to how, after about an hour and a half of climbing, it is now apparently morning. Virgil explains that it is as a result of passing through the Earth's center into the Southern Hemisphere, which is twelve hours ahead of Jerusalem, the central city of the Northern Hemisphere (where, therefore, it is currently 7:30 p.m.).
Virgil goes on to explain how the Southern Hemisphere was once covered with dry land, but the land recoiled in horror to the north when Lucifer fell from Heaven and was replaced by the ocean. Meanwhile, the inner rock Lucifer displaced as he plunged into the center of the earth rushed upwards to the surface of the Southern Hemisphere to avoid contact with him, forming the Mountain of Purgatory. This, the only land mass in the waters of the Southern Hemisphere, rises above the surface at a point directly opposite Jerusalem. The poets then ascend a narrow chasm of rock through the "space contained between the floor formed by the convex side of Cocytus and the underside of the earth above," moving in opposition to Lethe, the river of oblivion, which flows down from the summit of Mount Purgatory. The poets finally emerge a little before dawn on the morning of Easter Sunday (April 10, 1300) beneath a sky studded with stars.
Illustrations
See also
Notes
References
External links
Texts
Dante Dartmouth Project: Full text of more than 70 Italian, Latin, and English commentaries on the Commedia, ranging in date from 1322 (Iacopo Alighieri) to the 2000s (Robert Hollander)
World of Dante Multimedia website that offers Italian text of Divine Comedy, Allen Mandelbaum's translation, gallery, interactive maps, timeline, musical recordings, and searchable database for students and teachers by Deborah Parker and IATH (Institute for Advanced Technologies in the Humanities) of the University of Virginia
Dante's Divine Comedy: Full text paraphrased in modern English verse by Scottish author and artist Alasdair Gray
Audiobooks: Public domain recordings from LibriVox (in Italian, Longfellow translation); some additional recordings
Secondary materials
A 72-piece art collection featured in Dante's Hell Animated and Inferno by Dante films.
On-line Concordance to the Divine Comedy
Wikisummaries summary and analysis of Inferno
Danteworlds, multimedia presentation of the Divine Comedy for students by Guy Raffa of the University of Texas
Dante's Places: a map (still a prototype) of the places named by Dante in the Commedia'', created with GoogleMaps. Explanatory PDF is available for download
See more Dante's Inferno images by selecting the "Heaven & Hell" subject at the Persuasive Cartography, The PJ Mode Collection, Cornell University Library
"Mapping Dante's Inferno, One Circle of Hell at a Time", article by Anika Burgess, Atlas Obscura, July 13, 2017
Afterlife in Christianity
Cultural depictions of Muhammad
Divine Comedy
Epic poems in Italian
Hell in popular culture
Italian poems
The Devil in fiction
Cultural depictions of Virgil
Visionary poems
Works by Dante Alighieri
Hell (Christianity)
Caiaphas |
328325 | https://en.wikipedia.org/wiki/COFF | COFF | The Common Object File Format (COFF) is a format for executable, object code, and shared library computer files used on Unix systems. It was introduced in Unix System V, replaced the previously used a.out format, and formed the basis for extended specifications such as XCOFF and ECOFF, before being largely replaced by ELF, introduced with SVR4.
COFF and its variants continue to be used on some Unix-like systems, on Microsoft Windows (Portable Executable), in UEFI environments and in some embedded development systems.
History
The original Unix object file format a.out is unable to adequately support shared libraries, foreign format identification, or explicit address linkage. As development of Unix-like systems continued both inside and outside AT&T, different solutions to these and other issues emerged.
COFF was introduced in 1983, in AT&T's UNIX System V for non-VAX 32-bit platforms such as the 3B20. Improvements over the existing AT&T a.out format included arbitrary sections, explicit processor declarations, and explicit address linkage.
However, the COFF design was both too limited and incompletely specified: there was a limit on the maximum number of sections, a limit on the length of section names, no way to represent included source files, and the symbolic debugging information was incapable of supporting real world languages such as C, much less newer languages like C++, or new processors. All real world implementations of COFF were necessarily violations of the standard as a result. This led to numerous COFF extensions. IBM used the XCOFF format in AIX; DEC, SGI and others used ECOFF; and numerous SysV ports and tool chains targeting embedded development each created their own, incompatible, variations.
With the release of SVR4, AT&T replaced COFF with ELF.
While extended versions of COFF continue to be used for some Unix-like platforms, primarily in embedded systems, perhaps the most widespread use of the COFF format today is in Microsoft's Portable Executable (PE) format. Developed for Windows NT, the PE format (sometimes written as PE/COFF) uses a COFF header for object files, and as a component of the PE header for executable files.
Features
COFF's main improvement over a.out was the introduction of multiple named sections in the object file. Different object files could have different numbers and types of sections.
Symbolic debugging information
The COFF symbolic debugging information consists of symbolic (string) names for program functions and variables, and line number information, used for setting breakpoints and tracing execution.
Symbolic names are stored in the COFF symbol table. Each symbol table entry includes a name, storage class, type, value and section number. Short names (8 characters or fewer) are stored directly in the symbol table; longer names are stored as an offset into the string table at the end of the COFF object.
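The short-versus-long split can be sketched in Python. The sketch below assumes the widely used convention (shared by Microsoft's PE/COFF) that a long name zeroes the first four bytes of the 8-byte name field and stores a string-table byte offset in the remaining four:

```python
import struct

def decode_symbol_name(name_field: bytes, string_table: bytes) -> str:
    # Short names (8 characters or fewer) are stored inline in the
    # 8-byte name field, padded with NUL bytes.  For longer names the
    # first four bytes are zero and the next four hold a byte offset
    # into the string table appended after the symbol table.
    if name_field[:4] == b"\x00\x00\x00\x00":
        offset = struct.unpack("<I", name_field[4:8])[0]
        end = string_table.index(b"\x00", offset)
        return string_table[offset:end].decode("ascii")
    return name_field.rstrip(b"\x00").decode("ascii")
```

Note that, under this convention, string-table offsets count from the start of the table, whose first four bytes hold the table's own size, so the first real string begins at offset 4.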
Storage classes describe the type of entity the symbol represents, and may include external variables (C_EXT), automatic (stack) variables (C_AUTO), register variables (C_REG), functions (C_FCN), and many others. The symbol type describes the interpretation of the symbol entity's value and includes values for all the C data types.
When compiled with appropriate options, a COFF object file will contain line number information for each possible break point in the text section of the object file. Line number information takes two forms: in the first, for each possible break point in the code, the line number table entry records the address and its matching line number. In the second form, the entry identifies a symbol table entry representing the start of a function, enabling a breakpoint to be set using the function's name.
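A parser can distinguish the two forms from the entry itself. The sketch below assumes the traditional 6-byte entry layout (a 32-bit field that is either an address or a symbol-table index, followed by a 16-bit line number, with a line number of zero marking the function-entry form); treat the exact layout as illustrative rather than authoritative:

```python
import struct

def parse_lineno_entry(raw: bytes):
    # A classic COFF line-number entry is 6 bytes: a 32-bit field
    # followed by a 16-bit line number.  When the line number is zero,
    # the 32-bit field is a symbol-table index naming the function;
    # otherwise it is the address of the break point for that line.
    addr_or_symndx, lnno = struct.unpack("<IH", raw)
    if lnno == 0:
        return ("function", addr_or_symndx)
    return ("breakpoint", addr_or_symndx, lnno)
```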
Note that COFF was not capable of representing line numbers or debugging symbols for included source files (such as header files), rendering the COFF debugging information virtually useless without incompatible extensions.
Relative virtual address
When a COFF file is generated, it is not usually known where in memory it will be loaded. The virtual address where the first byte of the file will be loaded is called the image base address. The rest of the file is not necessarily loaded in a contiguous block, but in different sections.
Relative virtual addresses (RVAs) are not to be confused with standard virtual addresses. A relative virtual address is the virtual address of an object once the file is loaded into memory, minus the base address of the file image. If the file were mapped literally from disk to memory, the RVA would be the same as the offset into the file, but this is actually quite unusual.
Note that the RVA term is only used with objects in the image file. Once loaded into memory, the image base address is added, and ordinary VAs are used.
Problems
The COFF file header stores the date and time that the object file was created as a 32-bit binary integer, representing the number of seconds since the Unix epoch, 1 January 1970 00:00:00 UTC. Dates occurring after 19 January 2038 cannot be stored in this format.
See also
Comparison of executable file formats
Notes
References
MIPS COFF Spec
External links
More on the PE Format and Public documentation at Microsoft.com
Executable file formats |
8592308 | https://en.wikipedia.org/wiki/Comment%20%28computer%20programming%29 | Comment (computer programming) | In computer programming, a comment is a programmer-readable explanation or annotation in the source code of a computer program. They are added with the purpose of making the source code easier for humans to understand, and are generally ignored by compilers and interpreters. The syntax of comments in various programming languages varies considerably.
Comments are sometimes also processed in various ways to generate documentation external to the source code itself by documentation generators, or used for integration with source code management systems and other kinds of external programming tools.
The flexibility provided by comments allows for a wide degree of variability, but formal conventions for their use are commonly part of programming style guides.
Overview
Comments are generally formatted as either block comments (also called prologue comments or stream comments) or line comments (also called inline comments).
Block comments delimit a region of source code which may span multiple lines or a part of a single line. This region is specified with a start delimiter and an end delimiter. Some programming languages (such as MATLAB) allow block comments to be recursively nested inside one another, but others (such as Java) do not.
Line comments either start with a comment delimiter and continue until the end of the line, or in some cases, start at a specific column (character line offset) in the source code, and continue until the end of the line.
Some programming languages employ both block and line comments with different comment delimiters. For example, C++ has block comments delimited by /* and */ that can span multiple lines and line comments delimited by //. Other languages support only one type of comment. For example, Ada and Lua comments are line comments: they start with -- and continue to the end of the line.
Uses
How best to make use of comments is subject to dispute; different commentators have offered varied and sometimes conflicting viewpoints.
Planning and reviewing
Comments can be used as a form of pseudocode to outline intention prior to writing the actual code. In this case it should explain the logic behind the code rather than the code itself.
/* loop backwards through all elements returned by the server
(they should be processed chronologically)*/
for (i = (numElementsReturned - 1); i >= 0; i--) {
/* process each element's data */
updatePattern(i, returnedElements[i]);
}
If this type of comment is left in, it simplifies the review process by allowing a direct comparison of the code with the intended results. A common logical fallacy is that code that is easy to understand does what it's supposed to do.
Code description
Comments can be used to summarize code or to explain the programmer's intent. According to this school of thought, restating the code in plain English is considered superfluous; the need to re-explain code may be a sign that it is too complex and should be rewritten, or that the naming is bad.
"Don't document bad code – rewrite it."
"Good comments don't repeat the code or explain it. They clarify its intent. Comments should explain, at a higher level of abstraction than the code, what you're trying to do."
Comments may also be used to explain why a block of code does not seem to fit conventions or best practices. This is especially true of projects involving very little development time, or in bug fixing. For example:
' Second variable dim because of server errors produced when reuse form data. No
' documentation available on server behavior issue, so just coding around it.
vtx = server.mappath("local settings")
Algorithmic description
Sometimes source code contains a novel or noteworthy solution to a specific problem. In such cases, comments may contain an explanation of the methodology. Such explanations may include diagrams and formal mathematical proofs. This may constitute explanation of the code, rather than a clarification of its intent; but others tasked with maintaining the code base may find such explanation crucial. This might especially be true in the case of highly specialized problem domains; or rarely used optimizations, constructs or function-calls.
For example, a programmer may add a comment to explain why an insertion sort was chosen instead of a quicksort, as the former is, in theory, slower than the latter. This could be written as follows:
list = [f (b), f (b), f (c), f (d), f (a), ...];
// Need a stable sort. Besides, the performance really does not matter.
insertion_sort (list);
Resource inclusion
Logos, diagrams, and flowcharts consisting of ASCII art constructions can be inserted into source code formatted as a comment. Further, copyright notices can be embedded within source code as comments. Binary data may also be encoded in comments through a process known as binary-to-text encoding, although such practice is uncommon and typically relegated to external resource files.
The following code fragment is a simple ASCII diagram depicting the process flow for a system administration script contained in a Windows Script File running under Windows Script Host. Although a section marking the code appears as a comment, the diagram itself actually appears in an XML CDATA section, which is technically considered distinct from comments, but can serve similar purposes.
<!-- begin: wsf_resource_nodes -->
<resource id="ProcessDiagram000">
<![CDATA[
HostApp (Main_process)
|
V
script.wsf (app_cmd) --> ClientApp (async_run, batch_process)
|
|
V
mru.ini (mru_history)
]]>
</resource>
Although this identical diagram could easily have been included as a comment, the example illustrates one instance where a programmer may opt not to use comments as a way of including resources in source code.
Metadata
Comments in a computer program often store metadata about a program file.
In particular, many software maintainers put submission guidelines in comments to help people who read the source code of that program to send any improvements they make back to the maintainer.
Other metadata includes:
the name of the creator of the original version of the program file and the date when the first version was created,
the name of the current maintainer of the program,
the names of other people who have edited the program file so far,
the URL of documentation about how to use the program,
the name of the software license for this program file,
etc.
When an algorithm in some section of the program is based on a description in a book or other reference, comments can be used to give the page number and title of the book or Request for Comments or other reference.
Debugging
A common developer practice is to comment out a code snippet, meaning to add comment syntax causing that block of code to become a comment, so that it will not be executed in the final program. This may be done to exclude certain pieces of code from the final program, or (more commonly) it can be used to find the source of an error. By systematically commenting out and running parts of the program, the source of an error can be determined, allowing it to be corrected.
An example of commenting out code for exclusion purposes is below:
The above code fragment suggests that the programmer opted to disable the debugging option for some reason.
Many IDEs allow quick adding or removing such comments with single menu options or key combinations. The programmer has only to mark the part of text they want to (un)comment and choose the appropriate option.
Automatic documentation generation
Programming tools sometimes store documentation and metadata in comments. These may include insert positions for automatic header file inclusion, commands to set the file's syntax highlighting mode, or the file's revision number. These functional control comments are also commonly referred to as annotations. Keeping documentation within source code comments is considered as one way to simplify the documentation process, as well as increase the chances that the documentation will be kept up to date with changes in the code.
Examples of documentation generators include the programs Javadoc for use with Java; Ddoc for D; Doxygen for C, C++, Java, and IDL; Visual Expert for PL/SQL, Transact-SQL, and PowerBuilder; and PHPDoc for PHP. Forms of docstring are supported by Python, Lisp, Elixir, and Clojure.
C#, F# and Visual Basic .NET implement a similar feature called "XML Comments" which are read by IntelliSense from the compiled .NET assembly.
Syntax extension
Occasionally syntax elements that were originally intended to be comments are re-purposed to convey additional information to a program, such as "conditional comments".
Such "hot comments" may be the only practical solution that maintains backward-compatibility, but are widely regarded as a kludge.
Directive uses
There are cases where the normal comment characters are co-opted to create a special directive for an editor or interpreter.
Two examples of this directing an interpreter are:
The Unix "shebang" – #! – used on the first line of a script to point to the interpreter to be used.
"Magic comments" identifying the encoding a source file is using, e.g. Python's PEP 263.
The script below for a Unix-like system shows both of these uses:
#!/usr/bin/env python3
# -*- coding: UTF-8 -*-
print("Testing")
Somewhat similar is the use of comments in C to communicate to a compiler that a default "fallthrough" in a case statement has been done deliberately:
switch (command) {
case CMD_SHOW_HELP_AND_EXIT:
do_show_help();
/* Fall thru */
case CMD_EXIT:
do_exit();
break;
case CMD_OTHER:
do_other();
break;
/* ... etc. ... */
}
Inserting such a /* Fall thru */ comment for human readers was already a common convention, but in 2017 the gcc compiler began looking for these (or other indications of deliberate intent), and, if not found, emitting: "warning: this statement may fall through".
Many editors and IDEs will read specially formatted comments. For example, the "modeline" feature of Vim changes its handling of tabs while editing a source file with this comment included near the top of the file:
# vim: tabstop=8 expandtab shiftwidth=4 softtabstop=4
Stress relief
Sometimes programmers will add comments as a way to relieve stress by commenting about development tools, competitors, employers, working conditions, or the quality of the code itself. The occurrence of this phenomenon can be easily seen from online resources that track profanity in source code.
Normative views
There are various normative views and long-standing opinions regarding the proper use of comments in source code. Some of these are informal and based on personal preference, while others are published or promulgated as formal guidelines for a particular community.
Need for comments
Experts have varying viewpoints on whether, and when, comments are appropriate in source code. Some assert that source code should be written with few comments, on the basis that the source code should be self-explanatory or self-documenting. Others suggest code should be extensively commented (it is not uncommon for over 50% of the non-whitespace characters in source code to be contained within comments).
In between these views is the assertion that comments are neither beneficial nor harmful by themselves, and what matters is that they are correct and kept in sync with the source code, and omitted if they are superfluous, excessive, difficult to maintain or otherwise unhelpful.
Comments are sometimes used to document contracts in the design by contract approach to programming.
Level of detail
Depending on the intended audience of the code and other considerations, the level of detail and description may vary considerably.
For example, the following Java comment would be suitable in an introductory text designed to teach beginning programming:
String s = "Wikipedia"; /* Assigns the value "Wikipedia" to the variable s. */
This level of detail, however, would not be appropriate in the context of production code, or other situations involving experienced developers. Such rudimentary descriptions are inconsistent with the guideline: "Good comments ... clarify intent." Further, for professional coding environments, the level of detail is ordinarily well defined to meet a specific performance requirement defined by business operations.
Styles
There are many stylistic alternatives available when considering how comments should appear in source code. For larger projects involving a team of developers, comment styles are either agreed upon before a project starts, or evolve as a matter of convention or need as a project grows. Usually programmers prefer styles that are consistent, non-obstructive, easy to modify, and difficult to break.
Block comment
The following code fragments in C demonstrate just a tiny example of how comments can vary stylistically, while still conveying the same basic information:
/*
This is the comment body.
Variation One.
*/
/***************************\
* *
* This is the comment body. *
* Variation Two. *
* *
\***************************/
Factors such as personal preference, flexibility of programming tools, and other considerations tend to influence the stylistic variants used in source code. For example, Variation Two might be disfavored among programmers who do not have source code editors that can automate the alignment and visual appearance of text in comments.
Software consultant and technology commentator Allen Holub is one expert who advocates aligning the left edges of comments:
/* This is the style recommended by Holub for C and C++.
* It is demonstrated in ''Enough Rope'', in rule 29.
*/
/* This is another way to do it, also in C.
** It is easier to do in editors that do not automatically indent the second
** through last lines of the comment one space from the first.
** It is also used in Holub's book, in rule 31.
*/
The use of /* and */ as block comment delimiters was inherited from PL/I into the B programming language, the immediate predecessor of the C programming language.
Line comments
Line comments generally use an arbitrary delimiter or sequence of tokens to indicate the beginning of a comment, and a newline character to indicate the end of a comment.
In this example, all the text from the ASCII characters // to the end of the line is ignored.
// -------------------------
// This is the comment body.
// -------------------------
Often such a comment begins at the far left and extends across the whole line. However, in many languages, it is also possible to put a comment inline with a line of code, as in this Perl example:
print $s . "\n"; # Add a newline character after printing
If a language allows both line comments and block comments, programming teams may decide upon a convention of using them differently: e.g. line comments only for minor comments, and block comments to describe higher-level abstractions.
Tags
Programmers may use informal tags in comments to assist in indexing common issues. They can then be searched for with common programming tools, such as the Unix grep utility, or even syntax-highlighted within text editors. These are sometimes referred to as "codetags" or "tokens".
Such tags differ widely, but might include:
BUG – a known bug that should be corrected.
FIXME – should be corrected.
HACK – a workaround.
TODO – something to be done.
NOTE – used to highlight especially notable gotchas.
UNDONE – a reversal or "roll back" of previous code.
XXX – warns other programmers of problematic or misleading code.
Examples
Comparison
Typographic conventions for specifying comments vary widely. Further, individual programming languages sometimes provide unique variants. For a detailed review, consult the programming language comparison article.
Ada
The Ada programming language uses '--' to indicate a comment up to the end of the line.
For example:
-- the air traffic controller task takes requests for takeoff and landing
task type Controller (My_Runway: Runway_Access) is
-- task entries for synchronous message passing
entry Request_Takeoff (ID: in Airplane_ID; Takeoff: out Runway_Access);
entry Request_Approach(ID: in Airplane_ID; Approach: out Runway_Access);
end Controller;
APL
APL uses ⍝ to indicate a comment up to the end of the line.
For example:
⍝ Now add the numbers:
c←a+b ⍝ addition
In dialects that have the ⊣ ("left") and ⊢ ("right") primitives, comments can often be placed within or between statements, in the form of ignored strings:
d←2×c ⊣'where'⊢ c←a+ 'bound'⊢ b
AppleScript
This section of AppleScript code shows the two styles of comments used in that language.
(*
This program displays a greeting.
*)
on greet(myGreeting)
display dialog myGreeting & " world!"
end greet
-- Show the greeting
greet("Hello")
BASIC
In this classic early BASIC code fragment the REM ("Remark") keyword is used to add comments.
10 REM This BASIC program shows the use of the PRINT and GOTO Statements.
15 REM It fills the screen with the phrase "HELLO"
20 PRINT "HELLO"
30 GOTO 20
In later Microsoft BASICs, including Quick Basic, Q Basic, Visual Basic, Visual Basic .NET, and VB Script; and in descendants such as FreeBASIC and Gambas any text on a line after an ' (apostrophe) character is also treated as a comment.
An example in Visual Basic .NET:
Public Class Form1
Private Sub Button1_Click(sender As Object, e As EventArgs) Handles Button1.Click
' The following code is executed when the user
' clicks the button in the program's window.
rem comments still exist.
MessageBox.Show("Hello, World") 'Show a pop-up window with a greeting
End Sub
End Class
C
This C code fragment demonstrates the use of a prologue comment or "block comment" to describe the purpose of a conditional statement. The comment explains key terms and concepts, and includes a short signature by the programmer who authored the code.
/*
* Check if we are over our maximum process limit, but be sure to
* exclude root. This is needed to make it possible for login and
* friends to set the per-user process limit to something lower
* than the amount of processes root is running. -- Rik
*/
if (atomic_read(&p->user->processes) >= p->rlim[RLIMIT_NPROC].rlim_cur
&& !capable(CAP_SYS_ADMIN) && !capable(CAP_SYS_RESOURCE))
goto bad_fork_free;
Since C99, it has also been possible to use the // syntax from C++, indicating a single-line comment.
Cisco IOS and IOS-XE configuration
The exclamation point (!) may be used to mark comments in a Cisco router's configuration mode; however, such comments are not saved to non-volatile memory (which contains the startup-config), nor are they displayed by the "show run" command.
It is possible to insert human-readable content that is actually part of the configuration, and may be saved to the NVRAM startup-config via:
The "description" command, used to add a description to the configuration of an interface or of a BGP neighbor
The "name" parameter, to add a remark to a static route
The "remark" command in access lists
! Paste the text below to reroute traffic manually
config t
int gi0/2
no shut
ip route 0.0.0.0 0.0.0.0 gi0/2 name ISP2
no ip route 0.0.0.0 0.0.0.0 gi0/1 name ISP1
int gi0/1
shut
exit
ColdFusion
ColdFusion uses comments similar to HTML comments, but instead of two dashes, it uses three. These comments are caught by the ColdFusion engine and not printed to the browser.
<!--- This prints "Hello World" to the browser. --->
<cfoutput>
Hello World<br />
</cfoutput>
Fortran IV
This Fortran IV code fragment demonstrates how comments are used in that language, which is very column-oriented. A letter "C" in column 1 causes the entire line to be treated as a comment.
C
C Lines that begin with 'C' (in the first or 'comment' column) are comments
C
WRITE (6,610)
610 FORMAT(12H HELLO WORLD)
END
Note that the columns of a line are otherwise treated as four fields: columns 1 to 5 form the label field, a character in column 6 causes the line to be taken as a continuation of the previous statement, and declarations and statements go in columns 7 to 72.
Fortran 90
This Fortran code fragment demonstrates how comments are used in that language, with the comments themselves describing the basic formatting rules.
!* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
!* All characters after an exclamation mark are considered as comments *
!* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
program comment_test
print '(A)', 'Hello world' ! Fortran 90 introduced the option for inline comments.
end program
Haskell
Line comments in Haskell start with '--' (two hyphens) until the end of line, and multiple line comments start with '{-' and end with '-}'.
{- this is a comment
on more lines -}
-- and this is a comment on one line
putStrLn "Wikipedia" -- this is another comment
Haskell also provides a literate programming method of commenting known as "Bird Style". In this style, all lines starting with > are interpreted as code; everything else is considered a comment. One additional requirement is that a blank line must always be left before and after the code block:
In Bird-style you have to leave a blank before the code.
> fact :: Integer -> Integer
> fact 0 = 1
> fact (n+1) = (n+1) * fact n
And you have to leave a blank line after the code as well.
Literate programming can also be done in Haskell, using LaTeX. The code environment can be used instead of Richard Bird's style.
In LaTeX style, this is equivalent to the above example; the code environment could be defined in the LaTeX preamble. Here is a simple definition:
\usepackage{verbatim}
\newenvironment{code}{\verbatim}{\endverbatim}
later in
% the LaTeX source file
The \verb|fact n| function call computes $n!$ if $n\ge 0$, here is a definition:\\
\begin{code}
fact :: Integer -> Integer
fact 0 = 1
fact (n+1) = (n+1) * fact n
\end{code}
Here more explanation using \LaTeX{} markup
Java
This Java code fragment shows a block comment used to describe the setToolTipText method. The formatting is consistent with Sun Microsystems Javadoc standards. The comment is designed to be read by the Javadoc processor.
/**
* This is a block comment in Java.
* The setToolTipText method registers the text to display in a tool tip.
* The text is displayed when the cursor lingers over the component.
*
* @param text The string to be displayed. If 'text' is null,
* the tool tip is turned off for this component.
*/
public void setToolTipText(String text) {
// This is an inline comment in Java. TODO: Write code for this method.
}
JavaScript
JavaScript uses // to precede comments and /* */ for multi-line comments.
// A single line JavaScript comment
var iNum = 100;
var iTwo = 2; // A comment at the end of line
/*
multi-line
JavaScript comment
*/
Lua
The Lua programming language uses double hyphens, --, for single-line comments in a similar way to the Ada, Eiffel, Haskell, SQL and VHDL languages. Lua also has block comments, which start with --[[ and run until a closing ]].
For example:
--[[A multi-line
long comment
]]
print(20) -- print the result
A common technique to comment out a piece of code is to enclose the code between --[[ and --]], as below:
--[[
print(10)
--]]
-- no action (commented out)
In this case, it's possible to reactivate the code by adding a single hyphen to the first line:
---[[
print(10)
--]]
--> 10
In the first example, the --[[ in the first line starts a long comment, and the two hyphens in the last line are still inside that comment. In the second example, the sequence ---[[ starts an ordinary, single-line comment, so that the first and last lines become independent comments; the print is outside any comment, and the last line is itself a comment because it starts with --.
Long comments in Lua can be more complex than these, as you can read in the section called "Long strings" c.f. Programming in Lua.
MATLAB
In MATLAB's programming language, the '%' character indicates a single-line comment. Multi line comments are also available via %{ and %} brackets and can be nested, e.g.
% These are the derivatives for each term
d = [0 -1 0];
%{
%{
(Example of a nested comment, indentation is for cosmetics (and ignored).)
%}
We form the sequence, following the Taylor formula.
Note that we're operating on a vector.
%}
seq = d .* (x - c).^n ./(factorial(n))
% We add-up to get the Taylor approximation
approx = sum(seq)
Nim
Nim uses the '#' character for inline comments.
Multi-line block comments are opened with '#[' and closed with ']#'.
Multi-line block comments can be nested.
Nim also has documentation comments that use mixed Markdown and ReStructuredText markups.
The inline documentation comments use '##' and multi-line block documentation comments are opened with '##[' and closed with ']##'.
The compiler can generate HTML, LaTeX and JSON documentation from the documentation comments.
Documentation comments are part of the abstract syntax tree and can be extracted using macros.
## Documentation of the module *ReSTructuredText* and **MarkDown**
# This is a comment, but it is not a documentation comment.
type Kitten = object ## Documentation of type
age: int ## Documentation of field
proc purr(self: Kitten) =
## Documentation of function
echo "Purr Purr" # This is a comment, but it is not a documentation comment.
# This is a comment, but it is not a documentation comment.
OCaml
OCaml uses nestable comments, which is useful when commenting a code block.
codeLine (* comment level 1 (* comment level 2 *) *)
Pascal
In Niklaus Wirth's Pascal family of languages (including Modula-2 and Oberon), comments are opened with '(*' and completed with '*)'.
For example:
(* test diagonals *)
columnDifference := testColumn - column;
if (row + columnDifference = testRow) or
...
In modern dialects of Pascal, '{' and '}' are used instead.
Perl
Line comments in Perl, and many other scripting languages, begin with a hash (#) symbol.
# A simple example
#
my $s = "Wikipedia"; # Sets the variable s to "Wikipedia".
print $s . "\n"; # Add a newline character after printing
Instead of a regular block commenting construct, Perl uses Plain Old Documentation, a markup language for literate programming, for instance:
=item Pod::List-E<gt>new()
Create a new list object. Properties may be specified through a hash
reference like this:
my $list = Pod::List->new({ -start => $., -indent => 4 });
See the individual methods/properties for details.
=cut
sub new {
my $this = shift;
my $class = ref($this) || $this;
my %params = @_;
my $self = {%params};
bless $self, $class;
$self->initialize();
return $self;
}
R
R only supports inline comments started by the hash (#) character.
# This is a comment
print("This is not a comment") # This is another comment
Raku
Raku (previously called Perl 6) uses the same line comments and POD Documentation comments as regular Perl (see Perl section above), but adds a configurable block comment type: "multi-line / embedded comments".
These start with a hash character, followed by a backtick, and then some opening bracketing character, and end with the matching closing bracketing character. The content can not only span multiple lines, but can also be embedded inline.
#`{{ "commenting out" this version
toggle-case(Str:D $s)
Toggles the case of each character in a string:
my Str $toggled-string = toggle-case("mY NAME IS mICHAEL!");
}}
sub toggle-case(Str:D $s) #`( this version of parens is used now ){
...
}
PHP
Comments in PHP can be either in C++ style (both inline and block), or use hashes. PHPDoc is a style adapted from Javadoc and is a common standard for documenting PHP code.
PowerShell
Comments in Windows PowerShell
# Single line comment
Write-Host "Hello, World!"
<# Multi
Line
Comment #>
Write-Host "Goodbye, world!"
Python
Inline comments in Python use the hash (#) character, as in the two examples in this code:
# This program prints "Hello World" to the screen
print("Hello World!") # Note the new syntax
Block comments, as defined in this article, do not technically exist in Python. A bare string literal represented by a triple-quoted string can be used, but is not ignored by the interpreter in the same way that a "#" comment is. In the examples below, the triple double-quoted strings act in this way as comments, but are also treated as docstrings:
"""
Assuming this is file mymodule.py, then this string, being the
first statement in the file, will become the "mymodule" module's
docstring when the file is imported.
"""
class MyClass:
"""The class's docstring"""
def my_method(self):
"""The method's docstring"""
def my_function():
"""The function's docstring"""
Ruby
Comments in Ruby.
Single line commenting: (line starts with hash "#")
puts "This is not a comment"
# this is a comment
puts "This is not a comment"
Multi-line commenting: (comments go between the keywords "=begin" and "=end")
puts "This is not a comment"
=begin
whatever goes in these lines
is just for the human reader
=end
puts "This is not a comment"
SQL
Standard comments in SQL are in single-line-only form, using two dashes:
-- This is a single line comment
-- followed by a second line
SELECT COUNT(*)
FROM Authors
WHERE Authors.name = 'Smith'; -- Note: we only want 'smith'
-- this comment appears after SQL code
Alternatively, a comment format syntax identical to the "block comment" style used in the syntax for C and Java is supported by Transact-SQL, MySQL, SQLite, PostgreSQL, and Oracle.
MySQL also supports comments from the hash (#) character to the end of the line.
Swift
Single-line comments begin with two forward slashes (//):
// This is a comment.
Multiline comments start with a forward slash followed by an asterisk (/*) and end with an asterisk followed by a forward slash (*/):
/* This is also a comment
but is written over multiple lines. */
Multiline comments in Swift can be nested inside other multiline comments. You write nested comments by starting a multiline comment block and then starting a second multiline comment within the first block. The second block is then closed, followed by the first block:
/* This is the start of the first multiline comment.
/* This is the second, nested multiline comment. */
This is the end of the first multiline comment. */
XML (or HTML)
Comments in XML (or HTML) are introduced with <!-- and can spread over several lines until the terminator, -->
For example,
<!-- select the context here -->
<param name="context" value="public" />
For compatibility with SGML, the string "--" (double-hyphen) is not allowed inside comments.
Security issues
In interpreted languages the comments are viewable to the end user of the program. In some cases, such as sections of code that are "commented out", this may present a security vulnerability.
See also
Docstring, a specific type of comment that is parsed and retained throughout the runtime of the program.
Shebang, the use of #! as an interpreter directive in scripts on Unix-like systems
HTML comment tag
Literate programming, alternative documentation paradigm
Syntax of comments in various programming languages
Notes and references
Further reading
Movshovitz-Attias, Dana and Cohen, William W. (2013) Natural Language Models for Predicting Programming Comments. In Association for Computational Linguistics (ACL), 2013.
External links
How to Write Comments by Denis Krukovsky
Source Code Documentation as a Live User Manual by PTLogica
How to Write Comments for the Javadoc Tool
Source code
Articles with example code
Articles with example C code
Articles with example Java code
Articles with example Perl code
Metadata |
4032117 | https://en.wikipedia.org/wiki/Christa%20McAuliffe%20Space%20Education%20Center | Christa McAuliffe Space Education Center | The Christa McAuliffe Space Center (known as the McAuliffe Space Center or CMSC), in Pleasant Grove, Utah, teaches school children about space and is visited by students from around the world. It has a number of space flight simulators.
The center, named for educator Christa McAuliffe, who was killed in the Challenger disaster, was started in 1990 by Victor Williamson, an educator at Central Elementary School, in a building added onto the school. It aims to teach astronomy and social studies through the use of simulators; the first, the Voyager, proved popular, and as demand for flights grew over the years, new ships were commissioned. In October 2012, the space center at Central Elementary was temporarily closed, re-opening in spring 2013 after several district-mandated upgrades, closures, and maintenance procedures. The original simulators, along with the school that housed them, were demolished on May 5, 2020 to make way for a new space center built behind the original property. The new building houses the second-largest planetarium in the state of Utah, which began running shows in November 2020. In 2018, the Christa McAuliffe Space Education Center dropped the word "Education" from its name and updated its logo to a new stylized version of the original.
The simulators employed by the center have included the following (in order of original construction):
The USS Voyager (Original 1990) (Decommissioned 2012/2013, New 2018) – The Voyager appears as the USS Enterprise-D. It held nine to eleven people. The new Voyager is now located at Renaissance Academy in Utah, a space center separate from the Christa McAuliffe Space Center.
The USS Odyssey (Original 1995, New 2013, Current 2021) – The Odyssey's appearance was created by Paul S. Cargile, an independent sci-fi artist. It takes on the appearance of the Banzai-class fighter. It holds six to eight people.
The USS Galileo (Original Mark-5: 1998, New Mark-6: 2009, Current 2021) – The Galileo is a shuttle craft. It usually goes on stealth missions. It can hold five to six people. The original simulator could be physically seen from the outside.
The USS Magellan (Original Space Station: 1998, Renovated: 2006, Starship: 2012, Current 2021) – The Magellan had the appearance of Deep Space 9. The Magellan has been transformed into a starship with the appearance of a Daedalus-class starship from Stargate. The bridge crew can be anywhere from ten to twelve people.
The Falcon (Original 2000) (Decommissioned) – The Falcon showed students what space travel might be like in the future.
The USS Phoenix (Original 2005, Current 2021) – The Phoenix is a Defiant-class escort, like DS9's USS Defiant. It is the Space Center's only battleship. It could hold five to six people. It has been updated to an Astrea Class Destroyer, which can now hold six to seven people.
The IMS Falcon (New 2021) – The Falcon is the only ship in the fleet that does not belong to the United Federation of Planets. It holds six to eight crew members.
The USS Cassini (New 2021) – The Cassini is a deep space exploration vessel. It holds nine to eleven crew members.
Each simulator has its own plaque. The plaque displays the ship's name and other details about that specific simulator. Some are inside the simulator, and some are hidden out of plain sight.
Most missions are based on, or at least contain aspects similar to, the Star Trek universe. The simulators themselves are replicas of Star Trek ships, and various races (such as the Romulans) are often involved in missions.
The center, and its founder were honored in a ceremony in its 15th year by many individuals, including Gary Herbert, the Lieutenant Governor of Utah. At that time, with its five spaceship simulators, it was educating 16,000 students a year.
The center's mission statement is A Utah Arts, Sciences, Technology Education Initiative. We Practice the Discipline of Wonder.
Teaching method
The Space Center uses its simulators to create interactive stories, usually tied to historical events, in which the students are involved. Since November 2020, it has also used the planetarium built during the 2020 rebuild.
Students also learn and apply different aspects of astronomy and science in missions. They get the chance to learn about black holes, nebulae, asteroids, planets, planetary systems, moons, and a variety of other phenomena.
Students who attended the Space Center 15 years ago are now pursuing fields in science, technology, space exploration, programming, and electrical engineering. Students at the local Brigham Young University have the opportunity to develop consoles and equipment for the Space Center; gadgets such as Tricorders, touch panel equipment, fiber optics systems, ships, and digital/analog control interfaces all help to give a more realistic effect to the experience.
The center's staff hopes that its visitors are tomorrow's scientists.
Simulator Technology
The Space Center employs a range of technologies and equipment to achieve its simulations. Each ship has a powerful sound system (including a strong bass response to simulate the feel of the reactor core) connected to an industry-standard mixing board, which combines input from several sound sources heard through the main speakers: sound effects, music, DVD players, CD players, microphones, and voice distorters.
The video system is just as complex. Each mission has a story DVD with clips compiled for scenes in the story and other visual effects. These video sources are all controlled by a video switcher so that they appear as seamless video. In addition to movie clips, the Space Center also makes its own tactical screens. Tactical screens are, in essence, complex PowerPoint-like displays that can be networked to show real-time information about the ship. This information may relate to the current story, such as the state of ship systems, or take the form of maps or other mission information. Various programs have been used to create these screens, including HyperCard, Runtime Revolution, and Thorium.
Each simulator is also equipped with a lighting system allowing both red and white lights to be displayed; red during alerts and white during normal alert levels. Each set of lights is attached to a dimmer in the control room allowing the lights to manually fluctuate in different events during a mission, such as a torpedo impact or power failure. The most advanced set of lights at the Space Center is installed in the Galileo. The lighting system in the Galileo is capable of being controlled via computer making effects seem more realistic.
In order to ensure that campers are safe, a network of closed circuit cameras is also installed at key points on the set to monitor their positions. Each simulator has part of the bridge and connected areas of the set monitored at all times.
The most complex part of each simulator is the computer systems. Each ship has several computers installed. The smallest set, the Galileo, has five, while the largest set, the Magellan, has 13. Each one of these computers (excluding sound effect computers and tactical [main viewer] computers) is connected to a network allowing communication between computers. In this way, the programs on each of the computers are also able to communicate with each other, allowing the control room to monitor the simulation and the computers on the bridge to update each other with information sent from the control room. The programs were originally written in HyperCard, which remained in use on the USS Voyager until the simulator was decommissioned. Later, the Space Center switched to Revolution by Runtime Revolution. The next generation of programs at the Space Center was written with Cocoa, Apple Inc.'s application framework for its Macintosh computer platform. Since 2018, the Space Center has used the Thorium open-source starship simulator platform, developed by a former volunteer.
Private donations paid for the simulators, while the school district pays the salary of the center's director. 181 volunteers and part-timers help to operate the simulators.
Staff
The Space Center's full-time employee is the Director. Flight Directors, Set Directors, and Bridge Supervisors are part-time employees. The volunteering organization is divided into guilds and classes of volunteers as follows:
The Flight Directors – (Dark Blue Collared Shirts) The Flight Directors (FDs) "run" the mission: giving cues to the actors, telling the staff when to do certain things, assigning roles, etc. The FD also provides the voice of the Main Computer and the Main Engineer (whom the crew cannot see), giving the crew hints and tips along the way. Besides the center's director, they have the most authority, along with the Set Directors.
The Set Directors – There are six Set Directors (one for each of the simulators). The Set Directors make major decisions for the simulators they direct and are usually the main FD for that ship.
The Supervisors – (Bright Blue Collared Shirts) The Supervisors supervise the mission. They are the FD's right-hand men and women: they relay orders, help get the story moving, coordinate volunteers, etc. They are second in command, but are only used on missions in the Magellan and Cassini, and previously in the Voyager. They work with the crews to answer any questions that arise during a mission. Many FDs start out as supervisors, though not all, and many FDs still supervise even after they have been passed off as Flight Directors.
The Volunteers – (Black shirts) The Volunteers are the arms and legs of the Flight Directors. They can be assigned by the Flight Director to play the ship's doctor, act as an alien, serve as Second Chair (who switches the lights on and off, responds to sensor scans, changes what is showing on the viewscreen, sends messages, etc.), or do almost anything else the FD needs.
The Guilds
(Note: All of the classes of volunteers above, except for the regular Volunteers, have their own guild.)
The Programming Guild – The Programming Guild (Light Blue Collared Shirts) programs the ship's controls and all the other computer programs used at the Space Center. (See above)
The Maintenance Guild – The Maintenance Guild creates the simulators, does repairs, installs new features, and pretty much holds the simulators together.
The Acting Guild – The Acting Guild is a special set of volunteers that are trained in the "prestigious" art of acting at the Space Center.
Programs and Camps
The Space Center offers a variety of programs that provide varying mission lengths and experiences. Continuing the educational aim of the Space Center, there are field trip programs for school classes that provide education about science, space, and teamwork/leadership; these programs also include educational missions on the simulators. For the general public, there are private missions and summer camps. Private missions can be reserved in two lengths, 2.5-hour and 5-hour missions; these time blocks include time for briefing and training in preparation for the actual mission on the simulator. The Space Center formerly offered Overnight Camps, which started on Friday nights and ended the following morning: all missions were 'paused' for the night, campers slept at the Space Center, and missions resumed in the morning. These camps are no longer available. Super Saturday camps provided the same missions as overnight camps but took place during the day on Saturdays. The Leadership Camp is aimed at an older audience, ages 15–17; it differs from the other summer camps in that the whole camp is a campaign and every mission is part of a bigger picture. This camp may not be flown every summer because of the amount of planning required, since it runs through multiple days. Summer camps usually take place in one day, with a variety of activities ranging from missions to classroom activities and planetarium shows. The Space Center provides further information on its website, http://spacecenter.alpineschools.org/
References
External links
Official website
Space organizations
Tourist attractions in Utah
Education in Utah County, Utah
1990 establishments in Utah
Educational institutions established in 1990
Buildings and structures in Pleasant Grove, Utah |
49874393 | https://en.wikipedia.org/wiki/Martin%20Casado | Martin Casado | Martín Casado is a Spanish-born American software engineer, entrepreneur, and investor. He is a general partner at Andreessen Horowitz, and was a pioneer of software-defined networking, and a co-founder of Nicira Networks.
Early life and education
Martín Casado was born around 1976 in Cartagena, Spain. He received his bachelor's degree from Northern Arizona University in 2000. In 2017, he received an honorary doctorate from the same university. He worked for Lawrence Livermore National Laboratory doing computational science followed by work with the intelligence community from December 2000 to September 2006.
Casado attended Stanford University from 2002 to 2008, earning both his Masters and PhD in computer science.
While at Stanford, he began development of OpenFlow, an open source protocol that enabled software-defined networking. During this period, he co-founded Illuminics Systems with Michael J. Freedman. His PhD thesis, "Architectural Support for Security Management in Enterprise Networks,” under advisors Nick McKeown, Scott Shenker and Dan Boneh, was published in 2008.
Career
In 2007, Casado co-founded Nicira Networks along with McKeown and Shenker, a Palo Alto, California based company working on network virtualization.
Along with McKeown and Shenker, Casado promoted software-defined networking. His PhD work at Stanford University led to the development of the OpenFlow protocol, which was promoted under the term software-defined networking (SDN).
McKeown and Shenker co-founded the Open Networking Foundation (ONF) in 2011 to transfer control of OpenFlow to a not-for-profit organization.
In July 2012, VMware acquired Nicira for $1.26 billion.
At VMware he was made a fellow and held the positions of chief technology officer (CTO) for networking and security and general manager of the Networking and Security Business Unit.
Casado was a 2012 recipient of the Association for Computing Machinery (ACM) Grace Murray Hopper Award for helping create the Software Defined Networking movement.
In 2015 Casado, McKeown and Shenker received the NEC C&C Foundation award for SDN and OpenFlow. In 2015, he was selected for Forbes’ “Next Gen Innovators 2014.”
Casado left VMware and joined venture capital firm Andreessen Horowitz in February 2016 as its ninth general partner.
Andreessen Horowitz had been one of the investors in Nicira, contributing $17.7 million to the start-up venture.
References
External links
Northern Arizona University alumni
Stanford University alumni
1970s births
Living people |
19443992 | https://en.wikipedia.org/wiki/Digital%20model%20railway%20control%20systems | Digital model railway control systems | Digital model railway control systems are an alternative way to control a layout, simplifying the wiring and adding flexibility in operations. A number of control systems are available to operate locomotives on model railways. Analog systems, in which the speed and direction of a train are controlled by adjusting the voltage on the track, remain popular, though they have increasingly given way to control systems based on computer technology.
Digital model railway control system basics
Some digital control systems provide the ability to independently control all aspects of operating a model railway using a minimum of wiring, the rails themselves can be the only wiring required. Other systems are wireless. Control is achieved by sending a digital signal as well as power down the rails or wirelessly. These digital signals can control all aspects of the model trains and accessories, including signals, turnouts, lighting, level crossings, cranes, turntables, etc.
Constant power is supplied to the track and digital signals are sent which require electronic decoders to be fitted to locomotives and other devices to interpret the commands.
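The addressing principle described above (constant power plus digital commands, with each decoder acting only on packets that carry its own address) can be sketched as follows; this is an illustrative toy model in Python, not the wire format of DCC or any other real system:

```python
# Illustrative sketch only: a toy packet format (address, command, value),
# not the actual encoding used by any real digital control system.

class Decoder:
    def __init__(self, address):
        self.address = address
        self.speed = 0
        self.lights = False

    def receive(self, packet):
        """Act on a packet only if it is addressed to this decoder."""
        address, command, value = packet
        if address != self.address:
            return  # every decoder sees the packet; only the addressed one responds
        if command == "speed":
            self.speed = value
        elif command == "lights":
            self.lights = bool(value)

# All decoders share the same "track", i.e. every packet reaches every decoder.
locos = [Decoder(3), Decoder(7)]
for d in locos:
    d.receive((3, "speed", 28))   # only address 3 reacts

print([d.speed for d in locos])  # -> [28, 0]
```

Because every decoder sees every packet, adding a locomotive requires no extra wiring, only a unique address.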
Controllers
Controllers manage operation of locomotives with buttons for additional model features such as lighting and sound.
Central unit
A digital system usually requires a central unit to generate digital address and command signals, these are known as command stations. Many command stations also incorporate one or more locomotive controllers and a booster unit to generate the power necessary to run locomotives. Central units also have connections for additional controllers and accessory switch boxes, as well as connections for computer control and interfaces with other digital controllers.
Boosters
In most systems, boosters are available to provide additional track power for larger layouts. Boosters are connected to the central unit by special cables that relay the digital commands.
Locomotive decoders
Locomotive decoders are small electronic circuits fitted inside locomotives to interpret the digital signals and provide individual control. Although all active decoders receive commands, only the addressed decoder will respond.
Accessory decoders
Accessory decoders are used to control devices which are fixed in position, such as turnouts, signals, and level crossings. Since the devices do not move, stationary decoders can be mounted under the layout, and therefore can be significantly larger than locomotive decoders. Accessory decoders can receive their signals from an accessory data bus or from the track.
Sound and function decoders
Basic locomotive decoders provide control of speed and direction while supplemental function decoders control headlights, ditch lights, or movable non-traction components such as remote-controlled pantographs.
Sound decoders play pre-recorded sound effects which may be synchronised with the locomotive speed, so that as a diesel locomotive starts from standstill, the sound decoder plays sounds of a diesel engine starting up. Sound decoders for steam locomotives can play "chuff" sounds synchronised with the driving wheels.
Some decoders have all three functions—locomotive control, sound effects, and function control, in a single circuit.
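The synchronisation of chuff sounds with the driving wheels comes down to a pulse interval derived from wheel rotation. The sketch below assumes the common two-cylinder convention of four chuffs per wheel revolution; the function name and the speed-based model are illustrative, as real decoders may use a wheel cam or motor feedback instead:

```python
import math

def chuff_interval_s(speed_mm_per_s, wheel_diameter_mm, chuffs_per_rev=4):
    """Seconds between exhaust 'chuffs' for a model steam locomotive.

    Assumes the common convention of four chuffs per driving-wheel
    revolution (two-cylinder locomotive); real decoders may derive the
    timing from a cam or back-EMF rather than a speed value.
    """
    circumference = math.pi * wheel_diameter_mm          # mm per revolution
    revs_per_s = speed_mm_per_s / circumference
    return 1.0 / (revs_per_s * chuffs_per_rev)

# An H0 locomotive with 20 mm drivers crawling at 50 mm/s:
print(round(chuff_interval_s(50, 20), 3))  # seconds between chuffs
```

Doubling the speed halves the interval, which is what makes the effect convincing at low speed.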
Feedback
In some automated systems, the central unit needs to know when trains reach their destination or a certain point. This information is detected by a sensor, such as an infrared device placed between the tracks, a reed switch or a device which senses current draw in an isolated section of track.
Feedback relays an electrical signal from the sensor hardware back to the digital central unit. The central unit can then issue commands appropriate for the specific sensor, such as triggering a signal, or level crossing.
Feedback allows fully automated control of model trains.
Computer interface
Some central units allow connection to a computer, and a program can then fully automatically control all model train movements and accessories. This facility is particularly useful for display layouts.
Programs have been developed allowing mobile devices to be used as controllers, which also requires the central unit to be connected to a computer.
Systems
Digital Command Control
Digital Command Control (DCC) systems are used to operate locomotives on a model railroad (railway). Equipped with DCC, locomotives on the same electrical section of track can be independently controlled. While DCC is only one of several alternative systems for digital model train control, it is often misinterpreted to be a generic term for such systems. Several major manufacturers offer DCC systems.
Digital Command System
Digital Command System (DCS) is an electronic system developed by MTH Electric Trains and released in April 2002. DCS controls locomotives equipped with Protosound 2, Protosound 3, or Protosound 3E+ decoders. Protosound 3 locomotives are compatible with both DCS and DCC command systems. Protosound 3E+ locomotives are compatible with DCS and Märklin Digital command systems. All DCS compatible decoders are manufactured by MTH. Factory installed decoders have been offered in H0 scale, two-rail 0 scale, 3-rail 0 gauge, Gauge 1, and three-rail Standard Gauge models. MTH has announced their intention to install DCS compatible decoders in S scale trains beginning in 2013. Separate sale decoder kits have been offered for installation in all of the above noted scales except H0 and S. DCS is predominantly used in three-rail O gauge. Its chief competitors in three-rail O are Lionel's TMCC and Legacy systems.
DCS uses proprietary command codes and transmission technology covered under US patent 6,457,681. The principal differences between DCS and DCC transmission technologies include bidirectional communications and the separation of the command signal from track power. DCS command signals are transmitted at 10.7 MHz using spread spectrum technology.
DCS can operate TMCC equipped models by means of an interface cable that connects the Lionel CB-1 command base to the DCS Track Interface Unit. DCS can coexist on the same track at the same time with either Lionel TMCC or Legacy command systems. Engines with either system can be operated simultaneously as long as both command control units are installed on the track.
Direct WiFi Control
Direct WiFi Control (DWiC) is an emerging technology for model railway control utilizing the concept of "the internet of things". The availability of miniature web server modules in 2014 led to the formation of a DWiC working group to explore the possibility of using this technology in model railways. WiFi technology is well established and proven. Although DWiC is considerably more complex than any previous model railway control system, it is largely transparent to the user, with tasks such as bi-directional communication handled seamlessly. DWiC does not use any model-rail-specific items such as command stations and boosters and so is much lower in cost. The technology is also useful outside the model rail world: a DWiC controller could open a garage door or remotely turn on sprinklers. The web server/controller is similar to a DCC decoder in hardware and cost. The great advantage occurs on the client side, where the "throttle" can be any WiFi device with a web browser. DWiC can run on DC, AC or DCC track power, or a battery.
The DWiC controller has a web page loaded on board, tailored to the particular "item": a locomotive, accessory, etc. The user's browser loads the page from the item's web server and, by pressing buttons, directly controls the item via WiFi using HTML, JavaScript, jQuery and C.
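As a hedged sketch of the DWiC idea, the fragment below models an on-board "item" web server whose state any browser can change over HTTP. The /speed endpoint and the speed range are invented for illustration; a real DWiC controller runs firmware on a miniature WiFi module, not CPython:

```python
# Minimal sketch of the "web server on the locomotive" idea, using only
# Python's standard library. The /speed/<n> endpoint and the +/-100 speed
# model are hypothetical.
from http.server import BaseHTTPRequestHandler, HTTPServer

def apply_command(state, path):
    """Interpret a throttle URL such as /speed/40 and update the state."""
    parts = path.strip("/").split("/")
    if len(parts) == 2 and parts[0] == "speed":
        try:
            state["speed"] = max(-100, min(100, int(parts[1])))  # clamp to +/-100
        except ValueError:
            pass  # ignore malformed requests
    return state

STATE = {"speed": 0}

class ThrottleHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        apply_command(STATE, self.path)
        body = ("speed=%d" % STATE["speed"]).encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)  # the on-board web page reads this reply back

# Serving would be started with:
#   HTTPServer(("0.0.0.0", 8080), ThrottleHandler).serve_forever()
# after which any WiFi device with a browser can act as the throttle.
```

The point of the design is that all throttle logic lives on the item itself, so the client needs nothing beyond a standard browser.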
Märklin Digital
Märklin Digital was one of the first digital model railway control systems. It consisted of a full system including locomotive decoders (based on a Motorola chip), central control, a computer interface, turnout decoders, digital relays and s88 feedback modules. For controlling 2-rail DC locomotives, like Märklin's Z and 1 gauge rolling stock, a special version of the system was introduced in 1988, developed by Lenz jointly for Märklin and Arnold. Arnold sold the system under the name Arnold Digital, while Märklin called it "Märklin Digital"; this system was the predecessor of the DCC standard. Apart from the locomotive decoders and central units, all the other system components were identical between the 3-rail and 2-rail versions.
Selectrix
Selectrix is an early digital model train command control system developed by the German company Döhler & Haass for model railway manufacturer Trix in the early 1980s. Since 1999, Selectrix has been an open system supported by several manufacturers and standardized by MOROP. Technically, Selectrix differs from competing bus systems by being fully synchronized and bi-directional. The same data bus protocol and data buses are shared by the rolling stock, accessories and feedback information.
Trainmaster Command Control
Trainmaster Command Control (TMCC) is Lionel's original command control system. It was introduced exclusively in Lionel trains in 1995. Beginning in 2000, Lionel offered licenses to other manufacturers. Licensees that formerly or currently install TMCC decoders in their models include Atlas O, K-Line, Weaver, and Sunset Models 3rd Rail Division. Licensees that formerly or currently offer separate sale decoders include Train America Studios, Digital Dynamics, and Electric RR Co. TMCC decoders have mostly been installed in 3-rail O gauge models, but it has also been offered in 2-rail O scale and S scale.
TMCC utilizes the same command codes as Digital Command Control (DCC). However, unlike DCC, it uses a 455 kHz radio transmission to carry the command codes separate from track power. The locomotive decoders are dependent on AC track power (50 or 60 Hz) to synchronize the command receiver. Thus, TMCC can only operate on AC track power. Because TMCC utilizes the DCC command codes, it is possible to control TMCC with DCC compatible software. MTH Electric Trains included support to interface and control TMCC with its DCS system. Unlike DCC, TMCC-equipped locomotives can run simultaneously with non-TMCC locomotives. Lionel ceased the sale of TMCC command systems in 2010, but continues to introduce models equipped with TMCC decoders. TMCC has been superseded by Lionel's Legacy command system.
Legacy Control System
Legacy Control System (Legacy) is Lionel's current electronic control system. It was introduced as a successor to Lionel's Trainmaster Command Control (TMCC) in December 2007. Legacy is backwards compatible with all TMCC decoder equipped engines. Models with Legacy sound decoders and/or Odyssey II speed control can be operated with earlier TMCC control systems but also have additional features only accessible with Legacy. The command codes for these additional features differ from the DCC command codes. Lionel has not published or licensed access to the Legacy specific command codes.
Hornby Zero 1
Hornby Zero 1 was a forerunner of the modern digital model railway control system, developed by Hornby in the late 1970s. It was based around the TMS1000 four-bit microprocessor. The Zero 1 system offered simultaneous control of up to 16 locomotives and 99 accessories. The Hammant & Morgan digital train control system is fully compatible with Zero 1; its master controller, the "HM5000 Advanced Power Transmitter", boasted two sliders, direction LEDs, a power LED bar graph, timer clocks, a digital display of locos under control, a readout of accessories controlled, and the ability to attach two "Hi-Tec Speed Transmitter" (HM5500) slave controllers.
Though an important milestone, Zero 1 was not widely successful; both the controller units and the decoder modules required for the locomotives and accessories were expensive, although with clean track and well-serviced locos the system worked more or less as advertised.
The Zero 1 system supplied the track with a 20 V square wave at the local mains frequency (50 Hz in the UK, 60 Hz in the US), with a 32-bit control word replacing every third cycle. The decoder module in the locomotive would switch either the positive or the negative half-cycle of the square wave to the motor according to the desired direction of travel. During the transmission of the control word, it would remain switched off. Speed control was achieved by pulse-width modulation, varying the width of the switched portion of the half-cycle in 14 steps.
This system allowed for straightforward implementation with the semiconductor technology of the time, but had the disadvantage that the power supplied to the motor was highly discontinuous - as can be seen from the description above, it took the form of square pulses of a maximum width of 10 ms, recurring at intervals which alternated between 20 ms and 40 ms (for a 50 Hz mains supply). This caused the motor to be extremely noisy and rough. Fine control of a locomotive at low speed was also difficult, partly due to the rough running, partly due to the inherent coarseness of a 14-step speed scale, and partly because there was a significant delay between operator input to the controller and response from the locomotive.
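The timing described above can be checked with a short model: at 50 Hz each cycle lasts 20 ms, every third cycle carries the control word, and the switched half-cycle is at most 10 ms wide. The step-to-width scaling below (step/14 of the half-cycle) is an assumption consistent with the 14-step description, not a published specification:

```python
# Illustrative timing model of the Zero 1 track signal described above
# (50 Hz mains; the linear 1-14 step-to-width scaling is an assumption).
MAINS_HZ = 50
CYCLE_MS = 1000 / MAINS_HZ        # 20 ms full cycle
HALF_MS = CYCLE_MS / 2            # 10 ms maximum pulse width

def pulse_times_ms(n_groups=3):
    """Start times of motor pulses: two power cycles, then a control word."""
    times = []
    for group in range(n_groups):
        base = group * 3 * CYCLE_MS           # each group spans 60 ms
        times += [base, base + CYCLE_MS]      # the third cycle carries the command
    return times

def duty_cycle(step):
    """Fraction of time the motor is powered at a given 1-14 speed step."""
    width = HALF_MS * step / 14               # pulse width in ms
    return 2 * width / (3 * CYCLE_MS)         # two pulses per 60 ms frame

print(pulse_times_ms())          # -> [0.0, 20.0, 60.0, 80.0, 120.0, 140.0]
print(round(duty_cycle(14), 3))  # full speed: motor powered only ~0.333 of the time
```

The intervals between pulse starts alternate between 20 ms and 40 ms, matching the rough running described above, and even at full speed the motor is powered only about a third of the time.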
Locomotives fitted with a Zero 1 decoder according to Hornby's instructions could not be used on conventional systems, making it difficult to run locomotives across multiple layouts. It was possible to include a miniature DPDT switch in the installation to enable the Zero 1 decoder to be switched out for use on a conventional system.
Control of points and other accessories was available in a very simple manner. For solenoid-operated accessories (e.g. points, mechanical signals) or accessories involving lights (e.g. colour light signals), track-powered accessory decoder modules, each providing four outputs, were available. Each output could be configured either for burst operation or continuous output, for use with solenoids or lights respectively. Accessories were switched by entering a numeric code on the controller. Up to 99 accessories could be controlled.
Accessories based around motors rather than solenoids or lights, such as turntables, could be fitted with a locomotive module and controlled in the same manner as a locomotive.
Zero 1 had three 'phased' introductions:
Master controller and basic system (master controller, slave controller, hand-held slave unit and loco modules)
Accessory control (points, signals etc.)
Micro Mimic display (allowed for LEDs to represent the status of points and signals on a mimic display panel)
While the main master controller unit was discontinued in 1986, the system is very reliable, with the basic 1980s keyboard design being the main problem on older, badly stored master units.
Loco modules were available in two types. The pre-1981 types were based on a single triac but the square-wave supply and the presence of spikes from the motor and from poor contacts rendered the dV/dt rating of the triac marginal and these units would sometimes self-trigger on the wrong polarity half-cycle, resulting in damage both to the unit itself and to the locomotive motor. The later type, made by H&M, used two SCRs, one for "forward" and one for "reverse", to avoid this problem. The system is still used today by many modellers.
Airfix Multiple Control System
Airfix Multiple Control System (MTC) was introduced in 1980 and used 20 VAC on the track with a superimposed control signal. It was only produced for about 18 months before Airfix went into receivership and the concept was dropped. The MTC system offered simultaneous control of any 4 of up to 16 locomotives.
DYNATROL
DYNATROL is a 15-channel command control system from Power Systems Inc. The track voltage is 13.5 volts DC. It was introduced in the late 1970s.
Digitrack 1600
Digitrack 1600 is one of the first-generation digital model railway control systems, developed and marketed by Chuck Balmer and Dick Robbins and sold commercially from 1972 to 1976. CTC-16 is a second-generation design based on the Digitrack 1600 and is fully compatible with it.
Digitrack 1600 was analog in nature, with pulses riding on a constant DC track voltage. The width and timing of the pulse determined speed and direction.
Rail-Command 816
Introduced in the late 1970s, the RAIL-COMMAND 816 is an eight-channel digital signal system using a constant 12 VDC track voltage.
CTC-16
CTC-16 system offered simultaneous control of up to 16 locomotives. A series of 16 variable width pulses is sent out to the track 125 times each second. A receiver mounted in each locomotive is programmed to respond to only one of the 16 pulses. The voltage and polarity applied to the motor depend on the width/timing of the pulse corresponding to that particular receiver. The receiver determines the speed and direction information from that specific pulse. The receiver is essentially a transistor throttle built right into the locomotive. The command station is not expandable beyond 16 channels.
CTC-16 was completely compatible with the Digitrack 1600 receivers, as it was an improved and cost-reduced version of the Digitrack 1600. It was presented as a 'build it yourself' project, though commercial versions appeared as well. At the time, the project was estimated to cost US$200 for the parts.
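The time-division scheme described above can be sketched in Python. The microsecond widths and the width-to-speed mapping below are assumptions for demonstration, not the actual CTC-16 electrical specification.

```python
# Illustrative sketch of the CTC-16 time-division scheme: 16 variable-width
# pulses per frame, 125 frames per second, one pulse slot per locomotive.
# Pulse widths and the width-to-speed mapping are hypothetical.

FRAME_RATE = 125   # frames sent to the track each second
CHANNELS = 16      # one pulse slot per locomotive receiver

def encode_frame(commands):
    """commands: dict of channel -> (speed 0.0-1.0, forward: bool).
    Returns 16 pulse widths in (hypothetical) microseconds."""
    frame = []
    for ch in range(CHANNELS):
        speed, forward = commands.get(ch, (0.0, True))
        base = 1000 if forward else 2000       # direction encoded in width range
        frame.append(base + int(speed * 900))  # speed encoded as extra width
    return frame

def decode_channel(frame, channel):
    """A receiver reads only its own slot to recover speed and direction."""
    width = frame[channel]
    forward = width < 2000
    speed = (width - (1000 if forward else 2000)) / 900
    return speed, forward
```

Because each receiver reads only its own slot, all 16 commands travel together in every frame, which is why the command station cannot be expanded beyond 16 channels without changing the frame format.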
PROTRAC
PROTRAC R/C System 9000 offers 8-channel command control. It was introduced in the late 1970s.
SALOTA 5300
SALOTA 5300 offers 5-channel command control with a 16-18 VDC constant track voltage.
It was introduced in the late 1970s.
PMP-112
The PMP-112 system offered simultaneous control of up to 112 locomotives. It was based on the CTC-16.
RFPT
RFPT is a 9-channel command control system using high-frequency control signals and a 12 VAC constant track voltage.
KATO Digital
Introduced in the late 1980s, KATO Digital is KATO's electronic control system for H0 scale model trains that is conceptually similar to Digital Command Control (DCC).
Software
Digital model railway control systems are often connected to an external computer running special software for controlling the train layout. This allows more options for operating trains, ranging from a fully automatic system, in which the computer controls everything on the layout, to a computer-based control console that handles signals and points while leaving the role of the train engineer to a human.
Hornby RailMaster
Introduced in late 2010, RailMaster is a model railway control software package by Hornby. The software connects to the Hornby Elite DCC controller or the later eLink controller, which acts as an interface between the layout and the laptop or PC running RailMaster, and allows trains, points, signals, turntables and uncouplers to be controlled from a single screen. Although usable with a normal mouse, it has been optimised for touch-screen PCs, where the user simply touches a point or signal, or slides a locomotive throttle.
The eLink unit comes with RailMaster as one package, and the software is regularly and automatically updated by Hornby.
Rocrail
Rocrail is formerly open-source, now proprietary software that can control a model train layout from one or more computers. Users can run trains directly from their computer or have the software run them automatically; some trains can be set to run automatically while others remain under manual control.
JMRI
JMRI is an open source project that can control a model layout including accessories from a computer.
TrainController
TrainController by Freiwald, also known as Railroad & Co, is a high-end proprietary software package that comes in three versions of increasing functionality: Bronze, Silver and Gold. More information is available from the TC wiki at http://www.tc-wiki.de/index.php/Hauptseite (in German and English).
References
External links
DCCWiki - Community DCC site for model railroad.
members.iinet.net.au - Direct WiFi Control (DWiC) Working Group
Digital model train control

WaveLAN

WaveLAN was a brand name for a family of wireless networking technology sold by NCR, AT&T, Lucent Technologies, and Agere Systems, as well as by other companies under OEM agreements. The WaveLAN name debuted on the market in 1990 and was in use until 2000, when Agere Systems renamed their products to ORiNOCO. WaveLAN laid an important foundation for the formation of the IEEE 802.11 working group and the resultant creation of Wi-Fi.
WaveLAN has been used on two different families of wireless technology:
Pre-IEEE 802.11 WaveLAN, also called Classic WaveLAN
IEEE 802.11-compliant WaveLAN, also known as WaveLAN IEEE and ORiNOCO
History
WaveLAN was originally designed in 1986–87 by NCR Systems Engineering, later renamed the WCND (Wireless Communication and Networking Division), a subsidiary of NCR Corporation based in Nieuwegein in the province of Utrecht in the Netherlands, and was introduced to the market in 1990 as a wireless alternative to Ethernet and Token Ring. The next year NCR contributed the WaveLAN design to the IEEE 802 LAN/MAN Standards Committee. This led to the founding of the 802.11 Wireless LAN Working Committee, which produced the original IEEE 802.11 standard that eventually became the basis of the certification mark Wi-Fi. When NCR was acquired by AT&T in 1991, becoming the AT&T GIS (Global Information Solutions) business unit, the product name was retained, as happened two years later when the product was transferred to the AT&T GBCS (Global Business Communications Systems) business unit, and again when AT&T spun off GBCS as Lucent in 1995. The technology was also sold as WaveLAN under an OEM agreement by Epson, Hitachi, and NEC, and as the RoamAbout DS by DEC. It competed directly with Aironet's non-802.11 ARLAN lineup, which offered similar speeds, frequency ranges and hardware.
Several companies also marketed wireless bridges and routers based on the WaveLAN ISA and PC cards, like the C-Spec OverLAN, KarlNet KarlBridge, Persoft Intersect Remote Bridge, and Solectek AIRLAN/Bridge Plus. Lucent's WavePoint II access point could accommodate both the classic WaveLAN PC cards as well as the WaveLAN IEEE cards. Also, there were a number of compatible third-party products available to address niche markets such as: Digital Ocean's Grouper, Manta, and Starfish offerings for the Apple Newton and Macintosh; Solectek's 915 MHz WaveLAN parallel port adapter; Microplex's M204 WaveLAN-compatible wireless print server; NEC's Japanese-market only C&C-Net 2.4 GHz adapter for the NEC-bus; Toshiba's Japanese-market only WaveCOM 2.4 GHz adapter for the Toshiba-Bus; and Teklogix's WaveLAN-compatible Pen-based and Notebook terminals.
During this time frame, networking professionals also realized that since NetWare 3.x and 4.x supported the WaveLAN cards and came with a Multi Protocol Router module that supported the IP/IPX RIP and OSPF routing protocols, one could construct a wireless routed network using NetWare servers and WaveLAN cards for a fraction of the cost of building a wireless bridged network using WaveLAN access points. Many NetWare classes and textbooks of the time included a NetWare OS CD with a 2-person license, so potentially the only cost incurred came from hardware.
When the 802.11 protocol was ratified, Lucent began producing chipsets and PC-cards to support this new standard under the name of WaveLAN IEEE. WaveLAN was among the first products certified by the Wi-Fi Alliance, originally called the Wireless Ethernet Compatibility Association (WECA). Shortly thereafter, Lucent spun off its semiconductor division that also produced the WaveLAN chipsets as Agere Systems. On June 17, 2002 Proxim acquired the IEEE 802.11 LAN equipment business including the trademark ORiNOCO from Agere Systems. Proxim later renamed its entire 802.11 wireless networking lineup to ORiNOCO, including products based on Atheros chipsets.
Specifications
Classic WaveLAN operates in the 900 MHz or 2.4 GHz ISM bands. Being a proprietary pre-802.11 protocol, it is completely incompatible with the 802.11 standard. Soon after the publication of the IEEE 802.11 standard on November 18, 1997, WaveLAN IEEE was placed on the market.
Hardware
The pre-802.11 WaveLAN cards were based on the Intel 82586 Ethernet PHY controller, a commonly used controller of its time found in many ISA and MCA Ethernet cards, such as the Intel EtherExpress 16 and the 3COM 3C523. The WaveLAN IEEE ISA, MCA and PCMCIA cards used a Medium Access Controller (MAC), HERMES, designed specifically for 802.11 protocol support. The radio modem section was hidden from the OS, making the WaveLAN card appear to be a typical Ethernet card, with the radio-specific features taken care of behind the scenes.
While the 900 MHz models and the early 2.4 GHz models operated on one fixed frequency, the later 2.4 GHz cards as well as some 2.4 GHz WavePoint access points had the hardware capacity to operate over ten channels, ranging from 2.412 GHz to 2.484 GHz, with the channels available being determined by the region-specific firmware.
Security
For security, WaveLAN used a 16-bit NWID (NetWork ID) field, which yielded 65,536 potential combinations; the radio portion of the device could receive radio traffic tagged with another NWID, but the controller would discard the traffic. DES encryption (56-bit) was an option in some of the ISA and MCA cards and all of the WavePoint access points. The full-length ISA and MCA cards had a socket for an encryption chip, the half-length 915 MHz ISA cards had solder pads for a socket which was never added, and the 2.4 GHz half-length ISA cards had the chip soldered directly to the board.
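The NWID check can be illustrated with a small sketch. The frame representation below is a simplification for demonstration, not the actual WaveLAN frame layout.

```python
# Illustrative sketch of WaveLAN NWID filtering: the radio section receives
# all traffic, but the controller discards frames whose 16-bit network ID
# does not match the card's own. Frames are modelled as (nwid, payload) pairs.

NWID_BITS = 16                      # 2**16 = 65,536 possible network IDs

def controller_filter(card_nwid, received_frames):
    """Keep only payloads whose NWID matches the card's configured NWID."""
    assert 0 <= card_nwid < 2 ** NWID_BITS
    return [payload for nwid, payload in received_frames if nwid == card_nwid]
```

Note that the NWID is an addressing mechanism rather than a cryptographic one: traffic tagged with another NWID is merely ignored, not protected, which is why DES encryption was offered as a separate option.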
For the IEEE 802.11 standard the goal was to provide data confidentiality comparable to that of a traditional wired network, using 64- and 128-bit data encryption technology. This first implementation was called “Wired Equivalent Privacy” (WEP).
There were shortcomings in the security strategy of WaveLAN and the initial 802.11-compatible devices:
The initial IEEE 802.11 security implementation, WEP, was shown to be vulnerable to attack.
This was addressed by the 802.11i Wi-Fi Protected Access (WPA) that replaced WEP in the standard.
Official specifications
Support
Officially released drivers
Windows 3.11, 95, and NT 3.5/4.0
Windows 3.11, Windows 95, and 98 supported the ISA and MCA cards natively but did not provide any configuration or link diagnostics utilities.
Windows NT 3.51 did not natively support the WaveLAN cards, but additional drivers from Microsoft's Windows NT Driver Library were available.
OS/2 NDIS and NetWare Requester
LAN Manager/IBM LAN Server
Artisoft LANtastic
PC-TCP for DOS
NetWare Lite, NetWare 2, 3, and 4. NetWare 4.11 through 5.x supported the ISA and MCA cards natively but did not provide any configuration or link diagnostics utilities.
ODI/VLM NetWare client for DOS. The DOS drivers came with configuration and link diagnostics utilities.
SCO UNIX version 1.00.00.00
UnixWare version 1.1
NCR's documentation stated that drivers for Banyan Vines 5.05 were available on Banyan's BBS, but it is unclear if they ever materialized
Volunteer-developed drivers
Linux has included support for ISA Classic WaveLAN cards since the 2.0.37 kernel, while full support for the PC card Classic WaveLAN cards came later. Status of support for MCA Classic Wavelan cards is unknown.
FreeBSD version 2.2.1-up and the Mach4 kernel have had native support for the ISA Classic WaveLAN cards for several years. OpenBSD and NetBSD do not natively support any of the Classic WaveLAN cards.
Several open-source projects, such as NdisWrapper and Project Evil, allow the use of NDIS drivers via a "wrapper". This lets non-Windows operating systems such as Linux, FreeBSD, and ZETA benefit from the near-universal availability of drivers written for the Windows platform.
Examples
Classic WaveLAN technology was available for the MCA, ISA/EISA, and PCMCIA interfaces:
915 MHz
Full-length ISA card
F connector
RG-59/U antenna cable
NCR 008-0126998 HOLI (HOst Lan Interface) chip
NCR 008-0126999 Icarus or NCR 008-0127211 Daedalus chip
Intel N82586 PHY controller chip
IRQ, boot ROM, and boot ROM base address configured with a four-position DIP switch block at top of card
NCR part number 601-0068991
AT&T part number 3399-F170
Half-length ISA card
SMB connector
NCR 008-0126998 HOLI chip
Intel N82586 PHY controller chip
IRQ, boot ROM, and boot ROM base address configured with a four-position DIP switch block at top of card
AT&T part number 3399-K602.
Full-length MCA card
F connector
NCR 008-0127216 HOLI chip
NCR 008-0126999 Icarus chip
NCR 8-127000A socketed DES encryption chip
Intel N82586 PHY controller chip
MCA id number 6A14.
PC card
Large EAM (External Antenna Module)
Intel i82593 PHY controller chip
AT&T part number 3399-K080
Compaq/DEC Roamabout part number: DEINA-AA.
2.4 GHz
Full-length ISA card
Fixed frequency
IRQ, boot ROM, and boot ROM base address configured with a four-position DIP switch block at top of card
Half-length ISA card
SMB connector
Selectable frequency
Symbios Logic 008-0126998 HOLI chip
Intel N82586 PHY controller chip
IRQ, boot ROM, and boot ROM base address configured with a four-position DIP switch block at top of card
AT&T part number 3399-K635.
Full-length MCA card
SMB connector
NCR 008-0127216 HOLI chip
NCR 008-0127211 Daedalus chip
NCR 8-127000A socketed DES encryption chip
Intel N82586 PHY controller chip
AT&T part number 3399-K066
MCA id number 6A14.
PC card - 2.4 GHz, selectable frequency, large EAM (External Antenna Module).
Intel N82593 PHY controller chip
AT&T part number: AT&T 3399-K624.
Lucent part number: LUC 3399-K644.
Compaq/DEC Roamabout part number: DEIRB-xx.
Options
DES encryption chip. Part number 3399-K972.
Boot ROM chip. Part number 3399-K973.
Citations
References
NCR WaveLAN PC-AT Installation and Operations manual, part number ST-2119-09, revision number 008-0127167 Rev. B, copyright 1990, 1991 by NCR Corporation.
External links
NCR's HTTP site with a selection of WaveLAN drivers and documentation
FTP mirror site of DEC's ftp server with a selection of RoamAbout drivers and documentation
Detailed analysis of WaveLAN ISA cards
Wayback machine archive of documentation on an NCR WaveLAN backbone built in Latvia
Wayback machine archive of Byte Magazine's review of WaveLAN
Wayback machine archive for Wavelan Classic products
Detailed analysis of Wavelan MCA cards
Wireless networking
Network access
NCR Corporation products

Impulse Tracker

Impulse Tracker is a multi-track music tracker (music sequencer). Originally released in 1995 by Jeffrey Lim as freeware with commercial extensions, it was one of the last tracker programs for the DOS platform.
In 2014, on its 20th anniversary, Impulse Tracker became open-source software and the source code was released.
History
Impulse Tracker was authored by Jeffrey "Pulse" Lim for the DOS/x86-PC platform. Impulse Tracker was coded in Assembly language, and the GUI was heavily influenced by that of Scream Tracker 3.
The first version was released in 1995 and included example music, provided by Jeffrey Lim and Chris Jarvis. The software was distributed as freeware, though extra features, such as support for stereo WAV output and a personalized version of the driver for co-editing songs over IPX networks, were provided for a fee. After the stereo WAV writer plugin was publicly pirated, the original author announced that he would discontinue development after version 2.14. The latest version was v2.14 Patch #5 released on April 8, 1999.
On February 16, 2014, Jeffrey Lim announced that he would release the complete source code of Impulse Tracker as part of its 20-year anniversary. On October 19, 2014, the first part of the source code was released on a Bitbucket repository. On December 25, 2014, the missing parts (sound drivers) were added and the code was officially released under the BSD license.
Functionality
As in most module editors, music is arranged on a grid of channels, each of which supports note-on and note-off instructions similar to MIDI. Impulse Tracker modules use the .IT file extension.
New Note Actions (NNAs) is a feature that handles commands received on the same channel as another instrument which is still playing. NNAs allow the user to customize the subsequent action:
Cut: The new instrument replaces the current instrument.
Continue: The old instrument continues to play using its ADSR curve.
Off: The old instrument begins the release section of its ADSR curve.
Fade: The old instrument fades out to 0 volume at a designated rate overriding the ADSR curve.
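The four actions above can be modelled in a short sketch. This is an illustrative model of the described behaviour, not Impulse Tracker's actual voice-allocation code, and the ADSR handling is reduced to flags.

```python
# Illustrative model of Impulse Tracker's four New Note Actions (NNAs): what
# happens to a voice still playing on a channel when a new note arrives.

CUT, CONTINUE, OFF, FADE = range(4)

class Voice:
    def __init__(self, note):
        self.note = note
        self.releasing = False   # has entered the release stage of its ADSR curve
        self.fading = False      # fading to zero volume at the instrument's fade rate
        self.active = True

def new_note(playing, note, nna):
    """Apply the NNA to the currently playing voice, then start the new note.
    Returns the list of voices now sounding on the channel."""
    voices = []
    if playing is not None and playing.active:
        if nna == CUT:
            playing.active = False        # new instrument replaces the old one
        elif nna == OFF:
            playing.releasing = True      # old voice enters its ADSR release
        elif nna == FADE:
            playing.fading = True         # old voice fades out, overriding ADSR
        # CONTINUE: the old voice keeps playing its ADSR curve unchanged
        if playing.active:
            voices.append(playing)
    voices.append(Voice(note))
    return voices
```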
Impulse Tracker supports hardware MIDI channels on the Gravis Ultrasound, InterWave and Sound Blaster 32 card families (provided enough RAM is available).
IT file format
The .IT file format is the format native to Impulse Tracker. It is similar to older formats such as .MOD, but features new additions such as new note actions which allow the user to customize subsequent actions on receiving commands from the same channel as the one playing.
Some player software supports the .ITZ format, which is a renamed zip file that contains a .IT file.
Compatible software
Other music-playing software that supports the IT file format includes Cowon jetAudio, Windows Media Player (x86 32-bit versions only), MikMod, ModPlug Tracker, OpenMPT, Renoise, Schism Tracker, ChibiTracker, XMPlay, TiMidity, VLC, Winamp, and XMMS.
Usage and impact
Erez Eizen of Infected Mushroom and Shiva Shidapu composed his first trance music on Impulse Tracker. Ian Stocker used IT with other software in his collaboration for the music in the Nintendo DS version of The Sims 2.
The video games Pocket Tanks and Grid Wars use the IT format for some of their songs. Various games by Epic Games such as the first Unreal and Unreal Tournament as well as Deus Ex used the IT format in a "UMX" container format.
The video game composer and demoscener Andrew Sega (Necros) used Impulse Tracker extensively in his demoscene days.
Trance producer Sean Tyas began his music production career using Impulse Tracker. Electronic rock musician Blue Stahli has revealed to have used Impulse Tracker and other trackers in the past.
Deadmau5's career began in the mid-1990s with a sound influenced by the chiptune and demoscene movements, produced with Impulse Tracker.
Machinedrum used Impulse Tracker for many years before switching to Ableton Live.
See also
ScreamTracker
FastTracker 2
References
External links
Sound examples
Pale Dreams (by Chris Jarvis) - included with an early release of Impulse Tracker (.IT module)
IndusTree's Homesick (ogg)
Come To Dreamland (MP3)(.IT)
Free audio software
Audio trackers
1995 software
Assembly language software
Formerly proprietary software
Software using the BSD license

Feedback

Feedback occurs when outputs of a system are routed back as inputs as part of a chain of cause-and-effect that forms a circuit or loop. The system can then be said to feed back into itself. The notion of cause-and-effect has to be handled carefully when applied to feedback systems.
History
Self-regulating mechanisms have existed since antiquity, and the idea of feedback had started to enter economic theory in Britain by the 18th century, but it was not at that time recognized as a universal abstraction and so did not have a name.
The first known artificial feedback device was a float valve, for maintaining water at a constant level, invented in 270 BC in Alexandria, Egypt. This device illustrated the principle of feedback: a low water level opens the valve, and the rising water then provides feedback into the system, closing the valve when the required level is reached. The cycle then recurs as the water level fluctuates.
Centrifugal governors were used to regulate the distance and pressure between millstones in windmills since the 17th century. In 1788, James Watt designed his first centrifugal governor following a suggestion from his business partner Matthew Boulton, for use in the steam engines of their production. Early steam engines employed a purely reciprocating motion, and were used for pumping water – an application that could tolerate variations in the working speed, but the use of steam engines for other applications called for more precise control of the speed.
In 1868, James Clerk Maxwell wrote a famous paper, "On governors", that is widely considered a classic in feedback control theory. This was a landmark paper on control theory and the mathematics of feedback.
The verb phrase to feed back, in the sense of returning to an earlier position in a mechanical process, was in use in the US by the 1860s, and in 1909, Nobel laureate Karl Ferdinand Braun used the term "feed-back" as a noun to refer to (undesired) coupling between components of an electronic circuit.
By the end of 1912, researchers using early electronic amplifiers (audions) had discovered that deliberately coupling part of the output signal back to the input circuit would boost the amplification (through regeneration), but would also cause the audion to howl or sing. This action of feeding back of the signal from output to input gave rise to the use of the term "feedback" as a distinct word by 1920.
The development of cybernetics from the 1940s onwards was centred around the study of circular causal feedback mechanisms.
Over the years there has been some dispute as to the best definition of feedback. According to cybernetician Ashby (1956), mathematicians and theorists interested in the principles of feedback mechanisms prefer the definition of "circularity of action", which keeps the theory simple and consistent. For those with more practical aims, feedback should be a deliberate effect via some more tangible connection.
Focusing on uses in management theory, Ramaprasad (1983) defines feedback generally as "...information about the gap between the actual level and the reference level of a system parameter" that is used to "alter the gap in some way". He emphasizes that the information by itself is not feedback unless translated into action.
Types
Positive and negative feedback
Positive feedback: If the signal feedback from output is in phase with the input signal, the feedback is called positive feedback.
Negative feedback: If the signal feedback is of opposite polarity or out of phase by 180° with respect to input signal, the feedback is called negative feedback.
As an example of negative feedback, the diagram might represent a cruise control system in a car, for example, that matches a target speed such as the speed limit. The controlled system is the car; its input includes the combined torque from the engine and from the changing slope of the road (the disturbance). The car's speed (status) is measured by a speedometer. The error signal is the departure of the speed as measured by the speedometer from the target speed (set point). This measured error is interpreted by the controller to adjust the accelerator, commanding the fuel flow to the engine (the effector). The resulting change in engine torque, the feedback, combines with the torque exerted by the changing road grade to reduce the error in speed, minimizing the road disturbance.
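The loop just described can be made concrete with a minimal simulation. The gains, time step, and torque figures below are invented for illustration and do not model any real vehicle.

```python
# Minimal simulation of the cruise-control example: a proportional controller
# adjusts engine torque to counter a constant road-grade disturbance.

def simulate(set_point=100.0, grade_torque=-5.0, gain=0.5, steps=200):
    speed = set_point
    for _ in range(steps):
        error = set_point - speed                 # speedometer vs. target
        throttle = gain * error                   # controller drives the effector
        speed += 0.1 * (throttle + grade_torque)  # plant: net torque changes speed
    return speed
```

A purely proportional controller like this settles with a steady-state error (against the constant grade, the speed converges near 90 rather than 100), one motivation for adding an integral term to the controller.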
The terms "positive" and "negative" were first applied to feedback prior to WWII. The idea of positive feedback was already current in the 1920s with the introduction of the regenerative circuit. Friis and Jensen (1924) described regeneration in a set of electronic amplifiers as a case where the "feed-back" action is positive in contrast to negative feed-back action, which they mention only in passing. Harold Stephen Black's classic 1934 paper first details the use of negative feedback in electronic amplifiers. According to Black:
According to Mindell (2002) confusion in the terms arose shortly after this:
Even prior to the terms being applied, James Clerk Maxwell had described several kinds of "component motions" associated with the centrifugal governors used in steam engines, distinguishing between those that lead to a continual increase in a disturbance or the amplitude of an oscillation, and those that lead to a decrease of the same.
Terminology
The terms positive and negative feedback are defined in different ways within different disciplines.
the altering of the gap between reference and actual values of a parameter, based on whether the gap is widening (positive) or narrowing (negative).
the valence of the action or effect that alters the gap, based on whether it has a happy (positive) or unhappy (negative) emotional connotation to the recipient or observer.
The two definitions may cause confusion, such as when an incentive (reward) is used to boost poor performance (narrow a gap). Referring to definition 1, some authors use alternative terms, replacing positive/negative with self-reinforcing/self-correcting, reinforcing/balancing, discrepancy-enhancing/discrepancy-reducing or regenerative/degenerative respectively. And for definition 2, some authors advocate describing the action or effect as positive/negative reinforcement or punishment rather than feedback.
Yet even within a single discipline an example of feedback can be called either positive or negative, depending on how values are measured or referenced.
This confusion may arise because feedback can be used for either informational or motivational purposes, and often has both a qualitative and a quantitative component. As Connellan and Zemke (1993) put it:
Limitations of negative and positive feedback
While simple systems can sometimes be described as one or the other type, many systems with feedback loops cannot be so easily designated as simply positive or negative, and this is especially true when multiple loops are present.
Other types of feedback
In general, feedback systems can have many signals fed back and the feedback loop frequently contain mixtures of positive and negative feedback where positive and negative feedback can dominate at different frequencies or different points in the state space of a system.
The term bipolar feedback has been coined to refer to biological systems where positive and negative feedback systems can interact, the output of one affecting the input of another, and vice versa.
Some systems with feedback can have very complex behaviors such as chaotic behaviors in non-linear systems, while others have much more predictable behaviors, such as those that are used to make and design digital systems.
Feedback is used extensively in digital systems. For example, binary counters and similar devices employ feedback where the current state and inputs are used to calculate a new state which is then fed back and clocked back into the device to update it.
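The counter example can be sketched in Python, with the combinational "next state" logic and the clocked register modelled as a simple loop (the 4-bit width is an arbitrary choice for illustration):

```python
# Sketch of feedback in a digital system: a binary counter whose next state
# is computed from the current state fed back through combinational logic,
# then "clocked" into the register on each tick.

def next_state(state, bits=4):
    """Combinational logic: increment, wrapping at 2**bits."""
    return (state + 1) % (2 ** bits)

def run(clock_ticks, bits=4):
    state = 0                             # register contents
    history = [state]
    for _ in range(clock_ticks):
        state = next_state(state, bits)   # output fed back as the next input
        history.append(state)
    return history
```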
Applications
Mathematics and dynamical systems
By using feedback properties, the behavior of a system can be altered to meet the needs of an application; systems can be made stable, responsive or held constant. It is shown that dynamical systems with a feedback experience an adaptation to the edge of chaos.
Biology
In biological systems such as organisms, ecosystems, or the biosphere, most parameters must stay under control within a narrow range around a certain optimal level under certain environmental conditions. Deviation from the optimal value of the controlled parameter can result from changes in the internal and external environments, and a change in some environmental conditions may also require that range to shift for the system to function. The value of the parameter to maintain is recorded by a reception system and conveyed to a regulation module via an information channel. An example of this is insulin oscillations.
Biological systems contain many types of regulatory circuits, both positive and negative. As in other contexts, positive and negative do not imply that the feedback causes good or bad effects. A negative feedback loop is one that tends to slow down a process, whereas the positive feedback loop tends to accelerate it. The mirror neurons are part of a social feedback system, when an observed action is "mirrored" by the brain—like a self-performed action.
Normal tissue integrity is preserved by feedback interactions between diverse cell types mediated by adhesion molecules and secreted molecules that act as mediators; failure of key feedback mechanisms in cancer disrupts tissue function.
In an injured or infected tissue, inflammatory mediators elicit feedback responses in cells, which alter gene expression, and change the groups of molecules expressed and secreted, including molecules that induce diverse cells to cooperate and restore tissue structure and function. This type of feedback is important because it enables coordination of immune responses and recovery from infections and injuries. During cancer, key elements of this feedback fail. This disrupts tissue function and immunity.
Mechanisms of feedback were first elucidated in bacteria, where a nutrient elicits changes in some of their metabolic functions.
Feedback is also central to the operations of genes and gene regulatory networks. Repressor (see Lac repressor) and activator proteins are used to create genetic operons, which were identified by François Jacob and Jacques Monod in 1961 as feedback loops. These feedback loops may be positive (as in the case of the coupling between a sugar molecule and the proteins that import sugar into a bacterial cell), or negative (as is often the case in metabolic consumption).
On a larger scale, feedback can have a stabilizing effect on animal populations even when profoundly affected by external changes, although time lags in feedback response can give rise to predator-prey cycles.
In zymology, feedback serves as regulation of the activity of an enzyme by its direct product or by a metabolite further downstream in the metabolic pathway (see Allosteric regulation).
The hypothalamic–pituitary–adrenal axis is largely controlled by positive and negative feedback, much of which is still unknown.
In psychology, the body receives a stimulus from the environment or internally that causes the release of hormones. Release of hormones then may cause more of those hormones to be released, causing a positive feedback loop. This cycle is also found in certain behaviour. For example, "shame loops" occur in people who blush easily. When they realize that they are blushing, they become even more embarrassed, which leads to further blushing, and so on.
Climate science
The climate system is characterized by strong positive and negative feedback loops between processes that affect the state of the atmosphere, ocean, and land. A simple example is the ice–albedo positive feedback loop whereby melting snow exposes more dark ground (of lower albedo), which in turn absorbs heat and causes more snow to melt.
Control theory
Feedback is extensively used in control theory, using a variety of methods including state space (controls), full state feedback, and so forth. In the context of control theory, "feedback" is traditionally assumed to specify "negative feedback".
The most common general-purpose controller using a control-loop feedback mechanism is a proportional-integral-derivative (PID) controller. Heuristically, the terms of a PID controller can be interpreted as corresponding to time: the proportional term depends on the present error, the integral term on the accumulation of past errors, and the derivative term is a prediction of future error, based on current rate of change.
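A discrete-time version of such a controller can be sketched directly from that description; the closure-based structure and any gain values are illustrative choices, not a canonical implementation.

```python
# A textbook discrete PID controller: the proportional term acts on the
# present error, the integral term on the accumulation of past errors, and
# the derivative term on the error's current rate of change.

def make_pid(kp, ki, kd, dt):
    integral = 0.0
    prev_error = None

    def step(set_point, measurement):
        nonlocal integral, prev_error
        error = set_point - measurement
        integral += error * dt                   # accumulation of past errors
        derivative = 0.0 if prev_error is None else (error - prev_error) / dt
        prev_error = error
        return kp * error + ki * integral + kd * derivative

    return step
```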
Education
For feedback in the educational context, see corrective feedback.
Mechanical engineering
In ancient times, the float valve was used to regulate the flow of water in Greek and Roman water clocks; similar float valves are used to regulate fuel in a carburettor and also used to regulate tank water level in the flush toilet.
The Dutch inventor Cornelius Drebbel (1572-1633) built thermostats (c1620) to control the temperature of chicken incubators and chemical furnaces. In 1745, the windmill was improved by blacksmith Edmund Lee, who added a fantail to keep the face of the windmill pointing into the wind. In 1787, Tom Mead regulated the rotation speed of a windmill by using a centrifugal pendulum to adjust the distance between the bedstone and the runner stone (i.e., to adjust the load).
The use of the centrifugal governor by James Watt in 1788 to regulate the speed of his steam engine was one factor leading to the Industrial Revolution. Steam engines also use float valves and pressure release valves as mechanical regulation devices. A mathematical analysis of Watt's governor was done by James Clerk Maxwell in 1868.
The Great Eastern was one of the largest steamships of its time and employed a steam powered rudder with feedback mechanism designed in 1866 by John McFarlane Gray. Joseph Farcot coined the word servo in 1873 to describe steam-powered steering systems. Hydraulic servos were later used to position guns. Elmer Ambrose Sperry of the Sperry Corporation designed the first autopilot in 1912. Nicolas Minorsky published a theoretical analysis of automatic ship steering in 1922 and described the PID controller.
Internal combustion engines of the late 20th century employed mechanical feedback mechanisms such as the vacuum timing advance but mechanical feedback was replaced by electronic engine management systems once small, robust and powerful single-chip microcontrollers became affordable.
Electronic engineering
The use of feedback is widespread in the design of electronic components such as amplifiers, oscillators, and stateful logic circuit elements such as flip-flops and counters. Electronic feedback systems are also very commonly used to control mechanical, thermal and other physical processes.
If the signal is inverted on its way round the control loop, the system is said to have negative feedback; otherwise, the feedback is said to be positive. Negative feedback is often deliberately introduced to increase the stability and accuracy of a system by correcting or reducing the influence of unwanted changes. This scheme can fail if the input changes faster than the system can respond to it. When this happens, the lag in arrival of the correcting signal can result in over-correction, causing the output to oscillate or "hunt". While often an unwanted consequence of system behaviour, this effect is used deliberately in electronic oscillators.
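The over-correction described above can be reproduced in a toy simulation: with prompt feedback the loop settles, while the same loop with a lag in the correcting signal over-corrects and "hunts" with growing amplitude. The gains and delay below are arbitrary illustrative choices.

```python
def run_loop(gain, delay, steps=400):
    """Discrete loop correcting a value toward a target of 1.0.

    `delay` models lag in the feedback path: the correction applied at
    each step is based on the error observed `delay` steps earlier.
    """
    target, value = 1.0, 0.0
    errors = []
    for _ in range(steps):
        errors.append(target - value)
        # before enough history exists, no correction is applied
        seen = errors[-1 - delay] if len(errors) > delay else 0.0
        value += gain * seen
    return errors

prompt_errors = run_loop(gain=0.5, delay=0)   # settles smoothly toward zero error
lagged_errors = run_loop(gain=0.8, delay=3)   # over-corrects and oscillates
```

The lagged loop's late-time error amplitude keeps growing, which is exactly the "hunting" behaviour exploited deliberately in oscillators.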
Harry Nyquist at Bell Labs derived the Nyquist stability criterion for determining the stability of feedback systems. An easier method, but less general, is to use Bode plots developed by Hendrik Bode to determine the gain margin and phase margin. Design to ensure stability often involves frequency compensation to control the location of the poles of the amplifier.
Electronic feedback loops are used to control the output of electronic devices, such as amplifiers. A feedback loop is created when all or some portion of the output is fed back to the input. A device is said to be operating open loop if no output feedback is being employed and closed loop if feedback is being used.
When two or more amplifiers are cross-coupled using positive feedback, complex behaviors can be created. These multivibrators are widely used and include:
astable circuits, which act as oscillators
monostable circuits, which can be pushed into a state, and will return to the stable state after some time
bistable circuits, which have two stable states that the circuit can be switched between
Negative feedback
Negative feedback occurs when the fed-back output signal has a relative phase of 180° with respect to the input signal (upside down). This situation is sometimes referred to as being out of phase, but that term also is used to indicate other phase separations, as in "90° out of phase". Negative feedback can be used to correct output errors or to desensitize a system to unwanted fluctuations. In feedback amplifiers, this correction is generally for waveform distortion reduction or to establish a specified gain level. A general expression for the gain of a negative feedback amplifier is the asymptotic gain model.
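The desensitizing effect can be quantified with the ideal closed-loop gain A_cl = A / (1 + A·β), where A is the open-loop gain and β the feedback fraction (the asymptotic gain model refines this). The numeric values below are arbitrary: with a large loop gain, even halving the open-loop gain barely shifts the closed-loop gain.

```python
def closed_loop_gain(a_open, beta):
    """Ideal closed-loop gain of a negative feedback amplifier."""
    return a_open / (1.0 + a_open * beta)

# beta = 0.01 targets a closed-loop gain near 100.
g1 = closed_loop_gain(100_000, 0.01)  # ~99.90
g2 = closed_loop_gain(50_000, 0.01)   # ~99.80, despite a 50% drop in open-loop gain
```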
Positive feedback
Positive feedback occurs when the fed-back signal is in phase with the input signal. Under certain gain conditions, positive feedback reinforces the input signal to the point where the output of the device oscillates between its maximum and minimum possible states. Positive feedback may also introduce hysteresis into a circuit. This can cause the circuit to ignore small signals and respond only to large ones. It is sometimes used to eliminate noise from a digital signal. Under some circumstances, positive feedback may cause a device to latch, i.e., to reach a condition in which the output is locked to its maximum or minimum state. This fact is very widely used in digital electronics to make bistable circuits for volatile storage of information.
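The hysteresis behaviour can be modelled in software as a comparator whose threshold shifts with the current output state (a Schmitt trigger). The threshold band and input samples below are arbitrary illustrative values.

```python
def make_schmitt(low, high):
    """Comparator with hysteresis: positive feedback moves the switching
    threshold depending on the current output state."""
    state = {"out": 0}

    def step(v):
        if state["out"] == 0 and v > high:
            state["out"] = 1
        elif state["out"] == 1 and v < low:
            state["out"] = 0
        return state["out"]

    return step

# Small wobbles inside the 0.4..0.6 band are ignored; only large
# excursions switch the output, cleaning up a noisy digital signal.
trigger = make_schmitt(low=0.4, high=0.6)
noisy = [0.50, 0.55, 0.45, 0.58, 0.70, 0.65, 0.55, 0.45, 0.35, 0.30]
outputs = [trigger(v) for v in noisy]
```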
The loud squeals that sometimes occur in audio systems, PA systems, and rock music are known as audio feedback. If a microphone is in front of a loudspeaker that it is connected to, sound that the microphone picks up comes out of the speaker, and is picked up by the microphone and re-amplified. If the loop gain is sufficient, howling or squealing at the maximum power of the amplifier is possible.
Oscillator
An electronic oscillator is an electronic circuit that produces a periodic, oscillating electronic signal, often a sine wave or a square wave. Oscillators convert direct current (DC) from a power supply to an alternating current signal. They are widely used in many electronic devices. Common examples of signals generated by oscillators include signals broadcast by radio and television transmitters, clock signals that regulate computers and quartz clocks, and the sounds produced by electronic beepers and video games.
Oscillators are often characterized by the frequency of their output signal:
A low-frequency oscillator (LFO) is an electronic oscillator that generates a frequency below ≈20 Hz. This term is typically used in the field of audio synthesizers, to distinguish it from an audio frequency oscillator.
An audio oscillator produces frequencies in the audio range, about 16 Hz to 20 kHz.
An RF oscillator produces signals in the radio frequency (RF) range of about 100 kHz to 100 GHz.
Oscillators designed to produce a high-power AC output from a DC supply are usually called inverters.
There are two main types of electronic oscillator: the linear or harmonic oscillator and the nonlinear or relaxation oscillator.
Latches and flip-flops
A latch or a flip-flop is a circuit that has two stable states and can be used to store state information. They are typically constructed using feedback that crosses over between two arms of the circuit, to provide the circuit with a state. The circuit can be made to change state by signals applied to one or more control inputs and will have one or two outputs. It is the basic storage element in sequential logic. Latches and flip-flops are fundamental building blocks of digital electronics systems used in computers, communications, and many other types of systems.
Latches and flip-flops are used as data storage elements. Such data storage can be used for storage of state, and such a circuit is described as sequential logic. When used in a finite-state machine, the output and next state depend not only on its current input, but also on its current state (and hence, previous inputs). It can also be used for counting of pulses, and for synchronizing variably-timed input signals to some reference timing signal.
Flip-flops can be either simple (transparent or opaque) or clocked (synchronous or edge-triggered). Although the term flip-flop has historically referred generically to both simple and clocked circuits, in modern usage it is common to reserve the term flip-flop exclusively for discussing clocked circuits; the simple ones are commonly called latches.
Using this terminology, a latch is level-sensitive, whereas a flip-flop is edge-sensitive. That is, when a latch is enabled it becomes transparent, while a flip-flop's output only changes on a single type (positive-going or negative-going) of clock edge.
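The level-sensitive versus edge-sensitive distinction can be modelled in a few lines. The class names and the step interface are illustrative, not any standard API.

```python
class DLatch:
    """Level-sensitive: output follows D whenever enable is high (transparent)."""
    def __init__(self):
        self.q = 0
    def step(self, d, enable):
        if enable:
            self.q = d
        return self.q

class DFlipFlop:
    """Edge-sensitive: output samples D only on a rising clock edge."""
    def __init__(self):
        self.q = 0
        self.prev_clk = 0
    def step(self, d, clk):
        if clk and not self.prev_clk:  # rising edge detected
            self.q = d
        self.prev_clk = clk
        return self.q

latch, ff = DLatch(), DFlipFlop()
# Clock/enable high for two steps while D changes: the latch is
# transparent and tracks D, the flip-flop keeps the value captured
# on the rising edge.
latch_out = [latch.step(d, en) for d, en in [(1, 1), (0, 1), (0, 0)]]
ff_out = [ff.step(d, clk) for d, clk in [(1, 1), (0, 1), (0, 0)]]
```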
Software
Feedback loops provide generic mechanisms for controlling the running, maintenance, and evolution of software and computing systems. Feedback loops are important models in the engineering of adaptive software, as they define the behaviour of the interactions among the control elements over the adaptation process, to guarantee system properties at run-time. Feedback loops and foundations of control theory have been successfully applied to computing systems. In particular, they have been applied to the development of products such as IBM's Universal Database server and IBM Tivoli. From a software perspective, the autonomic (MAPE: monitor-analyze-plan-execute) loop proposed by researchers at IBM is another valuable contribution to the application of feedback loops to the control of dynamic properties and the design and evolution of autonomic software systems.
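As an illustrative sketch (not IBM's implementation), the monitor-analyze-plan-execute cycle can be reduced to a loop whose hooks stand in for real system probes and effectors; the proportional adaptation policy and the simulated "load" metric are assumptions made for the example.

```python
def mape_loop(monitor, execute, target, iterations=50):
    """Toy autonomic control loop in the spirit of the MAPE cycle.

    `monitor` and `execute` are hypothetical hooks standing in for real
    probes and effectors; the analyze and plan phases are inlined as a
    simple proportional policy.
    """
    for _ in range(iterations):
        observed = monitor()           # Monitor: sense the system
        error = target - observed      # Analyze: compare to the goal
        if abs(error) < 1e-6:
            continue                   # within tolerance, no adaptation
        adjustment = 0.5 * error       # Plan: choose a correction
        execute(adjustment)            # Execute: act on the system

# Example: steer a simulated load metric toward a target of 10.
system = {"load": 0.0}

def apply_adjustment(delta):
    system["load"] += delta

mape_loop(lambda: system["load"], apply_adjustment, target=10.0)
```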
Software Development
User interface design
Feedback is also a useful design principle for designing user interfaces.
Video feedback
Video feedback is the video equivalent of acoustic feedback. It involves a loop between a video camera input and a video output, e.g., a television screen or monitor. Aiming the camera at the display produces a complex video image based on the feedback.
Human resource management
See also
References
Further reading
Katie Salen and Eric Zimmerman. Rules of Play. MIT Press. 2004. . Chapter 18: Games as Cybernetic Systems.
Korotayev A., Malkov A., Khaltourina D. Introduction to Social Macrodynamics: Secular Cycles and Millennial Trends. Moscow: URSS, 2006.
Dijk, E., Cremer, D.D., Mulder, L.B., and Stouten, J. "How Do We React to Feedback in Social Dilemmas?" In Biel, Eek, Garling & Gustafsson, (eds.), New Issues and Paradigms in Research on Social Dilemmas, New York: Springer, 2008.
External links
Control theory |
67703258 | https://en.wikipedia.org/wiki/Kim%20Bruce | Kim Bruce | Kim B. Bruce is an American computer scientist. He is the Reuben C. and Eleanor Winslow Professor of Computer Science at Pomona College, and was previously the Frederick Latimer Wells Professor of Computer Science at Williams College. He helped establish the computer science departments at both institutions. His work focuses on the design of programming languages.
Early life and education
Bruce attended Pomona College. He then received his doctorate from the University of Wisconsin–Madison.
Career
Bruce was the Frederick Latimer Wells Professor of Computer Science at Williams College for 28 years. He then moved to teach at his alma mater, Pomona.
References
External links
Faculty page at Pomona College
Year of birth missing (living people)
Living people
Pomona College faculty
American computer scientists
Williams College faculty
Pomona College alumni
University of Wisconsin–Madison alumni |
563056 | https://en.wikipedia.org/wiki/List%20of%20software%20patents | List of software patents | This is a list of software patents, which contains notable patents and patent applications involving computer programs (also known as a software patent). Software patents cover a wide range of topics and there is therefore important debate about whether such subject-matter should be excluded from patent protection. However, there is no official way of identifying software patents and different researchers have devised their own ways of doing so.
This article lists patents relating to software which have been the subject of litigation or have achieved notoriety in other ways. Notable patent applications are also listed and comparisons made between corresponding patents and patent applications in different countries. The patents and patent applications are categorised according to the subject matter of the patent or the particular field in which the patent had an effect that brought it into the public view.
Business methods
Data compression
Data compression in general
- (Main article: Stac Electronics)
- also granted as - now expired
Stac Electronics sued Microsoft for patent infringement when Microsoft introduced the DoubleSpace data compression scheme into MS-DOS. Stac was awarded $120 million by a jury in 1994 and Microsoft was ordered to recall versions of MS-DOS with the infringing technology.
Audio compression
- (Main article: MP3)
One of several patents covering the MP3 format owned by the Fraunhofer Society which led to the development of the Ogg Vorbis format as an alternative to MP3.
- (Main article: Alcatel-Lucent v. Microsoft)
Two patents owned by Alcatel-Lucent relating to MP3 technology under which they sued Microsoft for $1.5 billion. Microsoft thought they had already licensed the technology from Fraunhofer, and this case illustrates one of the basic principles of patents that a license does not necessarily permit the licensee to work the technology, but merely prevents the licensee from being sued by the licensor.
Image compression
(Main article: GIF)
Unisys's patent on LZW compression, a fundamental part of the widely used GIF graphics format.
and its EP equivalent
(Main article: Forgent Networks)
Forgent Networks claimed this patent, granted in 1987, covered the JPEG image compression format. The broadest claims of the US patent were found to be invalid in 2005 following re-examination by the US Patent and Trademark Office.
This patent, owned by Lizardtech, Inc., was the subject of infringement proceedings against companies including Earth Resource Mapping, Inc. However, Lizardtech lost the trial on the grounds that an important part of their invention was the step of "maintaining updated sums of discrete wavelet transform coefficients from the discrete tile image to form a seamless discrete wavelet transform of the image". Claim 21 of the patent lacked this feature and was therefore obvious. The remaining claims contained this feature, but were not infringed by ERM. Internet buzz suggested the patent covered the JPEG 2000 image compression format but the additional feature of the valid claims appears not to be a JPEG 2000 requirement.
Video compression
Data encryption
Gaming systems
(Main article: Menashe v. William Hill)
A patent for a gaming system that has particular importance regarding Internet usage. A server running the game was located outside the UK but could be used within the UK. The Court of Appeal of England and Wales judged that the patent was being infringed by virtue of the sale of CDs in the UK containing software intended to put the invention into effect in the UK.
Image processing
also granted as - (Main article: Photographic mosaic)
Robert Silver's patent on his photographic mosaicing technique. The UK part of the European patent is currently undergoing revocation proceedings, the results of which may be important in comparing the practice of the UK Patent Office with that of the European Patent Office.
(Main article: Shadow volume)
A patent covering the technique commonly known as Carmack's Reverse
Internet tools
Fair division
- (Main article: Adjusted winner procedure)
An algorithm to divide n divisible goods between two parties as fairly as possible.
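As a hedged sketch of an adjusted-winner-style division (an illustrative reading, not the patented procedure verbatim), the idea can be coded for two parties with strictly positive point valuations: each good initially goes to whoever values it more, and the leading party transfers value back, splitting at most one good, until the point totals are equal.

```python
def adjusted_winner(a_vals, b_vals):
    """Illustrative adjusted-winner-style division between two parties.
    Each list holds one party's positive point valuations of the goods;
    the return value is party A's fractional share of each good."""
    n = len(a_vals)
    a_share = [1.0 if a_vals[g] > b_vals[g] else 0.0 for g in range(n)]

    def totals():
        a = sum(a_vals[g] * a_share[g] for g in range(n))
        b = sum(b_vals[g] * (1.0 - a_share[g]) for g in range(n))
        return a, b

    a, b = totals()
    if a < b:  # relabel so that the leader is always party A
        return [1.0 - s for s in adjusted_winner(b_vals, a_vals)]

    # Transfer A's goods back in increasing order of valuation ratio
    # (closest to 1 first), splitting the last good if needed.
    held = sorted((g for g in range(n) if a_share[g] == 1.0),
                  key=lambda g: a_vals[g] / b_vals[g])
    for g in held:
        a, b = totals()
        if a <= b:
            break
        # keep fraction x of good g with A so that the totals equalize
        x = 1.0 - (a - b) / (a_vals[g] + b_vals[g])
        a_share[g] = max(0.0, x)
    return a_share

shares = adjusted_winner([60, 40], [20, 80])
# A keeps good 0 outright and a sliver of good 1; both end up even.
```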
Search engines
(Main article: Yahoo! Search Marketing)
A patent relating to pay-per-click Internet search engine advertising. Originally filed by Goto.com, Inc. (renamed Overture Services, Inc.), Google and FindWhat were both sued for infringement prior to Overture's acquisition by Yahoo!
Telecommunications
Washington Research Foundation asserted this patent in December 2006 against Matsushita (owners of the Panasonic brand), Nokia and Samsung. Granted in October 2006 (originating from a 1996 filing) it relates to dynamically varying the passband bandwidth of a tuner. If the claims had been upheld, CSR plc (previously known as Cambridge Silicon Radio), who supply the defendants with Bluetooth chips, could have lost market share to Broadcom who already had a license under the patent.
One of three patents granted in respect of Karmarkar's algorithm, which relates to linear programming problems. Claim 1 of this patent suggests the algorithm should be applied to the allocation of telecommunication transmission facilities among subscribers.
User interfaces
and related to
Immersion Corporation sued Sony under these US patents in 2002. They relate to force-feedback technology such as that used in PlayStation 2 DualShock controllers. Sony lost the case and Immersion were awarded $90.7 million, an injunction (stayed pending appeal), and a compulsory license. The claims of the related European patent application require the device to be attached to a body part and were, in any event, refused by the examining division of the European Patent Office for lacking an inventive step.
The patent relates to a progress bar. Filed in 1989, it was highlighted in 2005 by Richard Stallman in New Scientist and The Guardian as an example of a software patent granted by the European Patent Office, that would impede software development and would be dangerous. The claims as granted describe a process of breaking down a task to be performed by a computer into a number of equal task units and updating a display each time a unit is completed and therefore does not cover progress bars which operate in different ways.
Miscellaneous
Notable due to proprietor hyperbole
Owned at various times by Encyclopædia Britannica, Inc. and Compton's NewMedia, Inc., this patent was granted in August 1993. Just a few months later, in November 1993, Compton's announced that "Everything that is now multimedia and computer-based utilizes this invention" and tried to use the patent to ensure that everyone licensed their software. Although a cursory review of the granted claims showed this statement to be mere hyperbole, there was nonetheless an outcry from the industry and the patent was revoked following re-examination.
and
Patents owned by Scientigo and claimed by them to cover the markup language XML, a notion rejected by patent attorneys and other commentators including Microsoft.
Notable due to misconception
- Emoticon keyboard button patent application.
Early in 2006, rumours circulated on the Internet that Cingular Wireless had patented the emoticon and, in particular, had patented the concept of using emoticons on mobile phones. This resulted in a great deal of anger directed at the US Patent Office that such patents should never have been granted. Ultimately, it was pointed out that it was only a published patent application, not a granted patent, and that the claims of the patent application actually related to a mobile phone with a dedicated button for inserting emoticons.
This patent application is currently being examined by the US patent office. All of the originally filed claims were rejected on 22 February 2007 as being known or obvious, although the rejection was not final. Examination of the corresponding European patent application also suggested that the claims lacked an inventive step, and the application lapsed in 2010.
This design patent was granted to Google on 1 September 2009 for the simple and clean appearance of their homepage from five years earlier. Referred to in the media as a patent, it received criticism for not being as original as Google's web search technology and was held up as evidence that the US patent system was broken. The New York Post said that Google now had the right to sue anyone who used a similarly no-frills website. However, a "design patent" is not the same as a "patent" (sometimes referred to as a "utility patent") since it provides only limited protection for ornamental appearance. Design patents are typically hard to infringe and even Google's own homepage at the time the design patent was granted was almost certainly different enough from the design patent that it did not infringe it.
References
Software patent law
Software
Software patents |
61248420 | https://en.wikipedia.org/wiki/Multicast%20routing | Multicast routing | Multicast routing is one of the routing protocols in IP networking.
There are several multicast routing protocols supporting communications where data transmission is addressed to a group of destination computers simultaneously: Multicast Source Discovery Protocol, Multicast BGP, Protocol Independent Multicast.
Overview
Multicast routing is a method of transmitting data to all subscribers registered in a group in a single transmission, unlike unicast routing (e.g., OSPF, RIP), which transmits data one-to-one.
To implement multicast routing, the IGMP protocol (for registering subscribers in a group) and a multicast routing protocol (e.g., reverse-path forwarding, PIM-SM) for controlling traffic are required for multicast transmission. IP multicast is a technique for one-to-many communication over an IP network and covers part of the common multicast routing protocols; the term also describes IP multicast software (e.g., VideoLAN, a PIM module for the Quagga Routing Suite, UFTP, etc.). Multicast routing refers to a specific and broad range of layer-3 routing protocols for multicast, as described in RFC 5110.
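A minimal sketch of the reverse-path forwarding check mentioned above: a router accepts (and floods downstream) a multicast packet only if it arrived on the interface the router would itself use to reach the packet's source, which keeps the distribution tree loop-free. The flat route table here is a simplified stand-in for a real longest-prefix-match lookup.

```python
def rpf_check(unicast_routes, source, arrival_iface):
    """Accept a multicast packet only if it arrived on the interface
    used to reach its source; otherwise drop it (loop prevention).
    `unicast_routes` maps source prefixes to outgoing interfaces and
    is a hypothetical stand-in for a real routing-table lookup."""
    expected = unicast_routes.get(source)
    return expected is not None and expected == arrival_iface

routes = {"10.1.0.0/16": "eth0", "10.2.0.0/16": "eth1"}
ok = rpf_check(routes, "10.1.0.0/16", "eth0")      # accepted: correct interface
looped = rpf_check(routes, "10.1.0.0/16", "eth1")  # dropped: wrong interface
```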
Routing mechanism
A multicast routing protocol is a mechanism for constructing a loop-free shortest path from a source host that sends data to the multiple destinations that receive the data. IPv4 uses the Class D address range (224.0.0.0–239.255.255.255) for multicast.
IPv6 multicast provides the corresponding IPv4 feature along with new IPv6 capabilities, allowing a host to send a single data stream to a subset of all hosts (group transmission) concurrently. There are four well-known IPv6 multicast addresses: ff02::1 (all IPv6 devices), ff02::2 (all IPv6 routers), ff02::5 (all OSPFv3 routers), and ff02::a (all EIGRP (IPv6) routers).
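Both the IPv4 Class D range and the IPv6 ff00::/8 range can be checked with Python's standard `ipaddress` module; the sample addresses below include the well-known groups listed above plus an ordinary unicast address for contrast.

```python
import ipaddress

def is_multicast(addr):
    """True if addr is a multicast address: IPv4 Class D
    (224.0.0.0-239.255.255.255) or IPv6 ff00::/8."""
    return ipaddress.ip_address(addr).is_multicast

checks = {
    "224.0.0.1": is_multicast("224.0.0.1"),            # IPv4 all-hosts group
    "239.255.255.255": is_multicast("239.255.255.255"),# top of the Class D range
    "192.0.2.1": is_multicast("192.0.2.1"),            # ordinary unicast address
    "ff02::1": is_multicast("ff02::1"),                # all IPv6 devices
}
```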
The Multicast tree classification
There are two types of Multicast trees which are the Source based tree and Group Shared tree.
Source based tree (SBT)
It uses the SSM (Source-Specific Multicast) model. The maximum end-to-end delay is short, but scalability is poor (it is difficult to apply to large networks). Supported protocols include DVMRP, MOSPF, and PIM-DM.
Group Shared tree
It is a core-based tree: one router in the network is selected as the root (core) and information is transmitted through it. The maximum delay in the tree is longer than in a source-based tree. The core router manages all the information, while the remaining routers manage the direction of the core and the multicast information requested by their neighboring routers. It has good scalability (applicable to large networks). Supported protocols include CBT, PIM-SM, and others.
See also
Anycast
Any-source multicast
Broadcast address
Comparison of streaming media systems
Content delivery network
Flooding algorithm
Network speaker
Internet television
List of streaming media systems
Mbone, experimental multicast backbone network
Multicast address
Multicast lightpaths
Non-broadcast multiple-access network
Packet forwarding
Push technology
Session Announcement Protocol
Source-specific multicast
Broadcast, Unknown-Unicast and Multicast traffic
References
Internet architecture
Internet broadcasting
Television terminology |
236840 | https://en.wikipedia.org/wiki/Ardour%20%28software%29 | Ardour (software) | Ardour is a hard disk recorder and digital audio workstation application that runs on Linux, macOS, FreeBSD and Microsoft Windows. Its primary author is Paul Davis, who was also responsible for the JACK Audio Connection Kit. It is intended as a digital audio workstation suitable for professional use.
It is free software, released under the terms of the GPL-2.0-or-later.
Features
Recording
Ardour's recording abilities are limited only by the hardware it is run on; there are no built-in limits in its capabilities. When recording on top of existing media, it can perform latency compensation, positioning recorded material where it was intended to be when recording it. Monitoring options include self-monitoring, external hardware support (dependent on sound card support), and specialized hardware support (e.g. JACK Audio Connection Kit). Self-monitoring makes it possible to apply plug-in effects while recording. Using JACK, Ardour can record concurrently from both the audio card and compatible software.
Mixing
Ardour supports an arbitrary number of tracks and buses through an "anything to anywhere" routing system. All gain, panning and plug-in parameters can be automated. All sample data is mixed and maintained internally in 32-bit floating point format.
Editing
Ardour supports dragging, trimming, splitting and time-stretching recorded regions with sample-level resolution, and supports layer regions. It includes a crossfade editor and beat detection, unlimited undo/redo, and a "snapshot" feature for saving the current state of a session to a file.
Mastering
Ardour can be used as an audio mastering environment. Its integration with the JACK Audio Connection Kit makes it possible to use mastering tools such as JAMin. Its mixer's output can be sent to third-party audio processing software for processing and/or recording. It can also export TOC and CUE files for creating audio CDs.
Compatibility
Ardour attempts to adhere to industry standards, such as SMPTE/MTC, Broadcast Wave Format, MIDI Machine Control and XML.
It has been tested on Linux, x86-64, x86, PowerPC and ARM (for at least version 3) architectures; Solaris, macOS on Intel and PowerPC, Windows on Intel architectures and FreeBSD. It takes advantage of all of these systems' multiprocessor, multicore SMP and real-time features.
Pre-built binaries of Ardour 6.x are available for Linux, macOS and Windows.
Plug-ins
Ardour relies on plug-ins for many features, from audio effects processing to dynamic control. It supports the following plugin format and platform combinations: LV2 on Linux, FreeBSD, macOS and Windows; AudioUnits on macOS; Steinberg's VST2 on Linux, macOS and Windows; LADSPA on Linux, FreeBSD, macOS and Windows. It is theoretically possible to use plugins created for Windows in the VST2 format on Linux with the help of Wine, but the project team does not recommend it.
Since version 6.5, it also supports VST3 plugins on all supported platforms.
Import and export
Ardour can export whole sessions or parts of sessions, and import audio clips into sessions from more than 30 different audio file formats, using its built-in audio file database manager, or directly from an ordinary file browser.
Supporting companies and future
The SAE Institute provided corporate support for Ardour until February 2009, an initiative for providing a more integrated experience on Mac OS X and the development of a simpler version for students and others new to audio processing.
Solid State Logic employed Paul Davis to work full-time on Ardour during the development of version 2, until the end of 2006.
Harrison Audio Consoles has supported the Ardour project since early 2005. Harrison's "Mixbus" DAW and their destructive film dubber, the Xdubber, are based on Ardour. Mixbus extends Ardour to add Harrison's own DSP and a more console-like workflow. The Xdubber is a customizable platform for enterprise-class digital audio workstation (DAW) users.
Waves Audio privately supported Ardour development in 2009. It also developed the Waves Track Live software in collaboration with Ardour developers, with most of the source code changes becoming part of Ardour's codebase.
See also
JACK Audio Connection Kit, a real-time low latency audio server.
Comparison of digital audio editors
Comparison of free software for audio
Linux audio software
List of free and open source digital audio workstation software
List of music software
References
Articles
External links
Official website
Manual
2005 software
Audio editing software that uses GTK
Audio software that uses GTK
Audio software with JACK support
Cross-platform free software
Digital audio editors for Linux
Digital audio workstation software
Free audio editors
Free music software
Free software programmed in C++
MacOS audio editors
Software that uses GStreamer |
3894939 | https://en.wikipedia.org/wiki/Ekiga | Ekiga | Ekiga (formerly called GnomeMeeting) is a VoIP and video conferencing application for GNOME and Microsoft Windows. It is distributed as free software under the terms of the GNU GPL-2.0-or-later. It was the default VoIP client in Ubuntu until October 2009, when it was replaced by Empathy. Ekiga supports both the SIP and H.323 (based on OPAL) protocols and is fully interoperable with any other SIP compliant application and with Microsoft NetMeeting. It supports many high-quality audio and video codecs.
Ekiga was initially written by Damien Sandras in order to graduate from the University of Louvain (UCLouvain). It is currently developed by a community-based team led by Sandras. The logo was designed based on his concept by Andreas Kwiatkowski.
Ekiga.net was also a free and private SIP registrar, which enabled its members to originate and terminate (receive) calls from and to each other directly over the Internet.
The service was discontinued at the end of 2018.
Features
Features of Ekiga include:
Integration
Ekiga is integrated with a number of different software packages and protocols such as LDAP directories registration and browsing along with support for Novell Evolution so that contacts are shared between both programs and zeroconf (Apple Bonjour) support. It auto-detects devices including USB, ALSA and legacy OSS soundcards, Video4linux and FireWire camera.
User interface
Ekiga supports a Contact list based interface along with Presence support with custom messages. It allows for the monitoring of contacts and viewing call history along with an addressbook, dialpad, and chat window. SIP URLs and H.323/callto support is built-in along with full-screen videoconferencing (accelerated using a graphics card).
Technical features
Call forwarding on busy, no answer, always (SIP and H.323)
Call transfer (SIP and H.323)
Call hold (SIP and H.323)
DTMF support (SIP and H.323)
Basic instant messaging (SIP)
Text chat (SIP and H.323)
Register with several registrars (SIP) and gatekeepers (H.323) simultaneously
Ability to use an outbound proxy (SIP) or a gateway (H.323)
Message waiting indications (SIP)
Audio and video (SIP and H.323)
STUN support (SIP and H.323)
LDAP support
Audio codec algorithms: iLBC, GSM 06.10, MS-GSM, G.711 A-law, G.711 μ-law, G.726, G.721, Speex, G.722, CELT (also G.723.1, G.728, G.729, GSM 06.10, GSM-AMR, G.722.2 [GSM‑AMR-WB] using Intel IPP)
Video codec algorithms: H.261, H.263+, H.264, Theora, MPEG-4
History
Ekiga was originally started over Christmas in the year 2000. Originally written by Damien Sandras, it grew to be maintained by a team of nine regular contributors by 2011. Sandras wanted to create a NetMeeting clone for Linux as his graduating project at UCLouvain.
Ekiga was referred to as GnomeMeeting until 2004, when a name change was thought necessary by the developers. Concerns were cited that the original name was associated with a dead Microsoft product called NetMeeting, and not always recognized as VoIP software. It was also proposed that some people assumed they needed to run GNOME to run GnomeMeeting, which was no longer the case. Eventually, on January 18, 2006, the name Ekiga was chosen, based on an old way of communicating between villages in Cameroon. Around that time the direction of the software project was changed and it turned into a SIP client.
The following shows major version releases:
March 2004 – Version 1.0 under the name GnomeMeeting
March 2006 – Version 2.0 was released under the name Ekiga; it was bundled with GNOME 2.14
April 2007 – Version 2.0.9 was the first version to include support for Microsoft Windows
September 2008 – Version 3.0.0
March 2009 – First release of the 3.2.x series. Added support for G.722 audio and unified the support for H.263 and H.263+.
November 2012 – Ekiga 4.0.0, "The Victory Release", a major release with many improvements.
February 2013 – Ekiga 4.0.1, a minor release with further improvements.
2015 – Ekiga 5.0, a new version with GTK+ 3 and new codecs, announced.
See also
Comparison of VoIP software
Blink
QuteCom
Jitsi
List of free and open-source software packages
Twinkle
SFLphone
Tox
References
External links
VoIP software
Free VoIP software
Teleconferencing
Groupware
Social networking services
Online chat
Free instant messaging clients
GNOME Applications
Videotelephony
2000 software
Videoconferencing software that uses GTK
Instant messaging clients that use GTK
Voice over IP clients that use GTK
Université catholique de Louvain |
4173172 | https://en.wikipedia.org/wiki/Mobile%20app%20development | Mobile app development | Mobile app development is the act or process by which a mobile app is developed for mobile devices, such as personal digital assistants, enterprise digital assistants or mobile phones. These software applications are designed to run on mobile devices, such as a smartphone or tablet computer. These applications can be pre-installed on phones during manufacturing, or delivered as web applications using server-side or client-side processing (e.g., JavaScript) to provide an "application-like" experience within a web browser. Application software developers must also consider a long array of screen sizes, hardware specifications, and configurations because of intense competition in mobile software and changes within each of the platforms. Mobile app development has been steadily growing, in revenues and jobs created. A 2013 analyst report estimated there were 529,000 direct app economy jobs within the then-28-member EU (including the UK), 60 percent of which were mobile app developers.
As part of the development process, mobile user interface (UI) design is also essential in the creation of mobile apps. Mobile UI considers constraints, contexts, screen, input, and mobility as outlines for design. The user is often the focus of interaction with their device, and the interface entails components of both hardware and software. User input allows for the users to manipulate a system, and device's output allows the system to indicate the effects of the users' manipulation. Mobile UI design constraints include limited attention and form factors, such as a mobile device's screen size for a user's hand(s). Mobile UI contexts signal cues from user activity, such as location and scheduling that can be shown from user interactions within a mobile app. Overall, mobile UI design's goal is mainly for an understandable, user-friendly interface. Functionality is supported by mobile enterprise application platforms or integrated development environments (IDEs).
Mobile UIs, or front-ends, rely on mobile back-ends to support access to enterprise systems. The mobile back-end facilitates data routing, security, authentication, authorization, working off-line, and service orchestration. This functionality is supported by a mix of middleware components including mobile app server, mobile backend as a service (MBaaS), and service-oriented architecture (SOA) infrastructure.
Platform
The platform organizations needed to develop, deploy and manage mobile apps are made from many components and tools which allow a developer to write, test and deploy applications into the target platform environment.
Front-end development tools
Front-end development tools are focused on the user interface and user experience (UI-UX) and provide the following abilities:
UI design tools
SDKs to access device features
Cross-platform accommodations/support
Notable tools are listed below.
Back-end servers
Back-end tools pick up where the front-end tools leave off, and provide a set of reusable services that are centrally managed and controlled and provide the following abilities:
Integration with back-end systems
User authentication-authorization
Data services
Reusable business logic
Available tools are listed below.
Security add-on layers
With bring your own device (BYOD) becoming the norm within more enterprises, IT departments often need stop-gap, tactical solutions that layer atop existing apps, phones, and platform components. Features include:
App wrapping for security
Data encryption
Client actions
Reporting and statistics
System software
Many system-level components are needed to have a functioning platform for developing mobile apps.
Criteria for selecting a development platform usually include the target mobile platforms, existing infrastructure and development skills. When targeting more than one platform with cross-platform development, it is also important to consider the impact of the tool on the user experience. Performance is another important criterion, as research on mobile apps indicates a strong correlation between application performance and user satisfaction. Along with performance and other criteria, the availability of the technology and the project's requirements may drive the choice between native and cross-platform environments. To aid the choice between native and cross-platform environments, some guidelines and benchmarks have been published. Typically, cross-platform environments are reusable across multiple platforms, leveraging a native container while using HTML, CSS, and JavaScript for the user interface. In contrast, native environments are each targeted at a single platform. For example, Android development occurs in the Eclipse IDE using Android Developer Tools (ADT) plugins, Apple iOS development occurs in the Xcode IDE with Objective-C and/or Swift, and Windows and BlackBerry each have their own development environments.
Mobile app testing
Mobile applications are first tested within the development environment using emulators and later subjected to field testing. Emulators provide an inexpensive way to test applications on mobile phones to which developers may not have physical access. The following are examples of tools used for testing applications across the most popular mobile operating systems.
Google Android Emulator - an Android emulator that is patched to run on a Windows PC as a standalone app, without having to download and install the complete and complex Android SDK. It can be installed and Android compatible apps can be tested on it.
The official Android SDK Emulator - a mobile device emulator which mimics all of the hardware and software features of a typical mobile device (without the calls).
TestiPhone - a web browser-based simulator for quickly testing iPhone web applications. This tool has been tested and works using Internet Explorer 7, Firefox 2 and Safari 3.
iPhoney - gives a pixel-accurate web browsing environment and is powered by Safari. It can be used while developing web sites for the iPhone. It is not an iPhone simulator but is instead designed for web developers who want to create 320-by-480 (or 480-by-320) websites for use with the iPhone. iPhoney will only run on OS X 10.4.7 or later.
BlackBerry Simulator - There are a variety of official BlackBerry simulators available to emulate the functionality of actual BlackBerry products and test how the device software, screen, keyboard and trackwheel will work with the application.
Windows UI Automation - Testing applications that use the Microsoft UI Automation technology requires Windows Automation API 3.0. It is pre-installed on Windows 7, Windows Server 2008 R2 and later versions of Windows. On other operating systems, it can be installed using Windows Update or downloaded from the Microsoft website.
MobiOne Developer - a mobile Web integrated development environment (IDE) for Windows that helps developers to code, test, debug, package and deploy mobile Web applications to devices such as iPhone, BlackBerry, Android, and the Palm Pre. MobiOne Developer was officially declared End of Life by the end of 2014.
Tools include
eggPlant: A GUI-based automated test tool for mobile app across all operating systems and devices.
Ranorex: Test automation tools for mobile, web and desktop apps.
Testdroid: Real mobile devices and test automation tools for testing mobile and web apps.
Patents
Many patent applications are pending for new mobile phone apps. Most of these are in the technological fields of business methods, database management, data transfer, and operator interface.
On 31 May 2011, Lodsys asserted two of its four patents: U.S. Patent No. 7,620,565 ("the '565 patent") on a "customer-based design module" and U.S. Patent No. 7,222,078 ("the '078 patent") on "Methods and Systems for Gathering Information from Units of a Commodity Across a Network." against the following application developers:
Combay
The Iconfactory
Illusion Labs
Shovelmate
Quickoffice
Richard Shinderman of Brooklyn, New York
Wulven Game Studios of Hanoi, Vietnam
See also
List of digital distribution platforms for mobile devices
List of mobile software distribution platforms
Lazy user model
Mobile application management
Mobile backend as a service
Mobile business intelligence
Mobile computing
Mobile-device testing
Mobile enterprise application platform
Mobile games
Mobile interaction
Mobile marketing
Mobile web development
Mobile workflow
Multi-channel app development
MoSoSo, mobile social software
On-Device Portal
WURFL and WALL
JQuery Mobile
HTML5
References
fr:Application mobile
pl:Aplikacje mobilne
fi:Mobiiliohjelmisto |
34627763 | https://en.wikipedia.org/wiki/United%20States%20v.%20Kramer | United States v. Kramer | United States v. Neil Scott Kramer, 631 F.3d 900 (8th Cir. 2011), is a court case in which a cellphone was used to coerce a minor into engaging in sex with an adult. Central to the case was whether a cellphone constituted a computer device. Under United States law, specifically U.S.S.G. § 2G1.3(b)(3), the use of computers to persuade minors for illicit ends carries extra legal ramifications. The opinion written by the United States Court of Appeals for the Eighth Circuit begins by citing Apple co-founder Steve Wozniak's musing that "Everything has a computer in it nowadays." Ultimately, the court found that a cell phone can be considered a computer if "the phone perform[s] arithmetic, logical, and storage functions," paving the way for harsher consequences for criminals engaging with minors over cellphones.
Background
In April 2008, a 15-year-old female Missouri resident inadvertently sent a text message to Kramer, an adult in Louisiana. Kramer replied to the message, which began a seven-month period in which he and the victim regularly corresponded with one another through text messaging. During their communications, the victim revealed to Kramer that she was 15 years of age.
On November 10, 2008, the victim contacted Kramer and the two arranged to meet. The pair drove to the Comfort Inn in Willow Springs, Missouri, where Kramer "plied the victim with illegal narcotics and then engaged in sexual intercourse with her." The following morning, Kramer and the victim drove to Kramer's trailer in Violet, Louisiana. Upon their arrival, Kramer gave the victim more narcotics and again had sexual intercourse with her. On Friday November 14, Kramer took the victim to a bar in Poydras, Louisiana. After several alcoholic drinks, the victim went to the restroom where she was able to text the police. Kramer was arrested in the bar's parking lot, while the victim was eventually reunited with her family.
In court, Kramer was charged with transporting a minor across state lines in order to engage in illegal sexual activity, a violation of 18 U.S.C. § 2423(a). The state also sought a harsher sentencing for Kramer for using his cellphone to make voice calls and send text messages to the victim. In particular, the state argued that a cellphone falls under the definition of a computer under U.S.S.G.§ 2G1.3(b)(3), which states that "the use of a computer or an interactive computer service to ... persuade, induce, entice, coerce, or facilitate the travel of, the minor to engage in prohibited sexual conduct" will result in longer prison sentences.
Court findings
The district court concluded that Kramer's phone did constitute a "computer", and applied a two-level enhancement, see U.S. Sentencing Guidelines Manual § 2G1.3(b)(3) (2009), for its use to facilitate the offense, and sentenced Kramer to 168 months' imprisonment. Without the enhancement, the district court would have sentenced Kramer to 140 months' imprisonment. The case was appealed, where the United States Court of Appeals for the Eighth Circuit upheld the lower court's ruling.
At the heart of the case was whether a cellphone constituted a computer. The Court of Appeals defined a computer to have the meaning given by 18 U.S.C. § 1030(e)(1) (the Computer Fraud and Abuse Act), which states a computer is an:
electronic, magnetic, optical, electrochemical, or other high speed data processing device performing logical, arithmetic, or storage functions, and includes any data storage facility or communications facility directly related to or operating in conjunction with such device
The Court of Appeals acknowledged that the language of 18 U.S.C. § 1030(e)(1) is "exceedingly broad", and that a "basic cellular phone might not easily fit within the colloquial definition of computer." However, the court stated that it was bound "not by the common understanding of that word, but by the specific--if broad--definition set forth in § 1030(e)(1)." The court left the potential for correcting the statute as a matter for the United States Sentencing Commission or for Congress to address.
Kramer's first contention was that the district court erred in applying the enhancement "because a cellular telephone, when used only to make voice calls and send text messages, cannot be a computer as defined in 18 U.S.C. § 1030(e)(1)." In effect, Kramer argued that United States v. Lay "implicitly distinguished [the] use of a cellular telephone from use of a traditional computer"; thus, the enhancement should apply only when a device is used to access the Internet. The Court of Appeals disagreed, however, concluding that Kramer's reliance on United States v. Lay was misplaced, for "there is nothing in the statutory definition that purports to exclude devices because they lack a connection to the Internet."
Kramer's second contention was "that the government's evidence was insufficient to demonstrate that his cellular phone was a computer." The government referenced the phone's user's manual and documentation from Motorola's website describing the phone's features.
The court used the following facts contained in these materials to demonstrate that Kramer's cellular phone was a computer:
"The phone may include copyrighted Motorola and third-party software stored in semiconductor memories or other media." The court used this as evidence that the phone makes use of an electronic data processor.
"The phone keeps track of the 'Network connection time,' which is 'the elapsed time from the moment [the user] connect[s] to [the] service provider's network to the moment [the user] end[s] the call by pressing [the end key].'" The court used this as evidence that the phone performs logical and arithmetic operations when placing calls.
"The phone stores sets of characters that are available to a user when typing a message." The court used this as evidence that the phone performs storage functions.
These materials "were sufficient to show by a preponderance of the evidence that Kramer's phone was an 'electronic ... or other high speed data processing device' that 'perform[ed] logical, arithmetic, or storage functions' when Kramer used it to call and text message the victim."
For the reasons enumerated above, the Court of Appeals affirmed Kramer's sentence.
See also
Computer Fraud and Abuse Act
References
External links
"8th Circuit deems cellphone a computer" New Orleans City Business
"Apple's Steve Wozniak: 'We've lost a lot of control'" CNN Tech
United States Internet case law
United States Court of Appeals for the Eighth Circuit cases |
50664247 | https://en.wikipedia.org/wiki/Nicolet%201080 | Nicolet 1080 | The Nicolet 1080 computer was the successor of the Nicolet 1070/PDP-8 computer, released in 1971 by Nicolet Instrument Corporation, which operated between 1966 and 1992 in Madison, Wisconsin. As part of a data processing mainframe, the model 1080 allowed NMR spectrum analysis by the use of fast Fourier transform (FFT) algorithms. Processing large amounts of data at a fast rate (it was possible to compute the FFT of 32,000 points in just 100 seconds) was possible thanks to the uncommon 20-bit architecture, a significant performance advantage over systems based on 8- and 16-bit architectures.
Technical specifications
Architecture
The computer was built from dozens of integrated circuits containing simple logic gates (AND, NAND, OR, NOT, etc.), transistors, diodes, and passive electronic components such as resistors, capacitors and coils. The analog-to-digital converter (ADC) had a sample rate of 100 kHz, allowing the measurement of signals up to 50 kHz (see Nyquist frequency). Besides this, digitized signals could be averaged "in hardware", which increased the signal-to-noise ratio (SNR), improving the quality of the processed data. The computer clock frequency was 2 MHz, and some complex functions like multiplication and division between 20- and 40-bit registers could be performed in one instruction cycle thanks to the complexity of the arithmetic module, similar to more recent ALUs. The standard instruction set could address a 1K page in direct mode. Program code outside the current page was reachable in indirect mode, using pointers. Program code used to process digitized data points always had to use pointers.
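The benefit of this hardware averaging can be illustrated in software: summing n repeated acquisitions grows the coherent signal n-fold while the random noise grows only as the square root of n, so the SNR improves by a factor of the square root of n. The sketch below is a modern illustration of the principle, not a model of the instrument's actual circuitry.

```python
import random

def average_acquisitions(n_scans, signal, noise_sigma=1.0):
    """Average n_scans noisy acquisitions of the same signal.
    The coherent signal adds linearly while random noise adds as
    sqrt(n_scans), so the SNR improves by a factor of sqrt(n_scans)."""
    acc = [0.0] * len(signal)
    for _ in range(n_scans):
        for i, s in enumerate(signal):
            acc[i] += s + random.gauss(0.0, noise_sigma)
    return [a / n_scans for a in acc]

# Averaging 64 scans reduces the noise variance to roughly 1/64
# of a single scan's.
averaged = average_acquisitions(64, [1.0] * 256)
```

The same trade-off applied on the 1080: longer acquisition time bought a cleaner spectrum before the FFT was ever computed.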
The 1080 computer did not have a stack. When executing a subroutine, the return address was stored in the first location of the subroutine.
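This convention, shared with the PDP-8's JMS instruction, can be modeled in a few lines (the memory size and addresses below are only illustrative). A notable consequence is that such subroutines are neither reentrant nor recursive, since calling one overwrites its first word.

```python
def jms_call(memory, pc, sub_addr):
    """Model a JMS-style call: the return address is stored in the
    first word of the subroutine, and execution resumes at the next word."""
    memory[sub_addr] = pc + 1   # caller's next instruction, saved inside the subroutine
    return sub_addr + 1         # new program counter

def jms_return(memory, sub_addr):
    """Return via an indirect jump through the subroutine's first word."""
    return memory[sub_addr]

memory = [0] * 4096
pc = jms_call(memory, 100, 200)   # call the subroutine at 200 from address 100
# pc is now 201, and memory[200] holds 101, the return address
```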
Exotic Instructions
The NIC 1080 had an instruction called BITINV to reverse bits in the accumulator, swapping the most significant bit with the least significant and so on. There was also a special shift instruction (VDLSH), where the number of shifts was taken from a rotary switch on the front panel, instead of from the instruction code. This was used to change the vertical scale during data display.
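Bit reversal is exactly the index permutation an in-place radix-2 FFT needs to reorder its input or output, which explains a dedicated instruction for it. A functional model of BITINV, assuming the full 20-bit accumulator is reversed:

```python
def bitinv20(x):
    """Reverse the 20 bits of x (MSB becomes LSB and so on),
    as the BITINV instruction did on the 20-bit accumulator."""
    r = 0
    for _ in range(20):
        r = (r << 1) | (x & 1)   # shift the lowest bit of x into r from the right
        x >>= 1
    return r

# bitinv20 is its own inverse: applying it twice restores the value.
```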
Memory
The Nicolet 1080's main memory was a magnetic-core memory, with 1 to 10 modules with 4K 20-bit words per module, for a maximum of 40K words. This would be, in modern terms, between 10 and 100 kilobytes of memory (8 bits/byte). The memory was divided into a 4K stack intended to store software, and a data block starting at an address offset of 32K. One program memory stack plus two data memory stacks of 4K could be installed inside the main chassis.
Peripheral
The computer included a Teletype Model 33 ASR, used for entering or modifying programs, as well as reading memory contents. Two RS-232 serial ports additionally allowed the use of devices such as dot matrix printers. Although the second serial port (RS232-B) did not have assigned functions on the original system, it could be used to achieve connectivity with other computers. The 1080 could also support hard drives, such as the Diablo Series 30, and the NIC 298 8" floppy disk drive. The default medium for program loading was, however, paper tape. Standard system and FFT programs were supplied on paper tape.
Instead of today's mouse control, the computer was equipped with two 10-turn potentiometers whose voltage could be digitized and the value used as a parameter in the software.
The 1080 could drive a voltage-controlled XY pen plotter. Two digital-to-analog converters (DACs) were normally connected to an XY oscilloscope for data display. The same DACs could be connected to the pen plotter, using a software-controlled relay.
Front panel
The front panel had three rows of red LEDs, displaying the contents of the accumulator, instruction register, and program counter (PC). A group of twenty switches and buttons were used to read or modify any selected register. Some of the Nicolet 1080 computers were sold as part of Bruker NMR Spectrometers, and hence labeled BNC 12.
Specific programming techniques
The 1080 computer operated only on integers. Floating-point arithmetic was possible using a software package, and was hence slow.
For the fast Fourier transform, the sin() and cos() functions were realized by table lookup, not by direct computation. The values of the trigonometric functions were represented as binary fractions, i.e. a value of 1.0 was represented as the largest positive number in a 20-bit word, assuming a binary point "left" of the number.
When data points became too large to be represented in 20 bits during the transform, the whole data set was scaled down by a factor of 2 in order to prevent overflow.
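This strategy is a form of block floating point: before a butterfly stage can double a value past the word size, the entire data set is shifted right by one bit and the shift count is remembered so the true scale can be restored afterwards. The guard threshold below is an illustrative choice, not documented behavior of the 1080 software.

```python
def scale_if_needed(data, guard=1 << 18):
    """Halve every data point (arithmetic shift right) while any value's
    magnitude could overflow a 20-bit word after a doubling butterfly.
    Returns the scaled data and the number of shifts applied."""
    shifts = 0
    while any(abs(x) >= guard for x in data):
        data = [x >> 1 for x in data]   # Python's >> on ints is an arithmetic shift
        shifts += 1
    return data, shifts

scaled, shifts = scale_if_needed([1 << 18, 100, -5])
# one shift was needed; the first point became 1 << 17
```

Keeping a single shift count for the whole block, rather than a per-point exponent, is what keeps the scheme cheap enough for integer-only hardware.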
References
External links
Nicolet Computers and the Fourier Revolution, by Jack G. Kisslinger
Minicomputers |
6370788 | https://en.wikipedia.org/wiki/RemoteAccess | RemoteAccess | RemoteAccess is a DOS Bulletin Board System (BBS) software package written by Andrew Milner and published by his company Wantree Development in Australia. RemoteAccess was written in Turbo Pascal with some assembly language routines. RemoteAccess (commonly called RA) began in 1989 as a clone of QuickBBS by Adam Hudson. It was released under the shareware concept in 1990 and became popular in North America, Europe, the UK, South Africa, and the South Pacific. Initially the main advantage over QuickBBS was its ability to run multiple nodes under Microsoft Windows, Quarterdeck's DESQview and OS/2. RA could also operate over a network, or even a combination of network and multitasking operating systems, to provide multiple "nodes per station" capabilities.
RA's features quickly grew to become considerably more advanced than the QuickBBS software of which it was a clone. A number of other QuickBBS clones appeared shortly afterwards including ProBoard, SuperBBS and EzyCom, though they never gained as much support or popularity. RA was the first BBS software to support the popular JAM Message Base Format, which was partly conceived by RA's author, Andrew Milner. RA was also the first shareware BBS software to support a FDB (file database), rather than using files.bbs text files to describe files in each directory. RA interfaced with message relaying systems such as FidoNet through 3rd party utilities such as FrontDoor (Joaquim Homrighausen), MainDoor (Francisco Sedano) and FastEcho (Tobias Burchhardt), which were developed by people who eventually became members of the RA beta team.
With over 1500 titles, there were more third party utilities written for RA than for any other shareware BBS software. While RA was initially shareware, Andrew also released a commercial edition - "RemoteAccess Professional" - that was bundled with utilities to allow remote control of nodes over a network (RANETMGR, and RATSR).
Andrew Milner released his final version of RA (2.50) in May 1996. By that time, many system operators had switched from running bulletin board systems to becoming Internet service providers. Milner was one such system operator, and after version 2.50 he stopped development. In April 1997, Milner put the rights and source code up for sale to the highest bidder, and it was sold in December 1997 to Bruce Morse in the USA. Morse released some minor updates, including a Y2K fix, but did not add any new features to the code. Morse's final version (2.62) was released in August 2000. Bruce Morse continues to own the code today, and RA is still available as shareware, as well as a commercial version known as RemoteAccess-Professional.
RemoteAccess was never ported to a 32-bit version, but there were two clones of RA in the later years which did include 32-bit versions: EleBBS in the late 1990s which included DOS, Windows, OS/2 and Linux flavors, and MBSE, a few years later, which focused mainly on the Linux operating system.
There were numerous conversations about creating Windows and OS/2 32-bit versions of RA around 1995. Joel Ricketts of Interscape Development, who was the lead programmer, answered questions in the RA echomail forum about the potential development of RA for Windows during this time. However, due to RA being put up for sale, as well as lack of funding, the project was scrapped in 1996.
Around the same time, Niels Schoot from the Netherlands began writing a Visual Basic version of RA called tcRA32, which was to be fully RA compatible. The project was never finished, and within a couple of years, it was abandoned.
While RemoteAccess never included internal telnet support, it can be run as a telnet BBS by using a telnet-FOSSIL driver such as NetFoss, or a Virtual COM port engine such as NetSerial under Windows, or using SIO/VMODEM under OS/2.
BBSs running RemoteAccess
Cosmo's Castle, a RemoteAccess BBS started in 1993 in West Virginia
Bytronix (started using OPUS-CBCS at a 315 area code telephone number, programming on a Radio Shack TRS-80 during the 1970s).
Dark Systems BBS (started in 1992 in the 705 Canadian area code). Telnet bbs.dsbbs.ca:23
See also
List of BBS software
External links
Wantree's 1996
Bruce Morse's RA-Pro Site - Download The final version 2.62.2 or view the Documentation
PC Micro's RemoteAccess Archives (Former RA beta site, North American RemoteAccess support Site)
PC Micro's RemoteAccess Support Site Includes links to many RA utilities
Waldos's Place USA RemoteAccess Archives (RA beta site, North American RemoteAccess Support Site)
The BBS Documentary RemoteAccess Archives
The official JAM Messagebase specifications
The BBS Archives containing over 2000 RemoteAccess third party utilities.
16 RA Underground RAForce Group Releases Archived by RAForce, NL.
The EleBBS Homepage An almost exact 32-bit clone of RemoteAccess BBS.
Bulletin board system software
DOS software
FidoNet
Computer-related introductions in 1989 |
654742 | https://en.wikipedia.org/wiki/Jimmy%20Neutron%3A%20Boy%20Genius | Jimmy Neutron: Boy Genius | Jimmy Neutron: Boy Genius is a 2001 American computer-animated science fiction comedy film produced by Nickelodeon Movies, O Entertainment and DNA Productions, and distributed by Paramount Pictures. The film was directed by John A. Davis and written by Davis and producer Steve Oedekerk. Its voice cast includes Debi Derryberry, Patrick Stewart, Martin Short, Rob Paulsen, and Jeffrey Garcia. The film follows the title character, a schoolboy with super-genius intelligence, who must save all of the parents of his hometown from a race of egg-like aliens known as the Yolkians.
The idea for Jimmy Neutron was first created by Davis in the 1980s, in which he wrote a script for a short film titled Runaway Rocketboy and starring a prototype character for Jimmy named Johnny Quasar. After coming across the abandoned script several years later, Davis decided that it would be a good idea to revisit it and retool it as a computer-animated short and potential TV series. A 40-second demo was animated using LightWave 3D and gained popularity at the 1995 SIGGRAPH convention where it was shown off, grabbing the attention of Oedekerk and leading DNA Productions to develop an extended TV pilot. After a successful pitch to Nickelodeon, a 13-minute-long TV episode was developed, and Nickelodeon, impressed with both the character and the 3D technology, raised the possibility of making both a TV series and a full-length feature film. Davis, in turn, suggested that the film be made first, so that the development team could create the assets at theatrical quality and reuse them in the TV series. Production officially began in early 2000 and was completed in roughly 24 months, with the studio considerably raising its staff count and expanding its studio space. Animation was done entirely using commercial software, including LightWave and project:messiah.
Jimmy Neutron: Boy Genius was released on December 21, 2001. Backed by a strong pre-release campaign, the film was a box office success, grossing $103 million worldwide against a $30 million budget. It earned generally positive reviews for its animation. It was nominated for the inaugural Academy Award for Best Animated Feature in 2001, ultimately losing to Shrek. It was the only animated Nickelodeon film to be nominated in that category for nearly a decade until Rango (2011) was nominated and won.
Due to its success, the film was continued into an animated television series called The Adventures of Jimmy Neutron, Boy Genius, which premiered on July 20, 2002, and ended on November 25, 2006. Four years later, a spin-off series called Planet Sheen was produced, focusing on Jimmy's friend Sheen Estevez. This series premiered on October 2, 2010 (alongside T.U.F.F. Puppy), and ended on February 15, 2013.
A simulator ride based on the film called Jimmy Neutron's Nicktoon Blast was set to take place after the events of the film and featured guest appearances by other Nicktoons characters. It opened at Universal Studios Florida on April 11, 2003, and closed on August 18, 2011.
Plot
Eleven-year-old boy genius Jimmy Neutron lives in Retroville with his parents, Judy and Hugh, and his robot dog, Goddard. Jimmy's friends are overweight Carl Wheezer and hyperactive Sheen Estevez, and his long-standing rival, intelligent classmate Cindy Vortex, teases him for his small stature. After Jimmy launches a communications satellite into space, hoping to contact alien life, he crashes his makeshift rocket into his house's roof, upsetting his mother. When Jimmy, Carl and Sheen learn of the opening of Retroland, an amusement park, popular kid Nick Dean convinces the boys to sneak out and attend. Judy refuses to let him attend the park that night. After his jetpack accidentally starts a fire in the house, she grounds him. Taking Nick's advice, Jimmy uses his shrink ray invention to sneak out and meets Carl and Sheen at Retroland for a night of fun.
Meanwhile, Jimmy's satellite is intercepted by the Yolkians, a race of egg-like aliens from the planet Yolkus. Their leader, King Goobot, views Jimmy's message and notices a picture featuring his parents, declaring his search complete. The Yolkians arrive on Earth and abduct all the adults in Retroville, except Jimmy's teacher Miss Fowl (who had been shrunken to a small size by Jimmy's shrink ray). As their ships return to space, Jimmy, Carl and Sheen mistake their departure for a shooting star, prompting Jimmy to wish their parents were gone. The next morning, all the children notice the parents are missing and party recklessly. At first, having no parents is fun for the children, but the following day they are miserable and realize they need their parents. Jimmy learns that his satellite has been found and deduces the parents have been abducted. He enlists the children to create rocket ships out of Retroland's rides and they blast off into space after their families.
After braving a meteor shower and camping on an asteroid, Jimmy and company eventually reach Yolkus and find the parents with mind control devices attached to their heads. When Jimmy tries to get the mind-control helmet off of Hugh, Goobot captures them and reveals that Jimmy had led the Yolkians directly to Earth to take their parents, whom they intend to sacrifice to their god Poultra. Jimmy is separated from Goddard by Goobot's bumbling assistant, Ooblar, and is locked in a dungeon with the other children, who blame Jimmy for their predicament. Taking pity on Jimmy, Cindy confesses she and the other children need him and encourages Jimmy to fix things by helping them escape. Using a cellphone owned by Cindy's friend, Libby Folfax, Jimmy contacts Goddard, who escapes from Ooblar and frees the children.
Jimmy and company reach the Yolkians' colosseum, where a giant egg hatches, releasing Poultra, a gigantic three-eyed alien chicken. As Goobot arranges for the parents to be eaten using a mind control remote, Jimmy rallies the children to storm the colosseum and battle the guards while Sheen retrieves an escape vessel, knocking Poultra on the head upon his return. Jimmy steals the remote from Goobot and the children escape Yolkus with the parents. Goobot arranges a fleet to pursue them, all of which is destroyed when the children fly their ship around the surface of the Sun, save for Goobot's vessel. When Goobot and Ooblar mock Jimmy's short stature, Jimmy charges at Goobot's ship with Goddard in a flying-bike form and uses his shrink ray to enlarge himself to the size of a planet. He then blows Goobot's vessel into an asteroid, destroying it. Goobot survives and vows revenge. On the return trip to Earth, Jimmy reconciles with his parents, admitting that despite his intelligence, he still depends on them. The next day, Jimmy and Carl have eggs in egg cups for breakfast, when Jimmy's parents, mistaking one of his scientific experiments for a real soda can, drink it, causing significant belching. They all laugh out loud while Goddard is seen outside flying after a bird.
In the mid-credits scene, the still-shrunken Miss Fowl is seen riding on an apple worm, named Mr. Wiggles, on her way to the cafeteria in the elementary school hall.
Cast
Debi Derryberry as Jimmy Neutron
Patrick Stewart as King Goobot V
Martin Short as Ooblar
Carolyn Lawrence as Cindy Vortex
Rob Paulsen as Carl Wheezer, Mr. and Mrs. Wheezer (credited as Carl's Mom and Dad), Kid in Classroom
Jeffrey Garcia as Sheen Estevez
Crystal Scales as Libby Folfax and Courtney Tyler
Frank Welker as Goddard (uncredited), Poultra, Worm, Demon, Girl-Eating Plant, Oyster
Candi Milo as Nick Dean, Britney, PJ
Megan Cavanagh as Judy Neutron (credited as Mom), VOX, Mrs. Vortex (credited as Cindy's Mom)
Mark DeCarlo as Hugh Neutron (credited as Dad), Pilot, Arena Guard, Mr. Vortex (credited as Cindy's Dad)
Carlos Alazraqui as Mr. Estevez (credited as Sheen's Dad)
Kimberly Brooks as Zachery, Reporter, Angie
Andrea Martin as Ms. Winfred Fowl (credited as Ms. Fowl)
Billy West as Bobby's Twin Brother, Butch, Old Man Johnson, Robobarber, Yolkian Officer, Jailbreak Cop, Anchor Boy, Flurp Announcer
Bob Goen and Mary Hart as Yolkian newscasters
Dee Bradley Baker as NORAD Officer
Greg Eagles as Mr. Folfax (uncredited)
David L. Lander as Yolkian Guard, Gus
Jim Cummings as Ultra Lord, Mission Control, General Bob
Paul Greenberg as Guard
Laraine Newman as Hostess
Jeannie Elias as Little Girl, Camera Person
Michael Hagiwara as Chris
Keith Alcorn as Bobby, Kid, Control Yolkian
Richard Allen as Digital Voice
Brian Capshaw as Screamer
Cheryl Ray as Screamer
Mark Menza as Yolkian Incubator Operator
Matthew Russell as Hyperactive Kid, Arena Yolkian
Production
Development
The idea for a series about a boy with super-genius powers was first conceived in the 1980s by John A. Davis, who scripted and storyboarded a short narrative titled Runaway Rocketboy, centered on a character named Johnny Quasar (inspired by a facetious nickname that his summer co-workers had coined for him in his youth) who builds a rocket ship and runs away from his parents. Davis stated in an episode of the Nickelodeon Animation Podcast that he initially wrote the concept with the intention of creating it as a live-action film with special effects and matte shots, even going so far as to apply for a grant to fund the project, but found that getting such an investment was difficult since the film was not educational or informative. The idea lay dormant for several years until Davis came across the abandoned script while in the process of moving. Around the same time, Davis' Dallas-based studio, DNA Productions, had just begun experimenting with computer animation after obtaining copies of LightWave 3D. In turn, Davis realized that the concept would be well suited to a CGI film, since all of the science fiction set pieces could be entirely modeled in 3-D.
Davis, alongside studio co-founder Keith Alcorn, created a 40-second proof-of-concept demo film which depicted Johnny and his robot dog, Goddard, flying through an asteroid belt and greeting the viewers. Simultaneously, Davis and Alcorn worked to create a story bible outlining a potential television series. The demo short was shown in 1995 at the SIGGRAPH CGI convention, where it was entered into a competition for LightWave films. The demo quickly garnered notability in the computer animation industry, receiving frequent press coverage in magazines and winning two "Wavey" awards: one for Best Character Animation and another for Best in Show. Among the people who caught wind of the film was Steve Oedekerk, the founder of O Entertainment, who saw a still shot of Johnny and Goddard in a CGI magazine. Oedekerk, a strong backer of computer animation, was impressed by the characters' designs; he stated in an interview that the image particularly stood out to him because it "seemed fun" compared to the mostly photorealistic work being done with computer animation at the time. He cold-called Davis requesting to see a tape of the full short. After watching the demo, as well as seeing the show bible which Davis and Alcorn had developed, Oedekerk expressed interest in helping to pitch their concept to different networks.
After teaming up with O Entertainment, the company began developing a full-length episode for a TV series, titled The Adventures of Johnny Quasar, writing an expanded version of the original Runaway Rocketboy story and tweaking aspects of Johnny's design to make him look more like a child. In Fall 1995, the idea was pitched to Nickelodeon, which expressed immediate interest. Albie Hecht, then the president of Nickelodeon, was particularly impressed: describing the character as "half Bart Simpson and half Albert Einstein," he strongly praised Johnny's blended personality as an adventurous and intelligent character grounded in the reality of childhood, which, according to him, made him "the perfect Nick kid." Following this positive reception, Nickelodeon commissioned a 13-minute pilot episode. After several years of going through the review process, the episode began production in late 1997 and was completed in 1998. The name "Johnny Quasar" was changed at the request of Nickelodeon, which did not want the character to be confused with similarly named ones such as Jonny Quest and Captain Quazar, so Davis brainstormed other character names while walking his dog around the neighborhood, eventually settling on the final name, "Jimmy Neutron."
After the pilot was completed, Nickelodeon executives, who were impressed by the pilot and still enthusiastic about the show's potential, raised the prospect of creating a theatrical film to accompany the TV series, much to the surprise of Davis and his team. During the initial pitch to Nickelodeon, Oedekerk had highlighted the idea that using computer animation would allow the same models and assets to be reused between a film and a TV show, an idea in which Nickelodeon held strong faith. Davis further suggested that the feature film be created first, since the character models could then be built at a higher quality than a TV budget would allow. Although Nickelodeon worried that it would be more difficult to attract a movie-going audience without the TV show to build an audience for the series, these concerns were addressed with a series of short TV interstitials that would begin airing to build up hype for the upcoming film.
With a budget of roughly $30 million, production of Jimmy Neutron: Boy Genius was greenlit in Fall 1999, and work began on a script for the film. Production officially started in February 2000 under the direction of Davis. To speed up the pace of work for a feature film, the company's staff count was considerably increased from 30 to around 150 employees, and the studio's workspace was reorganized to accommodate such a team of filmmakers. The film was completed in 24 months, roughly half the time in which most other CGI films were completed.
Writing
The screenplay for Jimmy Neutron was written by Davis and Oedekerk, as well as Rugrats writers David N. Weiss and J. David Stem. In creating the many ideas in Jimmy Neutron, Davis and Oedekerk thought back to their childhoods, trying to imagine "what a kid would create if he had the ability to create any kind of gadget." The film was largely inspired by Davis' own lifelong love of science fiction, drawing influence from various sources including Thunderbirds and Ray Harryhausen's stop-motion work. Oedekerk's 6-year-old daughter, Zoe, came up with the idea for "burp soda," which ultimately appeared in the movie as one of Jimmy's many inventions. According to Davis, the Ultralord-obsessed Sheen Estevez was inspired by Davis' own love of collecting. Sheen was initially intended to be Japanese, as he was named after the nickname of a Japanese employee who had worked for Davis, but the filmmaking team had trouble finding a suitable Japanese voice actor. They ultimately changed the character's nationality to Mexican after opening the role to a broader pool of actors and settling on Mexican stand-up comic Jeff Garcia.
Animation
Jimmy Neutron was the first computer-animated film to be created entirely using commercial animation programs rather than proprietary software, with most animation done using both LightWave and project:messiah. Characters were first modeled in LightWave, after which they were rigged and animated in Messiah. Texture painting was done in Adobe Photoshop, while compositing work was completed in Maya Fusion. In addition to serving as executive producer, Alcorn was the film's lead character designer, and created deliberately simplistic and cartoonish designs in order to avoid overcomplicating production. To animate crowd scenes, methods of simplification were used to make animation less time-consuming: characters farther from the camera had less articulation, and animators would duplicate the same characters, offset them to different areas, and change their body parts to differentiate them. One particular scene shows a crowd of 6,000 Yolkians, each of which uses one of 30 distinct animation loops.
According to Davis, the character models were intentionally given a "sculpted, graphic look," both to avoid making them look overly realistic and to circumvent the prospect of having to deal with simulating cloth or hair. The over-the-top character designs, in turn, influenced the film world's aesthetic (e.g. cars were modeled to be able to fit the characters' stylistically large heads). Off-the-shelf shaders were favored over ones which created more photorealistic lighting in order to maintain a cartoonish appearance throughout.
Casting
Nancy Cartwright, Pamela Adlon and E. G. Daily were all considered for the role of Jimmy Neutron before Debi Derryberry was cast for the film and subsequent series. The film was Derryberry's biggest acting role at the time, as previously she had mostly provided minor roles in films and TV shows.
Soundtrack
Official soundtrack
The movie soundtrack was released by Zomba Music, Jive Records, and Nick Records on November 20, 2001, a month prior to the film's release. It includes covers of DJ Jazzy Jeff and The Fresh Prince's "Parents Just Don't Understand", Thomas Dolby's "She Blinded Me With Science", and Kim Wilde's "Kids In America".
Original score
Additionally, a promotional CD containing the score by John Debney was released for Academy Award consideration.
Release
Theatrical release
Jimmy Neutron: Boy Genius was released in theaters on December 21, 2001, by Paramount Pictures.
Home media
Jimmy Neutron: Boy Genius was released on VHS and DVD by Paramount Home Entertainment on July 2, 2002. It was re-released on DVD on June 22, 2011, and again on April 25, 2017. After 20 years, the film is set to receive its first Blu-ray release on March 8, 2022.
Film promotion
A series of shorts was used to promote the film; they have all been included on the official Jimmy Neutron: Boy Genius DVD release. All of the inventions in each short were seen again at some point in the television series (except for the Pain-Transference helmet). Clips from similar versions of these shorts, along with clips from the unaired "Runaway Rocketboy" pilot, appeared in the teaser trailer for Jimmy Neutron: Boy Genius. The biggest difference between the clips seen in the trailer and the original shorts is that Jimmy wears the white and red striped shirt he wore in the pilot, rather than his trademark shirt.
Shorts
Reception
Critical response
Jimmy Neutron: Boy Genius received generally positive reviews from critics and audiences. On Rotten Tomatoes, the film has an approval rating of 74% based on 76 reviews, with an average rating of 6.40/10. The critics' consensus reads: "What Jimmy Neutron lacks in computer animation, it makes up for in charm and cleverness." According to Metacritic, the film has a weighted average score of 65 out of 100 based on 21 reviews, indicating "generally favorable reviews".
Rita Kempley of The Washington Post praised the film, saying that "this little charmer both celebrates and kids the corny conventions of family sitcoms". Nell Minow of Common Sense Media enjoyed the "stylish 3-D computer animation, good characters", giving the film 3 out of 5 stars. Owen Gleiberman of Entertainment Weekly gave the film a grade of "B+", calling it "a lickety-split, madly packed, roller-coaster entertainment that might almost have been designed to make you scared of how much smarter your kids are than you". Paul Tatara of CNN.com called the film "the most delightfully original children's film of 2001". Roger Ebert of the Chicago Sun-Times gave the film three stars out of four, saying that "it doesn't have the little in-jokes that make Shrek and Monsters, Inc. fun for grown-ups. But adults who appreciate the art of animation may enjoy the look of the picture".
Box office
The film was financially successful. It grossed $13,833,228 in its opening weekend, finishing third behind The Lord of the Rings: The Fellowship of the Ring and Ocean's Eleven, and went on to earn $80,936,232 domestically and $22,056,304 overseas, for a worldwide total of $102,992,536, against a budget of roughly $30 million. It is one of only twelve feature films to be released in over 3,000 theaters and still improve on its box office performance in its second weekend, increasing 8.7% from $13,832,786 to $15,035,649.
Awards
Jimmy Neutron: Boy Genius was nominated for the first Academy Award for Best Animated Feature, losing to Shrek. It was the first release from Nickelodeon Movies to receive an Academy Award nomination.
Expanded franchise
Cancelled sequel and possible reboot film plans
In February 2002, a sequel was reported to be in development for a summer 2004 release. Producer Albie Hecht told The Los Angeles Times that the sequel "would be made on the same budget as the first, but with a new batch of inventions and adventures in Jimmy's town of Retroville." On June 20, 2002, The Hollywood Reporter reported that writer Kate Boutilier had signed a writing deal with Nickelodeon Movies and Paramount Pictures to write a sequel to the film, but the sequel never materialized. It was cancelled because the writers could not agree on a story, and Alcorn later stated in an interview that "once the TV series came out, there wasn't a lot of incentive to make a movie when fans could simply watch Jimmy Neutron for free at home."
In 2016, director John A. Davis stated that he has a story for a Jimmy Neutron reboot feature that he would like to make, but he is waiting for the "right situation" to make it.
When asked about a reboot in 2020, Rob Paulsen stated "Well, I've got to tell you, man. I go all over the world when we don't have the coronavirus, and people love Carl. They love Carl. I don't think it would be a bad thing at all to reboot Jimmy Neutron. I think that's one of those shows that a lot of people would love to see again. It was very good. Really smart. That wouldn't surprise me."
Television series
The film's successful box office performance led to a sequel television series, The Adventures of Jimmy Neutron, Boy Genius, which ran from July 2002 to November 2006. Four years later, a spin-off of the series (and of the film) titled Planet Sheen, focusing on Sheen Estevez, ran from October 2, 2010, to February 15, 2013.
Other media
Genius, Sheenius or Inbetweenius
For an event that aired on May 19, 2007, Nickelodeon rehired Debi Derryberry, Jeffrey Garcia and Rob Paulsen to record a special audio commentary version of the film that features their animated counterparts' silhouettes, spoofing Mystery Science Theater 3000.
Theme park attraction
A simulator ride called Jimmy Neutron's Nicktoon Blast opened at Universal Studios Florida on April 4, 2003, and operated until August 18, 2011. The ride was set after the events of the film and featured guest appearances by other Nicktoons characters.
See also
List of films featuring extraterrestrials
References
External links
2001 films
2001 comedy films
2001 animated films
2000s American animated films
2000s adventure comedy films
2000s science fiction comedy films
Alien abduction films
Alien visitations in films
American Christmas films
American children's animated space adventure films
American children's animated science fantasy films
American children's animated comic science fiction films
American computer-animated films
American films
American robot films
English-language films
Films scored by John Debney
Films about missing people
Films adapted into television shows
Films directed by John A. Davis
Films set on fictional planets
Jimmy Neutron films
Films about size change
DNA Productions films
Nickelodeon Movies films
Nickelodeon animated films
Paramount Pictures films
Paramount Pictures animated films
Films with screenplays by John A. Davis
Films with screenplays by Steve Oedekerk
2000s English-language films
2001 directorial debut films |
12943175 | https://en.wikipedia.org/wiki/Service-oriented%20modeling | Service-oriented modeling | Service-oriented modeling is the discipline of modeling business and software systems, for the purpose of designing and specifying service-oriented business systems within a variety of architectural styles and paradigms, such as application architecture, service-oriented architecture, microservices, and cloud computing.
Any service-oriented modeling method typically includes a modeling language that can be employed by both the "problem domain organization" (the business), and "solution domain organization" (the information technology department), whose unique perspectives typically influence the service development life-cycle strategy and the projects implemented using that strategy.
Service-oriented modeling typically strives to create models that provide a comprehensive view of the analysis, design, and architecture of all software entities in an organization, which can be understood by individuals with diverse levels of business and technical understanding. Service-oriented modeling typically encourages viewing software entities as "assets" (service-oriented assets), and refers to these assets collectively as "services." A key service design concern is to find the right service granularity both on the business (domain) level and on a technical (interface contract) level.
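The granularity concern can be made concrete with a small sketch (the service and operation names below are hypothetical, invented purely for illustration and not drawn from any of the methods described here): a fine-grained contract exposes several chatty, low-level operations that the consumer must orchestrate itself, while a coarse-grained contract wraps them behind a single business-level operation.

```python
from dataclasses import dataclass, field


@dataclass
class FineGrainedOrderService:
    """Fine-grained contract: one operation per low-level step (chatty)."""
    orders: dict = field(default_factory=dict)

    def create_order(self, order_id: str) -> None:
        self.orders[order_id] = "new"

    def reserve_stock(self, order_id: str) -> None:
        self.orders[order_id] = "reserved"

    def confirm_order(self, order_id: str) -> None:
        self.orders[order_id] = "confirmed"


@dataclass
class CoarseGrainedOrderService:
    """Coarse-grained contract: one business-level operation hides the steps."""
    backend: FineGrainedOrderService = field(default_factory=FineGrainedOrderService)

    def place_order(self, order_id: str) -> str:
        # A single call replaces three consumer-side round trips.
        self.backend.create_order(order_id)
        self.backend.reserve_stock(order_id)
        self.backend.confirm_order(order_id)
        return self.backend.orders[order_id]
```

With the coarse-grained contract a consumer calls `place_order` once; with the fine-grained contract it must sequence three calls itself. Neither granularity is universally right, which is why service-oriented modeling methods treat granularity as an explicit design decision.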
Popular approaches
Several approaches have been proposed specifically for designing and modeling services, including SDDM, SOMA and SOMF.
Service-oriented design and development methodology
Service-oriented design and development methodology (SDDM) is a fusion method created and compiled by M. Papazoglou and W.J. van den Heuvel. The paper argues that SOA designers and service developers cannot be expected to oversee a complex service-oriented development project without relying on a sound design and development methodology. It provides an overview of the methods and techniques used in service-oriented design, approaches the service development methodology from the point of view of both service producers and requesters, and reviews the range of SDDM elements that are available to these roles.
An update to SDDM was later published in Web Services and SOA: Principles and Technology by M. Papazoglou.
Service-oriented modeling and architecture
IBM announced service-oriented modeling and architecture (SOMA) as its SOA-related methodology in 2004 and published parts of it subsequently. SOMA refers to the more general domain of service modeling necessary to design and create SOA. SOMA covers a broader scope and implements service-oriented analysis and design (SOAD) through the identification, specification and realization of services, components that realize those services (a.k.a. "service components"), and flows that can be used to compose services.
SOMA includes an analysis and design method that extends traditional object-oriented and component-based analysis and design methods to include concerns relevant to and supporting SOA.
SOMA is an end-to-end SOA method for the identification, specification, realization and implementation of services (including information services), components, and flows (processes/composition). SOMA builds on current techniques in areas such as domain analysis, functional area grouping, variability-oriented analysis (VOA), process modeling, component-based development, object-oriented analysis and design, and use case modeling. SOMA introduces new techniques such as goal-service modeling, service model creation and a service litmus test to help determine the granularity of a service.
SOMA identifies services, component boundaries, flows, compositions, and information through complementary techniques which include domain decomposition, goal-service modeling and existing asset analysis.
The service lifecycle in SOMA consists of the phases of identification, specification, realization, implementation, deployment and management in which the fundamental building blocks of SOA are identified then refined and implemented in each phase. The fundamental building blocks of SOA consist of services, components, flows and related to them, information, policy and contracts.
Service-oriented modeling framework (SOMF)
SOMF has been devised by author Michael Bell as a holistic and anthropomorphic modeling language for software development that employs disciplines and a universal language to provide tactical and strategic solutions to enterprise problems. The term "holistic language" pertains to a modeling language that can be employed to design any application, business and technological environment, either local or distributed. This universality may include design of application-level and enterprise-level solutions, including SOA landscapes, cloud computing, or big data environments. The term "anthropomorphic", on the other hand, affiliates the SOMF language with intuitiveness of implementation and simplicity of usage.
SOMF is a service-oriented development life cycle methodology, a discipline-specific modeling process. It offers a number of modeling practices and disciplines that contribute to a successful service-oriented life cycle development and modeling during a project.
The framework identifies the major elements that make up the "what to do" aspects of a service development scheme. These are the modeling pillars that will enable practitioners to craft an effective project plan and to identify the milestones of a service-oriented initiative—either a small or large-scale business or a technological venture.
The framework comprises four sections that identify the general direction and the corresponding units of work that make up a service-oriented modeling strategy: practices, environments, disciplines, and artifacts. These elements uncover the context of a modeling occupation and do not necessarily describe the process or the sequence of activities needed to fulfill modeling goals. These should be ironed out during the project plan – the service-oriented development life cycle strategy – that typically sets initiative boundaries, time frame, responsibilities and accountabilities, and achievable project milestones.
See also
Domain-driven design
Object-oriented analysis and design
Service-oriented architecture
Service granularity principle
Unified Modeling Language
References
Further reading
Ali Arsanjani et al. (2008). "SOMA: A method for developing service-oriented solutions". IBM Systems Journal, October 2008.
Michael Bell (2008). Service-Oriented Modeling: Service Analysis, Design, and Architecture. Wiley.
Birol Berkem (2008). "From The Business Motivation Model (BMM) To Service Oriented Architecture (SOA)". In: Journal of Object Technology, vol. 7, no. 8.
M. Brian Blake (2007). "Decomposing Composition: Service-Oriented Software Engineers". In: IEEE Software, Nov/Dec 2007, pp. 68–77.
Michael P. Papazoglou (2008). Web Services - Principles and Technology. Prentice Hall.
Dick A. Quartel, Maarten W. Steen, Stanislav Pokraev, Marten J. Sinderen (2007). "COSMO: A conceptual framework for service modelling and refinement". Information Systems Frontiers, vol. 9, no. 2–3, pp. 225–244.
Luba Cherbakov et al. (2006). "SOA in action inside IBM, Part 1: SOA case studies". IBM developerWorks.
External links
Elements of Service-Oriented Analysis and Design, IBM developerWorks Web services zone, June 2004
Service-oriented (business computing)
Enterprise modelling |
7834893 | https://en.wikipedia.org/wiki/Scheller%20College%20of%20Business | Scheller College of Business | The Scheller College of Business at the Georgia Institute of Technology was established in 1912, and is consistently ranked in the top 30 business programs in the nation.
History
The Georgia Tech Scheller College of Business was established more than a century ago and has a distinguished history as part of a world-renowned technological research university. Georgia Tech's business school began in 1912 with the creation of a School of Commerce. In 1933, this school was moved to the University of Georgia as part of the newly created Georgia Board of Regents' decision to consolidate Georgia's system of higher education. It would later become Georgia State University.
To meet the need for management training in technology, an Industrial Management degree was established in 1934, with a master's degree in the subject becoming the first professional management degree offered in the state 11 years later. The PhD program began in 1970.
In 1989, the College of Management, previously named the School of Commerce, combined with the social sciences, humanities, and economics departments to form the Ivan Allen College of Management, Policy and International Affairs. Nine years later, in 1998, the College of Management separated from the Ivan Allen College of Liberal Arts to become its own college.
In 1996, Georgia Tech alumnus and restaurateur Thomas E. DuPree, Jr. pledged a $20 million donation to the College of Management, resulting in the college being named the DuPree College of Management in his honor. However, while DuPree donated over $5 million to the college, his name was removed from it in 2004 when the additional $15 million was not forthcoming. DuPree had recently resigned as board chairman and CEO of Avado Brands, the parent company of several chain restaurants, which had recently filed for Chapter 11 bankruptcy. In a carefully worded statement, Georgia Tech President G. Wayne Clough remarked that while DuPree's name would be "reluctantly" removed from the college, "We retain the utmost respect for Tom DuPree and all of his remarkable accomplishments and many philanthropic activities." DuPree's donations ultimately funded nearly 200 scholarships.
On November 6, 2009, the College of Management received an anonymous gift of $25 million. The donor was later identified as Ernest Scheller, Jr., a Georgia Tech alumnus with a degree in Industrial Management (now known as a Bachelor of Science in Business Administration) and former chairman of Silberline Manufacturing, a Hometown, Pennsylvania-based pigment manufacturer. Scheller used $20 million of his donation as a one-to-one challenge grant designed to inspire charitable gifts and commitments from other donors. Fundraising for the challenge concluded on June 30, 2012. The remaining $5 million of Mr. Scheller's initial $25 million donation has been designated as discretionary funds to be dispersed by the deans.
In June 2012, the college announced a $50 million gift from Ernest Scheller Jr. (Diamond, Laura, "Georgia Tech receives transformational donation", Atlanta Journal-Constitution, 11 June 2012; "Officials say funds will transform the business college", Atlanta Journal-Constitution, 12 June 2012, p. B1). This $50 million included the $25 million that he had given anonymously in 2009. It was the largest cash gift in Georgia Tech's history. As a result of Scheller's gift, the College of Management was renamed the Ernest Scheller Jr. College of Business, and his donations have been used to double the college's endowment, enriching academic programs, growing the faculty, and strengthening the Ph.D. programs, among other uses.
Facilities
In 2000, Georgia Tech undertook a $180 million building project in Atlanta called Technology Square. This new multi-building complex, home to the College of Business, is a fusion of business, education, research, and retail space. The complex also houses the Global Learning Center, Advanced Technology Development Center, Economic Development Institute, Center for Quality Growth and Regional Development, as well as the Georgia Tech Hotel and Conference Center. The facilities are located in Midtown Atlanta next to several major corporate headquarters such as BellSouth (AT&T), The Coca-Cola Company, Turner Broadcasting System Inc. and EarthLink.
The purpose of Technology Square is to promote the formation of a high tech business cluster centered around a premier research university. Similar formations have taken place in cities such as Palo Alto and Boston, both nexuses of thriving high-tech corridors.
On November 24, 2006, the Scheller College of Business dedicated the Ferris-Goldsmith Trading Floor. The trading floor includes fifty-four dual-display computers as well as electronic stock information on the walls and is used to train all levels of management students to use financial analysis and electronic trading tools. Business faculty will use the facility to research improved human performance in trading environments as well as create new financial service models.
The trading floor houses Tech's Quantitative and Computational Finance program.
In September 2018, Georgia Tech announced the beginning of the Tech Square Phase III initiative. The two-tower complex will add more than 400,000 gross square feet of space to Tech Square.
One of the two planned high-rises, Scheller Tower, will be a new resource for the Scheller College of Business that expands the college's footprint within Tech Square. The earliest the building will open is in 2022.
Degrees
Undergraduate
The College of Business offers a BS in Business Administration. U.S. News & World Report currently ranks the undergraduate program as number 19 out of the top 50 ranked programs.
U.S. News & World Report ranks the College No. 3 in Undergraduate Business Analytics and Management of Information Systems Programs, No. 4 in Undergraduate Quantitative Analysis Programs, No. 6 in Undergraduate Production and Operations Programs, and No. 8 in Undergraduate Supply Chain Management/Logistics Programs.
MBA Programs
Reputation & Rankings
The College of Business ranks No. 27 in U.S. News & World Report's 2021 Best Business Schools ranking of the nation's top full-time Master of Business Administration (MBA) programs. U.S. News & World Report currently ranks the MBA program No. 3 for Business Analytics, No. 7 for Information Systems, and No. 8 for Production/Operations Management.
The Evening MBA program ranks No. 19 among part-time MBA programs in the country according to the publication's rankings.
As part of the 2019-2020 Best Business Schools ranking, Bloomberg Businessweek ranks Scheller College No. 24 in the U.S. Georgia Tech Scheller graduates rank No. 3 in the world for innovation and creativity and No. 4 in the world for entrepreneurship. In ranking colleges with the most diverse students, Scheller ranks No. 9 in the world.
The Financial Times ranks the Scheller MBA program's career services No. 4 in the U.S. According to The Financial Times, Scheller's MBA program has a graduate placement rate of 96 percent within the first three months after graduation.
Scheller also ranks No. 3 in the U.S. and No. 9 in the world on the Corporate Knights Better World MBA Ranking list.
Curriculum and Format
Georgia Tech Scheller College of Business offers three MBA programs: Full-time, Evening, and Executive.
Full-time MBA
Scheller's Full-time MBA is a two-year degree consisting of one year of required courses and another year of mostly elective courses. The Full-time MBA program offers classes Monday through Friday and enrollment takes place once per year in August. Concentrations include Accounting, Finance, Information Technology Management, Law & Ethics, Marketing, Operations Management, Organizational Behavior, and Strategy and Innovation. Students can use their elective hours to take an immersive track and/or a practicum.
For the Full-time MBA program, the core courses are listed below:
Analytical Tools for Decision Support
Business Communications
Financial and Managerial Accounting I
Leadership Assessment Workshop
Leading People and Organizations
Legal and Ethical Considerations
Managerial Economics
Managing Information Resources
Marketing Management
Operations Management
Principles of Finance
Strategic Management
Evening (part-time) MBA
The Evening MBA program offers classes Monday through Thursday evening. The program enrolls twice per year, in January and August.
Scheller's Evening (part-time) MBA offers the same core curriculum and concentrations as the Full-time MBA program but allows students to take courses at their own pace with an average completion rate of 24 to 36 months.
Practicums
The Full-time and Evening MBA programs provide practicums in the following areas:
Business Analytics
Finance
Information Technology
International
Lean Six Sigma
Marketing
New Venture
Pro Bono Consulting
Quantitative and Computational Finance
Real Estate
Supply Chain Innovation
Sustainable Business Consulting
Executive MBA
Scheller's Executive (weekend) MBA is a 17-month program with classes on Fridays and Saturdays for professionals with five or more years of professional work experience. Students can select one of two customizable tracks: Global Business or Management of Technology. Courses by subject area include Accounting, Finance, Business Law and Ethics, Information Technology Management, Marketing, Operations Management, Organizational Behavior, and Strategy and Innovation. There are fourteen core courses as well as additional courses required for one of the two tracks.
For the Executive MBA program, the core courses are listed below:
Business Regulations
Cross-Cultural Communications
Data Analysis for Business
Ethical Decision Making
Financial and Managerial Accounting
Financial Management
Global Economics
International Business Negotiations
Information Systems
Leadership and Organizational Behavior
Manufacturing and Service Management
Marketing and Consumer Behavior
Strategic Management
Sustainable Business Strategies
Postgraduate
PhD in Management
Scheller College of Business’ PhD program is limited to full-time students who will complete their entire doctoral program before leaving the campus. There are seven concentrations: Accounting, Finance, Information Technology Management, Marketing, Operations Management, Organizational Behavior, and Strategy and Innovation.
Centers and Initiatives
Business Analytics Center
The Business Analytics Center (BAC) brings together industry leaders, Georgia Tech faculty, and business analytics students to support the business analytics programs offered by the college and Georgia Tech, including an MBA concentration in Analytics, an undergraduate Certificate in Analytics, and the one-year interdisciplinary Master of Science in Analytics at Georgia Tech.
Cecil B. Day Program for Business Ethics
Named for Cecil B. Day, founder of Days Inns of America, the program offers programming in business ethics and supports the Institute for Leadership and Social Impact's (ILSI) Ideas to Serve program, the Net Impact program, an annual speaker series, and a grant for faculty who incorporate ethics into their curriculum.
Center for International Business and Research
The Center for International Business and Research's (CIBER) mission is to ensure the long-term international economic competitiveness of the United States through the support of research, business education initiatives, and corporate outreach activities with forums, conferences and thought leadership. It is one of only 17 national resource centers of excellence in international business funded by the U.S. Department of Education.
Creative Destruction Lab
The Creative Destruction Lab (CDL) is an objectives-based program for massively scalable, seed-stage, science and technology-based ventures. It supports ventures innovating in commerce, consumer health, engineering, finance, logistics, public health, transportation, and more. The program's mentors provide assistance to technical founders who learn from other entrepreneurs. The TI:GER (Technology Innovation: Generating Economic Results) program is the educational arm of CDL-Atlanta.
Financial Analysis Lab
The Financial Analysis Lab conducts unbiased research on issues of financial reporting and analysis. Its focus is on issues of interest to a large segment of stock market participants. Depending on the issue, the lab may focus attention on individual companies, groups of companies, or segments of the market at large, and it produces a quarterly report of cash flow trends and their quarterly drivers.
Institute for Leadership and Social Impact
The Institute for Leadership and Social Impact (ILSI) is an interdisciplinary Center that promotes servant leadership and organizational practices through workshops, coursework, a major lecture series (Impact), the Ideas to Serve student social innovation competition, the Servant Leadership student fellows program, and the Leadership for Social Good study abroad program.
Jones MBA Career Center
Its services include developing a career strategy, identifying job search targets, and exploring MBA career options.
Program for Engineering Entrepreneurship
The Program for Engineering Entrepreneurship is a collaboration between the Scheller College of Business, the College of Engineering, and the Woodruff School of Mechanical Engineering. The program's focus includes a graduate-level curriculum leading to a Certificate in Engineering Entrepreneurship, and short courses on a variety of topics for faculty members, scientists, and engineers, academic conferences, and seminars.
Ray C. Anderson Center for Sustainable Business
The Ray C. Anderson Center for Sustainable Business, named in honor of Georgia Tech alumnus Ray C. Anderson, focuses on conducting research, educating students, and partnering with industry practitioners to support the development of sustainable business practices. The center works with faculty research initiatives, offers grants to students interested in business sustainability, and partners with other universities for the Global Social Venture Competition. It also helps support the Ideas to Serve and Carbon Reduction Challenge competitions.
Steven A. Denning Technology & Management Program
The Steven A. Denning Technology & Management (T&M) Program is open to all Georgia Tech undergraduate students, which includes the Scheller College of Business, the College of Computing, the College of Engineering, the College of Design, the Ivan Allen College of Liberal Arts, and the College of Sciences. Business and Engineering students who complete the program earn a minor in Engineering and Business. Computer Science and IT Management students earn a minor in Computing & Business. Students from all other colleges earn a minor in Technology & Business.
TI:GER® (Technology Innovation: Generating Economic Results)
The TI:GER® (Technology Innovation: Generating Economic Results) program is a 16-month transdisciplinary program that prepares students for careers in technology innovation. Students in the MBA program work with PhD students from Georgia Tech's College of Engineering, College of Computing, and College of Sciences on a project in innovative technology with a possibility for commercialization.
Notable College of Business Alumni
See also
List of United States business school rankings
List of business schools in the United States
List of Atlantic Coast Conference business schools
References
External links
Scheller College of Business
Article on the College of Business from the New Georgia Encyclopedia
Georgia Tech colleges and schools
Business schools in Georgia (U.S. state)
Educational institutions established in 1934
1934 establishments in Georgia (U.S. state) |
30386872 | https://en.wikipedia.org/wiki/CheiRank | CheiRank | The CheiRank is the eigenvector with the maximal real eigenvalue of the Google matrix constructed for a directed network with the directions of links inverted. It is similar to the PageRank vector, which is the maximal eigenvector of the Google matrix with the given initial direction of links and ranks the network nodes on average proportionally to the number of incoming links. Due to the inversion of link directions, the CheiRank ranks the network nodes on average proportionally to the number of outgoing links. Since each node belongs both to the CheiRank and PageRank vectors, the ranking of information flow on a directed network becomes two-dimensional.
Definition
For a given directed network the Google matrix G is constructed in the way described in the article Google matrix. The PageRank vector P is the right eigenvector of G with the maximal real eigenvalue λ = 1; it is discussed in the article PageRank. In a similar way the CheiRank vector P* is the eigenvector with the maximal real eigenvalue λ = 1 of the matrix G*, built in the same way as G but using the inverted direction of links in the initially given adjacency matrix. Both matrices G and G* belong to the class of Perron–Frobenius operators and, according to the Perron–Frobenius theorem, the CheiRank and PageRank eigenvectors have nonnegative components which can be interpreted as probabilities. Thus all nodes of the network can be ordered in decreasing probability order, with ranks K* and K for CheiRank and PageRank respectively. On average the PageRank probability P is proportional to the number of ingoing links and decays with rank as P ∝ 1/K^βin. For the World Wide Web (WWW) network the exponent βin = 1/(μin − 1) ≈ 0.9, where μin ≈ 2.1 is the exponent of the ingoing links distribution. In a similar way the CheiRank probability P* is on average proportional to the number of outgoing links and decays as P* ∝ 1/K*^βout with βout = 1/(μout − 1) ≈ 0.6, where μout ≈ 2.7 is the exponent of the outgoing links distribution of the WWW. The CheiRank was first introduced for the procedure call network of the Linux kernel software; the term itself was coined in Zhirov. While the PageRank highlights very well known and popular nodes, the CheiRank highlights very communicative nodes. Top PageRank and CheiRank nodes have a certain analogy to the authorities and hubs of the HITS algorithm, but HITS is query dependent while the rank probabilities P and P* classify all nodes of the network. Since each node belongs both to CheiRank and PageRank we obtain a two-dimensional ranking of network nodes. There had been early studies of PageRank in networks with inverted direction of links, but the properties of two-dimensional ranking had not been analyzed in detail.
Examples
An example of nodes distribution in the plane of PageRank and CheiRank is shown in Fig.1 for the procedure call network of Linux Kernel software.
The dependence of on for the network of hyperlink network of Wikipedia English articles is shown in Fig.2 from Zhirov. The distribution of these articles in the plane of PageRank and CheiRank is shown in Fig.3 from Zhirov. The difference between PageRank and CheiRank is clearly seen from the names of Wikipedia articles (2009) with highest rank. At the top of PageRank we have 1.United States, 2.United Kingdom, 3.France while for CheiRank we find 1.Portal:Contents/Outline of knowledge/Geography and places, 2.List of state leaders by year, 3.Portal:Contents/Index/Geography and places. Clearly PageRank selects first articles on a broadly known subject with a large number of ingoing links while CheiRank selects first highly communicative articles with many outgoing links. Since the articles are distributed in 2D they can be ranked in various ways corresponding to projection of 2D set on a line. The horizontal and vertical lines correspond to PageRank and CheiRank, 2DRank combines properties of CheiRank and PageRank as it is discussed in Zhirov. It gives top Wikipedia articles 1.India, 2.Singapore, 3.Pakistan.
The 2D ranking highlights the properties of Wikipedia articles in a new, rich and fruitful manner. According to the PageRank, the top 100 personalities described in Wikipedia articles belong to 5 main categories of activity: 58 (politics), 10 (religion), 17 (arts), 15 (science), 0 (sport); thus the importance of politicians is strongly overestimated. The CheiRank gives respectively 15, 1, 52, 16, 16, while for 2DRank one finds 24, 5, 62, 7, 2. This type of 2D ranking can find useful applications for various complex directed networks, including the WWW.
CheiRank and PageRank naturally appear for the world trade network, or international trade, where they are linked with the export and import flows for a given country, respectively.
Possibilities of development of two-dimensional search engines based on PageRank and CheiRank are considered. Directed networks can be characterized by the correlator between PageRank and CheiRank vectors: in certain networks this correlator is close to zero (e.g. Linux Kernel network) while other networks have large correlator values (e.g. Wikipedia or university networks).
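One common definition of this correlator in the CheiRank literature is κ = N Σi P(i)P*(i) − 1; treating that exact normalization as an assumption, a minimal sketch:

```python
import numpy as np

def correlator(p, p_star):
    """kappa = N * sum_i P(i) * P*(i) - 1.

    P and P* are the PageRank and CheiRank probability vectors (each
    summing to 1). kappa near 0 indicates uncorrelated rankings, as
    reported for the Linux kernel network; large kappa indicates strong
    correlation, as for Wikipedia or university networks.
    """
    n = len(p)
    return n * float(np.dot(p, p_star)) - 1.0

# Two uniform (hence trivially uncorrelated) vectors give kappa = 0
uniform = np.full(4, 0.25)
print(correlator(uniform, uniform))  # -> 0.0
```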
Simple network example
A simple example of the construction of the Google matrices G and G*, used for determination of the related PageRank and CheiRank vectors, is given below. The directed network example with 7 nodes is shown in Fig.4. The matrix S, built with the rules described in the article Google matrix, is shown in Fig.5; the related Google matrix is G = αS + (1 − α)/N, and the PageRank vector P is the right eigenvector of G (GP = λP) with the unit eigenvalue (λ = 1). In a similar way, to determine the CheiRank eigenvector P*, all directions of links in Fig.4 are inverted, then the matrix S* is built according to the same rules applied to the network with inverted link directions, as shown in Fig.6. The related Google matrix is G* = αS* + (1 − α)/N, and the CheiRank vector P* is the right eigenvector of G* (G*P* = λP*) with the unit eigenvalue (λ = 1). Here α = 0.85 is the damping factor taken at its usual value.
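The construction above can be sketched numerically. The sketch below is illustrative only: the 4-node adjacency matrix and the column-stochastic link convention are assumptions, not the 7-node network of Fig.4.

```python
import numpy as np

def google_matrix(adj, alpha=0.85):
    """G = alpha*S + (1 - alpha)/N for a column-stochastic S.

    adj[i, j] = 1 encodes a link j -> i; columns of all zeros
    (dangling nodes) are replaced by uniform columns 1/N.
    """
    n = adj.shape[0]
    s = adj.astype(float)
    for j in range(n):
        total = s[:, j].sum()
        s[:, j] = 1.0 / n if total == 0 else s[:, j] / total
    return alpha * s + (1.0 - alpha) / n

def rank_vector(g, iters=500):
    """Power iteration for the right eigenvector with eigenvalue 1."""
    p = np.full(g.shape[0], 1.0 / g.shape[0])
    for _ in range(iters):
        p = g @ p
    return p / p.sum()

# Toy 4-node directed network (adj[i, j] = 1 iff there is a link j -> i)
adj = np.array([[0, 1, 1, 0],
                [1, 0, 0, 1],
                [0, 1, 0, 0],
                [1, 0, 1, 0]])

pagerank = rank_vector(google_matrix(adj))      # links as given
cheirank = rank_vector(google_matrix(adj.T))    # all link directions inverted
```

With α < 1 the matrix is strictly positive, so by the Perron–Frobenius theorem the power iteration converges to the unique probability eigenvector with eigenvalue 1.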
See also
PageRank, HITS algorithm, Google matrix
Markov chains, Transfer operator, Perron–Frobenius theorem
Information retrieval
Web search engines
References
External links
Two-dimensional ranking of Wikipedia articles
World trade: CheiRank versus PageRank
Towards two-dimensional search engines
Top people of Wikipedia
Link analysis |
225567 | https://en.wikipedia.org/wiki/Libranet | Libranet | Libranet was an operating system based on Debian.
The last version (as of April 25, 2005) released is Libranet 3.0, which cost about $90 in US dollars for new users, or $65 for existing Libranet users. The previous version, Libranet 2.8.1, became free to download.
Development of Libranet has been discontinued.
History
The name comes from "Libra Computer Systems" (a company owned by the founder) and the fact that "libra.com" was taken.
The first release of Libranet was in 1999. Most Linux distributions of this time were very difficult to install, and were considered either for programmers or those who wanted a low cost server. Libranet attempted to put out and sell a distribution that was easy to install, and meant for desktop use. Corel likewise attempted this with Corel Linux, but abandoned this and refocused on software for Windows and Mac OS X operating systems. Libranet, however, continued, and developed some recognition for having a Linux distribution that was good for desktop users. Corel sold the rights to their Linux operating system to Xandros, which later released their own offering of a Linux desktop.
From 1999 to 2003, most Linux distributions with comparable desktop usability to Libranet were also priced similarly. This began to change, however, in 2004. Linux as a whole had advanced, and many distributions were now reasonably easy to install, with a relatively user-friendly desktop. Some distributions such as MEPIS were competitive and far less expensive. Others such as Knoppix were offered at no cost.
Libranet attempted to carve a niche as the user-friendly Linux distro, with extensive support (termed "up and running support"), which, of all the desktop distros, was the most compatible with the Debian release of the time (Woody). The support offered was truly extensive: Jon Danzig, the founder, would often personally answer people's inquiries. This helped make people who had chosen Libranet be even more loyal to it.
However, with the release of Debian Sarge in 2005, along with the emergence of Ubuntu (a Debian based distro offered at no cost, with the option to purchase support), Libranet received less attention. Debian itself revamped their installer making it easier to use (before this, a good reason to use a Debian-based distribution was that Debian's own installer was not user friendly).
Libranet released version 3.0, which received good reviews, but the market had changed for desktop distributions. Various other commercial vendors had released free versions of their distributions, such as SUSE, which released OpenSUSE, and Red Hat, which released Fedora. Whereas Libranet sold their distribution, and then gave free extensive support, many distributors chose to give away their distribution and sell support, and/or sell proprietary software enhancements.
Jon Danzig, the founder of Libranet, died on June 1, 2005. His son Tal had taken over the leadership of the development team, but then stated that he would stop maintaining Libranet. Daniel de Kok, the other remaining employee, went on to become a developer for CentOS.
Cancellation of OpenLibranet 3.1
There were many users who were interested in seeing Libranet, and the Libranet Adminmenu software, continue to be available. Adminmenu was an operating system setup and configuration tool, unique in that it also contained a user-friendly kernel compiling tool. A team including Daniel de Kok therefore proposed to open-source Libranet and release OpenLibranet 3.1, but after failing to obtain definitive permission for this plan from Tal Danzig, the owner, the plan was dropped.
References
External links
Libranet's last goodbye Newsforge, June 9, 2006
A Review of Libranet 3.0
Interview with Libranet Founder
Libranet Basics
LinuxPlanet Review of release 1.2.2
Debian-based distributions
Discontinued Linux distributions
Linux distributions |
1078075 | https://en.wikipedia.org/wiki/WavPack | WavPack | WavPack is a free and open-source lossless audio compression format and an application implementing the format. In addition to conventional lossless compression similar to FLAC, it is unique in also supporting a hybrid lossy/lossless mode. It can compress a wide variety of source formats, including several variants of PCM as well as the DSD format used on SACDs, and it supports surround audio.
Features
WavPack compression can compress (and restore) 8-, 16-, 24-, and 32-bit fixed-point, and 32-bit floating point PCM audio files in the .WAV file format. It also supports surround sound streams and high frequency sampling rates. Like other lossless compression schemes, the data reduction rate varies with the source, but it is generally between 30% and 70% for typical popular music and somewhat better than that for classical music and other sources with greater dynamic range.
Hybrid mode
WavPack also incorporates a "hybrid" mode which still provides the features of lossless compression, but it creates two files: a relatively small, high-quality, lossy file (.wv) that can be used by itself; and a "correction" file (.wvc) that, when combined with the lossy file, provides full lossless restoration. This allows the use of lossy and lossless codecs together.
A similar "hybrid" feature is also offered by OptimFROG DualStream, MPEG-4 SLS and DTS-HD Master Audio.
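As a rough sketch of the idea (not WavPack's actual bitstream layout), a prediction residual can be split into a coarse lossy part and a correction remainder whose recombination is exact; the split point `shift` is an arbitrary illustrative choice:

```python
def hybrid_split(residual, shift=4):
    """Split a residual into a coarse (lossy) value and a remainder.

    The coarse value stands in for what the .wv file keeps, the remainder
    for the .wvc correction data.
    """
    lossy = residual >> shift                  # arithmetic shift, works for negatives
    correction = residual - (lossy << shift)   # always in [0, 2**shift)
    return lossy, correction

def hybrid_join(lossy, correction, shift=4):
    """Lossless reconstruction: coarse part plus correction."""
    return (lossy << shift) + correction

# Round-trip is exact for positive, negative and zero residuals
for r in (1234, -577, 0, 15, -16):
    assert hybrid_join(*hybrid_split(r)) == r
```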
Summary
Open-source, released under a BSD-like license
Multiplatform
Error robustness
Fast encoding speed
Higher compression ratios than other widely used (FLAC/ALAC) open source lossless audio codecs
Streaming support
Supports multichannel audio and high resolutions
Native support in WavPack 5.x for compressing Direct Stream Digital without converting the source file to PCM.
Hybrid/lossy mode
Hardware support (provided by Rockbox firmware)
Metadata support (ID3, APE tags) (APE tag is the preferred format.)
Supports RIFF chunks
ReplayGain compatible
Ability to create self extracting files for the Win32 platform
Supports 32-bit floating point streams
Supports embedded CUE sheets
Includes MD5 hashes for quick integrity checking
Can encode in both symmetrical and asymmetrical (slower encoding to speed up decoding) modes
History
David Bryant started development on WavPack in mid-1998 with the release of version 1.0 (1998-08-15). This first version compressed and decompressed audio losslessly, and it already featured one of the best efficiency vs. speed ratios among lossless encoders.
Very soon after the release of version 1.0, v. 2.0 (2 September 1998) was released, featuring lossy encoding (using only quantization of prediction residue for data reduction - no psychoacoustic masking model was applied to the stream).
In 1999, version 3.0 (12 September 1999) was released, with a new "fast mode" (albeit with reduced compression ratio), compression of raw (headerless) PCM audio files, and error detection using a 32-bit cyclic redundancy check.
A feature added in late 3.x versions is the "hybrid" mode, where the encoder generates a lossy file and a correction file such that the two together can be decompressed back to a PCM stream identical to the original. A "roadmap" is also published by the author, containing possible hints on future development.
Support
Software
Some software supports the format natively (like DeaDBeeF, foobar2000, and Jack! The Knife), while others require plugins. The official WavPack website offers plugins for Winamp, Nero Burning ROM, MediaChest 2.1, and several other applications, as well as a DirectShow filter. dBpoweramp CD-Ripper, by the author of foobar2000, as well as foobar2000 itself, and Asunder allow ripping Audio CDs directly into Wavpack files.
Linux support is available with a native port.
FFmpeg has a native WavPack encoder, which may be combined with software like GNU parallel to use multiple CPU cores to quickly transcode other lossless formats into WavPack, and from WavPack to any format that FFmpeg supports, without the need for additional software.
Hardware
Native support:
Cowon A3 PMP supports WavPack out of the box.
Non-native support:
Apple iPod range of music players do not support WavPack out of the box, but can through open source Rockbox firmware.
iriver H100 series, can through open source Rockbox firmware.
iriver H300 series, can through open source Rockbox firmware.
Android smartphones and tablets with the installation of third party media player software.
Chrome OS devices using media player software installed in the Linux subsystem or the Android Play Store.
The WavPack website also includes a plugin that allows support for the format on the Roku PhotoBridge HD.
Technology
To ensure high-speed operation, WavPack uses a predictor that is implemented entirely in integer math. In its "fast" mode the prediction is simply the arithmetic extrapolation of the previous two samples. For example, if the previous two samples were −10 and 20, then the prediction would be 50. For the default mode a simple adaptive factor is added to weigh the influence of the earlier sample on the prediction. In our example the resulting prediction could then vary between 20 for no influence to 50 for full influence. This weight factor is constantly updated based on the audio data's changing spectral characteristics.
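The fast-mode extrapolation and the weighted default mode described above can be sketched as follows. The numbers reproduce the example from the text, but the scaled-slope formula for the default mode is an illustrative assumption, and WavPack's actual adaptive weight update is not shown:

```python
def fast_predict(prev2, prev1):
    """'Fast' mode: pure linear extrapolation of the two previous samples."""
    return 2 * prev1 - prev2

# The example from the text: previous samples -10 and 20 predict 50
assert fast_predict(-10, 20) == 50

def default_predict(prev2, prev1, weight):
    """Default-mode sketch: the slope term is scaled by an adaptive weight.

    For the example above, weight = 0 gives prediction 20 (no influence of
    the earlier sample) and weight = 1 gives 50 (full influence). The weight
    itself would be updated continuously from the audio's changing spectral
    characteristics; that update rule is not modeled here.
    """
    return prev1 + round(weight * (prev1 - prev2))

assert default_predict(-10, 20, 0.0) == 20
assert default_predict(-10, 20, 1.0) == 50
```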
The prediction generated is then subtracted from the actual sample to be encoded to generate the error value. In mono mode this value is sent directly to the coder. However, stereo signals tend to have some correlation between the two channels that can be further exploited. Therefore, two error values are calculated that represent the difference and average of the left and right error values. In the "fast" mode of operation these two new values are simply sent to the coder instead of the left and right values. In the default mode, the difference value is always sent to the coder along with one of the other three values (average, left, or right). An adaptive algorithm continuously determines the most efficient of the three to send based on the changing balance of the channels.
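The difference/average pair described above is losslessly invertible because the low bit dropped from the truncated average can be recovered from the difference's parity. A sketch of this standard trick follows; WavPack's actual bit layout may differ:

```python
def to_diff_avg(left, right):
    """Stereo decorrelation: difference and truncated integer average."""
    return left - right, (left + right) >> 1

def from_diff_avg(diff, avg):
    """Exact inverse: (left + right) and (left - right) share the same parity."""
    total = 2 * avg + (diff & 1)       # recover left + right exactly
    return (total + diff) // 2, (total - diff) // 2

# Round-trip is exact, including negative samples
for l, r in [(-10, 20), (7, 7), (-3, -8), (32767, -32768)]:
    assert from_diff_avg(*to_diff_avg(l, r)) == (l, r)
```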
Instead of Rice coding, a special data encoder for WavPack is used. Rice coding is the optimal bit coding for this type of data, and WavPack's encoder is less efficient, but only by about 0.15 bits/sample (or less than 1% for 16-bit data). However, there are some advantages in exchange; the first one is that WavPack's encoder does not require the data to be buffered ahead of encoding; instead it converts each sample directly to bitcodes. This is more computationally efficient, and it is better in some applications where coding delay is critical. The second advantage is that it is easily adaptable to lossy encoding, since all significant bits (except the implied "one" MSB) are transmitted directly. In this way it is possible to only transmit, for example, the 3 most significant bits (with sign) of each sample. In fact, it is possible to transmit only the sign and implied MSB for each sample with an average of only 3.65 bits/sample.
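For comparison, the Rice coding mentioned above as the optimal baseline can be sketched as follows. Mapping signed residuals to non-negative integers via zigzag, and the fixed parameter k = 2, are illustrative choices; real codecs pick k per block:

```python
def zigzag(n):
    """Map signed residuals to non-negative ints: 0, -1, 1, -2, 2 -> 0, 1, 2, 3, 4."""
    return 2 * n if n >= 0 else -2 * n - 1

def rice_encode(n, k=2):
    """Rice code of a signed residual: unary quotient, '0' stop bit, k-bit remainder."""
    u = zigzag(n)
    quotient, remainder = u >> k, u & ((1 << k) - 1)
    return "1" * quotient + "0" + format(remainder, "b").zfill(k)

print(rice_encode(0))   # -> "000"
print(rice_encode(3))   # -> "1010"  (zigzag 6: quotient 1, remainder 2)
```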
This coding scheme is used to implement the "lossy" mode of WavPack. In the "fast" mode the output of the non-adaptive decorrelator is simply rounded to the nearest codable value for the specified number of bits. In the default mode the adaptive decorrelator is used (which reduces the average noise about 1 dB) and both the current and the next sample are considered in choosing the better of the two available codes (which reduces noise another 1 dB).
No floating-point arithmetic is used in WavPack's data path because, according to the author, integer operations are less susceptible to subtle chip-to-chip variations that could corrupt the lossless nature of the compression (the Pentium floating point bug being an example). It is possible that a lossless compressor that used floating-point math could generate different output when running on that faulty Pentium. Even disregarding actual bugs, floating-point math is complicated enough that there could be subtle differences between "correct" implementations that could cause trouble for this type of application. A 32-bit error detection code is included in the generated streams to maintain user confidence in the integrity of WavPack's compression.
WavPack source code is portable, and has been compiled on several Unix and Unix-like operating systems (Linux, Mac OS X, Solaris, FreeBSD, OpenBSD, NetBSD, Compaq Tru64, HP-UX...) as well as Windows, DOS, Palm OS, and OpenVMS. It works on many architectures, including x86, ARM, PowerPC, AMD64, IA-64, SPARC, Alpha, PA-RISC, MIPS and Motorola 68k.
A cut-down version of WavPack was developed for the Texas Instruments TMS320 series Digital Signal Processor. This was aimed predominantly at encouraging manufacturers to incorporate WavPack compression (and de-compression) into portable memory audio recorders. This version supported features that were applicable only to embedded applications (stream compression in real-time, selectable compression rate) and dropped off features that only applied to full computer systems (self extraction, high compression modes, 32-bit floats). The TMS320 series DSPs are native integer devices, and support WavPack well. Some 'special' features of the full WavPack software were included (ability to generate a correction 'file' (stream) for example) and others were excluded. The port was based on version 4.
WavPack support was added to WinZip starting with version 11.0 beta, released in October 2006. This extension to the ZIP file format was included by PKWARE, the maintainers of the format, in the official description file starting with version 6.3.2, released on 28 September 2007.
See also
Comparison of audio formats
MPEG-4 SLS
FLAC
TTA
Monkey's Audio
Meridian Lossless Packing
References
External links
Official website
WavPack forum at Hydrogenaudio Forums
Historical versions at ReallyRareWares
A comparison of several Lossless Audio encoders at Hydrogenaudio Wiki.
WavPack on MultimediaWiki
WavPack frontend
Flash WavPack player
Lossless audio codecs
Computer file formats
Free audio software
Free audio codecs
Open formats
Software using the BSD license |
1557022 | https://en.wikipedia.org/wiki/San%20Francisco%20Zoo | San Francisco Zoo | The San Francisco Zoo is a zoo located in the southwestern corner of San Francisco, California, between Lake Merced and the Pacific Ocean along the Great Highway. The SF Zoo is a public institution, managed by the non-profit San Francisco Zoological Society, a 501(c)(3) organization. As of 2016, the zoo housed more than one thousand individual animals, representing more than 250 species. It is noted as the birthplace of Koko the gorilla, and, since 1974, the home of Elly, the oldest black rhinoceros in North America.
The zoo has two entrances: one on the north side across Sloat Boulevard, one block south of the Muni Metro L Taraval line, and the main entrance to the west, on the ocean side.
History
Originally named the Fleishhacker Zoo after its founder, banker and San Francisco Parks Commission president Herbert Fleishhacker, planning for construction began in 1929 on the site adjacent to what was once the largest swimming pool in the United States, the Fleishhacker Pool. The area was also already home to a children’s playground, an original (circa 1921) Michael Dentzel/Marcus Illions carousel, and the Mother’s Building, a haven for women and their children. Most of the exhibits were populated with animals transferred from Golden Gate Park, including two zebras, a cape buffalo, five rhesus monkeys, two spider monkeys, and three elephants (Virginia, Marjorie, and Babe).
The first exhibits built in the 1930s cost $3.5 million, which included Monkey Island, Lion House, Elephant House, a small mammal grotto, an aviary, and bear grottos. These spacious, moated enclosures were among the first bar-less exhibits in the country. In 1955, a local San Francisco newspaper purchased Pennie, a baby female Asian elephant, and donated her to the zoo after many children donated their pennies, nickels, and dimes for her purchase.
Over the next forty years, the Zoological Society became a powerful fundraising source for the San Francisco Zoo, just as Fleishhacker had hoped when he envisioned: "…a Zoological Society similar to those established in other large cities. The Zoological Society will aid the Parks Commission in the acquisition of rare animals and in the operation of the zoo." True to its charter, the Society immediately exerted its influence on the zoo, obtaining more than 1,300 annual memberships in its first ten years (nearly 25,000 today). It also funded projects like the renovation of the Children’s Zoo in 1964, development of the African Scene in 1967, the purchase of medical equipment for the new zoo Hospital in 1975, and the establishment of the Avian Conservation Center in 1978.
In November 2004, Tinkerbelle, San Francisco Zoo's last Asian elephant, was moved to ARK 2000, a sanctuary run by the Performing Animal Welfare Society (PAWS) located in the Sierra Nevada foothills. She was later joined in March 2005 by the African elephant Lulu, the last elephant on display at the zoo. The moves followed the highly publicized deaths of thirty-eight-year-old Calle in March 2004, and forty-three-year-old Maybelle the following month.
In early 2006, the SF Zoo announced its offer to name a soon-to-hatch American bald eagle after comedian Stephen Colbert. The publicity and goodwill garnered from coverage of the event on the Colbert Report was a windfall for the zoo and the city of San Francisco. Stephen Jr. was born on April 17, 2006.
Exhibit renovations
Otter River (1994) featuring North American river otters
Feline Conservation Center (1994) housing three species of small cats, including the snow leopard, ocelot, and Malayan fishing cats
Spectacled bear exhibit renovation (1994)
Lion House outdoor enclosures (1994)
Eagle Island renovation (1995) providing a home for Sureshot, an injured (and non-releasable) bald eagle
Australian WalkAbout (1995) new space for red kangaroos and emus
Flamingo Lake renovation (1995)
Monkey Island demolition (1995)
Hippopotamus exhibit renovation (1995)
Warthog exhibit (1996)
Billabong (1996)
Aviary renovation (1996)
Ring-tailed lemur exhibit renovation (1996)
Children’s Zoo entrance (1996)
Kodiak bear exhibit (1996)
Avian Conservation Center (1997)
African lion cub exhibit (1997)
Aye-aye Forest (1997)
Asian elephant exhibit renovations (1997 and 1999)
Rainbow Landing (now Lorikeet Landing) (1998)
Outdoor aviary demolition (1998)
Restoration of Little Puffer (miniature railroad) (1998)
Primate Discovery Center terrace exhibit renovation (1998)
Children’s Zoo renovation (1999)
Puente al Sur (1999) now houses giant anteaters, mountain tapirs, and capybara
Infrastructure replacement (1999)
Aviary renovation (2000) depicts a South American tropical forest, complete with birds, caiman, and an anaconda
Seal pool/bear exhibits (2000)
Connie and Bob Lurie Education Center (2001)
Koret Animal Resource Center (2001)
Expanded Children’s Zoo and Family Farm (2001)
Wetlands habitat (2001)
Cassowary Exhibit (2001) features double-wattled cassowaries, one of the world's largest bird species
Lipman Family Lemur Forest (2002) houses five species of Madagascan primates in an outdoor forest
Friend and Taube Entry Village (2002)
Leaping Lemur Café (2002)
Split Mound artwork by McCarren/Fine (2002)
Bronze lion sculptures by Gwynn Murrill (2002)
Zoo Street and parking (2002)
Dentzel Carousel (2002)
African Savanna (2004) features giraffe, zebra, kudu, ostrich and other African wildlife roaming together in a lush, 3-acre (1.2 ha) habitat.
African Savanna Giraffe Feedings (2006)
Black swan exhibit (2006)
Binnowee Landing and Feeding (formerly Lorikeet Landing) (2006)
Kunekune pig exhibit at the Family Farm (formerly the miniature pig exhibit) (2006)
Hearst Grizzly Gulch exhibit (opened June 14, 2007)
Big Cat Exhibit Renovations (January 2008)
Hippopotamus and Rhinoceros exhibits (the 2 hippos, Puddles and Cuddles, died during renovation) (2007–2009)
Little Puffer restoration (2009)
South American Tropical Rainforest Aviary asbestos removal (2009–2010)
Fishing cat exhibit (2010)
Animals and exhibits
Indian peafowl roam the zoo grounds freely and are acknowledged officially on the zoo's website.
African Region
African Savannah
Grey crowned crane
Black crowned crane
Plains zebra
Greater kudu
Common ostrich
Reticulated giraffe
African Aviary
Hadada ibis
Hamerkop
Northern bald ibis
Blue-bellied roller
Long-tailed glossy starling
Gorilla Preserve
Western gorilla
Primate Discovery Center
François' langur
Mandrill
Squirrel monkey
Bald eagle
Colobus monkey
Pied tamarin
Black howler monkey
Siamang
Lemur Forest
Black-and-white ruffed lemur
Blue-eyed black lemur
Crowned lemur
Red-bellied lemur
Red-fronted lemur
Red ruffed lemur
Ring-tailed lemur
Coquerel's sifaka
Great Ape Passage
Chimpanzee
Bornean orangutan
Cat Kingdom
Indian rhinoceros
Black rhinoceros
Magellanic penguin
Lion
Bobcat
Fishing cat
Snow leopard
Sumatran tiger
Bongo
North American river otter
Aldabra giant tortoise
Komodo dragon
Wolverine
Siberian tiger
Outback Trail
Koala
Red kangaroo
Common wallaroo
Emu
Southern cassowary
South America
Linnaeus's two-toed sloth
Black-necked swan
Black swan
Capybara
Giant anteater
Greater rhea
Red-footed tortoise
Northern caiman lizard
Plumed basilisk
Boa constrictor
Green anaconda
Emerald tree boa
Guanaco
Blue-throated piping guan
Blue-winged teal
Blue-headed macaw
Curl-crested aracari
Great curassow
Crested oropendola
White-faced whistling duck
Ruddy duck
Red-and-green macaw
Blue poison dart frog
Golfodulcean poison frog
Waxy monkey tree frog
Bear Country
American black bear
Grizzly bear
Chacoan peccary
Mexican wolf
American white pelican
Pink-backed pelican
Exploration Zone
Meerkat
Black-tailed prairie dog
Red-rumped agouti
Red panda
Rosy-faced lovebird
Spectacled owl
Safety incidents and animal deaths
2007 tiger attacks
On December 22, 2006, Tatiana, the 242-pound Siberian tiger, attacked zookeeper Lori Komejan, causing the keeper to be hospitalized for several weeks with lacerated limbs and shock. The Lion House was closed for ten months as a result. California's Division of Occupational Safety and Health found the zoo liable for the keeper's injuries, fined the zoo, and ordered safety improvements.
On December 25, 2007, the same tiger escaped from her grotto and attacked three zoo visitors after being taunted and pelted with sticks and pine cones by the visitors. Carlos Sousa, 17, of San Jose, California, was killed at the scene, while the two others were mauled but survived. The tiger was shot and killed by police while hiding in the landscaping after the attack. Three other tigers who shared Tatiana's grotto did not escape. Tatiana arrived at the San Francisco Zoo from the Denver Zoo in 2005, in hopes that she would mate. (This "Tatiana" is not the same as the one successfully breeding in the Toronto Zoo.) According to the Association of Zoos and Aquariums, the attack was the first visitor fatality due to an animal escape at a member zoo in the history of the organization.
Other incidents
In October 2020, a 30-year-old man was arrested after stealing an endangered ring-tailed lemur named Maki. He was charged in July 2021 with a violation of the Endangered Species Act, facing up to $50,000 in fines and as much as one year in prison. Maki was found at a playground in Daly City the day after he was taken and was returned to the zoo.
Conservation
Two black bears were rescued as orphans in Alaska. The male was found on the edge of town near Valdez in May 2017, and the female cub was found near Juneau in June 2017. Both cubs were determined by the Alaska Department of Fish and Game to be motherless and were brought to the Alaska Zoo and rehabilitated back to health. In 2017, the Alaska Zoo had more orphaned bear cubs than ever before, due to the repeal of bear hunting regulations by the Trump administration, which allowed the hunting of hibernating bears in their dens. The two bears were brought to the San Francisco Zoo in 2017, and a previously empty habitat was repurposed to host them.
The zoo housed Henry, a 10-year-old blind California sea lion who was found stranded on a beach in Humboldt County in 2010. He was brought to the San Francisco Zoo in 2012, where veterinarians treated him for his blindness.
Species survival projects
The San Francisco Zoo participates in Species Survival Plans, conservation programs sponsored by the Association of Zoos and Aquariums. The program began in 1981 for selected species in North American zoos and aquariums, where the breeding of each species is managed to maintain healthy, self-sustaining, genetically diverse and demographically stable populations. The zoo participates in more than 30 SSP programs, working to conserve species ranging from Madagascan radiated tortoises and reticulated giraffes to black rhinos and gorillas.
See also
Citizens Lobbying for Animals in Zoos
References
External links
Zoos in California
Parks in San Francisco
Sunset District, San Francisco
Insectariums
Landmarks in San Francisco
Urban public parks
Zoos established in 1929
1929 establishments in California
Tourist attractions in San Francisco |
18158263 | https://en.wikipedia.org/wiki/Optics%20Software%20for%20Layout%20and%20Optimization | Optics Software for Layout and Optimization | Optics Software for Layout and Optimization (OSLO) is an optical design program originally developed at the University of Rochester in the 1970s. The first commercial version was produced in 1976 by Sinclair Optics. Since then, OSLO has been rewritten several times as computer technology has advanced. In 1993, Sinclair Optics acquired the GENII program for optical design, and many of the features of GENII are now included in OSLO. Lambda Research Corporation (Littleton MA) purchased the program from Sinclair Optics in 2001.
The OSLO software is used by scientists and engineers to design lenses, reflectors, optical instruments, laser collimators, and illumination systems. It is also used for simulation and analysis of optical systems using both geometrical and physical optics. In addition to optical design and analysis, OSLO provides a complete technical software development system including interactive graphics, math, and database libraries.
Applications
OSLO provides an integrated software environment that supports the complete contemporary optical design process. More than a lens design program, OSLO provides advanced tools for designing medical instrumentation, illumination systems and telecommunications equipment, to name just a few typical applications. OSLO has been used in a multitude of optical designs, including holographic systems, anastigmatic telescopes, gradient index optics, off-axis refractive/diffractive telescopes, the James Webb Space Telescope, aspheric lenses, interferometers, and time-varying designs.
Capabilities
OSLO is primarily used in the lens design process to determine the optimal sizes and shapes of the components in optical systems. OSLO has the capability of modeling a wide range of reflective, refractive and diffractive components. In addition, OSLO is used to simulate and analyze the performance of optical systems. OSLO's CCL (Compiled Command Language), which is a subset of the C programming language, can be used to develop specialized optical and lens design software tools for modeling, testing, and tolerancing optical systems.
OSLO has many unique features, for instance slider wheels. This feature allows users to attach up to 32 graphical sliders providing callbacks to default or user-supplied routines that perform evaluation or even full optimization iterations when a slider is moved. Some examples of the use of these slider wheels to design telescopes are provided by Howard.
Compatibility
OSLO works with other software products using a DDE (Dynamic Data Exchange) client/server interface. This enables the program to work with products such as MATLAB to create a multi-disciplinary environment; such an environment was used to design and analyze the Thirty Meter Telescope (TMT).
Editions
OSLO is available in one educational and one commercial edition.
Free educational product
• OSLO EDU
OSLO EDU can be downloaded from the Lambda Research Corporation web site.
The OSLO Optics Reference, which can be downloaded as a PDF, provides a self-contained introductory course in optical design.
Commercial product
• OSLO Premium
See also
Optical lens design
References
External links
Lambda Research Website
Optical software |
1227761 | https://en.wikipedia.org/wiki/Econet | Econet | Econet was Acorn Computers's low-cost local area network system, intended for use by schools and small businesses. It was widely used in those areas, and was supported by a large number of different computer and server systems produced both by Acorn and by other companies.
Econet software was later mostly superseded by Acorn Universal Networking (AUN), though some suppliers were still offering bridging kits to interconnect old and new networks. AUN was in turn superseded by the Acorn Access+ software.
Implementation history
Econet was specified in 1980 and first developed for the Acorn Atom and Acorn System 2/3/4 computers in 1981. Also in that year, the BBC Microcomputer was released, initially with provision for floppy disc and Econet interface ports but without the necessary supporting ICs fitted; these could optionally be added in a post-sale upgrade.
In 1982, the Tasmania Department of Education requested a tender for the supply of personal computers to their schools. Earlier that year Barson Computers, Acorn's Australian computer distributor, had released the BBC Microcomputer with floppy disc storage as part of a bundle. Acorn's Hermann Hauser and Chris Curry agreed to allow it to be also offered with Econet fitted, as they had previously done with the disc interface. As previously with the Disc Filing System, they stipulated that Barson would need to adapt the network filing system from the System 2 without assistance from Acorn. Barson's engineers applied a few modifications to fix bugs on the early BBC Micro motherboards, which were adopted by Acorn in later releases. With both floppy disc and networking available, the BBC Micro was approved for use in schools by all state and territory education authorities in Australia and New Zealand, and quickly overtook the Apple II as the computer of choice in private schools.
With no other supporting documentation available, the head of Barson's Acorn division, Rob Napier, published Networking with the BBC Microcomputer, the first reference documentation for Econet.
Econet was officially released for the BBC Micro in the UK in 1984, and it later became popular as a networking system for the Acorn Archimedes. Econet was eventually officially supported on all post-Atom Acorn machines, apart from the Electron (except in Australia and New Zealand where Barson Computers built their own Econet daughter board), along with 3rd party ISA cards for the IBM PC. The "Ecolink" ISA interface card for IBM-compatible PCs was available. It used Microsoft's MS-NET Redirector for MS-DOS to provide file and printer sharing via the NET USE command.
File, print and tape servers for the architecture were also supplied by third-party vendors such as SJ Research.
Econet was supported by Acorn MOS, RISC OS, RISC iX, FreeBSD and Linux operating systems.
Acorn once received an offer from Commodore International to license the technology, which it refused.
Subsequent development
With the falling prices and widespread adoption of IP networking in the early 1990s, Acorn Universal Networking (AUN), an implementation of Econet protocols and addressing over TCP/IP (in Acorn's words "an AUN network is a conformant TCP/IP network underneath the Econet-like veneer"), was developed to provide legacy support for Econet on Ethernet-connected machines.
Support for the Econet protocol and AUN was removed from the Linux kernel in 2012 from version 3.5, due to lack of use and privilege escalation vulnerabilities.
Supported systems
Econet was supported by a large number of different computer and server systems, produced both by Acorn and by other companies. As well as Acorn's MOS and RISC OS these also used other operating systems such as CP/M, DR-DOS, Unix, Linux or Microsoft Windows.
The Econet API includes an Econet_MachinePeek command, which can be used by software to determine if a machine is present on the network and its hardware platform. The machine-type codes which can be returned by that command
are a useful indication of the range of hardware that offered Econet as their primary networking function or as an option:
The manual includes an assembly language program to report a machine type, software version and release numbers.
An update to the list in volume 5A of the PRM
lists the following additions to the table above:
Physical and data-link layers
Econet is a five-wire bus network. One pair of wires is used for the clock, one pair for data, and one wire as a common ground. Signalling used the RS-422 5-volt differential standard, with one bit transferred per clock cycle. Unshielded cable was used for short lengths, and shielded cable for longer networks. The cable was terminated at each end to prevent reflections and to guarantee high logic levels when the bus was undriven.
The original connectors were five-pin circular 180° DIN types. On later 32-bit machines (notably the A3020 and A4000), the Econet connection was made via five of the pins on their 15-pin D-type Network port, which could also accept MAUs (Media Attachment Units) to allow other types of network to be connected via the same socket. This port looks similar to an AUI port, but is not compatible.
The Acorn A4 laptop used another implementation, in the form of a 5 pin mini-DIN.
Each Econet interface was controlled by a Motorola MC68B54 Advanced Data Link Controller (ADLC) chip, which handled electrical transmission/reception, frame checksumming and collision detection.
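The noise immunity of this differential signalling can be shown with a toy model (purely illustrative Python, unrelated to any Acorn software): each bit is carried as a voltage difference between the two wires of a pair, so the receiver compares the pair rather than measuring either wire against ground, and noise that shifts both wires equally leaves the data unchanged.

```python
# Toy model of RS-422 differential signalling: decode one bit per
# (D+, D-) voltage sample by comparing the two wires of the pair.

def decode(pair_samples):
    """Return the bit carried by each differential sample."""
    return [1 if plus > minus else 0 for plus, minus in pair_samples]

clean = [(5.0, 0.0), (0.0, 5.0), (5.0, 0.0)]      # bits 1, 0, 1
noisy = [(p + 0.7, m + 0.7) for p, m in clean]    # common-mode noise on both wires

# The difference between the wires is unaffected, so the data survives:
assert decode(clean) == decode(noisy) == [1, 0, 1]
```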
Network and transport layers
Econet used a connectionless transmission model, similar to UDP, with no checksumming or error correction at this layer. Each packet had a four-byte header consisting of:
The destination station number
The destination network number
The source station number
The source network number
A single data transmission consisted of four frames, each with a header as above:
The sending station sends a scout packet with a port number and a flag byte
The addressed receiving station returns a scout acknowledge to the sender
The sending station sends the data
The receiving station finishes with a final acknowledge, identical to the scout acknowledge
There was provision for broadcast transmissions, a single frame sent with its destination station and network numbers set to 255. There was also provision for promiscuous mode reception, termed wild receive
in the PRM, requested by listening for station and network numbers both being zero.
Technical details of packets and frames, the Econet API, and worked examples in ARM assembler and BBC BASIC
are given in the RISC OS Programmer's Reference Manual.
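The four-byte header and the four-frame handshake can be sketched in a few lines (an illustrative Python model; the function names, and the ordering of the flag and port bytes within the scout, are assumptions made for clarity, not the RISC OS API):

```python
def header(dst_stn, dst_net, src_stn, src_net):
    """The four-byte header carried by every Econet frame, in the order above."""
    return bytes([dst_stn, dst_net, src_stn, src_net])

def transmit(src, dst, port, flag, data):
    """Model the four frames of a single data transmission."""
    scout      = header(*dst, *src) + bytes([flag, port])  # 1: scout
    scout_ack  = header(*src, *dst)                        # 2: receiver acknowledges
    data_frame = header(*dst, *src) + data                 # 3: the data itself
    final_ack  = header(*src, *dst)                        # 4: identical to scout ack
    return [scout, scout_ack, data_frame, final_ack]

frames = transmit(src=(10, 0), dst=(254, 0), port=0x99, flag=0x80, data=b"hello")
assert len(frames) == 4
assert frames[0][:4] == bytes([254, 0, 10, 0])  # destination bytes come first
assert frames[1] == frames[3]                   # final ack same as scout ack
```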
Network services
At the time and in the markets for which Econet was developed, the main purpose of computer networking was to provide local area shared access to expensive hardware such as disc storage and printers. Acorn provided software for the BBC Micro to implement a file server, and optionally a printer server also. The original file server was very basic, essentially allowing limited access to a floppy disc over the network. The server software was further developed over many years, and Acorn and other manufacturers also produced dedicated Econet servers based on various technologies. So the servers available fell into roughly three categories:
The Acorn Fileserver, from Level 1 through Level 4, running on a standard computer (BBC, Master or Archimedes) and providing simple file and print services.
The dedicated Acorn Filestore units, running on dedicated hardware with higher capacity and more facilities.
Third party units (notably from SJ Research), again running on dedicated hardware and with their own implementations of the server software. These were compatible with the Acorn implementations, but with additional enhancements. (Notably, Oak Solutions collaborated with Acorn to develop the Level 4 Fileserver solution.)
The machine type numbers listed in the "Supported systems" section above are an indication of the range of hardware that was available or planned.
Additional services could be implemented, using the network API provided. Short utilities such as network chat programs were often published in magazines or distributed by sharing among users; these made use of the Econet protocols to work alongside the basic file and print services. Larger software packages (some of them commercial) were available that provided services such as Teletext and modem drivers.
Filestore
Acorn emphasised the Filestore in the late 1980s as a solution for small workgroups, offering a base unit with optional hard disk storage modules. The Filestore was a 65C102-based machine with 64 KB of RAM and 64 KB of ROM having Econet connectivity, two 3.5" floppy drives, a parallel printer interface, expansion bus, Econet clock and termination circuits, a real-time clock, and a quantity of battery-backed RAM. The battery-backed RAM was used to hold configuration and authentication details.
Initially, hard disk expansion was offered in the form of the E20 module providing a 3.5" 20 MB Winchester disk drive for the E01 base unit; later expansions in the form of the E40S and E60S provided 40 MB and 60 MB storage respectively for the E01S base unit. The "S" suffix reportedly signifies that the units are "stacking".
Fileserver
Acorn also offered the Level 1, Level 2 and Level 3 Fileserver solutions running on sufficiently upgraded BBC Micro or BBC Master computers. The Level 1 product offered access to existing Acorn DFS discs via a BBC Model B with Econet, disc interface and single or dual drives. Level 2 elevated the requirements to include a 6502 second processor but provided hierarchical storage with the number of files limited only by the amount of storage available, plus enhanced access controls, random access to data files, and authentication support. Level 3 introduced Winchester hard drive support.
With the release of the Level 4 Fileserver software providing a means to "extend the life of existing Acorn computers, such as the A310", allowing "any Archimedes computer to act as a fileserver", the emphasis had evidently shifted away from the Filestore and towards the Level 4 product at the start of the 1990s. A base Filestore E01S unit had a price inclusive of VAT of £1148.85 in February 1989, whereas an Archimedes 310 with 1 MB of RAM cost only £958.00 and an Econet module £56.35, illustrating the pricing considerations for potential buyers. By 1991, the Filestore was apparently no longer offered in Acorn's pricing (nor was the A310), but the Level 4 software was priced at £233.83 and an Archimedes 410/1 with 1 MB of RAM at £1049.33.
Unix system services
With the introduction of Acorn's Unix workstations running RISC iX, an envisaged application for Econet was the use of Master 128 computers acting as terminals to these Unix systems. Such systems also offered the capability to act as bridges between Econet and Ethernet networks, offering routing facilities to any Unix machines attached to the Econet, this being enabled by the IP-over-Econet support in RISC iX.
X.25 network services
An Econet X.25 gateway product was offered by Acorn, providing access to X.25 networks for computers on an Econet, with the X25 Terminal ROM and the existing Acorn DNFS ROM needing to be fitted to computers to enable access to X.25 services, with the Terminal ROM providing terminal emulation and file transfer functionality.
The gateway hardware consisted of the core functionality of a BBC Micro, this being the network service module connected to the Econet, combined with a Z80 second processor connected via the Tube interface, this acting as the gateway module and having 16 KB ROM and 32 KB of private RAM, augmented by another board with a Z80 processor with 32 KB of private RAM, this being the X25 module accessing the X.25 line. The gateway and X25 modules communicated via 16 KB of dual-ported shared RAM. The X25 module was designed by Symicron and ran the "proven" Symicron Telematics Software (STS).
Econet users would send network service requests to the gateway that would be forwarded by the STS functionality of the gateway to the X.25 network. Incoming X.25 calls would be forwarded by the STS functionality to the network service functionality and on to the Econet. Network service requests could employ X.25, Yellow Book Transport Service, and X.29 protocols.
Comparison with modern systems
While Econet was essentially specific to the Acorn range of computers, it does share common concepts with modern network file systems and protocols:
Remote Procedure Call – Almost all network operations were performed via a primitive remote procedure call system, either by passing a command line direct to the file server, or by passing an operating system call parameter block. The logon command *I AM was processed by passing the whole command line and reading back the result code.
Access Permissions – By the time of the Acorn Level 4 File Server and the SJ Research MDFS systems, Econet file servers had a full user name and password system with public and private attributes. These worked similar to Unix permissions without the group field. Files could be set to be readable and/or writable by everyone, just by the user, or both.
Subnetting – A basic Econet would be a single network segment, which is usually assumed to be network 0. With the use of one or more bridges, it is possible to have up to 127 Econet segments with up to 254 hosts each, for a maximum of 32,258 possible machines.
Broadcasting – By using host 255, an Econet host could send broadcast packets to all hosts on the network segment. Later implementations of the client software used this to automatically locate file and printer servers.
Printer Spooling – Later versions of the Econet printer server software used printer spooling to locally cache print jobs before sending to the remote printer. This ensured whole print jobs were sent to the printer in one go.
Ports – Because the various protocols (file and printer servers, bridge discovery, and so forth) used defined port numbers, it was possible for additional services such as BroadcastLoader, AppFS, a teletext server, and a range of chat programs and multiplayer games to coexist within the Econet system.
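The addressing arithmetic in the subnetting and broadcasting points above is easy to verify (illustrative Python; the numbers come straight from the text):

```python
# Up to 127 bridged segments, each with up to 254 addressable stations;
# station 255 is the broadcast address and station 0 is not assigned.

networks = 127
stations_per_network = 254
assert networks * stations_per_network == 32_258   # maximum possible machines

def is_broadcast(station):
    """Host 255 addresses every station on the segment."""
    return station == 255

assert is_broadcast(255) and not is_broadcast(254)
```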
See also
LocalTalk
List of device bandwidths
References
External links
The Econet Enthusiasts Area
Chris' Acorns
Econet documentation at 8-bit software
RISC OS Programmer's Reference Manuals the latest versions as of May 2014
Acorn Computers
Computer buses
Local area networks |
1800904 | https://en.wikipedia.org/wiki/PL-4 | PL-4 | PL-4 or POS-PHY Level 4 was the name of the interface that the interface SPI-4.2 is based on. It was proposed by PMC-Sierra to the Optical Internetworking Forum. The name means Packet Over SONET Physical layer level 4. PL-4 was developed by PMC-Sierra in conjunction with the Saturn Development Group.
Context
There are two broad categories of chip-to-chip interfaces. The first, exemplified by PCI-Express and HyperTransport, supports reads and writes of memory addresses. The second broad category carries user packets over one or more channels and is exemplified by the IEEE 802.3 family of Media Independent Interfaces and the Optical Internetworking Forum family of System Packet Interfaces. Of these last two, the family of System Packet Interfaces is optimized to carry user packets from many channels. The family of System Packet Interfaces is the most important packet-oriented, chip-to-chip interface family used between devices in Packet over SONET and Optical Transport Network systems, which are the principal protocols used to carry the internet between cities.
Applications
PL-4 was designed to be used in systems that support OC-192 SONET interfaces and is sometimes used in 10 Gigabit Ethernet based systems. A typical application of PL-4 (SPI-4.2) is to connect a framer device to a network processor. It has been widely adopted by the high speed networking marketplace.
Technical details
The interface consists of (per direction):
sixteen LVDS pairs for the data path
one LVDS pair for control
one LVDS pair for clock at half of the data rate
two FIFO status lines running at 1/8 of the data rate
one status clock
The clocking is source-synchronous and operates around 700 MHz. Implementations of SPI-4.2 (PL-4) have been produced that allow somewhat higher clock rates. This is important when overhead bytes are added to incoming packets.
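A back-of-the-envelope throughput check ties these figures to the OC-192 application mentioned earlier. This is an illustrative Python sketch under an assumption: we read the ~700 MHz figure as the per-pair data rate (the clock pair, per the signal list, runs at half that, with data transferred on both clock edges); the text does not pin this down.

```python
# Aggregate throughput of the 16-pair PL-4/SPI-4.2 data path, assuming
# ~700 Mbit/s per pair; the OC-192 figure is the standard SONET line rate.

data_pairs = 16
per_pair_mbps = 700                      # assumed per-pair data rate
aggregate_gbps = data_pairs * per_pair_mbps / 1000

oc192_gbps = 9.953                       # OC-192 line rate
headroom = aggregate_gbps - oc192_gbps

assert aggregate_gbps == 11.2
# The ~1.2 Gbit/s of headroom is what absorbs added overhead bytes,
# which is why implementations allowing higher clock rates matter.
assert headroom > 1.0
```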
Trivia
The name is an acronym of an acronym of an acronym as the P in PL stands for POS-PHY and the S in POS-PHY stands for SONET (Synchronous Optical Network).
History
PL-4 is a descendant of PL-3 which itself is a descendant of the ATM Forum UTOPIA family of standards. The UTOPIA standards were developed by the SATURN Development Group for use in ATM systems.
See also
PL-3
External links
OIF Interoperability Agreements
Network protocols |
641324 | https://en.wikipedia.org/wiki/Virtual%208086%20mode | Virtual 8086 mode | In the 80386 microprocessor and later, virtual 8086 mode (also called virtual real mode, V86-mode or VM86) allows the execution of real mode applications that are incapable of running directly in protected mode while the processor is running a protected mode operating system. It is a hardware virtualization technique that allowed multiple 8086 processors to be emulated by the 386 chip; it emerged from the painful experiences with the 80286 protected mode, which by itself was not suitable to run concurrent real mode applications well. John Crawford developed the Virtual Mode bit at the register set paving way to this environment.
VM86 mode uses a segmentation scheme identical to that of real mode (for compatibility reasons), creating 20-bit linear addresses in the same manner as 20-bit physical addresses are created in real mode; these addresses are, however, subject to protected mode's memory paging mechanism.
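The real-mode address formation that VM86 reproduces can be shown in a few lines (illustrative Python; note that a 386 does not wrap at 1 MiB, so the topmost segment:offset combinations reach just past it, which is the basis of the DOS high memory area). The linear address produced here is what the paging mechanism then translates.

```python
def linear(segment, offset):
    """segment:offset -> linear address, as formed in real/VM86 mode."""
    return (segment << 4) + offset

assert linear(0x0000, 0x7C00) == 0x07C00    # classic boot-sector load address
assert linear(0xB800, 0x0000) == 0xB8000    # colour text-mode video memory
assert linear(0xFFFF, 0xFFFF) == 0x10FFEF   # just past 1 MiB (no wrap on a 386)
```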
Overview
The virtual 8086 mode is a mode for a protected-mode task. Consequently, the processor can switch between VM86 and non-VM86 tasks, enabling the multitasking of legacy (DOS) applications.
To use virtual 8086 mode, an operating system sets up a virtual 8086 mode monitor, which is a program that manages the real-mode program and emulates or filters access to system hardware and software resources. The monitor must run at privilege level 0 and in protected mode. Only the 8086 program runs in VM86 mode and at privilege level 3. When the real-mode program attempts to do things like access certain I/O ports to use hardware devices or access certain regions in its memory space, the CPU traps these events and calls the V86 monitor, which examines what the real mode program is trying to do and either acts as a proxy to interface with the hardware, emulates the intended function the real-mode program was trying to access, or terminates the real-mode program if it is trying to do something that cannot either be allowed or be adequately supported (such as reboot the machine, set a video display into a mode that is not supported by the hardware and is not emulated, or write over operating system code).
The V86 monitor can also deny permission gently by emulating the failure of a requested operation—for example, it can make a disk drive always appear not ready when in fact it has not even checked the drive but simply will not permit the real-mode program to access it. Also, the V86 monitor can do things like map memory pages, intercept calls and interrupts, and preempt the real-mode program, allowing real-mode programs to be multitasked like protected-mode programs. By intercepting the hardware and software I/O of the real-mode program and tracking the state that the V86 program expects, it can allow multiple programs to share the same hardware without interfering with each other. So V86 mode provides a way for real-mode programs designed for a single-tasking environment (like DOS) to run concurrently in a multitasking environment.
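The monitor's trap-and-emulate policy described in the last two paragraphs can be summarised as a dispatch routine. This is a schematic Python sketch only: a real monitor runs at ring 0 and receives these events as processor faults, and the event names and I/O whitelist here are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Event:
    """One trapped action attempted by the VM86 task."""
    kind: str
    port: int = 0

IO_WHITELIST = {0x3F8}  # hypothetical: pass one serial port through to hardware

def handle_trap(event):
    """Decide how the monitor responds to a trapped event."""
    if event.kind == "io":
        # Proxy whitelisted ports; deny the rest gently by faking a failure
        # (e.g. a drive that always appears not ready).
        return "proxy" if event.port in IO_WHITELIST else "emulate-failure"
    if event.kind in ("reboot", "unsupported-video-mode"):
        return "terminate"       # cannot be allowed or adequately supported
    return "emulate"             # emulate the intended function

assert handle_trap(Event("io", 0x3F8)) == "proxy"
assert handle_trap(Event("io", 0x1F0)) == "emulate-failure"
assert handle_trap(Event("reboot")) == "terminate"
```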
Usage
It is used to execute certain DOS programs in
FlexOS 386 (since 1987), Concurrent DOS 386 (since 1987), Windows/386 2.10 (since 1987), DESQview 386 (since 1988), Windows 3.x (since 1990), Multiuser DOS (since 1991), Windows for Workgroups 3.1x (since 1992), OS/2 2.x (since 1992), 4690 OS (since 1993), and REAL/32 (since 1995) running in 386 Enhanced Mode, as well as in Windows 95, 98, 98 SE and ME through virtual DOS machines, in SCO UNIX through Merge, and in Linux through DOSEMU. (Other DOS programs, which use protected mode themselves, execute in user mode under the emulator.) NTVDM in x86 Windows NT-based operating systems also uses VM86 mode, but with very limited direct hardware access. Some boot loaders (e.g. GRUB) run in protected mode and execute BIOS interrupt calls in virtual 8086 mode.
Memory addressing and interrupts
The most common problem with running 8086 code from protected mode is memory addressing, which differs completely between protected mode and real mode.
As mentioned, when working in VM86 mode the segmentation mechanism is reconfigured to work just as it does in real mode, but the paging mechanism remains active and is transparent to the real-mode code; thus, memory protection is still applicable, and so is the isolation of the address space.
When an interrupt occurs (whether a hardware interrupt, a software interrupt, or the INT instruction), the processor leaves VM86 mode and returns to full protected mode to handle the interrupt. Also, before servicing the interrupt, the DS, ES, FS, and GS registers are pushed onto the new stack and zeroed.
Virtual-8086 mode extensions (VME)
The Pentium architecture added a number of enhancements to the virtual 8086 mode. However, Intel documented them only starting with the subsequent P6 microarchitecture; their more recent formal name is Virtual-8086 Mode Extensions, abbreviated VME (older documentation may use "Virtual 8086 mode enhancements" as the VME acronym expansion). Some later Intel 486 chips also support it. The enhancements mainly address the overhead of virtualizing the 8086, with a particular focus on (virtual) interrupts. Before the extensions were publicly documented in the P6 documentation, the official documentation referred to the famed Appendix H, which was omitted from the public documentation and shared only with selected partners under NDA.
Activating VME is done by setting bit number 0 (0x1 in value) of CR4. Because the VME interrupt speed-up enhancements were found useful for non-VM86 protected-mode tasks as well, they can also be enabled separately by setting only bit number 1 (0x2 in value), a feature called PVI (Protected Mode Virtual Interrupts). Detecting whether a processor supports VME (including PVI) is done using the CPUID instruction with an initial EAX value of 0x1, by testing the second bit (bit number 1, 0x2 in value) of the EDX register, which is set if the processor supports VME. In Linux, this latter bit is reported as the vme flag in the /proc/cpuinfo file, under the "flags" section.
In virtual 8086 mode, the basic idea is that when IOPL is less than 3, PUSHF/POPF/STI/CLI/INT/IRET instructions will treat the value of VIF in the real 32-bit EFLAGS register as the value of IF in the simulated 16-bit FLAGS register (32-bit PUSHFD/POPFD continues to GP fault). VIP will cause a GP fault on the setting of simulated IF, directing the OS to process any pending interrupts. PVI is the same idea but only affects CLI/STI instructions.
First generation AMD Ryzen CPUs have been found to feature a broken VME implementation. The second generation Ryzen (2000 series) has fixed this issue.
64-bit and VMX support
Virtual 8086 mode is not available in x86-64 long mode, although it is still present on x86-64 capable processors running in legacy mode.
Intel VT-x brings back the ability to run virtual 8086 mode from x86-64 long mode, but it has to be done by transitioning the (physical) processor to VMX root mode and launching a logical (virtual) processor itself running in virtual 8086 mode.
Westmere and later Intel processors usually can start the virtual processor directly in real mode using the "unrestricted guest" feature (which itself requires Extended Page Tables); this method removes the need to resort to the nested virtual 8086 mode simply to run the legacy BIOS for booting.
AMD-V can run virtual 8086 mode in guests too, but it can also simply run the guest in "paged real mode": the hypervisor creates an SVM (Secure Virtual Machine) guest with CR0.PE=0 but CR0.PG=1 (that is, with protected mode disabled but paging enabled), which is ordinarily impossible but is allowed for SVM guests if the host intercepts page faults.
See also
IA-32
x86 assembly language
Cunard Line

Cunard Line is a British cruise line based at Carnival House at Southampton, England, operated by Carnival UK and owned by Carnival Corporation & plc. Since 2011, Cunard and its three ships have been registered in Hamilton, Bermuda.
In 1839, Samuel Cunard was awarded the first British transatlantic steamship mail contract, and the next year formed the British and North American Royal Mail Steam-Packet Company in Glasgow with shipowner Sir George Burns together with Robert Napier, the famous Scottish steamship engine designer and builder, to operate the line's four pioneer paddle steamers on the Liverpool–Halifax–Boston route. For most of the next 30 years, Cunard held the Blue Riband for the fastest Atlantic voyage. However, in the 1870s Cunard fell behind its rivals, the White Star Line and the Inman Line. To meet this competition, in 1879 the firm was reorganised as the Cunard Steamship Company, Ltd, to raise capital.
In 1902, White Star joined the American-owned International Mercantile Marine Co. In response, the British Government provided Cunard with substantial loans and a subsidy to build two superliners needed to retain Britain's competitive position. Mauretania held the Blue Riband from 1909 to 1929. Her running mate, Lusitania, was torpedoed in 1915 during the First World War.
In 1919, Cunard relocated its British homeport from Liverpool to Southampton, to better cater for travellers from London. In the late 1920s, Cunard faced new competition when the Germans, Italians and French built large prestige liners. Cunard was forced to suspend construction on its own new superliner because of the Great Depression. In 1934, the British Government offered Cunard loans to finish Queen Mary and to build a second ship, Queen Elizabeth, on the condition that Cunard merged with the then ailing White Star line to form Cunard-White Star Line. Cunard owned two-thirds of the new company. Cunard purchased White Star's share in 1947; the name reverted to the Cunard Line in 1950.
Upon the end of the Second World War, Cunard regained its position as the largest Atlantic passenger line. By the mid-1950s, it operated 12 ships to the United States and Canada. After 1958, transatlantic passenger ships became increasingly unprofitable because of the introduction of jet airliners. Cunard undertook a brief foray into air travel via the "Cunard Eagle" and "BOAC Cunard" airlines, but withdrew from the airliner market in 1966. Cunard withdrew from its year-round service in 1968 to concentrate on cruising and summer transatlantic voyages for holiday makers. The Queens were replaced by Queen Elizabeth 2 (QE2), which was designed for the dual role.
In 1998, Cunard was acquired by the Carnival Corporation, and accounted for 8.7% of that company's revenue in 2012. In 2004, QE2 was replaced on the transatlantic runs by Queen Mary 2 (QM2). The line also operates Queen Victoria (QV) and Queen Elizabeth (QE). As of 2019, Cunard is the only shipping company to operate a scheduled passenger service between Europe and North America.
History
Early years: 1840–1850
The British Government started operating monthly mail brigs from Falmouth, Cornwall, to New York in 1756. These ships carried few non-governmental passengers and no cargo. In 1818, the Black Ball Line opened a regularly scheduled New York–Liverpool service with clipper ships, beginning an era when American sailing packets dominated the North Atlantic saloon-passenger trade that lasted until the introduction of steamships. A Committee of Parliament decided in 1836 that to become more competitive, the mail packets operated by the Post Office should be replaced by private shipping companies. The Admiralty assumed responsibility for managing the contracts. The famed Arctic explorer Admiral Sir William Edward Parry was appointed as Comptroller of Steam Machinery and Packet Service in April 1837. Nova Scotians led by their young Assembly Speaker, Joseph Howe, lobbied for steam service to Halifax. On his arrival in London in May 1838, Howe discussed the enterprise with his fellow Nova Scotian Samuel Cunard (1787–1865), a shipowner who was also visiting London on business. Cunard and Howe were associates and Howe also owed Cunard £300. Cunard returned to Halifax to raise capital, and Howe continued to lobby the British government. The Rebellions of 1837–1838 were ongoing and London realised that the proposed Halifax service was also important for the military.
That November, Parry released a tender for North Atlantic monthly mail service to Halifax beginning in April 1839 using steamships with 300 horsepower. The Great Western Steamship Company, which had opened its pioneer Bristol–New York service earlier that year, bid £45,000 for a monthly Bristol–Halifax–New York service using three ships of 450 horsepower. While British American, the other pioneer transatlantic steamship company, did not submit a tender, the St. George Steam Packet Company, owner of Sirius, bid £45,000 for a monthly Cork–Halifax service and £65,000 for a monthly Cork–Halifax–New York service. The Admiralty rejected both tenders because neither bid offered to begin services early enough.
Cunard, who was back in Halifax, unfortunately did not know of the tender until after the deadline. He returned to London and started negotiations with Admiral Parry, who was Cunard's good friend from when Parry was a young officer stationed in Halifax 20 years earlier. Cunard offered Parry a fortnightly service beginning in May 1840. While Cunard did not then own a steamship, he had been an investor in an earlier steamship venture, Royal William, and owned coal mines in Nova Scotia. Cunard's major backer was Robert Napier whose Robert Napier and Sons was the Royal Navy's supplier of steam engines. He also had the strong backing of Nova Scotian political leaders at the time when London needed to rebuild support in British North America after the rebellion.
Over Great Western's protests, in May 1839 Parry accepted Cunard's tender of £55,000 for a three-ship Liverpool–Halifax service with an extension to Boston and a supplementary service to Montreal. The annual subsidy was later raised to £81,000 to add a fourth ship, and departures from Liverpool were to be monthly during the winter and fortnightly for the rest of the year. Parliament investigated Great Western's complaints, and upheld the Admiralty's decision. Napier and Cunard recruited other investors including businessmen James Donaldson, Sir George Burns, and David MacIver. In May 1840, just before the first ship was ready, they formed the British and North American Royal Mail Steam Packet Company with initial capital of £270,000, later increased to £300,000. Cunard supplied £55,000. Burns supervised ship construction, MacIver was responsible for day-to-day operations, and Cunard was the "first among equals" in the management structure. When MacIver died in 1845, his younger brother Charles assumed his responsibilities for the next 35 years. (For more detail of the first investors in the Cunard Line and also the early life of Charles MacIver, see Liverpool Nautical Research Society's Second Merseyside Maritime History, pp. 33–37, 1991.)
In May 1840 the coastal paddle steamer Unicorn made the company's first voyage to Halifax to begin the supplementary service to Montreal. Two months later the first of the four ocean-going steamers of the Britannia class departed Liverpool. By coincidence, the steamer's departure had patriotic significance on both sides of the Atlantic: she was named Britannia, and sailed on 4 July. Even on her maiden voyage, however, her performance indicated that the new era she heralded would be much more beneficial for Britain than the US. At a time when the typical packet ship might take several weeks to cross the Atlantic, Britannia reached Halifax in 12 days and 10 hours, averaging 8.5 knots (15.7 km/h), before proceeding to Boston. Such relatively brisk crossings quickly became the norm for the Cunard Line: during 1840–41, mean Liverpool–Halifax times for the quartet were 13 days 6 hours to Halifax and 11 days 4 hours homeward. Two larger ships were quickly ordered, one to replace the Columbia, which sank at Seal Island, Nova Scotia, in 1843 without loss of life. By 1845, steamship lines led by Cunard carried more saloon passengers than the sailing packets. Three years later, the British Government increased the annual subsidy to £156,000 so that Cunard could double its frequency. Four additional wooden paddlers were ordered and alternate sailings were direct to New York instead of the Halifax–Boston route. The sailing packet lines were now reduced to the immigrant trade.
From the beginning Cunard's ships used the line's distinctive red funnel with two or three narrow black bands and black top. It appears that Robert Napier was responsible for this feature. His shipyard in Glasgow used this combination previously in 1830 on Thomas Assheton Smith's private steam yacht "Menai". The renovation of her model by Glasgow Museum of Transport revealed that she had vermilion funnels with black bands and black top. The line also adopted a naming convention that utilised words ending in "IA".
Cunard's reputation for safety was one of the significant factors in the firm's early success. Both of the first transatlantic lines failed after major accidents: the British and American line collapsed after the President foundered in a gale, and the Great Western Steamship Company failed after Great Britain stranded because of a navigation error. Cunard's orders to his masters were, "Your ship is loaded, take her; speed is nothing, follow your own road, deliver her safe, bring her back safe – safety is all that is required." In particular, Charles MacIver's constant inspections were responsible for the firm's safety discipline.
New Competition: 1850–1879
In 1850 the American Collins Line and the British Inman Line started new Atlantic steamship services. The American Government supplied Collins with a large annual subsidy to operate four wooden paddlers that were superior to Cunard's best, as they demonstrated with three Blue Riband-winning voyages between 1850 and 1854. Meanwhile, Inman showed that iron-hulled, screw-propelled steamers of modest speed could be profitable without subsidy. Inman also became the first steamship line to carry steerage passengers. Both of the newcomers suffered major disasters in 1854. The next year, Cunard put pressure on Collins by commissioning its first iron-hulled paddler, Persia. That pressure may well have been a factor in a second major disaster suffered by the Collins Line, the loss of its steamer Pacific. Pacific sailed out of Liverpool just a few days before Persia was due to depart on her maiden voyage, and was never seen again; it was widely assumed at the time that the captain had pushed his ship to the limit in order to stay ahead of the new Cunarder, and had likely collided with an iceberg during what was a particularly severe winter in the North Atlantic. A few months later Persia inflicted a further blow to the Collins Line, regaining the Blue Riband with a Liverpool–New York voyage of 9 days 16 hours.
During the Crimean War Cunard supplied 11 ships for war service. Every British North Atlantic route was suspended until 1856 except Cunard's Liverpool–Halifax–Boston service. While Collins' fortunes improved because of the lack of competition during the war, it collapsed in 1858 after its subsidy for carrying mail across the Atlantic was reduced by the US Congress. Cunard emerged as the leading carrier of saloon passengers and in 1862 commissioned Scotia, the last paddle steamer to win the Blue Riband. Inman carried more passengers because of its success in the immigrant trade. To compete, in May 1863 Cunard started a secondary Liverpool–New York service with iron-hulled screw steamers that catered for steerage passengers. Beginning with China, the line also replaced the last three wooden paddlers on the New York mail service with iron screw steamers that only carried saloon passengers.
When Cunard died in 1865, the equally conservative Charles MacIver assumed Cunard's role. The firm retained its reluctance about change and was overtaken by competitors that more quickly adopted new technology. In 1866 Inman started to build screw-propelled express liners that matched Cunard's premier unit, Scotia. Cunard responded with its first high-speed screw-propelled steamer, Russia, which was followed by two larger editions. In 1871 both companies faced a new rival when the White Star Line commissioned the Oceanic and her five sisters. The new White Star record-breakers were especially economical because of their use of compound engines. White Star also set new standards for comfort by placing the dining saloon midships and doubling the size of cabins. Inman rebuilt its express fleet to the new standard, but Cunard lagged behind both of its rivals. Throughout the 1870s Cunard passage times were longer than either White Star's or Inman's.
In 1867 responsibility for mail contracts was transferred back to the Post Office and opened for bid. Cunard, Inman and the German Norddeutscher Lloyd were each awarded one of the three weekly New York mail services. The fortnightly route to Halifax formerly held by Cunard went to Inman. Cunard continued to receive an £80,000 subsidy, while NDL and Inman were paid sea postage. Two years later the service was rebid and Cunard was awarded a seven-year contract for two weekly New York mail services at £70,000 per annum. Inman was awarded a seven-year contract for the third weekly New York service at £35,000 per year.
The Panic of 1873 started a five-year shipping depression that strained the finances of all of the Atlantic competitors. In 1876 the mail contracts expired and the Post Office ended both Cunard's and Inman's subsidies. The new contracts were paid on the basis of weight, at a rate substantially higher than paid by the United States Post Office. Cunard's weekly New York mail sailings were reduced to one and White Star was awarded the third mail sailing. Every Tuesday, Thursday and Saturday a liner from one of the three firms departed Liverpool with the mail for New York.
Cunard Steamship Company Ltd: 1879–1934
To raise additional capital, in 1879 the privately held British and North American Royal Mail Steam Packet Company was reorganised as a public stock corporation, the Cunard Steamship Company, Ltd. Under Cunard's new chairman, John Burns (1839–1900), son of one of the firm's original founders, Cunard commissioned four steel-hulled express liners beginning with Servia of 1881, the first passenger liner with electric lighting throughout. In 1884, Cunard purchased the almost new Blue Riband winner Oregon from the Guion Line when that firm defaulted on payments to the shipyard. That year, Cunard also commissioned the record-breakers Umbria and Etruria. Starting in 1887, Cunard's newly won leadership on the North Atlantic was threatened when Inman and then White Star responded with twin-screw record-breakers. In 1893 Cunard countered with two even faster Blue Riband winners, Campania and Lucania.
No sooner had Cunard re-established its supremacy than new rivals emerged. Beginning in the late 1880s several German firms commissioned liners that were almost as fast as the British mail steamers from Liverpool. In 1897 Norddeutscher Lloyd's Kaiser Wilhelm der Grosse took the Blue Riband, and was followed by a succession of German record-breakers. Rather than match the new German speedsters, White Star – a rival with which Cunard would later merge – commissioned four very profitable Big Four ocean liners of more moderate speed for its secondary Liverpool–New York service. In 1902 White Star joined the well-capitalized American combine, the International Mercantile Marine Co. (IMM), which owned the American Line, including the old Inman Line, and other lines. IMM also had trade agreements with Hamburg America and Norddeutscher Lloyd. Negotiators approached Cunard's management in late 1901 and early 1902, but did not succeed in drawing the Cunard Line into IMM, then being formed with support of financier J. P. Morgan.
British prestige was at stake. The British Government provided Cunard with an annual subsidy of £150,000 plus a low-interest loan of £2.5 million to pay for the construction of the two superliners, the Blue Riband winners Lusitania and Mauretania. In 1903 the firm started a Fiume–New York service with calls at Italian ports and Gibraltar. The next year Cunard commissioned two ships to compete directly with the Celtic-class liners on the secondary Liverpool–New York route. In 1911 Cunard entered the St Lawrence trade by purchasing the Thompson line, and absorbed the Royal line five years later.
Not to be outdone, both White Star and Hamburg–America each ordered a trio of superliners. The White Star liners and the Hapag liners were larger and more luxurious than the Cunarders, but not as fast. Cunard also ordered a new ship, Aquitania, to complete the Liverpool mail fleet. Events prevented the expected competition between the three sets of superliners. White Star's Titanic sank on its maiden voyage, White Star's Britannic and Cunard's Lusitania were war losses, and the three Hapag superliners were handed over to the Allied powers as war reparations.
In 1916 Cunard Line completed its European headquarters in Liverpool, moving in on 12 June of that year. The grand neo-Classical Cunard Building was the third of Liverpool's Three Graces. The headquarters were used by Cunard until the 1960s.
Due to First World War losses, Cunard began a post-war rebuilding programme including eleven intermediate liners. It acquired the former Hapag liner Imperator (renamed Berengaria) to replace the lost Lusitania as the running mate for Mauretania and Aquitania, and Southampton replaced Liverpool as the British destination for the three-ship express service. By 1926 Cunard's fleet was larger than before the war, and White Star was in decline, having been sold by IMM.
Despite the dramatic reduction in North Atlantic passengers caused by the shipping depression beginning in 1929, the Germans, Italians and the French commissioned new "ships of state" prestige liners. The German Bremen took the Blue Riband in 1933, the Italian Rex set a westbound record the same year, and the French Normandie crossed the Atlantic in just under four days in 1937. In 1930 Cunard ordered an 80,000-ton liner that was to be the first of two record-breakers fast enough to fit into a two-ship weekly Southampton–New York service. Work on "Hull Number 534" was halted in 1931 because of the economic conditions.
Cunard-White Star Ltd: 1934–1949
In 1934, both the Cunard Line and the White Star Line were experiencing financial difficulties. David Kirkwood, MP for Clydebank where the unfinished Hull Number 534 had been sitting idle for two and a half years, made a passionate plea in the House of Commons for funding to finish the ship and restart the dormant British economy. The government offered Cunard a loan of £3 million to complete Hull Number 534 and an additional £5 million to build a second ship, if Cunard merged with White Star.
The merger took place on 10 May 1934, creating Cunard-White Star Limited. The merger was accomplished with Cunard owning about two-thirds of the capital. Due to the surplus tonnage of the new combined Cunard White Star fleet many of the older liners were sent to the scrapyard; these included the ex-Cunard liner Mauretania and the ex-White Star liners Olympic and Homeric. In 1936 the ex-White Star Majestic was sold when Hull Number 534, now named Queen Mary, replaced her in the express mail service. Queen Mary retook the Blue Riband in 1938. Cunard-White Star started construction on Queen Elizabeth, and a smaller ship, the second Mauretania, joined the fleet and could also be used on the Atlantic run when one of the Queens was in drydock. The ex-Cunard liner Berengaria was sold for scrap in 1938 after a series of fires.
During the Second World War the Queens carried over two million servicemen and were credited by Churchill as helping to shorten the war by a year. All four of the large Cunard-White Star express liners, the two Queens, Aquitania and Mauretania, survived, but many of the secondary ships were lost. Both Lancastria and Laconia were sunk with heavy loss of life.
In 1947 Cunard purchased White Star's interest, and by 1949 the company had dropped the White Star name and was renamed "Cunard Line". Also in 1947 the company commissioned five freighters and two cargo liners. Caronia was completed in 1949 as a permanent cruise liner, and Aquitania was retired the next year.
Disruption by airliners, Cunard Eagle and BOAC-Cunard: (1950–1968)
Cunard was in an especially good position to take advantage of the increase in North Atlantic travel during the 1950s and the Queens were a major generator of US currency for Great Britain. Cunard's slogan, "Getting there is half the fun", was specifically aimed at the tourist trade. Beginning in 1954, Cunard took delivery of four new 22,000-GRT intermediate liners for the Canadian route and the Liverpool–New York route. The last White Star motor ship, Britannic of 1930, remained in service until 1960.
The introduction of jet airliners in 1958 heralded major change for the ocean liner industry. In 1960 a government-appointed committee recommended the construction of project Q3, a conventional 75,000 GRT liner to replace Queen Mary. Under the plan, the government would lend Cunard the majority of the liner's cost. However, some Cunard stockholders questioned the plan at the June 1961 board meeting because transatlantic flights were gaining in popularity. By 1963 the plan had been changed to a dual-purpose 55,000 GRT ship designed to cruise in the off-season. Ultimately, this ship came into service in 1969 as the 70,300 GRT Queen Elizabeth 2.
Cunard attempted to address the challenge presented by jet airliners by diversifying its business into air travel. In March 1960, Cunard bought a 60% shareholding in British Eagle, an independent (non-government owned) airline, for £30 million, and changed its name to Cunard Eagle Airways. The support from this new shareholder enabled Cunard Eagle to become the first British independent airline to operate pure jet airliners, as a result of a £6 million order for two new Boeing 707-420 passenger aircraft. The order had been placed (including an option on a third aircraft) in expectation of being granted traffic rights for transatlantic scheduled services.<ref>Aeroplane — Air Transport ...: "Cunard Eagle Buys Boeings, Vol. 100, No. 2587, p. 545, Temple Press, London, 18 May 1961</ref> The airline took delivery of its first Bristol Britannia aircraft on 5 April 1960 (on lease from Cubana). Cunard hoped to capture a significant share of the 1 million people that crossed the Atlantic by air in 1960. This was the first time more passengers chose to make their transatlantic crossing by air than sea. In June 1961, Cunard Eagle became the first independent airline in the UK to be awarded a licence by the newly constituted Air Transport Licensing Board (ATLB) to operate a scheduled service on the prime Heathrow – New York JFK route, but the licence was revoked in November 1961 after main competitor, state-owned BOAC, appealed to Aviation Minister Peter Thorneycroft.Aircraft (Gone but not forgotten... British Eagle), pp. 34/5 On 5 May 1962, the airline's first 707 inaugurated scheduled jet services from London Heathrow to Bermuda and Nassau. The new jet service – marketed as the Cunarder Jet in the UK and as the Londoner in the western hemisphere – replaced the earlier Britannia operation on this route. 
Cunard Eagle succeeded in extending this service to Miami despite the loss of its original transatlantic scheduled licence and BOAC's claim that there was insufficient traffic to warrant a direct service from the UK. A load factor of 56% was achieved at the outset. Inauguration of the first British through-plane service between London and Miami also helped Cunard Eagle increase utilisation of its 707s.
BOAC countered Eagle's move to establish itself as a full-fledged scheduled transatlantic competitor on its Heathrow—JFK flagship route by forming BOAC-Cunard as a new £30 million joint venture with Cunard. BOAC contributed 70% of the new company's capital and eight Boeing 707s. Cunard Eagle's long-haul scheduled operation – including the two new 707s – was absorbed into BOAC-Cunard before delivery of the second 707, in June 1962.Aeroplane — World Transport Affairs: C.E.A. hands over mid-Atlantic service, Vol. 104, No. 2659, p. 12, Temple Press, London, 4 October 1962 BOAC-Cunard leased any spare aircraft capacity to BOAC to augment the BOAC mainline fleet at peak times. As part of this deal, BOAC-Cunard also bought flying hours from BOAC for using the latter's aircraft in the event of capacity shortfalls. This maximised combined fleet utilisation. The joint fleet use agreement did not cover Cunard Eagle's European scheduled, trooping and charter operations. However, the joint venture was not successful for Cunard and lasted only until 1966, when BOAC bought out Cunard's share. Cunard also sold a majority holding in the remainder of Cunard Eagle back to its founder in 1963.
Within ten years of the introduction of jet airliners in 1958, most of the conventional Atlantic liners were gone. Mauretania was retired in 1965, Queen Mary and Caronia in 1967, and Queen Elizabeth in 1968. Two of the new intermediate liners were sold by 1970 and the other two were converted to cruise ships. All Cunard ships flew both the Cunard and White Star Line house flags until 4 November 1968, when the last White Star ship, Nomadic was withdrawn from service. After this, the White Star flag was no longer flown and all remnants of both White Star Line and Cunard-White Star Line were retired.
Trafalgar House years: 1971–1998
In 1971, when the line was purchased by the conglomerate Trafalgar House, Cunard operated cargo and passenger ships, hotels and resorts. Its cargo fleet consisted of 42 ships in service, with 20 on order. The flagship of the passenger fleet was the two-year-old Queen Elizabeth 2. The fleet also included the remaining two intermediate liners from the 1950s, plus two purpose-built cruise ships on order. Trafalgar acquired two additional cruise ships and disposed of the intermediate liners and most of the cargo fleet. During the Falklands War, QE2 and Cunard Countess were chartered as troopships while Cunard's container ship Atlantic Conveyor was sunk by an Exocet missile.
Cunard acquired the Norwegian America Line in 1983, with two classic ocean liner/cruise ships. Also in 1983, Trafalgar attempted a hostile takeover of P&O, another large passenger and cargo shipping line, which was founded three years before Cunard. P&O objected and forced the issue to the British Monopolies and Mergers Commission. In their filing, P&O was critical of Trafalgar's management of Cunard and their failure to correct Queen Elizabeth 2's mechanical problems. In 1984, the Commission ruled in favour of the merger, but Trafalgar decided against proceeding. In 1988, Cunard acquired Ellerman Lines and its small fleet of cargo vessels, organising the business as Cunard-Ellerman; however, only a few years later Cunard decided to abandon the cargo business and focus solely on cruise ships. Cunard's cargo fleet was sold off between 1989 and 1991, with a single container ship, the second Atlantic Conveyor, remaining under Cunard ownership until 1996. In 1993, Cunard entered into a 10-year agreement to handle marketing, sales and reservations for the Crown Cruise Line, and its three vessels joined the Cunard fleet under the Cunard Crown banner. In 1994 Cunard purchased the rights to the name of the Royal Viking Line and its Royal Viking Sun. The rest of Royal Viking Line's fleet stayed with the line's owner, Norwegian Cruise Line.
By the mid-1990s Cunard was ailing. The company was embarrassed in late 1994 when Queen Elizabeth 2 experienced numerous defects during the first voyage of the season because of unfinished renovation work. Claims from passengers cost the company US$13 million. After Cunard reported a US$25 million loss in 1995, Trafalgar assigned a new CEO to the line, who concluded that the company had management issues. In 1995, Cunard Line introduced White Star Service to Queen Elizabeth 2 as a reference to the high standards of customer service expected of the company. The term is still used today aboard its newer vessels. The company has also created the White Star Academy, an in-house programme for preparing new crew members for the service standards expected on Cunard ships.
In 1996 the Norwegian conglomerate Kværner acquired Trafalgar House, and attempted to sell Cunard. When there were no takers, Kværner made substantial investments to turn around the company's tarnished reputation.
Carnival: 1998–present
In 1998, the cruise line conglomerate Carnival Corporation acquired 62% of Cunard for US$425 million. Coincidentally, it was the same percentage that Cunard owned in Cunard-White Star Line. The next year Carnival acquired the remaining stock for US$205 million. Ultimately, Carnival sued Kværner claiming that the ships were in worse condition than represented, and Kværner agreed to refund US$50 million to Carnival. Each of Carnival's cruise lines is designed to appeal to a different market, and Carnival was interested in rebuilding Cunard as a luxury brand trading on its British traditions. Under the slogan "Advancing Civilization Since 1840", Cunard's advertising campaign sought to emphasise the elegance and mystique of ocean travel. Only Queen Elizabeth 2 and Caronia continued under the Cunard brand and the company began Project Queen Mary to build a new ocean liner/cruise ship for the transatlantic route.
By 2001, Carnival was the largest cruise company, followed by Royal Caribbean and P&O Princess Cruises, which had recently separated from its parent, P&O. When Royal Caribbean and P&O Princess agreed to merge, Carnival countered with a hostile takeover bid for P&O Princess. Carnival rejected the idea of selling Cunard to resolve antitrust issues with the acquisition. European and US regulators approved the merger without requiring Cunard's sale. After the merger was completed, Carnival moved Cunard's headquarters to the offices of Princess Cruises in Santa Clarita, California, so that administrative, financial and technology services could be combined.
Carnival House opened in Southampton in 2009, and executive control of Cunard Line transferred from Carnival Corporation in the United States, to Carnival UK, the primary operating company of Carnival plc. As the UK-listed holding company of the group, Carnival plc had executive control of all Carnival Group activities in the UK, with the headquarters of all UK-based brands, including Cunard, in offices at Carnival House.
In 2004, the 36-year-old QE2 was replaced on the North Atlantic by Queen Mary 2. Caronia was sold and Queen Elizabeth 2 continued to cruise until she was retired in 2008. In 2007 Cunard added Queen Victoria, a cruise ship of the Vista class originally designed for Holland America Line. To reinforce Cunard traditions, Queen Victoria has a small museum on board. Cunard commissioned a second Vista class cruise ship, Queen Elizabeth, in 2010.
In 2010, Cunard appointed its first female commander, Captain Inger Klein Olsen. In 2011, Cunard changed the vessel registry of all three of its ships in service to Hamilton, Bermuda, the first time in the 171-year history of the company that it had no ships registered in the United Kingdom. The captains of ships registered in Bermuda can marry couples at sea, whereas those of UK-registered ships cannot, and weddings at sea are a lucrative market.
On 25 May 2015, the three Cunard ships – Queen Mary 2, Queen Elizabeth and Queen Victoria – sailed up the Mersey into Liverpool to commemorate the 175th anniversary of Cunard. The ships performed manoeuvres, including 180-degree turns, as the Red Arrows performed a fly-past. Just over a year later Queen Elizabeth returned to Liverpool under Captain Olsen to take part in the celebrations of the centenary of the Cunard Building on 2 June 2016.
The White Star Line flag is raised on all current Cunard ships and the Nomadic every April 15 in memory of the Titanic disaster.
Fleet
Current fleet
Future fleet
Former fleet
The Cunard fleet, all built for Cunard unless otherwise indicated, consisted of the following ships in order of acquisition:
1840–1850
All ships of this period had wooden hulls and paddle wheels.
1850–1869
Only Arabia had a wooden hull and only Arabia, Persia, Shamrock, Jackal and Scotia had paddle wheels.
1869–1901
1901–1918
1918–1934
1934–1949
See also: White Star Line's Olympic, Homeric, Majestic, Doric, and Laurentic.
1949–1968
1968–1999
Cunard Hotels
After Trafalgar House bought the company in 1971, Cunard operated Trafalgar's existing hotels as Cunard-Trafalgar Hotels. In the 1980s, the chain was restyled as Cunard Hotels & Resorts, before folding in 1995.
See also
Cruise line
Transatlantic crossing
Cunard Yanks
Cunard Building (New York City)
References
Notes
Citations
Bibliography
Fowler Jr., William M. Steam Titans: Cunard, Collins, and the Epic Battle for Commerce on the North Atlantic. London: Bloomsbury, 2017. 358 pp.
External links
Cunard History Website on Chriscunard.com
Official 'Queen Mary 2' Fan Page
Cunard Line Passenger Lists, Brochures, Other Historical Documents 1800s – 1954 GG Archives
The Last Ocean Liners – Cunard Line – trade routes and ships of the Cunard Line since the 1950s
http://www.charlesfreemandesign.com/curator-intro Cunard Sesquicentennial Exhibition – 150 Transatlantic Years – The Ocean Liner Museum, New York NY
TheShips List
1998 mergers and acquisitions
Carnival Corporation & plc
British companies established in 1840
Cruise lines
History of Liverpool
Packet (sea transport)
Shipping companies of the United Kingdom
Transatlantic shipping companies
Travel and holiday companies of the United Kingdom
1840 establishments in England
Cunard family
Companies based in Southampton
Transport companies established in 1840
Counterforce

In nuclear strategy, a counterforce target is one that has a military value, such as a launch silo for intercontinental ballistic missiles, an airbase at which nuclear-armed bombers are stationed, a homeport for ballistic missile submarines, or a command and control installation.
The intent of a counterforce strategy (attacking counterforce targets with nuclear weapons) is to conduct a pre-emptive nuclear strike which has as its aim to disarm an adversary by destroying its nuclear weapons before they can be launched. That would minimize the impact of a retaliatory second strike. However, counterforce attacks are possible in a second strike as well, especially with weapons like UGM-133 Trident II. A counterforce target is distinguished from a countervalue target, which includes an adversary's population, knowledge, economic, or political resources. In other words, a counterforce strike is against an adversary's military, and a countervalue strike is against an adversary's cities.
A closely related tactic is the decapitation strike, which destroys an enemy's nuclear command and control facilities and similarly aims to eliminate or reduce the enemy's ability to launch a second strike. Counterforce targets, however, are almost always near civilian population centers, which would not be spared in the event of a counterforce strike.
Theory
In nuclear warfare, enemy targets are divided into two types: counterforce and countervalue. A counterforce target is an element of the military infrastructure, usually either specific weapons or the bases that support them. A counterforce strike is an attack that targets those elements while leaving the civilian infrastructure, the countervalue targets, as undamaged as possible. Countervalue refers to the targeting of an opponent's cities and civilian populations.
An ideal counterforce attack would kill no civilians, but military attacks are prone to causing collateral damage, especially when nuclear weapons are employed. Many military targets are located near civilian centers, and a major counterforce strike using even relatively small nuclear warheads against a nation would certainly inflict many civilian casualties. Also, the requirement to use ground bursts to destroy hardened targets would produce far more fallout than the air bursts used to strike countervalue targets, which introduces the possibility that a counterforce strike would cause more civilian casualties over the medium term than a countervalue strike.
Counterforce weapons may be seen to provide more credible deterrence in future conflict by providing options for leaders. One option considered by the Soviet Union in the 1970s was basing missiles in orbit.
Cold War
Counterforce is a type of attack which was originally proposed during the Cold War.
Because of the low accuracy (circular error probable) of early generation intercontinental ballistic missiles (and especially submarine-launched ballistic missiles), counterforce strikes were initially possible only against very large, undefended targets like bomber airfields and naval bases. Later-generation missiles, with much-improved accuracy, made possible counterforce attacks against the opponent's hardened military facilities, like missile silos and command and control centers.
Both sides in the Cold War took steps to protect at least some of their nuclear forces from counterforce attacks. At one point, the US kept B-52 Stratofortress bombers permanently in flight so that they would remain operational after any counterforce strike. Other bombers were kept ready for launch on short notice, allowing them to escape their bases before intercontinental ballistic missiles, launched from land, could destroy them. The deployment of nuclear weapons on ballistic missile submarines changed the equation considerably, as submarines launching from positions off the coast would likely destroy airfields before bombers could launch, which would reduce their ability to survive an attack. Submarines themselves, however, are largely immune from counterforce strikes unless they are moored at their naval bases, and both sides fielded many such weapons during the Cold War.
A counterforce exchange was one scenario mooted for a possible limited nuclear war. The concept was that one side might launch a counterforce strike against the other; the victim would recognize the limited nature of the attack and respond in kind. That would leave the military capability of both sides largely destroyed. The war might then come to an end because both sides would recognize that any further action would lead to attacks on the civilian population from the remaining nuclear forces, a countervalue strike.
Critics of that idea claimed that even a counterforce strike would kill millions of civilians, since some strategic military facilities, such as bomber airbases, were often located near large cities. That would make it unlikely that escalation to a full-scale countervalue war could be prevented.
MIRVed land-based ICBMs are considered destabilizing because they tend to put a premium on striking first. For example, suppose that each side has 100 missiles, with 5 warheads each, and each side has a 95 percent chance of neutralizing the opponent's missiles in their silos by firing 2 warheads at each silo. In that case, the side that strikes first can reduce the enemy ICBM force from 100 missiles to about 5 by firing 40 missiles with 200 warheads and keeping the remaining 60 missiles in reserve. For such an attack to be successful, the warheads would have to strike their targets before the enemy launched a counterattack (see second strike and launch on warning). This type of weapon was therefore banned under the START II agreement, which was not ratified and therefore ineffectual.
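The arithmetic of this hypothetical scenario can be checked with a short expected-value sketch (a Python illustration of the numbers above, not a model of real strategy; all parameter names are illustrative):

```python
# Expected-value check of the hypothetical scenario above: 100 silos on
# each side, 5 warheads per missile, 2 warheads aimed at each silo, and
# a 95% chance that a targeted silo is destroyed.

def first_strike(silos=100, warheads_per_missile=5,
                 warheads_per_silo=2, kill_prob=0.95):
    warheads_needed = silos * warheads_per_silo               # 200 warheads
    missiles_fired = warheads_needed // warheads_per_missile  # 40 missiles
    missiles_in_reserve = silos - missiles_fired              # 60 held back
    expected_survivors = silos * (1 - kill_prob)              # about 5 silos
    return missiles_fired, missiles_in_reserve, expected_survivors

fired, reserve, survivors = first_strike()
print(fired, reserve, round(survivors))  # 40 60 5
```

The asymmetry is the destabilizing point: the attacker spends 40 missiles to reduce the defender's force from 100 to about 5, while keeping 60 in reserve.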
Counterforce disarming first-strike weapons
R-36M (SS-18 Satan). Deployed in 1976, this counterforce MIRV ICBM had single (20 Mt) or ten MIRV (550-750 kt each) warheads, with a circular error probable (CEP) of . Targeted against Minuteman III silos as well as CONUS command, control, and communications facilities. Has sufficient throw-weight to carry up to 10 RVs and 40 penaids. Still in service.
RSD-10 (SS-20 Saber). Deployed in 1978, this counterforce MIRV IRBM could hide behind the Urals in Asian Russia, and launch its highly accurate three warhead payload (150 kt each, with a CEP) against NATO command, control, and communications installations, bunkers, air fields, air defense sites, and nuclear facilities in Europe. Extremely short flight time ensured NATO would be unable to respond prior to weapon impact. Triggered development and deployment of the Pershing II by NATO in 1983.
Peacekeeper (MX Missile). Deployed in 1986, this missile boasted ten MIRV warheads each with a 300 kt yield, CEP . Decommissioned.
Pershing II. Deployed in 1983, this single warhead MRBM boasted 50 m CEP with terminal active radar homing/DSMAC guidance. Short, seven-minute flight-time (which makes launch on warning much harder), variable yield warhead of 5-50 kt, and range of , allowed this weapon to strike command, control, and communications installations, bunkers, air fields, air defense sites, and ICBM silos in the European part of the Soviet Union with scarcely any warning. Decommissioned.
RT-23 Molodets (SS-24 Scalpel). Deployed in 1987, this MIRV ICBM carried ten warheads, each with 300-550 kt yield and a CEP of .
UGM-133 Trident II. Deployed in 1990, this intercontinental-range SLBM carries up to eight RVs with CEP of and yield of 100/475 kt. Its main purpose is second-strike countervalue retaliation, but the excellent CEP and the much shorter flight time of a submarine launch (reducing the possibility of launch on warning) make it an excellent first-strike weapon. However, it is highly questionable that any nuclear power would be willing to place its nuclear submarines close to enemy shores during times of strategic tension. Has sufficient throw-weight to deploy up to twelve warheads, but the post-boost vehicle is only capable of deploying eight, and on average about four are deployed in current practice.
See also
Balance of power (international relations)
Balance of terror
Deterrence theory
Limited first strike
Peace through strength
References
Military strategy
Nuclear warfare
Nuclear strategy
Cold War terminology
Virginia State University

Virginia State University (VSU or Virginia State) is a public historically black land-grant university in Ettrick, Virginia. Founded on March 6, 1882, Virginia State developed as the United States's first fully state-supported four-year institution of higher learning for black Americans. The university is a member school of the Thurgood Marshall College Fund.
History
Virginia State University was founded on March 6, 1882, when the legislature passed a bill to charter the Virginia Normal and Collegiate Institute. The bill was sponsored by Delegate Alfred W. Harris, a black attorney whose offices were in Petersburg, but who lived in and represented Dinwiddie County in the General Assembly. A hostile lawsuit delayed opening day for nineteen months, until October 1, 1883. In 1902, the legislature revised the charter act to curtail the collegiate program and to change the name to Virginia Normal and Industrial Institute.
In 1920, the land-grant program for Blacks was moved from a private school, Hampton Institute, where it had been since 1872, to Virginia Normal and Industrial Institute. In 1923 the college program was restored, and the name was changed to Virginia State College for Negroes in 1930. The two-year branch in Norfolk was added to the college in 1944; the Norfolk division became a four-year branch in 1956 and gained independence as Norfolk State College in 1969. Meanwhile, the parent school was renamed Virginia State College in 1946. Finally, the legislature passed a law in 1979 to provide the present name, Virginia State University.
In the first academic year, 1883–84, the University had 126 students and seven faculty (all of them Black), one building, 33 acres, a 200-book library, and a $20,000 budget. By the centennial year of 1982, the University was fully integrated, with a student body of nearly 5,000, a full-time faculty of about 250, a library containing 200,000 books and 360,000 microform and non-print items, a 236-acre campus and 416-acre farm, more than 50 buildings, including 15 dormitories and 16 classroom buildings, and a biennial budget of $31,000,000, exclusive of capital outlay.
The university is situated in Chesterfield County at Ettrick, on a bluff across the Appomattox River from the city of Petersburg. It is accessible via Interstate Highways 95 and 85, which meet in Petersburg. The university is only two and a half hours away from Washington, D.C. to the north, the Raleigh-Durham-Chapel Hill area to the southwest, and Charlottesville to the northwest.
The first person to bear the title of President, John Mercer Langston, was one of the best-known blacks of his day. Until 1992, he was the only black ever elected to the United States Congress from Virginia (elected in 1888), and he was the great-uncle of the famed writer Langston Hughes. From 1888 to 1968, four presidents – James H. Johnston, John M. Gandy, Luther H. Foster, and Robert P. Daniel – served an average of 20 years, helping the school to overcome adversity and move forward. The next twenty years, 1968–1992, saw six more presidents – James F. Tucker, Wendell P. Russell, Walker H. Quarles, Jr., Thomas M. Law, Wilbert Greenfield, and Wesley Cornelious McClure. On June 1, 1993, Eddie N. Moore, Jr., the former Treasurer of the Commonwealth of Virginia, became the twelfth President of Virginia State University. Dr. Keith T. Miller served as Virginia State University's 13th president from 2010 to 2014. In 2015, Dr. Pamela V. Hammond became the first woman to lead Virginia State University in its 133-year history; she was appointed as interim president on January 1, 2015. On February 1, 2016, Makola Abdullah, Ph.D., was named the 14th president of Virginia State University. Dr. Abdullah previously served as provost and senior vice president at Bethune-Cookman University in Daytona Beach, Florida. A Chicago native, he is the youngest African American to receive a Ph.D. in engineering. He earned his undergraduate degree in civil engineering from Howard University and a Master of Science in civil engineering from Northwestern University.
In 2020, MacKenzie Scott donated $30 million to Virginia State. Her donation is the largest single gift in Virginia State's history.
Main campus
The university has a main campus and an agricultural research facility known as the Randolph Farm. The main campus includes more than 50 buildings, including 11 dormitories and 18 academic buildings. The main campus is located close to the Appomattox River in Ettrick, Virginia.
Residence halls
Branch Hall
Byrd Hall
Eggleston Hall
Gateway 2
Langston Hall
Moore Hall
Quad Hall (buildings I&II)
Seward Hall
Whiting Hall
Williams Hall
University Apartments (off-campus)
Academics
This is a list of the departments within each college:
College of Agriculture
Agriculture Business and Economics
Agricultural Education
Animal Science
Animal Science and Pre-Veterinary Medicine
Aquatic Science, Environmental Science
Hospitality Management
Plant and Soil Science
The Reginald F. Lewis College of Business
Accounting and Finance
Management Information Systems
Management and Marketing
College of Engineering and Technology
Electrical and Engineering Technology
Mechanical Engineering Technology
Computer Engineering
Information and Logistics Technology
Manufacturing Engineering
Computer Science
Mathematics
College of Natural Sciences
Biology
Chemistry and Physics
Psychology
College of Education
Professional Education Programs
Graduate Professional Education Programs
Center for Undergraduate Professional Education Programs
Health, Physical Education and Recreation
College of Humanities and Social Sciences
Art and Design
Animation
Graphic Design
Studio Art
History and Philosophy
Languages and Literature
English
Mass Communications
Military Science
Music
Political Science, Public Administration and Economics
Sociology, Social Work, and Criminal Justice
Bachelor of Individualized Studies
College of Graduate Studies, Research, and Outreach (offering master's degrees in):
Biology (MS)
Computer Science (MS)
Counselor Education (MS, MEd)
Criminal Justice (MS)
Economics (MA)
Education (MEd)
Educational Administration and Supervision (MS, MEd)
Interdisciplinary Studies (MIS)
Mathematics (MS)
Media Management (MA)
Psychology (MS)
Sport Management (MS)
Demographics
The 2017–2018 student body was 57.4% female and 43% male. It consists of 69.7% in-state and 30.3% out-of-state students. 97.2% of students live on campus and 2.8% off-campus. 91.1% of students self-identify as Black/African American, while 4.0% are White, and 4.0% are racially unreported.
Athletics
Virginia State has 14 Division II athletic teams on campus.
Student activities
Greek life
Virginia State University has a very active National Pan-Hellenic Council (NPHC) along with six other non Pan-Hellenic fraternities and sororities which include the following active fraternities and sororities:
Alpha Phi Alpha (Beta Gamma)
Alpha Kappa Alpha (Alpha Epsilon)
Kappa Alpha Psi (Alpha Phi)
Omega Psi Phi (Nu Psi)
Delta Sigma Theta (Alpha Eta)
Phi Beta Sigma (Alpha Alpha Alpha)
Zeta Phi Beta (Phi)
Sigma Gamma Rho (Alpha Zeta)
Sigma Alpha Iota (Mu Beta)
Iota Phi Theta (Eta)
Pershing Rifles
Pershing Angels
Phi Mu Alpha Sinfonia (Sigma Zeta)
Kappa Kappa Psi (Zeta Psi)
Tau Beta Sigma (Epsilon Rho)
Marching band
The Virginia State University Trojan Explosion is composed of instrumentalists, Essence of Troy Dancers, Satin Divas Flag, and Troy Elegance Twirlers.
The famed “Marching 110” was built during the leadership of Dr. F. Nathaniel “Pops” Gatlin and Dr. Claiborne T. Richardson. In 1984 the marching band was renamed the “Trojan Explosion” under the direction of Harold J. Haughton, Sr., and the music department began to grow. In 2013, Professor James Holden, Jr. became Director of Bands. In addition to serving as Director of the world-renowned VSU Gospel Chorale, Professor Holden has served as Assistant Director of Bands since 1984. Arguably one of the top arrangers in the country, Professor Holden is known throughout the musical world as an exquisite saxophonist.
The renowned Trojan Explosion Marching Band is a captivating show-style band, executing high-intensity musicality and showmanship on and off the field. The Trojan Explosion has been selected to attend the Honda Battle of the Bands nine consecutive years. In addition to numerous other accolades and achievements, the drum line performed at the White House for President Barack Obama during the signing of the HBCU Funding Bill. The Trojan Explosion dons blue and orange for home games and blue, orange and white for away games.
Cheerleading
Originally led by head coach Paulette Johnson for 35 years, the Woo Woos are a nationally recognized cheerleading squad known for original, up-tempo and high-energy performances. The 30-member squad is composed of young women from all over the country. The squad focuses on community service as well as promoting school spirit. Tryouts are held annually during the spring semester for VSU full-time students. Instructional camps and workshops are offered throughout the state.
In 2001, the university granted the Woo Woo Alumni chapter its initial charter. The organization has a rapidly growing membership that is actively involved in the promotion of the squad and its individual members. Shandra Claiborne, a former Woo Woo, led the team for one year following the retirement of Johnson. The squad has been under the leadership of former Woo Woo Cassandra Artis-Williams since 2013.
Concert choir
The Department of Music had a recording Concert Choir. In 1974, this choir recorded an album entitled The Undine Smith Moore Song Book, a recording in the series of Afro-American heritage in songs. This recording was third in the series, which aspired to produce a recording each year of the works of this black composer, a former faculty member and co-director of the Black Man in American Music Center. The choir also performed selections from this series in Baltimore at Bethel AME Church, including songs from a group of gospel selections arranged by VSC students Larry Bland, Janet Coleman, and Roger Holliman. Several graduates of VSC were living in Baltimore and came to join the choir at the end of the program as they sang the Evening Song.
Notable people
Alumni
This list includes graduates, non-graduate former students and current students of Virginia State University.
References
External links
1882 establishments in Virginia
African-American history of Virginia
Buildings and structures in Chesterfield County, Virginia
Education in Chesterfield County, Virginia
Education in Petersburg, Virginia
Educational institutions established in 1882
Greater Richmond Region
Historically black universities and colleges in the United States
Land-grant universities and colleges
Universities and colleges accredited by the Southern Association of Colleges and Schools
Public universities and colleges in Virginia
Persistence (computer science)

In computer science, persistence refers to the characteristic of state of a system that outlives (persists more than) the process that created it. This is achieved in practice by storing the state as data in computer data storage. Programs have to transfer data to and from storage devices and have to provide mappings from the native programming-language data structures to the storage device data structures.
Picture editing programs or word processors, for example, achieve state persistence by saving their documents to files.
Orthogonal or transparent persistence
Persistence is said to be "orthogonal" or "transparent" when it is implemented as an intrinsic property of the execution environment of a program. An orthogonal persistence environment does not require any specific actions by programs running in it to retrieve or save their state.
Non-orthogonal persistence requires data to be written and read to and from storage using specific instructions in a program, resulting in the use of persist as a transitive verb: On completion, the program persists the data.
The advantage of orthogonal persistence environments is simpler and less error-prone programs.
The term "persistent" was first introduced by Atkinson and Morrison in the sense of orthogonal persistence: they used an adjective rather than a verb to emphasize persistence as a property of the data, as distinct from an imperative action performed by a program. The use of the transitive verb "persist" (describing an action performed by a program) is a back-formation.
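As a minimal illustration of the non-orthogonal style, in which the program issues explicit instructions to persist and restore its state, consider this Python sketch (the file name and data shape are illustrative):

```python
import json
import os
import tempfile

# Non-orthogonal persistence: the program itself must issue explicit
# save/restore instructions and map its native data structures to the
# storage format (here, a dict to JSON).
def persist(state, path):
    with open(path, "w") as f:
        json.dump(state, f)

def restore(path):
    with open(path) as f:
        return json.load(f)

path = os.path.join(tempfile.gettempdir(), "app_state.json")  # illustrative
persist({"document": "draft", "cursor": 42}, path)
print(restore(path))  # {'document': 'draft', 'cursor': 42}
```

Under orthogonal persistence, by contrast, no `persist` or `restore` calls would appear in the program at all; the environment would save and restore state on its own.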
Adoption
Orthogonal persistence is widely adopted in operating systems for hibernation and in platform virtualization systems such as VMware and VirtualBox for state saving.
Research prototype languages such as PS-algol, Napier88, Fibonacci and pJama, successfully demonstrated the concepts along with the advantages to programmers.
Persistence techniques
System images
Using system images is the simplest persistence strategy. Notebook hibernation is an example of orthogonal persistence using a system image because it does not require any actions by the programs running on the machine. An example of non-orthogonal persistence using a system image is a simple text editing program executing specific instructions to save an entire document to a file.
Shortcomings: Requires enough RAM to hold the entire system state. State changes made to a system after its last image was saved are lost in the case of a system failure or shutdown. Saving an image for every single change would be too time-consuming for most systems, so images are not used as the single persistence technique for critical systems.
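A system image in miniature can be sketched in Python: the entire program state is serialized in one shot and restored wholesale on startup (the state dictionary is illustrative):

```python
import pickle

# System image in miniature: serialize the *entire* state in one write,
# as hibernation or a whole-document "save" does. Any change made after
# the last image would be lost on failure.
state = {"open_documents": ["a.txt", "b.txt"], "clipboard": "copied text"}

image = pickle.dumps(state)      # whole-state image
# ... a crash or shutdown here loses any later, un-imaged changes ...
restored = pickle.loads(image)   # startup restores everything at once
assert restored == state
```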
Journals
Using journals is the second simplest persistence technique. Journaling is the process of storing events in a log before each one is applied to a system. Such logs are called journals.
On startup, the journal is read and each event is reapplied to the system, avoiding data loss in the case of system failure or shutdown.
The entire "Undo/Redo" history of user commands in a picture editing program, for example, when written to a file, constitutes a journal capable of recovering the state of an edited picture at any point in time.
Journals are used by journaling file systems, prevalent systems and database management systems where they are also called "transaction logs" or "redo logs".
Shortcomings: When journals are used exclusively, the entire (potentially large) history of all system events must be reapplied on every system startup. As a result, journals are often combined with other persistence techniques.
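A toy journal in Python makes the replay idea concrete: each event is appended to the log before it is applied, and recovery replays the log from the start (the event format is illustrative, and an in-memory list stands in for an append-only log file):

```python
# Write-ahead journaling in miniature: log each event before applying it,
# then rebuild state on startup by replaying the whole log.
journal = []

def apply_event(state, event):
    op, key, value = event
    if op == "set":
        state[key] = value
    elif op == "delete":
        state.pop(key, None)
    return state

def execute(state, event):
    journal.append(event)             # write-ahead: log first...
    return apply_event(state, event)  # ...then apply to live state

def recover():
    state = {}
    for event in journal:             # replay the entire history
        apply_event(state, event)
    return state

state = {}
state = execute(state, ("set", "title", "Draft"))
state = execute(state, ("set", "title", "Final"))
state = execute(state, ("delete", "tmp", None))
assert recover() == state             # replay reproduces the live state
```

The shortcoming described above is visible here: `recover()` walks the whole journal, however long it has grown.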
Dirty writes
This technique is the writing to storage of only those portions of system state that have been modified (are dirty) since their last write. Sophisticated document editing applications, for example, will use dirty writes to save only those portions of a document that were actually changed since the last save.
Shortcomings: This technique requires state changes to be intercepted within a program. This is achieved in a non-transparent way by requiring specific storage-API calls or in a transparent way with automatic program transformation. This results in code that is slower than native code and more complicated to debug.
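The interception of state changes can be sketched with a class that funnels all writes through one method and records which keys are dirty (a simplified Python illustration, not a real storage API):

```python
# Dirty-write sketch: only portions of state modified since the last
# flush are written out. Interception happens by funneling every write
# through set(), which marks the key as dirty.
class DirtyTrackingStore:
    def __init__(self):
        self.state = {}
        self.dirty = set()
        self.disk = {}              # stands in for the storage device

    def set(self, key, value):      # all state changes go through here
        self.state[key] = value
        self.dirty.add(key)

    def flush(self):
        written = sorted(self.dirty)
        for key in written:         # persist only the changed portions
            self.disk[key] = self.state[key]
        self.dirty.clear()
        return written

store = DirtyTrackingStore()
store.set("page1", "hello")
store.set("page2", "world")
store.flush()                # writes page1 and page2
store.set("page1", "hello!")
print(store.flush())         # ['page1'] -- page2 is clean, not rewritten
```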
Persistence layers
Any software layer that makes it easier for a program to persist its state is generically called a persistence layer. Most persistence layers will not achieve persistence directly but will use an underlying database management system.
System prevalence
System prevalence is a technique that combines system images and transaction journals, mentioned above, to overcome their limitations.
Shortcomings: A prevalent system must have enough RAM to hold the entire system state.
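A prevalence sketch in Python combines the two techniques: snapshots bound the journal's length, and recovery replays only the events logged since the last snapshot (class and method names are illustrative):

```python
import pickle

# System-prevalence sketch: a periodic snapshot (system image) plus a
# journal of events since that snapshot. Recovery loads the snapshot and
# replays only the short journal tail, avoiding full-history replay.
class Prevalent:
    def __init__(self):
        self.state = {}
        self.snapshot = pickle.dumps({})
        self.journal = []

    def execute(self, key, value):
        self.journal.append((key, value))   # journal first (write-ahead)
        self.state[key] = value

    def take_snapshot(self):
        self.snapshot = pickle.dumps(self.state)
        self.journal.clear()                # journal restarts after image

    def recover(self):
        state = pickle.loads(self.snapshot)
        for key, value in self.journal:     # replay only the tail
            state[key] = value
        return state

p = Prevalent()
p.execute("a", 1)
p.take_snapshot()
p.execute("b", 2)
assert p.recover() == {"a": 1, "b": 2}
```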
Database management systems (DBMSs)
DBMSs use a combination of the dirty writes and transaction journaling techniques mentioned above. They provide not only persistence but also other services such as queries, auditing and access control.
Persistent operating systems
Persistent operating systems are operating systems that preserve their state even after a crash or unexpected shutdown. Operating systems with this capability include
KeyKOS
EROS, the successor to KeyKOS
CapROS, revisions of EROS
Coyotos, successor to EROS
Multics with its single-level store
Phantom
IBM System/38
Grasshopper OS
Lua OS
tahrpuppy-6.0.5
See also
Persistent data
Persistent data structure
Persistent identifier
Persistent memory
Copy-on-write
CRUD
Java Data Objects
Java Persistence API
System Prevalence
Orthogonality
Service Data Object
Snapshot (computer storage)
References
Computing terminology
Computer programming
Models of computation
2007 Nebraska Cornhuskers football team

The 2007 Nebraska Cornhuskers football team represented the University of Nebraska–Lincoln in the 2007 NCAA Division I FBS football season. The team was coached by Bill Callahan and played their home games at Memorial Stadium in Lincoln, Nebraska.
Before the season
The Nebraska football team's schedule was rated the toughest in the Big 12 Conference, and the 12th toughest in the 2007 NCAA Division I FBS football season. The team was predicted to win the Big 12 North division.
After a tight competition with two-year backup Joe Ganz, Arizona State transfer quarterback Sam Keller won the starting position; Keller had spent the previous season on the scout team, as required by NCAA transfer rules.
Schedule
Roster and coaching staff
Game summaries
Nevada
Marlon Lucky was named the Walter Camp Football Foundation National Offensive Player of the Week for his career-best 233 yards rushing against Nevada.
Wake Forest
As ESPN described the game, "Sam Keller nearly gave away the game during his first road start at Nebraska. Zack Bowman jumped up to take it back for the Cornhuskers. Three plays after Keller threw an interception deep in Nebraska territory, Bowman picked off a Wake Forest pass in the end zone and the 16th-ranked Cornhuskers held on to win 20–17 on Saturday."
USC
After a bye week, the Trojans visited the Nebraska Cornhuskers in Lincoln, Nebraska. In the pre-season, the game was named as one of the candidates for the 10 most important games of 2007. For the Huskers, the game was especially critical to their hopes of showing progress under 4th year head coach Bill Callahan. The game marked the first time a No. 1-ranked team had visited Lincoln since 1978. Because of the game's significance, ESPN College GameDay chose it as the site of its weekly broadcast.
Callahan had been criticized for his conservative play-calling during the 2006 game in Los Angeles; instead of playing to win, it appeared the Huskers were playing to not get blown out by the then-favored Trojans. In that game the normally prolific West Coast offense of Nebraska, which had produced 541 yards a game, was corralled on the ground and attempted only 17 passes in a 28–10 Husker loss. For 2007, Callahan pledged to play more aggressively, using running back Marlon Lucky and quarterback Sam Keller. Keller, the Huskers redshirt senior starting quarterback, was a 2006 transfer from Arizona State; as a Sun Devil Keller started the first seven games of his 2005 junior season, throwing for 2,165 yards, before a disastrous game against USC where, after leading ASU to a 21–3 halftime lead, he and the offense fell apart on the way to a 38–28 loss where he was sacked five times and threw five interceptions. Due to NCAA transfer rules, Keller spent the 2006 season on the Huskers' scout team.
The Trojans stayed in nearby Omaha and practiced at a local high school; Carroll took the rare step of closing practice to outsiders after a local radio station announced the location. The game marked the return of primary receiver Patrick Turner and running back Chauncey Washington from injury; linebacker Brian Cushing, who injured his ankle early against Idaho, had not fully recovered but was allowed to suit-up as a reserve. Senior center Matt Spanos remained injured, and true freshman Kris O'Dowd was called to start again. Veteran secondary member Josh Pinkard was lost for the season after his sore knee gave out during a bye week practice, resulting in a torn ACL requiring surgery.
Anticipation for the game was high in Lincoln, fueling strong demand for tickets and accommodations; the game brought celebrities including USC fans Will Ferrell (also an alumnus) and Keanu Reeves, Nebraska fans Larry the Cable Guy, Supreme Court Justice Clarence Thomas, Rush Limbaugh, and Ward Connerly; past Husker Heisman-winner Mike Rozier, Trojans Heisman-winner Marcus Allen and star Trojans safety Ronnie Lott were also on hand for the game. The game fell on Pete Carroll's 56th birthday; as a surprise, Carroll was treated to a recorded message by actor Kiefer Sutherland, star of his favorite television show, 24. The morning recording of College GameDay attracted 13,293 fans, second to the all-time record of 15,808 set by Nebraska in 2001. With 84,959 in attendance, Nebraska recorded its NCAA-record 284th consecutive home sellout dating back to 1962.
USC dominated the game 49–31, in a game that was not as close as the final score indicated: the Trojans led 42–10 going into the fourth quarter; Nebraska scored two touchdowns in the final five minutes during garbage time. The Trojans dominated on the ground, as they out-gained Nebraska 313–31 in rushing yards and averaged 8.2 yards per carry, the most ever against a Nebraska team. Stafon Johnson led USC running backs with a career-best 144 yards in 11 carries with one touchdown; other major contributors were C. J. Gable (69 yards in four carries, including a 40-yard run), Washington (43 yards in 12 carries with two touchdowns), and another versatile performance by fullback Stanley Havili (52 rushing yards in two rushes with one touchdown, and three pass receptions for 22 yards with one touchdown). The Trojans passing game again did not find a rhythm, with several dropped passes, but the defense was able to frustrate the Husker offense for most of the game and cause two pivotal 3rd quarter interceptions.
The Trojans did not escape injuries, as linebacker Clay Matthews, substituting for the recovering Brian Cushing, broke his thumb, causing Cushing to enter the game as his replacement. The Trojans also suffered two injuries on kick returns: fullback Alfred Rowe suffered a mild concussion, and there was a moment of worry when returner Vincent Joseph, after being tackled and fumbling the ball, lay on the turf for over 10 minutes before being removed by stretcher with a bruised larynx and a neck sprain, but no serious injuries. Linebacker Rey Maualuga was flagged during a field goal attempt for the rarely called penalty of "disconcerting", which is given for "words or signals that disconcert opponents when they are preparing to put the ball in play".
After losing first place votes in the polls during the bye week, USC's performance regained six after their performance against the Huskers in a hostile environment. Receiving specific praise was the Trojans offensive line, as well as the continued poise and ability of freshman center O'Dowd.
Ball State
Iowa State
Missouri
Oklahoma State
Athletic Director Steve Pederson was fired after this game. Former Nebraska head coach Tom Osborne was named as interim Athletic Director. He indicated that there would be no coaching changes during the season.
Texas A&M
Texas
Nebraska first played the Texas Longhorns in 1933 and the Longhorns hold a 7–4–0 record. Nebraska won the first meeting by the lopsided score of 26–0. As with Oklahoma State and Texas A&M, Nebraska plays the Longhorns two out of every four years as part of the Big 12 Conference schedule. Since their first meeting, the series has included a number of upsets and close calls. In 1960 a #4 ranked Longhorn squad was upset by an unranked Nebraska team, 14–13. In 1996 an unranked Texas team defeated #3 ranked Nebraska (who were also the defending national champions) 37–27 to win the inaugural Big 12 Conference football championship and deprive the Cornhuskers a shot at repeating as national champions. In 1998 an unranked Texas team beat #7 Nebraska 20–16.
In 1999 the two teams met twice. In the regular season, #18 Texas beat #3 Nebraska by 24–20. However, #3 Nebraska beat #12 Texas in the Big 12 Championship game, 22–6. In 2002 the Longhorns were ranked No. 7 and they went to Lincoln, Nebraska to play an unranked Nebraska team. In front of the largest crowd in Nebraska history (78,268) the 'Horns snapped the Huskers' national-best 26-game winning streak at Memorial Stadium by a score of 27–24. Most recently, in the 2006 game, #5 Texas faced #17 Nebraska on a snowy day in Lincoln. The Longhorns were trailing and needed a field goal by walk-on kicker Ryan Bailey (with just 23 seconds remaining in the game) to win 22–20.
On the morning of the game, oddsmakers favored Texas to win by 21 points. The weather forecast called for a high of 76 degrees and plentiful sunshine with winds NNE at 10 to 15 miles per hour. Texas stuck with their passing game for three quarters and trailed Nebraska most of the way; the Cornhuskers led 17–9 to start the fourth. ESPN reported, "Once Texas figured out it should be running against one of the nation's worst run defenses, things turned out all right for the Longhorns."
The Longhorns may have switched to the running game almost by chance. McCoy took a hard hit as he scrambled outside the pocket and was shaken up badly enough to leave the game for a play. John Chiles came in at quarterback; his one play, a zone-read handoff to Jamaal Charles, produced 24 yards. According to ESPN, "suddenly Texas had figured out how to beat a Cornhuskers' team that had been steamrolled on the ground in recent weeks. Texas only threw three passes in the fourth quarter."
Once Texas switched to the zone read offense, they quickly started gaining yards and points. Charles ran for a career-high 290 yards, including 216 yards and three long touchdown runs in the fourth quarter. His tally also set a new record for rushing against the Cornhuskers, surpassing the old record of 247 yards by Oklahoma's Billy Sims. Charles explained "It was my time to show everyone what I can do. When I saw a hole, I blasted through it." Texas finished with 181 yards passing and 364 yards rushing; Nebraska had 315 yards passing and 132 yards rushing. The running back was named the Walter Camp Football Foundation National Offensive Player of the Week.
The game was a milestone for one coach and a millstone for another; it was the 100th win for Mack Brown at Texas; and it put more pressure on beleaguered Nebraska coach Bill Callahan. Brown remarked on his victory, "A hundred is nice. I knew the game was going to come down like it did. It didn't surprise me. They made sure that I'll remember it the rest of my life." Callahan was fired five weeks later.
Kansas
The Nebraska-Kansas series is the longest uninterrupted series in college football at 102 years. In the 2007 meeting, Kansas beat Nebraska 76–39. The Jayhawks set an all-time record for most touchdowns and most points scored by a Nebraska opponent. Their 48 points in the first half was the most ever scored against Nebraska in the first half. With the win, Kansas took their record to 9–0 for the first time since 1908.
Fox Sports reported, "It was only the second victory for Kansas in the last 39 games against Nebraska, which appears to be coming to pieces in the fourth season of embattled coach Bill Callahan."
Kansas State
Getting just his second career start after taking over for the injured Sam Keller in the fourth quarter of the Texas game, Joe Ganz broke the school single-game records for passing yards and touchdowns.
Colorado
It was a must-win situation for both teams, as they had identical 5–6 records and each needed a win to get to a bowl. Although they trailed by 11 points at the half, Colorado went on to win 65–51, as the Husker defense simply could not find an answer for Colorado's offense. Husker Coach Bill Callahan, having had his second losing season in four years, both being decided by a loss to Colorado, was fired the day after the game. Athletic Director Tom Osborne went on to hire Mark "Bo" Pelini as head coach.
Rankings
After the season
The team was coached by Bill Callahan, who returned for his fourth year with the Huskers, and expectations for the season were high, considering NU had reached the Big 12 title game the previous year. But the Huskers recorded only their second losing season since 1961, and the second in four years (the last one coming in 2004 on Callahan's watch). Following the conclusion of the season, Callahan was fired by interim athletic director Tom Osborne.
On December 2, 2007, Bo Pelini was named as head coach for Nebraska by interim athletic director Tom Osborne.
Awards
Draft picks, signees, or other future professional players
Prince Amukamara, 2011 1st–round pick of the New York Giants
Larry Asante, 2010 4th–round pick of the Cleveland Browns
Zackary Bowman, 2008 5th–round pick of the Chicago Bears
Lance Brandenburgh, 2008 free agent signee of the San Francisco 49ers
Chris Brooks, 2010 free agent signee of the Tampa Bay Buccaneers
Brett Byford, 2008 free agent signee of the New York Jets
Jared Crick, 2012 4th-round pick of the Houston Texans
Phillip Dillard, 2010 4th–round pick of the New York Giants
Joe Ganz, 2009 free agent signee of the Washington Redskins
Cody Glenn, 2009 5th–round pick of the Washington Redskins
Tierre Green, 2008 free agent signee of the Green Bay Packers
Cortney Grixby, 2008 free agent signee of the Carolina Panthers
Eric Hagg, 2011 7th–round pick of the Cleveland Browns
Frantz Hardy, 2008 free agent signee of the Philadelphia Eagles
Roy Helu, Jr., 2011 4th–round pick of the Washington Redskins
Alex Henery, 2011 4th–round pick of the Philadelphia Eagles
Ricky Henry, 2011 UFL 1st–round pick of the Hartford Colonials
Brandon Johnson, Sioux City Bandits
Andre Jones, Central Valley Coyotes
D.J. Jones, 2011 UFL 6th–round pick of the Omaha Nighthawks
Marcel Jones, 2012 7th-round draft pick of the New Orleans Saints
Sam Keller, 2008 free agent signee of the Tampa Bay Buccaneers
Marlon Lucky, 2009 free agent signee of the Cincinnati Bengals
Corey McKeon, 2008 free agent signee of the Tampa Bay Buccaneers
Lydon Murtha, 2009 7th–round pick of the Miami Dolphins
Carl Nicks, 2008 5th–round pick of the New Orleans Saints
Matt O'Hanlon, 2010 free agent signee of the Carolina Panthers
Steve Octavien, 2008 free agent signee of the Kansas City Chiefs
Niles Paul, 2011 5th–round pick of the Washington Redskins
Todd Peterson, 2009 free agent signee of the Jacksonville Jaguars
Zach Potter, 2009 free agent signee of the New York Jets
Andy Poulosky, Sioux City Bandits
Maurice Purify, 2008 free agent signee of the Cincinnati Bengals
Bo Ruud, 2008 6th–round pick of the New England Patriots
Matt Slauson, 2009 6th–round pick of the New York Jets
Mike Smith, 2011 UFL 5th–round pick of the Omaha Nighthawks
Ty Steinkuhler, 2009 free agent signee of the New York Jets
Ndamukong Suh, 2010 1st–round pick of the Detroit Lions
Nate Swift, 2009 free agent signee of the Denver Broncos
Barry Turner, 2010 free agent signee of the Detroit Lions
Keith Williams, 2011 6th–round pick of the Pittsburgh Steelers
Kenny Wilson, Sioux City Bandits
References
Nebraska
Nebraska Cornhuskers football seasons
Nebraska Cornhuskers football |
32205776 | https://en.wikipedia.org/wiki/P.E.S.%20Institute%20of%20Technology%20and%20Management | P.E.S. Institute of Technology and Management | P.E.S. Institute of Technology and Management is an engineering and management college located in Shivamogga, Karnataka, India. It is affiliated to the Visvesvaraya Technological University, Belgaum.
About
PESITM, established in Shivamogga in 2007, currently offers a range of courses in engineering and management.
The institute supports research and development activities and presently offers:
Six BE courses
One MBA programme
Ph.D. research centres
M.Tech programmes in digital electronics and computer science & engineering.
Certification
PESITM is an ISO 9001:2008 certified institute.
1. The Quality Management System of PES Institute of Technology & Management, Shivamogga complies with the requirements of ISO 9001:2008.
2. The certificate is valid for all activities related to educational services offering four-year B.E programmes in Civil Engg, CSE, EEE, ECE, ISE and ME and a two-year MBA programme with specializations in marketing, finance and human resources.
Courses offered
UG Programs
Computer Science & Engineering
Electronics & Communications Engineering
Information Science & Engineering
Electrical & Electronics Engineering
Mechanical Engineering
Civil engineering
PG programs
M.Tech in Computer Science and Engineering
M.Tech in Digital Electronics
Master of Business Administration
Recent activities
'Prerana' 2015, the bi-annual cultural fest, was held on May 8 & 9, with ten pre-fest events and 15 fest events.
'Android Hackathon' workshop conducted from 13–15 February 2015.
Department of Computer Science & Engineering, PESITM conducted International Conference on Information and Communication Technologies (ICICT-2014) on 5 and 6 May 2014.
PESITM conducted Annual Cultural Fest Prerana - 2014 in the month of April, 2014
Google Android app development workshop was conducted by Dept. of Computer Science in association with Google Development Group (GDG) of Mangalore 14 and 15 March 2014.
A two-day workshop on "Getting started with Unity3D for Windows 8" was conducted at PESITM College on 16 and 17 March by the "Student Nokia Developers Group of PESITM".
PESITM in association with Microsoft conducted 24-hour App Fest Hackaton on 29 and 30 September 2013.
Department of CSE conducted a three-day Mobile Innovation Workshop in association with Nokia.
References
External links
Affiliates of Visvesvaraya Technological University
Engineering colleges in Karnataka
Education in Shimoga
Universities and colleges in Shimoga district
Educational institutions established in 2007
2007 establishments in Karnataka |
28929055 | https://en.wikipedia.org/wiki/Black%20Ships%20Before%20Troy | Black Ships Before Troy | Black Ships Before Troy: The story of the Iliad is a novel for children written by Rosemary Sutcliff, illustrated by Alan Lee, and published (posthumously) by Frances Lincoln in 1993. Partly based on the Iliad, the book retells the story of the Trojan War, from the birth of Paris to the building of the Trojan Horse. For his part Lee won the annual Kate Greenaway Medal from the Library Association, recognizing the year's best children's book illustration by a British subject.
Plot
Reviews and reprints
Kirkus Reviews noted the "compelling vision and sensitivity to language, history, and heroics" that she brought to retelling both Arthurian legends and the Homeric epic. The Reading Teacher remarked that the book's division into 19 chapters makes it a good text to spread out over multiple readings, and praised Sutcliff's "graceful, powerful language". Sutcliff's prose is praised also in Books to Build On, a collection of teaching resources edited by E. D. Hirsch, Jr. A Common Core handbook suggests it for grades 6-8.
Delacorte Press reprinted Black Ships in the US within the calendar year (October 1993; ).
Sequel
Sutcliff's retelling of Homer's Odyssey story was also illustrated by Alan Lee and published by Frances Lincoln in a companion edition, The Wanderings of Odysseus: The story of the Odyssey (1995, ).
Kirkus praised both Sutcliff's text, for preserving "a certain formality of language" and for graceful "winnowing", and Lee's "spectacular paintings": "Beautiful and detailed ... the pictures are obviously the result of careful research and reward close scrutiny. A gorgeous book, more than worthy of its predecessor." It suggested the book for ages 10 and up.
References
External links
Rosemary Sutcliff —curated by Anthony Lawton, literary executor
British children's novels
Kate Greenaway Medal winning works
Novels by Rosemary Sutcliff
Novels set during the Trojan War
1993 British novels
Novels set in ancient Troy
Novels published posthumously
1993 children's books
Novels based on the Iliad |
11635962 | https://en.wikipedia.org/wiki/Mythbuntu | Mythbuntu | Mythbuntu is a discontinued media center operating system based on Ubuntu, which integrated the MythTV media center software as its main function, and did not install with all of the programs included with Ubuntu.
Following the principles of fellow Linux distributions LinHES and MythDora, Mythbuntu was designed to simplify the installation of MythTV on a home theater PC. After Mythbuntu is installed, the MythTV setup program begins, in which the system can be configured as a frontend (a media viewer), a backend (a media server), or a combination of the two.
Mythbuntu aimed to keep close ties with Ubuntu thus allowing changes to be moved upstream for the greater benefit of the Ubuntu Community. Due to the close link with Ubuntu, easy conversions between desktop and standalone Mythbuntu installations are possible. The development cycle of Mythbuntu originally followed that of Ubuntu, with releases occurring every six months. Starting with 12.04, Mythbuntu releases tracked Ubuntu's LTS (long-term support) releases, which release approximately every two years.
On 4 November 2016 the development team announced the end of Mythbuntu as a separate distribution, citing insufficient developers. The team will continue to maintain the Mythbuntu software repository; the announcement advised new users to install another Ubuntu distribution, then install MythTV from the repository.
Desktop
Mythbuntu uses the Xfce desktop interface by default, but users can install ubuntu-desktop, kubuntu-desktop, or xubuntu-desktop through the Mythbuntu Control Centre, allowing users to get the default interfaces from those flavors of Ubuntu. The only software that is included in this release is media-related software such as VLC, Amunix, and Rhythmbox.
Mythbuntu Control Centre
The Mythbuntu Control Centre provides a GUI which can be used to configure the system. The user can select what kind of system (Backend, Frontend, Both) they wish to have installed. Inside the Control Centre, the user can perform common actions such as installing plugins for MythTV, configuring the MySQL database, setting passwords, and installing drivers and codecs. MythTV updates can be enabled here as well as switching to the latest release version or development branch of MythTV. Configuration of remote controls and a range of other utilities and small programs are performed all from within this program.
Different applications of Mythbuntu
Complete installation (front-end and back-end)
Mythbuntu can be used to install a full MythTV system on a single device (acting as both a client and a server). The front-end is the software required for the visual elements (or the GUI) and is utilised by the common user to find, play, and manipulate media files. The back end is the server where the media files, tuners, and database are actually stored. A combined front-and-back-end system may have an advantage in that it has portability: it is a standalone device that is not dependent on a separate server, such as a gaming console.
Front-end-only installations
Alternatively, Mythbuntu can be used to install a MythTV client: a front-end-only system. This might be useful where users already have a central storage server in their home. The central storage device can act as a MythTV server, and the MythTV front-end client software can be installed on devices with low-power hardware. Mythbuntu can also run directly from a CD-ROM (without installation), provided that there is a network connection to a PC with a MythTV back-end server.
Using a server separate from one or more front-end units offers the ability to use multiple clients with simultaneous access to a single repository of shared media files. The server used would generally have hardware of a relatively high specification and can be kept outside of the main living room or other entertainment area of the home. Another advantage is the ability to move some of the potentially noisy hardware out of the living room, as low-noise, high-performance hardware can be expensive.
Adding Mythbuntu to Ubuntu
Mythbuntu is an Ubuntu derivative that offers an easy single-click conversion from Ubuntu to Mythbuntu. This means a user no longer needs to type commands at the command line, which can be daunting to new users, or hunt for packages in the various package managers.
Version history
Mythbuntu 7.10 Gutsy Gibbon (with MythTV .20) was released on Monday, October 22, 2007.
Mythbuntu 8.04 Hardy Heron (with MythTV .21) was released Thursday, Apr 24, 2008.
Mythbuntu 8.10 Intrepid Ibex (with MythTV .21) was released on Thursday Oct 30th, 2008.
Mythbuntu 9.04 Jaunty Jackalope (with MythTV .21-fixes) was released on Thursday April 23, 2009.
Mythbuntu 9.10 Karmic Koala (with MythTV .22) was released on Thursday October 29, 2009.
Mythbuntu 10.04 Lucid Lynx (with MythTV .23) was released on Thursday April 29, 2010.
Mythbuntu 10.10 Maverick Meerkat (with MythTV .23.1) was released on October 19, 2010.
Mythbuntu 11.04 Natty Narwhal (with MythTV .24) was released on April 28, 2011.
Mythbuntu 11.10 Oneiric Ocelot (with MythTV .24) was released on October 13, 2011.
Mythbuntu 12.04 Precise Pangolin (with MythTV .25) was released on April 26, 2012.
Mythbuntu 14.04 Trusty Tahr (with MythTV .27) was released on April 17, 2014.
Mythbuntu 16.04 Xenial Xerus (with MythTV .28) was released on April 21, 2016.
See also
LinuxMCE
List of free television software
XBMC
References
External links
Free television software
Multimedia software
Ubuntu derivatives
Discontinued Linux distributions
Linux distributions |
1218406 | https://en.wikipedia.org/wiki/ACM%20SIGGRAPH | ACM SIGGRAPH | ACM SIGGRAPH is the international Association for Computing Machinery's Special Interest Group on Computer Graphics and Interactive Techniques based in New York. It was founded in 1969 by Andy van Dam (its direct predecessor, ACM SICGRAPH was founded two years earlier in 1967).
ACM SIGGRAPH convenes the annual SIGGRAPH conference, attended by tens of thousands of computer professionals. The organization also sponsors other conferences around the world, and regular events are held by its professional and student chapters in several countries.
Committees
Professional and Student Chapters Committee
The Professional and Student Chapters Committee (PSCC) is the leadership group that oversees the activities of ACM SIGGRAPH Chapters around the world. Details about Local Chapters can be found below.
International Resources Committee
The International Resources Committee (IRC) facilitates throughout the year worldwide collaboration in the ACM SIGGRAPH community, provides an English review service to help submitters whose first language is not English, and encourages participation in all SIGGRAPH conference venues, activities, and events.
Awards
ACM SIGGRAPH presents six awards to recognize achievement in computer graphics. The awards are presented at the annual SIGGRAPH conference.
Steven A. Coons Award
The Steven Anson Coons Award for Outstanding Creative Contributions to Computer Graphics is considered the highest award in computer graphics, and is presented each odd-numbered year to individuals who have made a lifetime contribution to computer graphics. It is named for Steven Anson Coons, an early pioneer in interactive computer graphics.
Recipients:
Computer Graphics Achievement Award
The Computer Graphics Achievement award is given each year to recognize individuals for an outstanding achievement in computer graphics and interactive techniques that provided a significant advance in the state of the art of computer graphics and is still significant and apparent.
Recipients:
Significant New Researcher Award
The Significant New Researcher Award is given annually to a researcher with a recent significant contribution to computer graphics.
Recipients:
Distinguished Artist Award
The Distinguished Artist Award is presented annually to an artist who has created a significant body of digital art work that has advanced the aesthetic content of the medium.
Recipients:
Professional and Student Chapters
Within their local areas, Chapters continue the work of ACM SIGGRAPH on a year-round basis via their meetings and other activities. Each ACM SIGGRAPH Professional and Student Chapter consists of individuals involved in education, research & development, the arts, industry and entertainment. ACM SIGGRAPH Chapter members are interested in the advancement of computer graphics and interactive techniques, its related technologies and applications. For the annual conference, some of the Chapters produce a "Fast Forward" overview of activities.
Listed below are some examples of Chapter activities:
MetroCAF is the annual NYC Metropolitan Area College Computer Animation Festival, organized by the New York City chapter of ACM SIGGRAPH.
Bogota ACM SIGGRAPH has become one of the largest Animation and VFX events in Latin America, counting more than 6,000 registered attendees in 2015’s edition.
ACM SIGGRAPH Helsinki runs an evening-long graphics conference called SyysGraph, held every autumn. The seminar covers the latest developments in the 3D graphics field, along with demos, animations and interactive technologies. The presentations are held in English.
Silicon Valley ACM SIGGRAPH held "Star Wars: The Force Awakens" Visual Effects Panel with Industrial Light & Magic.
See also
Association for Computing Machinery
ACM Transactions on Graphics
Computer Graphics, its defunct quarterly periodical publication.
SIGGRAPH Conferences
References
External links
Official website
1969 establishments in New York (state)
Computer-related introductions in 1969
Organizations established in 1969
Association for Computing Machinery Special Interest Groups
Computer graphics organizations |
42572552 | https://en.wikipedia.org/wiki/Software%20taggant | Software taggant | A software taggant is a cryptographic signature added to software that enables positive origin identification and integrity of programs. Software taggants use standard PKI techniques (see Public key infrastructure) and were introduced by the Industry Connections Security Group of IEEE in an attempt to control proliferation of malware obfuscated via executable compression (runtime packer).
The concept of a PKI-based system to mitigate runtime packer abuse was introduced in 2010 and described in a Black Hat Briefings presentation by Mark Kennedy and Igor Muttik. The term was proposed by Arun Lakhotia (due to its similarities with chemical taggants) who also analyzed the economics of a packer ecosystem.
A software taggant is a form of code signing somewhat similar to Microsoft's Authenticode. The key difference between a software taggant and Authenticode is that a software taggant is added transparently and at no cost for the end user of a runtime packer. Also, a software taggant may cover only small critical areas of the program to minimize the cost of software integrity checking. By contrast, Authenticode always covers nearly the entire file, so the cost of checking grows linearly with the file size.
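As a rough illustration of the cost argument (not the actual taggant format, and using plain hashes in place of PKI signatures; the function names and region sizes are invented for the sketch), covering only a small critical region keeps verification work independent of total file size:

```python
import hashlib


def digest_whole_file(data: bytes) -> str:
    # Authenticode-style: the digest covers (nearly) the whole file,
    # so verification cost grows linearly with file size.
    return hashlib.sha256(data).hexdigest()


def digest_critical_region(data: bytes, offset: int, length: int) -> str:
    # Taggant-style idea: only a small critical area is covered,
    # so verification cost does not depend on total file size.
    return hashlib.sha256(data[offset:offset + length]).hexdigest()


packed = bytes(1_000_000)  # stand-in for a packed executable image
whole = digest_whole_file(packed)
critical = digest_critical_region(packed, offset=0, length=4096)
print(len(whole), len(critical))  # both digests are 64 hex chars
```

A real taggant would sign such a digest with a private key under a PKI, so that the packer vendor's identity can be verified along with the program's integrity.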
The software taggant project is run by IEEE ICSG and is open source: it is hosted on GitHub and relies on OpenSSL. Software taggants also help to distinguish legitimate software from malware, which may likewise employ anti-tampering methods.
References
Cryptographic algorithms |
1350680 | https://en.wikipedia.org/wiki/Unix%20file%20types | Unix file types | The seven standard Unix file types are regular, directory, symbolic link, FIFO special, block special, character special, and socket as defined by POSIX. Different OS-specific implementations allow more types than what POSIX requires (e.g. Solaris doors). A file's type can be identified by the ls -l command, which displays the type in the first character of the file-system permissions field.
For regular files, Unix does not impose or provide any internal file structure; therefore, their structure and interpretation is entirely dependent on the software using them. However, the file command can be used to determine what type of data they contain.
Representations
Numeric
In the stat structure, file type and permissions (the mode) are stored together in a bit field, which has a size of at least 12 bits (3 bits to specify the type among the seven possible types of files; 9 bits for permissions). The layout for permissions is defined by POSIX to be at the least-significant 9 bits, but the rest is undefined.
By convention, the mode is a 16-bit value written out as a six-digit octal number without a leading zero. The format part occupies the lead 4 bits (2 octal digits), and "10" (1000 in binary) usually stands for a regular file. The next 3 bits (1 digit) are usually used for setuid, setgid, and sticky. The last part is already defined by POSIX to contain the permission. An example is "100644" for a typical file. This format can be seen in git, tar, and ar, among other places.
The type of a file can be tested using macros like S_ISDIR. Such a check is usually performed by masking the mode with S_IFMT (often the octal number 170000, covering the lead 4 bits) and checking whether the result matches S_IFDIR. S_IFMT is not a core POSIX concept, but an X/Open System Interfaces (XSI) extension; systems conforming only to POSIX may use other methods.
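The same check can be sketched with Python's standard stat module, which mirrors the C macros and masks (a minimal illustration, assuming a Unix-like system where /tmp is a directory):

```python
import os
import stat

mode = os.stat("/tmp").st_mode

# stat.S_IFMT() masks off everything but the file-type bits (octal 170000),
# so S_ISDIR() is equivalent to comparing the masked value with S_IFDIR.
assert stat.S_ISDIR(mode) == (stat.S_IFMT(mode) == stat.S_IFDIR)

# The conventional octal rendering puts the type in the leading digits:
print(oct(stat.S_IFREG | 0o644))  # 0o100644, a typical regular file
```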
Mode string
Take for example one line in the ls -l output:
drwxr-xr-x 2 root root 0 Jan 1 1970 home
POSIX specifies the format of the output for the long format (-l option). In particular, the first field (before the first space) is dubbed the "file mode string" and its first character describes the file type. The rest of this string indicates the file permissions.
Therefore, in the example, the mode string is drwxr-xr-x: the file type is d (directory) and the permissions are rwxr-xr-x.
Examples of implementations
The GNU coreutils version of ls uses a call to filemode(), a glibc function (exposed in the gnulib library) to get the mode string.
FreeBSD uses a simpler approach but allows a smaller number of file types.
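Python's standard library exposes the same conversion as stat.filemode(), which makes the mapping between mode bits and the ten-character string easy to experiment with (a small illustration):

```python
import stat

# File type and permission bits map directly onto the ten-character string
# printed by ls -l: one type character followed by nine permission characters.
print(stat.filemode(0o100644))  # -rw-r--r--  (regular file)
print(stat.filemode(0o040755))  # drwxr-xr-x  (directory)
print(stat.filemode(0o120777))  # lrwxrwxrwx  (symbolic link)
print(stat.filemode(0o010644))  # prw-r--r--  (FIFO)
```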
Regular file
Regular files show up in ls -l with a hyphen-minus - in the mode field:
$ ls -l /etc/passwd
-rw-r--r-- ... /etc/passwd
Directory
The most common special file is the directory. The layout of a directory file is defined by the filesystem used. As several filesystems are available under Unix, both native and non-native, there is no one directory file layout.
A directory is marked with a d as the first letter in the mode field in the output of ls -dl or stat, e.g.
$ ls -dl /
drwxr-xr-x 26 root root 4096 Sep 22 09:29 /
$ stat /
File: "/"
Size: 4096 Blocks: 8 IO Block: 4096 directory
Device: 802h/2050d Inode: 128 Links: 26
Access: (0755/drwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root)
...
Symbolic link
A symbolic link is a reference to another file. This special file is stored as a textual representation of the referenced file's path (which means the destination may be a relative path, or may not exist at all).
A symbolic link is marked with an l (lower case L) as the first letter of the mode string, e.g.
lrwxrwxrwx ... termcap -> /usr/share/misc/termcap
lrwxrwxrwx ... S03xinetd -> ../init.d/xinetd
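The stored-path behaviour can be observed from Python (a sketch assuming a Unix-like system; the file names are arbitrary):

```python
import os
import stat
import tempfile

d = tempfile.mkdtemp()
link = os.path.join(d, "dangling")

# The link target is stored as literal text; it need not exist at all.
os.symlink("no/such/file", link)

print(os.readlink(link))                     # no/such/file (unresolved text)
print(stat.S_ISLNK(os.lstat(link).st_mode))  # True: lstat() examines the link itself
print(os.path.exists(link))                  # False: following the link fails
```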
FIFO (named pipe)
One of the strengths of Unix has always been inter-process communication. Among the facilities provided by the OS are pipes, which connect the output of one process to the input of another. This is fine if both processes exist in the same parent process space, started by the same user, but there are circumstances where the communicating processes must use FIFOs, here referred to as named pipes. One such circumstance occurs when the processes must be executed under different user names and permissions.
Named pipes are special files that can exist anywhere in the file system. They can be created with the command mkfifo as in mkfifo mypipe.
A named pipe is marked with a p as the first letter of the mode string, e.g.
prw-rw---- ... mypipe
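A named pipe's behaviour can be sketched in Python. open() on a FIFO blocks until both a reader and a writer are present, so the reader runs in a separate thread (assumes a Unix-like system; the path is arbitrary):

```python
import os
import stat
import tempfile
import threading

path = os.path.join(tempfile.mkdtemp(), "mypipe")
os.mkfifo(path)  # equivalent to the mkfifo command

assert stat.S_ISFIFO(os.stat(path).st_mode)  # the mode string would start with 'p'

received = []

def reader():
    # Blocks until a writer opens the other end of the FIFO.
    with open(path, "rb") as f:
        received.append(f.read())

t = threading.Thread(target=reader)
t.start()
with open(path, "wb") as f:
    f.write(b"hello through the pipe")
t.join()
print(received[0])  # b'hello through the pipe'
```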
Socket
A socket is a special file used for inter-process communication, which enables communication between two processes. In addition to sending data, processes can send file descriptors across a Unix domain socket connection using the sendmsg() and recvmsg() system calls.
Unlike named pipes which allow only unidirectional data flow, sockets are fully duplex-capable.
A socket is marked with an s as the first letter of the mode string, e.g.
srwxrwxrwx /tmp/.X11-unix/X0
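The full-duplex property is easy to demonstrate with a connected pair of Unix domain sockets (a minimal sketch using Python's socketpair(); assumes a Unix-like system):

```python
import socket

# socketpair() returns two already-connected AF_UNIX stream sockets.
a, b = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)

a.sendall(b"ping")  # data flows a -> b ...
print(b.recv(4))    # b'ping'
b.sendall(b"pong")  # ... and b -> a over the same connection
print(a.recv(4))    # b'pong'

a.close()
b.close()
```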
Device file (block, character)
In Unix, almost all things are handled as files and have a location in the file system, even hardware devices like hard drives. The great exception is network devices, which do not turn up in the file system but are handled separately.
Device files are used to apply access rights to the devices and to direct operations on the files to the appropriate device drivers.
Unix makes a distinction between character devices and block devices. The distinction is roughly as follows:
Character devices provide only a serial stream of input or accept a serial stream of output
Block devices are randomly accessible
This distinction is not absolute; for example, disk partitions may have both character devices that provide un-buffered random access to blocks on the partition and block devices that provide buffered random access to blocks on the partition.
A character device is marked with a c as the first letter of the mode string. Likewise, a block device is marked with a b, e.g.
crw------- ... /dev/null
brw-rw---- ... /dev/sda
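The character/block distinction shows up directly in the mode bits (a small sketch, assuming a Linux system where /dev/null exists):

```python
import os
import stat

m = os.stat("/dev/null").st_mode
print(stat.S_ISCHR(m))   # True: /dev/null is a character device
print(stat.filemode(m))  # e.g. crw-rw-rw-

# Block devices would instead satisfy stat.S_ISBLK(); ordinary files
# and directories satisfy neither test.
print(stat.S_ISBLK(os.stat("/tmp").st_mode))  # False
```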
Door
A door is a special file for inter-process communication between a client and server, currently implemented only in Solaris.
A door is marked with a D (upper case) as the first letter of the mode string, e.g.
Dr--r--r-- ... name_service_door
See also
file (command)
References
Unix file system technology
Electronic identification

An electronic identification ("eID") is a digital solution for proof of identity of citizens or organizations. It can be used to access benefits or services provided by government authorities, banks or other companies, for mobile payments, etc. Apart from online authentication and login, many electronic identity services also give users the option to sign electronic documents with a digital signature.
One form of eID is an electronic identification card (eIC), which is a physical identity card that can be used for online and offline personal identification or authentication. The eIC is a smartcard in ID-1 format of a regular bank card, with identity information printed on the surface (such as personal details and a photograph) and in an embedded RFID microchip, similar to that in biometric passports. The chip stores the information printed on the card (such as the holder's name and date of birth) and the holder's photo(s). Several photos may be taken from different angles along with different facial expressions, thus allowing the biometric facial recognition systems to measure and analyze the overall structure, shape and proportions of the face. It may also store the holder's fingerprints. The card may be used for online authentication, such as for age verification or for e-government applications. An electronic signature, provided by a private company, may also be stored on the chip.
Countries which currently issue government-issued eIDs include Afghanistan, Bangladesh, Belgium, Bulgaria, Chile, Finland, Guatemala, Germany, India, Indonesia, Israel, Italy, Luxembourg, the Netherlands, Nigeria, Morocco, Pakistan, Peru, Portugal, Poland, Romania, Estonia, Latvia, Lithuania, Spain, Slovakia, Malta, and Mauritius. Germany, Uruguay and previously Finland have accepted government issued physical eICs. Norway, Sweden and Finland accept bank-issued eIDs (also known as BankID) for identification by government authorities. There are also an increasing number of countries applying electronic identification for voting (enrollment, issuing voter ID cards, voter identification and authentication, etc.), including those countries using biometric voter registration.
eID in Europe
European Union
According to the EU electronic identification and trust services (eIDAS) Regulation, described as a pan-European login system, all organizations delivering public digital services in an EU member state shall accept electronic identification from all EU member states from 29 September 2018.
Belgium
Belgium has been issuing eIDs since 2003, and all identity cards issued since 2004 have been electronic, replacing the previous plastic card.
Chip contents
The eID card contains a chip containing:
the same information as legible on the card
the address of the card holder
the identity - and signature keys and certificates
Using the eID
At home, the users can use their electronic IDs to log into specific websites (such as Tax-on-web, allowing them to fill in their tax form online). To do this the user needs
an eID card
a smartcard reader
the eID middleware software
When other software (such as an Internet Browser) attempts to read the eID, the users are asked for confirmation for this action, and potentially even for their PIN.
Other applications include signing emails with the user's eID certificate private key. Giving the public key to your recipients allows them to verify your identity.
Kids ID
Although legally Belgian citizens only have to carry an ID from the age of 12, as of March 2009, a "Kids ID" has been introduced for children below this age, on a strictly voluntary basis. This ID, beside containing the usual information, also holds a contact number that people, or the child themselves, can call when they, for example, are in danger or had an accident. The card can be used for electronic identification after the age of six, and it does not contain a signing certificate as minors cannot sign a legally binding document. An important goal of the Kids-ID card is to allow children to join "youth-only" chat sites, using their eID to gain entrance. These sites would essentially block any users above a certain age from gaining access to the chat sessions, effectively blocking out potential pedophiles.
Bulgaria
Bulgaria introduced a limited scale proof-of-concept of electronic identity cards, called ЕИК (Eлектронна карта за идентичност), in 2013.
Croatia
Croatia introduced its electronic identity cards, called e-osobna iskaznica, on 8 June 2015.
Denmark
Electronic identities in Denmark issued by banks are called NemID. NemID authentication allows larger payments in MobilePay - a service used by more than half of the population as of 2017.
Estonia
The Estonian ID card is also used for authentication for Estonia's Internet-based voting system. In February 2007, Estonia was the first country to allow for electronic voting for parliamentary elections. Over 30,000 voters participated in the country's e-election.
At the end of 2014, Estonia extended the Estonian ID card to non-residents. The target of the project is to reach 10 million e-residents by 2025, which is 8 times the Estonian population of 1.3 million.
Finland
The Finnish electronic ID was first issued to citizens on 1 December 1999.
Germany
Germany introduced its electronic identity cards, called Personalausweis, in 2010.
Italy
Italy introduced its electronic identity cards, called Carta d'Identità Elettronica (in Italy identified with the acronym CIE), to replace the paper-based ID card in Italy. Since 4 July 2016, Italy is in the process of renewing all ID cards to electronic ID cards.
Latvia
The eID and eSignature service provider in Latvia is called eParaksts.
Malta
Since 12 February 2014, Malta is in the process of renewing all ID cards to electronic ID cards.
Netherlands
Electronic identities in the Netherlands are called DigiD, and the Netherlands is currently developing an eID scheme.
Norway
Electronic identities in Norway issued by banks are called BankID (different than Sweden's BankID). They make it possible to log into Norwegian authorities, universities and banks, and to make larger payments using the Vipps mobile payment service, used by more than half of the population as of 2017. The Norwegian BankID på mobil service is utilizing the mobile phone SIM card for authentication, and is financed by a fee to the mobile network operator for each authentication.
Spain
Electronic identity cards in Spain are called DNIe and have been issued since 2006.
Switzerland
SwissID, developed by the SwissSign Group, is the current certified solution for eID in Switzerland, offered since 2017 (2010–17 as SuisseID). As a basis for a new Federal Act on Electronic Identification Services (e-ID Act), an eID concept had been developed by the authorities, yet experts criticized its technology part.
The law was accepted by the Swiss parliament on 29 September 2019. It would have updated current legislation and would have continued to allow private companies or public organizations to issue eIDs if certified by a new federal authority. However, an optional referendum brought the issue to a public vote on Sunday, 7 March 2021. The vote resulted in 35.6% Yes and 64.4% No, rejecting the proposed new law.
The SwissSign Group might develop the SwissID further, to make it compatible with the coming E-ID regulations.
Sweden
The most widespread electronic identification in Sweden is issued by banks and called BankID. The BankID may be in the form of a certificate file on disk, on card or on smart phones. The latter (Swedish mobile BankID service) was used by 84 percent of the Swedish population in 2019. A Mobile BankID login does not require a fee since the service is provided by banks rather than mobile operators. It can be used both for authentication within various apps and web services on the same smart phone, and also for web pages on other devices. It also supports fingerprint and face recognition authentication on compatible iOS and Android devices.
Electronic IDs are used for secure web login to Swedish authorities, banks, health centers (allowing people to see their medical records and prescriptions and book doctors visits), and companies such as pharmacies. Mobile BankID also allows the Swish mobile payment service, utilized by 78 percent of the Swedish population in 2019, at first mainly for payments between individuals. BankID was previously used for university applications and admissions, but this was prohibited by Swedbank since universities utilized the system for distribution of their own student logins. Increasingly, BankID is used as an added security for signing contracts.
eID in other countries
Afghanistan
Afghanistan issued its first electronic ID (e-ID) card on 3 May 2018. Afghan President Ashraf Ghani was the first to receive the card. He was accompanied by First Lady Rula Ghani, his vice president, the head of the Afghan Senate, the head of the Afghan Parliament, the Chief Justice and other senior government officials, who also received their cards. As of January 2021, approximately 1.7 million Afghan citizens had obtained their e-ID cards.
Costa Rica
Costa Rica plans to introduce facial recognition data into its national identification card.
Guatemala
Guatemala introduced its electronic identity card called DPI (Documento Personal de Identificación) in August 2010.
India
Aadhaar
Indonesia
Indonesian electronic ID was trialed in six areas in 2009 and launched nationwide in 2011.
Israel
Electronic identity cards in Israel have been issued since July 2013.
Kazakhstan
Kazakhstan introduced its electronic identity cards in 2009.
Mauritius
Mauritius has had electronic identity cards since 2013.
Mexico
Mexico intended to develop an official electronic biometric ID card for all minors under the age of 18, called the Personal Identity Card (Record of Minors), which included data verified against the birth certificate, including the names of the legal ascendant(s), the unique Population Registry key (CURP), a biometric facial-recognition photograph, scans of all 10 fingerprints, and an iris scan. Although the cards were destroyed in 2018, some 3 million had been issued.
Nigeria
General multi-purpose electronic identity cards are issued by the National Identity Management Commission (NIMC), a Federal Government agency under the Presidency. The NeID Card complies with ICAO standard 9303 and ISO standard 7816-4, as well as GVCP for the MasterCard-supported payment applet.
NIMC plans to issue 50 million multilayer polycarbonate cards; the first set is contact-only, with dual-interface cards featuring DESFire emulation to follow in the near future.
Pakistan
Pakistan officially began its nationwide Computerized National Identity Card (CNIC) distribution in 2002, with over 89.5 million CNICs issued by 2012. In October 2012, the National Database and Registration Authority (NADRA) introduced the smart national identity card (SNIC), which contains a data chip and 36 security features. The SNIC complies with ICAO standard 9303 and ISO standard 7816-4. The SNIC can be used for both offline and online identification, voting, pension disbursement, social and financial inclusion programmes and other services. NADRA aims to replace all 89.5 million CNICs with SNICs by 2020.
Serbia
Serbia has had a trusted and reliable electronic identity scheme since June 2019. The first reliable service provider is the Office for IT and eGovernment, through which citizens and residents of Serbia can access services on the eGovernment and eHealth portals. The electronic identification offers two levels of security: a basic level with username-and-password authentication only, and a medium level with two-factor authentication.
Sri Lanka
Since 1 January 2016, Sri Lanka has been developing a smart-card-based RFID e-National Identity Card, which will replace the obsolete laminated cards by storing the holder's information on a chip that can be read by banks, offices, etc., reducing the need for physical copies of these documents by storing them in the cloud.
Turkey
In Turkey, the e-Government (e-Devlet) Gateway is a large-scale website that provides access to all public services from a single point. The purpose of the Gateway is to present public services to citizens, enterprises and public institutions effectively and efficiently with information and communication technologies.
Uruguay
Uruguay has had electronic identity cards since 2015. The Uruguayan eID contains a private key that allows documents to be digitally signed, and stores the user's fingerprint in order to verify the holder's identity. It is also a valid travel document in some South American countries.
As of 2017 the old laminated ID coexists with the new eID.
Manufacturing
Electronic identification is also applied in the manufacturing sector, where the technology is transferred to individual parts or components within a manufacturing facility in order to track and identify these parts and enhance manufacturing efficiency. This is also referred to as location-detection technology within the Fourth Industrial Revolution.
See also
List of national identity card policies by country
Self-sovereign identity
References
External links
World Map of eID deployments
Identity documents
Identification
Performance per watt

In computing, performance per watt is a measure of the energy efficiency of a particular computer architecture or computer hardware. Literally, it measures the rate of computation that can be delivered by a computer for every watt of power consumed. This rate is typically measured by performance on the LINPACK benchmark when comparing computing systems: one example of this use is the Green500 list of supercomputers. Performance per watt has been suggested to be a more sustainable measure of computing than Moore's law.
System designers building parallel computers, such as Google's hardware, pick CPUs based on their performance per watt of power, because the cost of powering the CPU outweighs the cost of the CPU itself.
Spaceflight computers have hard limits on the maximum power available and also have hard requirements on minimum real-time performance. A ratio of processing speed to required electrical power is more useful than raw processing speed.
Definition
The performance and power consumption metrics used depend on the definition; reasonable measures of performance are FLOPS, MIPS, or the score for any performance benchmark. Several measures of power usage may be employed, depending on the purposes of the metric; for example, a metric might only consider the electrical power delivered to a machine directly, while another might include all power necessary to run a computer, such as cooling and monitoring systems. The power measurement is often the average power used while running the benchmark, but other measures of power usage may be employed (e.g. peak power, idle power).
For example, the early UNIVAC I computer performed approximately 0.015 operations per watt-second (performing 1,905 operations per second (OPS), while consuming 125 kW). The Fujitsu FR-V VLIW/vector processor system on a chip in the 4 FR550 core variant released 2005 performs 51 Giga-OPS with 3 watts of power consumption resulting in 17 billion operations per watt-second. This is an improvement by over a trillion times in 54 years.
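The arithmetic behind that comparison is straightforward (operations per watt-second is the same as operations per joule):

```python
# UNIVAC I (1951): about 1,905 operations/second at 125 kW.
univac = 1905 / 125_000  # ~0.015 operations per joule

# Fujitsu FR-V FR550 4-core SoC (2005): 51 giga-operations/second at 3 W.
fr550 = 51e9 / 3         # 17 billion operations per joule

print(fr550 / univac)    # ~1.1e12: an improvement of over a trillion times
```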
Most of the power a computer uses is converted into heat, so a system that takes fewer watts to do a job will require less cooling to maintain a given operating temperature. Reduced cooling demands makes it easier to quiet a computer. Lower energy consumption can also make it less costly to run, and reduce the environmental impact of powering the computer (see green computing).
If installed where there is limited climate control, a lower power computer will operate at a lower temperature, which may make it more reliable. In a climate controlled environment, reductions in direct power use may also create savings in climate control energy.
Computing energy consumption is sometimes also measured by reporting the energy required to run a particular benchmark, for instance EEMBC EnergyBench. Energy consumption figures for a standard workload may make it easier to judge the effect of an improvement in energy efficiency.
Performance (in operations/second) per watt can also be written as operations/watt-second, or operations/joule, since 1 watt = 1 joule/second.
FLOPS per watt
FLOPS per watt is a common measure. Like the FLOPS (Floating Point Operations Per Second) metric it is based on, the metric is usually applied to scientific computing and simulations involving many floating point calculations.
Examples
The Green500 list rates highest the two most efficient supercomputers, both based on the same Japanese manycore accelerator technology (PEZY-SCnp) in addition to Intel Xeon processors and both at RIKEN, the top one at 6673.8 MFLOPS/watt; the third-ranked is the Chinese-technology Sunway TaihuLight (a much bigger machine, ranked 2nd on TOP500; the others are not on that list) at 6051.3 MFLOPS/watt.
In June 2012, the Green500 list rated BlueGene/Q, Power BQC 16C as the most efficient supercomputer on the TOP500 in terms of FLOPS per watt, running at 2,100.88 MFLOPS/watt.
In November 2010, the IBM machine Blue Gene/Q achieved 1,684 MFLOPS/watt.
On 9 June 2008, CNN reported that IBM's Roadrunner supercomputer achieves 376 MFLOPS/watt.
As part of Intel's Tera-Scale research project, the team produced an 80-core CPU that can achieve over 16,000 MFLOPS/watt. The future of that CPU is not certain.
Microwulf, a low cost desktop Beowulf cluster of four dual-core Athlon 64 X2 3800+ computers, runs at 58 MFLOPS/watt.
Kalray has developed a 256-core VLIW CPU that achieves 25,000 MFLOPS/watt. Next generation is expected to achieve 75,000 MFLOPS/watt. However, in 2019 their latest chip for embedded is 80-core and claims up to 4 TFLOPS at 20 W.
Adapteva announced the Epiphany V, a 1024-core 64-bit RISC processor intended to achieve 75 GFLOPS/watt, though the company later announced that the Epiphany V was "unlikely" to become available as a commercial product.
US Patent 10,020,436, July 2018 claims three intervals of 100, 300, and 600 GFLOPS/watt.
GPU efficiency
Graphics processing units (GPUs) have continued to increase in energy usage, while CPU designers have recently focused on improving performance per watt. High-performance GPUs may draw large amounts of power, so intelligent techniques are required to manage GPU power consumption. Measures like the 3DMark2006 score per watt can help identify more efficient GPUs. However, that may not adequately incorporate efficiency in typical use, where much time is spent doing less demanding tasks.
With modern GPUs, energy usage is an important constraint on the maximum computational capabilities that can be achieved. GPU designs are usually highly scalable, allowing the manufacturer to put multiple chips on the same video card, or to use multiple video cards that work in parallel. Peak performance of any system is essentially limited by the amount of power it can draw and the amount of heat it can dissipate. Consequently, performance per watt of a GPU design translates directly into peak performance of a system that uses that design.
Since GPUs may also be used for some general purpose computation, sometimes their performance is measured in terms also applied to CPUs, such as FLOPS per watt.
Challenges
While performance per watt is useful, absolute power requirements are also important. Claims of improved performance per watt may be used to mask increasing power demands. For instance, though newer generation GPU architectures may provide better performance per watt, continued performance increases can negate the gains in efficiency, and the GPUs continue to consume large amounts of power.
Benchmarks that measure power under heavy load may not adequately reflect typical efficiency. For instance, 3DMark stresses the 3D performance of a GPU, but many computers spend most of their time doing less intense display tasks (idle, 2D tasks, displaying video). So the 2D or idle efficiency of the graphics system may be at least as significant for overall energy efficiency. Likewise, systems that spend much of their time in standby or soft off are not adequately characterized by just efficiency under load. To help address this some benchmarks, like SPECpower, include measurements at a series of load levels.
The efficiency of some electrical components, such as voltage regulators, decreases with increasing temperature, so the power used may increase with temperature. Power supplies, motherboards, and some video cards are some of the subsystems affected by this. So their power draw may depend on temperature, and the temperature or temperature dependence should be noted when measuring.
Performance per watt also typically does not include full life-cycle costs. Since computer manufacturing is energy intensive, and computers often have a relatively short lifespan, energy and materials involved in production, distribution, disposal and recycling often make up significant portions of their cost, energy use, and environmental impact.
Energy required for climate control of the computer's surroundings is often not counted in the wattage calculation, but it can be significant.
Other energy efficiency measures
SWaP (space, wattage and performance) is a Sun Microsystems metric for data centers, incorporating power and space:

SWaP = performance / (space × power)

where performance is measured by any appropriate benchmark, and space is the size of the computer.
Reduction of power, mass, and volume is also important for spaceflight computers.
See also
Energy efficiency benchmarks
Average CPU power (ACP) a measure of power consumption when running several standard benchmarks
EEMBC EnergyBench
SPECpower a benchmark for web servers running Java (Server Side Java Operations per Joule)
Other
Data center infrastructure efficiency (DCIE)
Energy proportional computing
GeForce 9 series for GPU list, with energy use and theoretical FLOPS
IT energy management
Koomey's law
Landauer's principle
Low-power electronics
Power usage effectiveness (PUE)
Processor power dissipation
Notes and references
Further reading
External links
The Green500
Benchmarks (computing)
Computers and the environment
Electric power
Energy conservation
Computer performance
Janet Abbate

Janet Abbate (born June 3, 1962) is an associate professor of science, technology, and society at Virginia Tech. Her research focuses on the history of computer science and the Internet, particularly on the participation of women in the field.
Academic career
Abbate received her Bachelor's degree from Harvard University and her Master's degree from the University of Pennsylvania. She also received her Ph.D. from the University of Pennsylvania in 1994. From 1996 to 1998, she was a postdoctoral fellow with the IEEE History Center, where she conducted research on women in computing. She joined the faculty of Virginia Tech's Northern Capital Region campus in 2004 and is now an associate professor and the co-director of the graduate program in Science, Technology, and Society.
Prior to her academic work, Abbate was a computer programmer herself. Her background in computer programming has influenced her research approach and has been cited as relevant in reviews of her work.
Research
In 1995, Abbate co-edited Standards Policy for Information Infrastructure with Brian Kahin.
Abbate is the author of two books: Inventing the Internet (2000) and Recoding Gender: Women's Changing Participation in Computing (2012). Inventing the Internet was widely reviewed as an important work in the history of computing and networking, particularly in highlighting the role of social dynamics and of non-American participation in early networking development. The book was also praised for its use of archival resources to tell the history. Some, however, have criticized the work, citing Abbate's computer programming background as causing issues in presenting a non-technical narrative.
Recoding Gender also received positive reviews, especially for its incorporation of interviews with women in the field and for providing a historical overview of how women and gender have shaped computer programming. However, the book has also been criticized as disjointed, with reviewers arguing that the theme of women in computing is not strong enough to hold the different chapters together. The book received the 2014 Computer History Museum prize.
References
1962 births
Living people
Virginia Tech faculty
Science and technology studies scholars
University of Pennsylvania alumni
American women computer scientists
American computer scientists
Harvard College alumni
American women academics
21st-century American women
Text-based user interface

In computing, a text-based user interface (TUI) (alternately terminal user interface, to reflect a dependence upon the properties of computer terminals and not just text) is a retronym describing a type of user interface (UI) common as an early form of human–computer interaction, before the advent of graphical user interfaces (GUIs). Like GUIs, TUIs may use the entire screen area and accept mouse and other inputs. They may also use color and often structure the display using special graphical characters such as ┌ and ╣, referred to in Unicode as the "box drawing" set. The modern context of use is usually a terminal emulator.
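The box-drawing characters tile together into frames; a tiny illustration in Python (the function name and layout are arbitrary):

```python
def box(width, height, title=""):
    """Draw a rectangular frame using Unicode box-drawing characters."""
    top = "┌" + title.center(width - 2, "─") + "┐"
    mid = "│" + " " * (width - 2) + "│"
    bot = "└" + "─" * (width - 2) + "┘"
    return "\n".join([top] + [mid] * (height - 2) + [bot])

print(box(20, 4, " menu "))
```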
Types of text terminals
From text application's point of view, a text screen (and communications with it) can belong to one of three types (here ordered in order of decreasing accessibility):
A genuine text mode display, controlled by a video adapter or the central processor itself. This is a normal condition for a locally running application on various types of personal computers and mobile devices. If not deterred by the operating system, a smart program may exploit the full power of a hardware text mode.
A text mode emulator. Examples are xterm for X Window System and win32 console (in a window mode) for Microsoft Windows. This usually supports programs which expect a real text mode display, but may run considerably slower. Certain functions of an advanced text mode, such as an own font uploading, almost certainly become unavailable.
A remote text terminal. The communication capabilities usually become reduced to a serial line or its emulation, possibly with a few ioctl()s as an out-of-band channel in such cases as Telnet and Secure Shell. This is the worst case, because software restrictions hinder the use of capabilities of a remote display device.
Under Linux and other Unix-like systems, a program easily accommodates to any of the three cases because the same interface (namely, standard streams) controls the display and keyboard. Also, specialized programming libraries help to output the text in a way appropriate to the given display device and interface to it. See below for a comparison to Windows.
On ANSI-compatible terminals
American National Standards Institute (ANSI) standard ANSI X3.64 defines a standard set of escape sequences that can be used to drive terminals to create TUIs (see ANSI escape code). Escape sequences may be supported for all three cases mentioned in the above section, allowing arbitrary cursor movements and color changes.
However, not all terminals follow this standard, and many non-compatible but functionally equivalent sequences exist.
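As a sketch of what driving an ANSI-compatible terminal directly looks like (the helper names here are illustrative, not from any particular library), cursor movement and color changes are just CSI escape sequences written to the output stream:

```python
# Build ANSI (ECMA-48 / ANSI X3.64) escape sequences by hand:
# CSI = ESC [ introduces control sequences for cursor and color.
ESC = "\x1b"
CSI = ESC + "["

def move_to(row, col):       # CUP: position cursor (1-based coordinates)
    return f"{CSI}{row};{col}H"

def fg_color(n):             # SGR 30-37: set foreground color (1 = red)
    return f"{CSI}{30 + n}m"

RESET = CSI + "0m"           # SGR 0: reset all attributes

# Home the cursor, print red text, then reset attributes.
banner = move_to(1, 1) + fg_color(1) + "ALERT" + RESET
print(repr(banner))
```

Printed to an ANSI-compatible terminal, `banner` moves the cursor to the top-left corner and displays "ALERT" in red; on a non-compliant terminal the raw escape bytes appear as garbage, which is why the functionally equivalent but incompatible sequences mentioned above matter.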
Under DOS and Microsoft Windows
On IBM Personal Computers and compatibles, the Basic Input Output System (BIOS) and DOS system calls provide a way to write text on the screen, and the ANSI.SYS driver could process standard ANSI escape sequences. However, programmers soon learned that writing data directly to the screen buffer was far faster and simpler to program, and less error-prone; see VGA-compatible text mode for details. This change in programming methods resulted in many DOS TUI programs. The win32 console environment is notorious for its emulation of certain EGA/VGA text mode features, particularly random access to the text buffer, even if the application runs in a window. On the other hand, programs running under Windows (both native and DOS applications) have much less control of the display and keyboard than Linux and DOS programs can have, because of the aforementioned win32 console layer.
Most often those programs used a blue background for the main screen, with white or yellow characters, although they commonly also offered user color customization. They often used box-drawing characters in IBM's code page 437. Later, the interface became deeply influenced by graphical user interfaces (GUI), adding pull-down menus, overlapping windows, dialog boxes and GUI widgets operated by mnemonics or keyboard shortcuts. Soon mouse input was added – either at text resolution as a simple colored box or at graphical resolution thanks to the ability of the Enhanced Graphics Adapter (EGA) and Video Graphics Array (VGA) display adapters to redefine the text character shapes by software – providing additional functions.
Some notable programs of this kind were Microsoft Word, DOS Shell, WordPerfect, Norton Commander, Turbo Vision based Borland Turbo Pascal and Turbo C (the latter included the conio library), Lotus 1-2-3 and many others. Some of these interfaces survived even during the Microsoft Windows 3.1x period in the early 1990s. For example, the Microsoft C 6.0 compiler, used to write true GUI programs under 16-bit Windows, still has its own TUI.
Since its start, Microsoft Windows has included a console to display DOS software. Later versions added the Win32 console as a native interface for command-line interface and TUI programs. The console usually opens in window mode, but it can be switched to full, true text mode screen and vice versa by pressing the Alt and Enter keys together. Full-screen mode is not available in Windows Vista and later, but may be used with some workarounds.
Under Unix-like systems
In Unix-like operating systems, TUIs are often constructed using the terminal control library curses, or ncurses (a mostly compatible library), or the alternative S-Lang library.
The advent of the curses library with Berkeley Unix created a portable and stable API with which to write TUIs. The ability to talk to various text terminal types using the same interfaces led to more widespread use of "visual" Unix programs, which occupied the entire terminal screen instead of using a simple line interface. This can be seen in text editors such as vi, mail clients such as pine or mutt, system management tools such as SMIT, SAM, FreeBSD's Sysinstall and web browsers such as lynx. Some applications, such as w3m, and older versions of pine and vi use the less capable termcap library, performing many of the functions associated with curses within the application. Custom TUI applications based on widgets can be easily developed using the dialog program (based on ncurses), or the Whiptail program (based on S-Lang).
In addition, the rise in popularity of Linux brought many former DOS users to a Unix-like platform, which has fostered a DOS influence in many TUIs. The program minicom, for example, is modeled after the popular DOS program Telix. Some other TUI programs, such as the Twin desktop, were ported over.
Most Unix-like operating systems (Linux, FreeBSD, etc.) support virtual consoles, typically accessed through a Ctrl-Alt-F key combination. For example, under Linux up to 64 consoles may be accessed (12 via function keys), each displaying in full-screen text mode.
The free software program GNU Screen provides for managing multiple sessions inside a single TUI, and so can be thought of as being like a window manager for text-mode and command-line interfaces. Tmux can also do this.
The proprietary macOS text editor BBEdit includes a shell worksheet function that works as a full-screen shell window. The free Emacs text editor can run a shell inside of one of its buffers to provide similar functionality. There are several shell implementations in Emacs, but only ansi-term is suitable for running TUI programs. The other common shell modes, shell and eshell only emulate command lines and TUI programs will complain "Terminal is not fully functional" or display a garbled interface. The free Vim and Neovim text editors have terminal windows (simulating xterm). The feature is intended for running jobs, parallel builds, or tests, but can also be used (with window splits and tab pages) as a lightweight terminal multiplexer.
OpenVMS
VAX/VMS (later known as OpenVMS) had a similar facility to curses known as the Screen Management facility or SMG. This could be invoked from the command line or called from programs using the SMG$ library.
Oberon
Another kind of TUI is the primary interface of the Oberon operating system, first released in 1988 and still maintained. Unlike most other text-based user interfaces, Oberon does not use a text-mode console or terminal, but requires a large bit-mapped display, on which text is the primary target for mouse clicks. Commands in the format Module.Procedure parameters ~ can be activated with a middle-click, like hyperlinks. Text displayed anywhere on the screen can be edited, and if formatted with the required command syntax, can be middle-clicked and executed. Any text file containing suitably-formatted commands can be used as a so-called tool text, thus serving as a user-configurable menu. Even the output of a previous command can be edited and used as a new command. This approach is radically different from both conventional dialogue-oriented console menus and command-line interfaces.
Since it does not use graphical widgets, only plain text, but offers comparable functionality to a GUI with a tiling window manager, it is referred to as a Text User Interface or TUI. For a short introduction, see the 2nd paragraph on page four of the first published Report on the Oberon System.
Oberon's UI influenced the design of the Acme text editor and email client for the Plan 9 from Bell Labs operating system.
In embedded systems
Modern embedded systems are capable of displaying a TUI on a monitor like personal computers. This functionality is usually implemented using specialized integrated circuits, modules, or FPGAs.
Video circuits or modules are usually controlled using a VT100-compatible command set over UART, while FPGA designs usually allow direct video memory access.
Other uses
The full screen editor of the Commodore 64 8-bit computers was advanced in its market segment for its time. Users could move the cursor over the entire screen area, entering and editing BASIC program lines, as well as direct mode commands. All Commodore 8-bit computers used the PETSCII character set, which included character glyphs suitable for making a TUI.
Apple's Macintosh Programmer's Workshop programming environment included Commando, a TUI shell. It was the inspiration for BBEdit's shell worksheet.
Later Apple II models included MouseText, a set of graphical glyphs used for making a TUI.
The Corvus Concept computer of 1982 used a function key-based text interface on a full-page pivoting display.
See also
Command-line interface
Console application
Natural language user interface
Text-based game, a game using a TUI
Examples of programming libraries
curses (programming library)
ncurses
CDK
Newt, a widget-based toolkit
Turbo Vision
Early versions of Visual Basic
References
User interfaces |
18888090 | https://en.wikipedia.org/wiki/Dual-homed | Dual-homed | Dual-homed or dual-homing can refer to either an Ethernet device that has more than one network interface, for redundancy purposes, or in firewall technology, one of the firewall architectures for implementing preventive security.
An example of dual-homed devices are enthusiast computing motherboards that incorporate dual Ethernet network interface cards.
Usage
In Ethernet LANs, dual-homing is a network topology whereby a networked device is built with more than one network interface. Each interface or port is connected to the network, but only one connection is active at a time. The other connection is activated only if the primary connection fails. Traffic is quickly rerouted to the backup connection in the event of link failure. This feature was designed to provide telecommunications grade reliability and redundancy to Ethernet networks. Multihoming is a more general category, referring to a device having more than one network connection.
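The active/backup selection described above can be sketched in a few lines (the interface names here are illustrative, not from the article):

```python
# Hedged sketch of dual-homed failover: the primary interface carries
# all traffic; the backup is activated only when the primary link fails.
def active_interface(primary_up, backup_up):
    if primary_up:
        return "eth0"    # primary NIC: only active connection in normal operation
    if backup_up:
        return "eth1"    # backup NIC: traffic rerouted here on link failure
    return None          # both links down: device is unreachable

print(active_interface(True, True))    # normal operation
print(active_interface(False, True))   # primary link failure
```

Real implementations (e.g. Linux interface bonding in active-backup mode) handle the link-state detection and rerouting in the kernel rather than in application code.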
In firewalls
Firewall dual-homing provides the first-line defense and protection technology for keeping untrusted bodies from compromising information security by violating trusted network space.
A dual-homed host (or dual-homed gateway) is a system fitted with two network interfaces (NICs) that sits between an untrusted network (like the Internet) and a trusted network (such as a corporate network) to provide secure access. Dual-homed is a general term for proxies, gateways, firewalls, or any server that provides secured applications or services directly to an untrusted network.
Dual-homed hosts can be seen as a special case of bastion hosts and multi-homed hosts. They fall into the category of application-based firewalls.
Dual-homed hosts can act as firewalls provided that they do not forward IP datagrams unconditionally.
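On Linux, for example, the unconditional forwarding that would defeat a dual-homed firewall can be switched off via a kernel parameter (a configuration sketch; the exact mechanism varies by operating system):

```shell
# Disable IP datagram forwarding so the dual-homed host does not
# route packets between its two interfaces unconditionally.
sysctl -w net.ipv4.ip_forward=0

# To make the setting persistent across reboots, in /etc/sysctl.conf:
# net.ipv4.ip_forward = 0
```

With forwarding disabled, traffic can only cross between the two networks through the proxy or application services the host deliberately exposes.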
Other firewall architectures include the network-layer firewall types screening router, screened-host, and screened subnet.
See also
Multihoming
Firewall (computing)
Router (computing)
References
Computer network security |
31494055 | https://en.wikipedia.org/wiki/Vista%20Equity%20Partners | Vista Equity Partners | Vista Equity Partners is an American investment firm focused on financing and forwarding software, data and technology-enabled startup businesses. Vista has invested in hundreds of companies, including Misys, Ping Identity, and Marketo.
The company has offices in several cities, including Austin, Texas, New York City, and San Francisco.
History
2000-2009
Vista Equity Partners was founded in 2000 by American businessman and investor Robert F. Smith, who serves as chairman and CEO. Vista opened its first office in San Francisco in 2000. In November 2008, the company closed a funding round for its first institutional fund with a total of $1.3 billion raised.
Vista is known for applying detailed scrutiny in human resources when investing in firms, in a procedure it calls Vista Standard Operating Procedures.
2010-2019
In 2010, Brian N. Sheth was promoted to president and awarded the title of co-founder of the firm. He remained in this role until his departure in 2020.
In 2011, the company opened an office in Austin, Texas. Over the years, Vista has added several private equity funds and credit funds to its portfolio, including its first fund, the Vista Credit Opportunities Fund, which raised $196 million. During that time, Vista opened several funds that specifically target middle-market companies and emerging technology companies. The company also has a permanent capital investment fund, Vista Equity Partners Perennial, which focuses on growing vertical market software companies.
In 2018, Vista was named the top software investor of the past decade by Pitchbook. Also in 2018, Vista's sale of Marketo to Adobe was named Deal of the Year by Buyouts magazine. In 2019, Vista was named Dealmaker of the Year at the PitchBook Private Equity Awards.
2020-present
As of May 2020, Vista had more than $57 billion in capital commitments. In 2020, Vista joined Diligent Corporation's modern leadership initiative and pledged to create five new board roles among its portfolio companies for racially diverse candidates.
As of June 2021, Vista had more than $81 billion in assets under management. In August 2021, chief operating officer David Breach was announced as president of Vista; the previous incumbent, co-founder Brian N. Sheth, had resigned in 2020 following the criminal tax investigation of CEO Smith.
Tax evasion
In October 2020, Vista's CEO Robert Smith and investor Robert T. Brockman were named in a tax evasion case. That month, Smith signed a non-prosecution agreement with the IRS, agreeing to pay $139 million and testify against Brockman. The following month, Vista's president and co-founder Brian Sheth departed the company, stating that his leaving was unrelated to the tax situation.
Investments
2000-2009
In August 2000, Vista invested in SourceNet Solutions, a provider of finance and accounting business process outsourcing services. Between 2001 and 2005, Vista invested in several software companies, including BigMachines, a provider of product configuration software, and Aspect Communications, a contact center technology company, among others. In 2004, Vista acquired Applied Systems, Inc., an insurance software company. In 2005, Vista invested in MDSI Mobile Data Solutions.
In 2006, Vista invested in Reynolds and Reynolds, an auto technology company. The following year, Vista acquired Indus International, which it later merged with MDSI Mobile Data Solutions to form Ventyx, Inc. That year, the company also invested in SirsiDynix, a library software company. In 2008, Vista acquired P2 Energy Solutions Inc, a software company that helps oil and gas producers keep track of drilling leases. The following year, the company acquired SumTotal Systems from Accel Partners and Kohlberg Kravis Roberts, as well as MicroEdge from Advent Software.
2010-2019
In 2011, the firm acquired multiple companies, including Sage Healthcare, an electronic health records company, which it renamed Vitera Healthcare Solutions.
Between 2012 and 2013, Vista acquired companies including Bullhorn, Inc. a CRM software provider; Misys, a British software company; Websense; and Active Network, Inc, an online event registration service. Between 2014 and 2015, Vista acquired Automated Insights, a software company specializing in natural-language generation; Main Street Hub, a social-media company; PowerSchool, a provider of educational technology, and TIBCO Software, a company that specializes in big data and software integrations, among others.
In 2016, Vista acquired Solera Holdings for $6.5 billion and Cvent for $1.65 billion, and announced agreements to acquire Marketo (marketing automation software), Ping Identity (single sign-on digital security), and GovDelivery (technology platform for government bodies). The company invested in Granicus and Vivid Seats that same year. Vista subsequently merged GovDelivery and Granicus into one company. In 2017, Vista acquired several software companies, including NAVEX Global. That year, the company also invested in Upserve, a restaurant software provider, and Market Track.
In 2018, Vista acquired Apptio, a cloud-based business management software, Alegeus, and entered into an agreement to acquire Mindbody for $1.9 billion. That year, Vista invested in several other companies, including Wrike, a provider of project management software. In 2019, Vista bought a majority stake in Acquia, and completed its first IPO when it took Ping Identity public. Also in 2019, Vista purchased Sonatype, a cybersecurity, open-source automation company and Accelya, a technology provider for the airline industry.
2020-present
In 2020, Vista acquired Tripleseat, a web-based sales and event management company, and purchased a 2.32% stake in Jio Platforms. In November 2020, Vista acquired CRM-Software provider Pipedrive for $1.5 billion. In the same month, Vista also acquired Customer Success company Gainsight for $1.1 billion.
In April 2021, Vista completed its acquisition of Pluralsight for $22.50 per share. In September 2021, Vista acquired Blue Prism for £1.095 billion ($1.5 billion), and announced they intended to merge it into Tibco.
Divestments
In December 2004, Vista sold SourceNet Solutions to Mellon Financial.
Between 2010 and 2013, Vista sold several companies, including Ventyx to ABB Group for over $1 billion, BigMachines to Oracle for over $400 million, and P2 Energy Solutions to Advent International Corp. In September 2014, Vista announced the sale of MicroEdge to Blackbaud for $160 million. In 2015, the company sold Websense to Raytheon for $1.9 billion.
Between 2015 and 2020, Vista divested other companies, including selling BullHorn, Inc to Insight Venture Partners, selling parts of Active Network to Global Payments Inc. and selling NAVEX Global to BC Partners. During this time, Vista also sold SirsiDynix to ICV Partners, and Main Street Hub to GoDaddy for $125 million. In 2018, Vista sold Marketo to Adobe Systems for $4.75 billion.
In 2020, Vista sold Vertafore to Roper Technologies for $5.35 billion and Regulatory DataCorp to Moody's for $700 million. In 2021, Vista divested from Aspira, a software provider, and Numerator, a market intelligence firm Vista had backed since 2017. Also in 2021, Vista sold Allocate, which it acquired in 2018, to RLDatix for $1.3 billion.
Philanthropy
In September 2017, Vista and their companies pledged $1 million to assist the Akshaya Patra Foundation in delivering meals to Indian school children. In 2019, Robert F. Smith committed to eliminate the student loan debt of the Morehouse College Class of 2019. He paid $34 million to cover the students' loans and the loans of their parents for their studies.
References
External links
Financial services companies established in 2000
Companies based in San Francisco
Private equity firms of the United States
American companies established in 2000 |
60525 | https://en.wikipedia.org/wiki/List%20of%20Intel%20processors | List of Intel processors | This generational list of Intel processors attempts to present all of Intel's processors from the pioneering 4-bit 4004 (1971) to the present high-end offerings. Concise technical data is given for each product.
Latest
12th generation Core
Desktop (codenamed "Alder Lake")
11th generation Core
Desktop (codenamed "Rocket Lake")
Mobile (codenamed "Tiger Lake")
10th generation Core
Desktop (codenamed "Comet Lake")
Mobile (codenamed "Comet Lake", "Ice Lake", "Amber Lake", and "Amber Lake Y")
9th generation Core
Desktop (codenamed "Coffee Lake Refresh")
8th generation Core
Desktop
Mobile
7th generation Core
Desktop (codenamed "Kaby Lake" and "Skylake-X")
Mobile (codenamed "Kaby Lake" and "Apollo Lake")
All processors
All processors are listed in chronological order.
The 4-bit processors
Intel 4004
First microprocessor (single-chip IC processor)
Introduced November 15, 1971
Clock rate 740 kHz
0.07 MIPS
Bus width: 4 bits (multiplexed address/data due to limited pins)
PMOS
2,300 transistors at 10 μm
Addressable memory 640 bytes
Program memory 4 KB (4096 B)
Originally designed to be used in Busicom calculator
MCS-4 family:
4004 – CPU
4001 – ROM & 4-bit Port
4002 – RAM & 4-bit Port
4003 – 10-bit Shift Register
4008 – Memory+I/O Interface
4009 – Memory+I/O Interface
4211 – General Purpose Byte I/O Port
4265 – Programmable General Purpose I/O Device
4269 – Programmable Keyboard Display Device
4289 – Standard Memory Interface for MCS-4/40
4308 – 8192-bit (1024 × 8) ROM w/ 4-bit I/O Ports
4316 – 16384-bit (2048 × 8) Static ROM
4702 – 2048-bit (256 × 8) EPROM
4801 – 5.185 MHz Clock Generator Crystal for 4004/4201A or 4040/4201A
Intel 4040
Introduced in 1974 by Intel
Clock speed was 740 kHz (same as the 4004 microprocessor)
3,000 transistors
Interrupt features were available
Programmable memory size: 8 KB (8192 B)
640 bytes of data memory
24-pin DIP
The 8-bit processors
8008
Introduced April 1, 1972
Clock rate 500 kHz (8008–1: 800 kHz)
0.05 MIPS
Bus width: 8 bits (multiplexed address/data due to limited pins)
Enhancement load PMOS logic
3,500 transistors at 10 μm
Addressable memory 16 KB
Typical in early 8-bit microcomputers, dumb terminals, general calculators, bottling machines
Developed in tandem with 4004
Originally intended for use in the Datapoint 2200 microcomputer
Key volume deployment in Texas Instruments 742 microcomputer in >3,000 Ford dealerships
8080
Introduced April 1, 1974
Clock rate 2 MHz (very rare 8080B: 3 MHz)
0.29 MIPS
Data bus width: 8 bits, address bus: 16 bits
Enhancement load NMOS logic
4,500 transistors at 6 μm
Assembly language downward compatible with 8008
Addressable memory 64 KB (64 x 1024 B)
Up to 10× the performance of the 8008
Used in e.g. the Altair 8800, traffic light controller, cruise missile
Required six support chips versus 20 for the 8008
8085
Introduced March 1976
Clock rate 3 MHz
0.37 MIPS
Data bus width: 8 bits, address bus: 16 bits
Depletion load NMOS logic
6,500 transistors at 3 μm
Binary compatible downward with the 8080
Used in Toledo scales. Also used as a computer peripheral controller – modems, hard disks, printers, etc.
CMOS 80C85 in Mars Sojourner, Radio Shack Model 100 portable
Microcontrollers
They are ICs with CPU, RAM, ROM (or PROM or EPROM), I/O Ports, Timers & Interrupts
Intel 8048
Single accumulator Harvard architecture
MCS-48 family:
Intel 8020 – Single-Component 8-bit Microcontroller, 1KB ROM, 64 Byte RAM, 13 I/O ports
Intel 8021 – Single-Component 8-bit Microcontroller, 1KB ROM, 64 Byte RAM, 21 I/O ports
Intel 8022 – Single-Component 8-bit Microcontroller, With On-Chip A/D Converter
Intel 8035 – Single-Component 8-bit Microcontroller, 64 Byte RAM
Intel 8039 – Single-Component 8-bit Microcontroller, 128 Byte RAM
Intel 8040 – Single-Component 8-bit Microcontroller, 256 Byte RAM
Intel 8048 – Single-Component 8-bit Microcontroller, 1KB ROM, 64 Byte RAM, 27 I/O ports, 0.73 MIPS @ 11 MHz
Intel 8049 – Single-Component 8-bit Microcontroller, 2KB ROM, 128 Byte RAM, 27 I/O ports,
Intel 8050 – Single-Component 8-bit Microcontroller, 4KB ROM, 256 Byte RAM, 27 I/O ports,
Intel 8748 – Single-Component 8-bit Microcontroller, 1KB EPROM, 64 Byte RAM, 27 I/O ports,
Intel 8749 – Single-Component 8-bit Microcontroller, 2KB EPROM, 128 Byte RAM, 27 I/O ports,
Intel 87P50 – Single-Component 8-bit Microcontroller, ext. ROM socket(2758/2716/2732), 256 Byte RAM, 27 I/O ports
Intel 8648 – Single-Component 8-bit Microcontroller, 1KB OTP EPROM, 64 Byte RAM, 27 I/O ports
Intel 8041 – Universal Peripheral Interface 8-bit Slave Microcontroller, 1KB ROM, 64 Byte RAM
Intel 8041AH – Universal Peripheral Interface 8-bit Slave Microcontroller, 1KB ROM, 128 Byte RAM
Intel 8641 – Universal Peripheral Interface 8-bit Slave Microcontroller ?
Intel 8741 – Universal Peripheral Interface 8-bit Slave Microcontroller, 1KB EPROM, 64 Byte RAM
Intel 8741AH – Universal Peripheral Interface 8-bit Slave Microcontroller, 1KB EPROM, 128 Byte RAM
Intel 8042 – Universal Peripheral Interface 8-bit Slave Microcontroller, 2KB ROM, 256 Byte RAM
Intel 8742 – Universal Peripheral Interface 8-bit Slave Microcontroller, 2KB EPROM, 128 Byte RAM
Intel 8742AH – Universal Peripheral Interface 8-bit Slave Microcontroller, 2KB OTP EPROM, 256 Byte RAM
Intel 8243 – Input/Output Expander
Intel 8244 – General Purpose Graphics Display Device (ASIC NTSC/SECAM)
Intel 8245 – General Purpose Graphics Display Device (ASIC PAL)
Intel 8051
Single accumulator Harvard architecture
MCS-51 family:
8031 – 8-bit Control-Oriented Microcontroller
8032 – 8-bit Control-Oriented Microcontroller
8044 – High Performance 8-bit Microcontroller
8344 – High Performance 8-bit Microcontroller
8744 – High Performance 8-bit Microcontroller
8051 – 8-bit Control-Oriented Microcontroller
8052 – 8-bit Control-Oriented Microcontroller
8054 – 8-bit Control-Oriented Microcontroller
8058 – 8-bit Control-Oriented Microcontroller
8351 – 8-bit Control-Oriented Microcontroller
8352 – 8-bit Control-Oriented Microcontroller
8354 – 8-bit Control-Oriented Microcontroller
8358 – 8-bit Control-Oriented Microcontroller
8751 – 8-bit Control-Oriented Microcontroller
8752 – 8-bit Control-Oriented Microcontroller
8754 – 8-bit Control-Oriented Microcontroller
8758 – 8-bit Control-Oriented Microcontroller
Intel 80151
Single accumulator Harvard architecture
MCS-151 family:
80151 – High Performance 8-bit Control-Oriented Microcontroller
83151 – High Performance 8-bit Control-Oriented Microcontroller
87151 – High Performance 8-bit Control-Oriented Microcontroller
80152 – High Performance 8-bit Control-Oriented Microcontroller
83152 – High Performance 8-bit Control-Oriented Microcontroller
Intel 80251
Single accumulator Harvard architecture
MCS-251 family:
80251 – 8/16/32-bit Microcontroller
80252 – 8/16/32-bit Microcontroller
80452 – 8/16/32-bit Microcontroller
83251 – 8/16/32-bit Microcontroller
87251 – 8/16/32-bit Microcontroller
87253 – 8/16/32-bit Microcontroller
MCS-96 family
8061 – 16-bit Microcontroller (parent of MCS-96 family ROMless With A/D, most sold to Ford)
8094 – 16-bit Microcontroller (48-Pin ROMLess Without A/D)
8095 – 16-bit Microcontroller (48-Pin ROMLess With A/D)
8096 – 16-bit Microcontroller (68-Pin ROMLess Without A/D)
8097 – 16-bit Microcontroller (68-Pin ROMLess With A/D)
8394 – 16-bit Microcontroller (48-Pin With ROM Without A/D)
8395 – 16-bit Microcontroller (48-Pin With ROM With A/D)
8396 – 16-bit Microcontroller (68-Pin With ROM Without A/D)
8397 – 16-bit Microcontroller (68-Pin With ROM With A/D)
8794 – 16-bit Microcontroller (48-Pin With EROM Without A/D)
8795 – 16-bit Microcontroller (48-Pin With EROM With A/D)
8796 – 16-bit Microcontroller (68-Pin With EROM Without A/D)
8797 – 16-bit Microcontroller (68-Pin With EROM With A/D)
8098 – 16-bit Microcontroller
8398 – 16-bit Microcontroller
8798 – 16-bit Microcontroller
80196 – 16-bit Microcontroller
83196 – 16-bit Microcontroller
87196 – 16-bit Microcontroller
80296 – 16-bit Microcontroller
The bit-slice processor
3000 Family
Introduced in the third quarter of 1974, these bit-slicing components used bipolar Schottky transistors. Each component implemented two bits of a processor function; packages could be interconnected to build a processor with any desired word length.
Members of the family:
3001 – Microcontrol Unit
3002 – 2-bit Arithmetic Logic Unit slice
3003 – Look-ahead Carry Generator
3205 – High-performance 1 of 8 Binary Decoder
3207 – Quad Bipolar-to-MOS Level Shifter and Driver
3208 – Hex Sense Amp and Latch for MOS Memories
3210 – TTL-to-MOS Level Shifter and High Voltage Clock Driver
3211 – ECL-to-MOS Level Shifter and High Voltage Clock Driver
3212 – Multimode Latch Buffer
3214 – Interrupt Control Unit
3216 – Parallel, Inverting Bi-Directional Bus Driver
3222 – Refresh Controller for 4K (4096 B) NMOS DRAMs
3226 – Parallel, Inverting Bi-Directional Bus Driver
3232 – Address Multiplexer and Refresh Counter for 4K DRAMs
3242 – Address Multiplexer and Refresh Counter for 16K (16 x 1024 B) DRAMs
3245 – Quad Bipolar TTL-to-MOS Level Shifter and Driver for 4K
3246 – Quad Bipolar ECL-to-MOS Level Shifter and Driver for 4K
3404 – High-performance 6-bit Latch
3408 – Hex Sense Amp and Latch for MOS Memories
3505 – Next generation processor
Bus width 2 × n bits data/address (depending on the number n of slices used)
The 16-bit processors: MCS-86 family
8086
Introduced June 8, 1978
Clock rates:
5 MHz, 0.33 MIPS
8 MHz, 0.66 MIPS
10 MHz, 0.75 MIPS
The memory is divided into odd and even banks. It accesses both banks concurrently to read 16 bits of data in one clock cycle
Data bus width: 16 bits, address bus: 20 bits
29,000 transistors at 3 μm
Addressable memory 1 megabyte (1024^2 B)
Up to 10× the performance of 8080
First used in the Compaq Deskpro IBM PC-compatible computers. Later used in portable computing, and in the IBM PS/2 Model 25 and Model 30. Also used in the AT&T PC6300 / Olivetti M24, a popular IBM PC-compatible (predating the IBM PS/2 line)
Used segment registers to access more than 64 KB of data at once, which many programmers complained made their work excessively difficult.
The first x86 CPU
Later renamed the iAPX 86
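The segmented addressing that let the 8086 reach a 20-bit (1 MB) address space with 16-bit registers can be sketched as follows (a didactic example, not from the article):

```python
# 8086 real-mode address calculation: a 16-bit segment register is
# shifted left 4 bits and added to a 16-bit offset, yielding a
# 20-bit physical address (1 MB addressable memory).
def physical_address(segment, offset):
    return ((segment << 4) + offset) & 0xFFFFF  # mask to 20 address lines

# Many segment:offset pairs alias the same physical byte, one source
# of the complexity programmers complained about:
print(hex(physical_address(0x1234, 0x0010)))  # 0x12350
print(hex(physical_address(0x1235, 0x0000)))  # also 0x12350
```

Each 16-byte step in the segment register shifts the addressable 64 KB window, so code working with more than 64 KB of data had to manage segment registers explicitly.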
8088
Introduced June 1, 1979
Clock rates:
4.77 MHz, 0.33 MIPS
8 MHz, 0.66 MIPS
16-bit internal architecture
External data bus width: 8 bits, address bus: 20 bits
29,000 transistors at 3 μm
Addressable memory 1 megabyte
Identical to 8086 except for its 8-bit external bus (hence an 8 instead of a 6 at the end); identical Execution Unit (EU), different Bus Interface Unit (BIU)
Used in IBM PC and PC-XT and compatibles
Later renamed the iAPX 88
80186
Introduced 1982
Clock rates
6 MHz, > 1 MIPS
55,000 transistors
Included two timers, a DMA controller, and an interrupt controller on the chip in addition to the processor (these were at fixed addresses which differed from the IBM PC, although it was used by several PC compatible vendors such as Australian company Cleveland)
Added a few opcodes and exceptions to the 8086 design, otherwise identical instruction set to 8086 and 8088
BOUND, ENTER, LEAVE
INS, OUTS
IMUL imm, PUSH imm, PUSHA, POPA
RCL/RCR/ROL/ROR/SHL/SHR/SAL/SAR reg, imm
Address calculation and shift operations are faster than 8086
Used mostly in embedded applications – controllers, point-of-sale systems, terminals, and the like
Used in several non-PC compatible DOS computers including RM Nimbus, Tandy 2000, and CP/M 86 Televideo PM16 server
Later renamed to iAPX 186
80188
A version of the 80186 with an 8-bit external data bus
Later renamed the iAPX 188
80286
Introduced February 2, 1982
Clock rates:
6 MHz, 0.9 MIPS
8 MHz, 10 MHz, 1.5 MIPS
12.5 MHz, 2.66 MIPS
16 MHz, 20 MHz and 25 MHz available.
Data bus width: 16 bits, address bus: 24 bits
Included memory protection hardware to support multitasking operating systems with per-process address space.
134,000 transistors at 1.5 μm
Addressable memory 16 MB
Added protected-mode features to 8086 with essentially the same instruction set
3–6× the performance of the 8086
Widely used in IBM PC AT and AT clones contemporary to it
32-bit processors: the non-x86 microprocessors
iAPX 432
Introduced January 1, 1981 as Intel's first 32-bit microprocessor
Multi-chip CPU
Object/capability architecture
Microcoded operating system primitives
One terabyte virtual address space
Hardware support for fault tolerance
Two-chip General Data Processor (GDP), consists of 43201 and 43202
43203 Interface Processor (IP) interfaces to I/O subsystem
43204 Bus Interface Unit (BIU) simplifies building multiprocessor systems
43205 Memory Control Unit (MCU)
Architecture and execution unit internal data base paths: 32 bits
Clock rates:
5 MHz
7 MHz
8 MHz
i960 a.k.a. 80960
Introduced April 5, 1988
RISC-like 32-bit architecture
Predominantly used in embedded systems
Evolved from the capability processor developed for the BiiN joint venture with Siemens
Many variants identified by two-letter suffixes
i860 a.k.a. 80860
Introduced February 26, 1989
RISC 32/64-bit architecture, with floating point pipeline characteristics very visible to programmer
Used in the Intel iPSC/860 Hypercube parallel supercomputer
Mid-life kicker in the i870 processor (primarily a speed bump, some refinement/extension of instruction set)
Used in the Intel Delta massively parallel supercomputer prototype, emplaced at California Institute of Technology
Used in the Intel Paragon massively parallel supercomputer, emplaced at Sandia National Laboratory
XScale
Introduced August 23, 2000
32-bit RISC microprocessor based on the ARM architecture
Many variants, such as the PXA2xx applications processors, IOP3xx I/O processors and IXP2xxx and IXP4xx network processors
32-bit processors: the 80386 range
80386DX
Introduced October 17, 1985
Clock rates:
16 MHz, 5 MIPS
20 MHz, 6 to 7 MIPS, introduced February 16, 1987
25 MHz, 7.5 MIPS, introduced April 4, 1988
33 MHz, 9.9 MIPS (9.4 SPECint92 on Compaq/i 16 KB L2), introduced April 10, 1989
Data bus width: 32 bits, address bus: 32 bits
275,000 transistors at 1 μm
Addressable memory 4 GB (4 x 1024^3 B)
Virtual memory 64 TB (64 x 1024^4 B)
First x86 chip to handle 32-bit data sets
Reworked and expanded memory protection support, including paged virtual memory and virtual 8086 mode, features required at the time by Xenix and Unix. This memory capability spurred the development and availability of OS/2 and is a fundamental requirement for modern operating systems such as Linux, Windows, and macOS
First used by Compaq in the Deskpro 386. Used in desktop computing
Unlike the DX naming convention of the 486 chips, it had no math co-processor
Later renamed Intel386 DX
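The 64 TB virtual-memory figure quoted for the 386 family comes from its segmentation model: a segment selector carries a 13-bit table index plus a table-indicator bit (GDT or LDT), and each segment can be up to 4 GB long. A sketch of that arithmetic (illustrative Python):

```python
# The 64 TB virtual-memory figure for the 386 follows from segmentation:
# a selector has a 13-bit index plus a table-indicator bit (GDT/LDT),
# giving 2**14 segments, each up to 2**32 bytes (4 GB) long.
segments = 2 ** 14            # 8192 GDT + 8192 LDT entries
max_segment_bytes = 2 ** 32   # 4 GB per segment
virtual_bytes = segments * max_segment_bytes

TIB = 1024 ** 4
assert virtual_bytes == 64 * TIB  # 64 TB, as listed
```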
80386SX
Introduced June 16, 1988
Clock rates:
16 MHz, 2.5 MIPS
20 MHz, 3.1 MIPS, introduced January 25, 1989
25 MHz, 3.9 MIPS, introduced January 25, 1989
33 MHz, 5.1 MIPS, introduced October 26, 1992
32-bit internal architecture
External data bus width: 16 bits
External address bus width: 24 bits
275,000 transistors at 1 μm
Addressable memory 16 MB
Virtual memory 64 TB
Narrower buses enable low-cost 32-bit processing
Used in entry-level desktop and portable computing
No math co-processor
No commercial software used protected mode or virtual storage for many years
Later renamed Intel386 SX
80376
Introduced January 16, 1989; discontinued June 15, 2001
Variant of 386SX intended for embedded systems
No "real mode", starts up directly in "protected mode"
Replaced by much more successful 80386EX from 1994
80386SL
Introduced October 15, 1990
Clock rates:
20 MHz, 4.21 MIPS
25 MHz, 5.3 MIPS, introduced September 30, 1991
32-bit internal architecture
External bus width: 16 bits
855,000 transistors at 1 μm
Addressable memory 4 GB
Virtual memory 64 TB
First chip specifically made for portable computers because of low power consumption of chip
Highly integrated, includes cache, bus, and memory controllers
80386EX
Introduced August 1994
Variant of 80386SX intended for embedded systems
Static core, i.e. may run as slowly (and thus power-efficiently) as desired, down to a full halt
On-chip peripherals:
Clock and power management
Timers/counters
Watchdog timer
Serial I/O units (sync and async) and parallel I/O
DMA
RAM refresh
JTAG test logic
Significantly more successful than the 80376
Used aboard several orbiting satellites and microsatellites
Used in NASA's FlightLinux project
32-bit processors: the 80486 range
80486DX
Introduced April 10, 1989
Clock rates:
25 MHz, 20 MIPS (16.8 SPECint92, 7.40 SPECfp92)
33 MHz, 27 MIPS (22.4 SPECint92 on Micronics M4P 128 KB L2), introduced May 7, 1990
50 MHz, 41 MIPS (33.4 SPECint92, 14.5 SPECfp92 on Compaq/50L 256 KB L2), introduced June 24, 1991
Bus width: 32 bits
1.2 million transistors at 1 μm; the 50 MHz was at 0.8 μm
Addressable memory 4 GB
Virtual memory 64 TB
Level 1 cache of 8 KB on chip
Math coprocessor on chip
50× performance of the 8088
Officially named Intel486 DX
Used in desktop computing and servers
Family 4 model 1
80486SX
Introduced April 22, 1991
Clock rates:
16 MHz, 13 MIPS
20 MHz, 16.5 MIPS, introduced September 16, 1991
25 MHz, 20 MIPS (12 SPECint92), introduced September 16, 1991
33 MHz, 27 MIPS (15.86 SPECint92), introduced September 21, 1992
Bus width: 32 bits
1.185 million transistors at 1 μm and 900,000 at 0.8 μm
Addressable memory 4 GB
Virtual memory 64 TB
Identical in design to the 486DX but without a math coprocessor. The first version was an 80486DX with the math coprocessor disabled in the chip and a different pin configuration. Users who needed math-coprocessor capabilities had to add a 487SX, which was actually a full 486DX with a pin configuration that prevented installing a regular 486DX in its place; a 486SX+487SX system therefore contained two essentially identical CPUs, with only one effectively enabled
Officially named Intel486 SX
Used in low-cost entry to 486 CPU desktop computing, as well as extensively in low cost mobile computing
Upgradable with the Intel OverDrive processor
Family 4 model 2
80486DX2
Introduced March 3, 1992
Runs at twice the speed of the external bus (FSB)
Fits in Socket 3
Clock rates:
40 MHz
50 MHz
66 MHz
Officially named Intel486 DX2
Family 4 model 3
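The DX2's clock doubling means the core clock is simply twice the external bus clock, which is how the 40, 50 and 66 MHz parts map onto 20, 25 and 33 MHz buses. A minimal sketch (illustrative Python; 33 and 66 MHz are nominal figures for the 33.3/66.6 MHz parts):

```python
# The 486DX2 clock-doubles the external bus (FSB): core clock = 2 x FSB.
def dx2_core_mhz(fsb_mhz: float, multiplier: int = 2) -> float:
    return fsb_mhz * multiplier

# 20/25/33 MHz buses yield the 40/50/66 MHz parts listed above
assert dx2_core_mhz(20) == 40
assert dx2_core_mhz(25) == 50
assert dx2_core_mhz(33) == 66
```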
80486SL
Introduced November 9, 1992
Clock rates:
20 MHz, 15.4 MIPS
25 MHz, 19 MIPS
33 MHz, 25 MIPS
Bus width: 32 bits
1.4 million transistors at 0.8 μm
Addressable memory 4 GB
Virtual memory 64 TB
Officially named Intel486 SL
Used in notebook computers
Family 4 model 4
80486DX4
Introduced March 7, 1994
Clock rates:
75 MHz, 53 MIPS (41.3 SPECint92, 20.1 SPECfp92 on Micronics M4P 256 KB L2)
100 MHz, 70.7 MIPS (54.59 SPECint92, 26.91 SPECfp92 on Micronics M4P 256 KB L2)
1.6 million transistors at 0.6 μm
Bus width: 32 bits
Addressable memory 4 GB
Virtual memory 64 TB
Socket 3 168-pin PGA package, or 208-pin SQFP package
Officially named Intel486 DX4
Used in high performance entry-level desktops and value notebooks
Family 4 model 8
32-bit processors: P5 microarchitecture
Original Pentium
Bus width: 64 bits
System bus clock rate 60 or 66 MHz
Address bus: 32 bits
Addressable memory 4 GB
Virtual Memory 64 TB
Superscalar architecture
Runs on 3.3 Volts (except the very first generation "P5")
Used in desktops
8 KB of instruction cache
8 KB of data cache
P5 – 0.8 μm process technology
Introduced March 22, 1993
3.1 million transistors
The only Pentium to run on 5 Volts
Socket 4 273 pin PGA Package
Package dimensions 2.16″ × 2.16″
Family 5 model 1
Variants
60 MHz, 100 MIPS (70.4 SPECint92, 55.1 SPECfp92 on Xpress 256 KB L2)
66 MHz, 112 MIPS (77.9 SPECint92, 63.6 SPECfp92 on Xpress 256 KB L2)
P54 – 0.6 μm process technology
Socket 5 296/320 pin PGA package
3.2 million transistors
Variants
75 MHz, 126.5 MIPS (2.31 SPECint95, 2.02 SPECfp95 on Gateway P5 256K L2)
Introduced October 10, 1994
90, 100 MHz, 149.8 and 166.3 MIPS respectively (2.74 SPECint95, 2.39 SPECfp95 on Gateway P5 256K L2 and 3.30 SPECint95, 2.59 SPECfp95 on Xpress 1ML2 respectively)
Introduced March 7, 1994
P54CQS – 0.35 μm process technology
Socket 5 296/320 pin PGA package
3.2 million transistors
Variants
120 MHz, 203 MIPS (3.72 SPECint95, 2.81 SPECfp95 on Xpress 1MB L2)
Introduced March 27, 1995
P54CS – 0.35 μm process technology
3.3 million transistors
90 mm2 die size
Family 5 model 2
Variants
Socket 5 296/320 pin PGA package
133 MHz, 218.9 MIPS (4.14 SPECint95, 3.12 SPECfp95 on Xpress 1MB L2)
Introduced June 12, 1995
150, 166 MHz, 230 and 247 MIPS respectively
Introduced January 4, 1996
Socket 7 296/321 pin PGA package
200 MHz, 270 MIPS (5.47 SPECint95, 3.68 SPECfp95)
Introduced June 10, 1996
Pentium with MMX Technology
P55C – 0.35 μm process technology
Introduced January 8, 1997
Intel MMX (instruction set) support
Socket 7 296/321 pin PGA (pin grid array) package
16 KB L1 instruction cache
16 KB data cache
4.5 million transistors
System bus clock rate 66 MHz
Basic P55C is family 5 model 4, mobile are family 5 model 7 and 8
Variants
166, 200 MHz introduced January 8, 1997
233 MHz introduced June 2, 1997
133 MHz (Mobile)
166, 266 MHz (Mobile) introduced January 12, 1998
200, 233 MHz (Mobile) introduced September 8, 1997
300 MHz (Mobile) introduced January 7, 1999
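The "Family N model M" designations used throughout this list correspond to fields of the CPUID signature (the EAX value returned by CPUID leaf 1). A sketch of the standard decoding, in which families 6 and 15 use the extended-model and extended-family fields; the sample raw values below are assumed, illustrative signatures:

```python
# Decode an x86 CPUID signature (EAX from CPUID leaf 1) into the
# family/model/stepping designations used throughout this list.
def decode_signature(eax: int):
    stepping = eax & 0xF
    model = (eax >> 4) & 0xF
    family = (eax >> 8) & 0xF
    ext_model = (eax >> 16) & 0xF
    ext_family = (eax >> 20) & 0xFF
    if family == 0xF:         # family 15 adds in the extended family
        family += ext_family
    if family in (6, 15):     # these families extend the model field
        model |= ext_model << 4
    return family, model, stepping

# An assumed 0x6D8 signature decodes to family 6, model 13, stepping 8
# (cf. the "Family 6, Model 13, Stepping 8" Dothan-based entry below)
assert decode_signature(0x000006D8) == (6, 13, 8)
# An assumed Prescott-class signature decodes to family 15, model 4
assert decode_signature(0x00000F43) == (15, 4, 3)
```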
32-bit processors: P6/Pentium M microarchitecture
Pentium Pro
Introduced November 1, 1995
Multichip Module (2 die)
Precursor to Pentium II and III
Primarily used in server systems
Socket 8 processor package (387 pins) (Dual SPGA)
5.5 million transistors
Family 6 model 1
0.6 μm process technology
16 KB L1 cache
256 KB integrated L2 cache
60 MHz system bus clock rate
Variants
150 MHz
0.35 μm process technology, (two die, a 0.35 μm CPU with 0.6 μm L2 cache)
5.5 million transistors
512 KB or 256 KB integrated L2 cache
60 or 66 MHz system bus clock rate
Variants
150 MHz (60 MHz bus clock rate, 256 KB 0.5 μm cache) introduced November 1, 1995
166 MHz (66 MHz bus clock rate, 512 KB 0.35 μm cache) introduced November 1, 1995
180 MHz (60 MHz bus clock rate, 256 KB 0.6 μm cache) introduced November 1, 1995
200 MHz (66 MHz bus clock rate, 256 KB 0.6 μm cache) introduced November 1, 1995
200 MHz (66 MHz bus clock rate, 512 KB 0.35 μm cache) introduced November 1, 1995
200 MHz (66 MHz bus clock rate, 1 MB 0.35 μm cache) introduced August 18, 1997
Pentium II
Introduced May 7, 1997
Pentium Pro with MMX and improved 16-bit performance
242-pin Slot 1 (SEC) processor package
Voltage identification pins
7.5 million transistors
32 KB L1 cache
512 KB external L2 cache, running at half the core clock rate
The only Pentium II whose L2 cache did not run at half the core clock rate was the Pentium II 450 PE.
Klamath – 0.35 μm process technology (233, 266, 300 MHz)
66 MHz system bus clock rate
Family 6 model 3
Variants
233, 266, 300 MHz introduced May 7, 1997
Deschutes – 0.25 μm process technology (333, 350, 400, 450 MHz)
Introduced January 26, 1998
66 MHz system bus clock rate (333 MHz variant), 100 MHz system bus clock rate for all subsequent models
Family 6 model 5
Variants
333 MHz introduced January 26, 1998
350, 400 MHz introduced April 15, 1998
450 MHz introduced August 24, 1998
233, 266 MHz (Mobile) introduced April 2, 1998
333 MHz Pentium II Overdrive processor for Socket 8 Introduced August 10, 1998
300 MHz (Mobile) introduced September 9, 1998
333 MHz (Mobile) introduced January 25, 1999
Celeron (Pentium II-based)
Covington – 0.25 μm process technology
Introduced April 15, 1998
242-pin Slot 1 SEPP (Single Edge Processor Package)
7.5 million transistors
66 MHz system bus clock rate
Slot 1
32 KB L1 cache
No L2 cache
Variants
266 MHz introduced April 15, 1998
300 MHz introduced June 9, 1998
Mendocino – 0.25 μm process technology
Introduced August 24, 1998
242-pin Slot 1 SEPP (Single Edge Processor Package), Socket 370 PPGA package
19 million transistors
66 MHz system bus clock rate
Slot 1, Socket 370
32 KB L1 cache
128 KB integrated cache
Family 6 model 6
Variants
300, 333 MHz introduced August 24, 1998
366, 400 MHz introduced January 4, 1999
433 MHz introduced March 22, 1999
466 MHz
500 MHz introduced August 2, 1999
533 MHz introduced January 4, 2000
266 MHz (Mobile)
300 MHz (Mobile)
333 MHz (Mobile) introduced April 5, 1999
366 MHz (Mobile)
400 MHz (Mobile)
433 MHz (Mobile)
450 MHz (Mobile) introduced February 14, 2000
466 MHz (Mobile)
500 MHz (Mobile) introduced February 14, 2000
Pentium II Xeon (chronological entry)
Introduced June 29, 1998
See main entry
Pentium III
Katmai – 0.25 μm process technology
Introduced February 26, 1999
Improved PII, i.e. P6-based core, now including Streaming SIMD Extensions (SSE)
9.5 million transistors
512 KB (512 x 1024 B) external L2 cache, running at half the core clock rate
242-pin Slot 1 SECC2 (Single Edge Contact cartridge 2) processor package
System Bus clock rate 100 MHz, 133 MHz (B-models)
Slot 1
Family 6 model 7
Variants
450, 500 MHz introduced February 26, 1999
550 MHz introduced May 17, 1999
600 MHz introduced August 2, 1999
533, 600 MHz introduced (133 MHz bus clock rate) September 27, 1999
Coppermine – 0.18 μm process technology
Introduced October 25, 1999
28.1 million transistors
256 KB (256 x 1024 B) Advanced Transfer L2 cache (Integrated)
242-pin Slot-1 SECC2 (Single Edge Contact cartridge 2) processor package, 370-pin FC-PGA (Flip-chip pin grid array) package
System Bus clock rate 100 MHz (E-models), 133 MHz (EB models)
Slot 1, Socket 370
Family 6 model 8
Variants
500 MHz (100 MHz bus clock rate)
533 MHz
550 MHz (100 MHz bus clock rate)
600 MHz
600 MHz (100 MHz bus clock rate)
650 MHz (100 MHz bus clock rate) introduced October 25, 1999
667 MHz introduced October 25, 1999
700 MHz (100 MHz bus clock rate) introduced October 25, 1999
733 MHz introduced October 25, 1999
750, 800 MHz (100 MHz bus clock rate) introduced December 20, 1999
850 MHz (100 MHz bus clock rate) introduced March 20, 2000
866 MHz introduced March 20, 2000
933 MHz introduced May 24, 2000
1000 MHz introduced March 8, 2000 (not widely available at time of release)
1100 MHz
1133 MHz (first version recalled, later re-released)
400, 450, 500 MHz (Mobile) introduced October 25, 1999
600, 650 MHz (Mobile) introduced January 18, 2000
700 MHz (Mobile) introduced April 24, 2000
750 MHz (Mobile) introduced June 19, 2000
800, 850 MHz (Mobile) introduced September 25, 2000
900, 1000 MHz (Mobile) introduced March 19, 2001
Tualatin – 0.13 μm process technology
Introduced July 2001
28.1 million transistors
32 KB (32 x 1024 B) L1 cache
256 KB or 512 KB Advanced Transfer L2 cache (integrated)
370-pin FC-PGA2 (flip-chip pin grid array) package
133 MHz system bus clock rate
Socket 370
Family 6 model 11
Variants
1133 MHz (256 KB L2)
1133 MHz (512 KB L2)
1200 MHz
1266 MHz (512 KB L2)
1333 MHz
1400 MHz (512 KB L2)
Pentium II Xeon and Pentium III Xeon
PII Xeon
Variants
400 MHz introduced June 29, 1998
450 MHz (512 KB L2 cache) introduced October 6, 1998
450 MHz (1 MB and 2 MB L2 cache) introduced January 5, 1999
PIII Xeon
Introduced October 25, 1999
9.5 million transistors at 0.25 μm or 28 million at 0.18 μm
L2 cache is 256 KB, 1 MB, or 2 MB Advanced Transfer Cache (Integrated)
Processor Package Style is Single Edge Contact Cartridge (S.E.C.C.2) or SC330
System Bus clock rate 133 MHz (256 KB L2 cache) or 100 MHz (1–2 MB L2 cache)
System Bus width: 64 bits
Addressable memory: 64 GB
Used in two-way servers and workstations (256 KB L2) or 4- and 8-way servers (1–2 MB L2)
Family 6 model 10
Variants
500 MHz (0.25 μm process) introduced March 17, 1999
550 MHz (0.25 μm process) introduced August 23, 1999
600 MHz (0.18 μm process, 256 KB L2 cache) introduced October 25, 1999
667 MHz (0.18 μm process, 256 KB L2 cache) introduced October 25, 1999
733 MHz (0.18 μm process, 256 KB L2 cache) introduced October 25, 1999
800 MHz (0.18 μm process, 256 KB L2 cache) introduced January 12, 2000
866 MHz (0.18 μm process, 256 KB L2 cache) introduced April 10, 2000
933 MHz (0.18 μm process, 256 KB L2 cache)
1000 MHz (0.18 μm process, 256 KB L2 cache) introduced August 22, 2000
700 MHz (0.18 μm process, 1–2 MB L2 cache) introduced May 22, 2000
Celeron (Pentium III Coppermine-based)
Coppermine-128, 0.18 μm process technology
Introduced March 2000
Streaming SIMD Extensions (SSE)
Socket 370, FC-PGA processor package
28.1 million transistors
66 MHz system bus clock rate, 100 MHz system bus clock rate from January 3, 2001
32 KB L1 cache
128 KB Advanced Transfer L2 cache
Family 6 model 8
Variants
533 MHz
566 MHz
600 MHz
633, 667, 700 MHz introduced June 26, 2000
733, 766 MHz introduced November 13, 2000
800 MHz introduced January 3, 2001
850 MHz introduced April 9, 2001
900 MHz introduced July 2, 2001
950, 1000, 1100 MHz introduced August 31, 2001
550 MHz (Mobile)
600, 650 MHz (Mobile) introduced June 19, 2000
700 MHz (Mobile) introduced September 25, 2000
750 MHz (Mobile) introduced March 19, 2001
800 MHz (Mobile)
850 MHz (Mobile) introduced July 2, 2001
600 MHz (LV Mobile)
500 MHz (ULV Mobile) introduced January 30, 2001
600 MHz (ULV Mobile)
XScale (chronological entry – non-x86 architecture)
Introduced August 23, 2000
See main entry
Pentium 4 (not 4EE, 4E, 4F), Itanium, P4-based Xeon, Itanium 2 (chronological entries)
Introduced April 2000 – July 2002
See main entries
Pentium III Tualatin-based
Tualatin – 0.13 μm process technology
32 KB L1 cache
512 KB Advanced Transfer L2 cache
133 MHz system bus clock rate
Socket 370
Variants
1.0 GHz
1.13 GHz
1.26 GHz
1.4 GHz
Celeron (Pentium III Tualatin-based)
Tualatin Celeron – 0.13 μm process technology
32 KB L1 cache
256 KB Advanced Transfer L2 cache
100 MHz system bus clock rate
Socket 370
Family 6 model 11
Variants
1.0 GHz
1.1 GHz
1.2 GHz
1.3 GHz
1.4 GHz
Pentium M
Banias 0.13 μm process technology
Introduced March 2003
64 KB L1 cache
1 MB L2 cache (integrated)
Based on Pentium III core, with SSE2 SIMD instructions and deeper pipeline
77 million transistors
Micro-FCPGA, Micro-FCBGA processor package
Heart of the Intel mobile Centrino system
400 MHz NetBurst-style system bus
Family 6 model 9
Variants
900 MHz (ultra low voltage)
1.0 GHz (ultra low voltage)
1.1 GHz (low voltage)
1.2 GHz (low voltage)
1.3 GHz
1.4 GHz
1.5 GHz
1.6 GHz
1.7 GHz
Dothan 0.09 μm (90 nm) process technology
Introduced May 2004
2 MB L2 cache
140 million transistors
Revised data prefetch unit
400 MHz NetBurst-style system bus
21 W TDP
Family 6 model 13
Variants
1.00 GHz (Pentium M 723) (ultra low voltage, 5 W TDP)
1.10 GHz (Pentium M 733) (ultra low voltage, 5 W TDP)
1.20 GHz (Pentium M 753) (ultra low voltage, 5 W TDP)
1.30 GHz (Pentium M 718) (low voltage, 10 W TDP)
1.40 GHz (Pentium M 738) (low voltage, 10 W TDP)
1.50 GHz (Pentium M 758) (low voltage, 10 W TDP)
1.60 GHz (Pentium M 778) (low voltage, 10 W TDP)
1.40 GHz (Pentium M 710)
1.50 GHz (Pentium M 715)
1.60 GHz (Pentium M 725)
1.70 GHz (Pentium M 735)
1.80 GHz (Pentium M 745)
2.00 GHz (Pentium M 755)
2.10 GHz (Pentium M 765)
Dothan 533 0.09 μm (90 nm) process technology
Introduced Q1 2005
Same as Dothan except with a 533 MHz NetBurst-style system bus and a 27 W TDP
Variants
1.60 GHz (Pentium M 730)
1.73 GHz (Pentium M 740)
1.86 GHz (Pentium M 750)
2.00 GHz (Pentium M 760)
2.13 GHz (Pentium M 770)
2.26 GHz (Pentium M 780)
Stealey 0.09 μm (90 nm) process technology
Introduced Q2 2007
512 KB L2, 3 W TDP
Variants
600 MHz (A100)
800 MHz (A110)
Celeron M
Banias-512 0.13 μm process technology
Introduced March 2003
64 KB L1 cache
512 KB L2 cache (integrated)
SSE2 SIMD instructions
No SpeedStep technology, is not part of the 'Centrino' package
Family 6 model 9
Variants
310 – 1.20 GHz
320 – 1.30 GHz
330 – 1.40 GHz
340 – 1.50 GHz
Dothan-1024 90 nm process technology
64 KB L1 cache
1 MB L2 cache (integrated)
SSE2 SIMD instructions
No SpeedStep technology, is not part of the 'Centrino' package
Variants
350 – 1.30 GHz
350J – 1.30 GHz, with Execute Disable bit
360 – 1.40 GHz
360J – 1.40 GHz, with Execute Disable bit
370 – 1.50 GHz, with Execute Disable bit
Family 6, Model 13, Stepping 8
380 – 1.60 GHz, with Execute Disable bit
390 – 1.70 GHz, with Execute Disable bit
Yonah-1024 65 nm process technology
64 KB L1 cache
1 MB L2 cache (integrated)
SSE3 SIMD instructions, 533 MHz front-side bus, execute-disable bit
No SpeedStep technology, is not part of the 'Centrino' package
Variants
410 – 1.46 GHz
420 – 1.60 GHz
423 – 1.06 GHz (ultra low voltage)
430 – 1.73 GHz
440 – 1.86 GHz
443 – 1.20 GHz (ultra low voltage)
450 – 2.00 GHz
Intel Core
Yonah 0.065 μm (65 nm) process technology
Introduced January 2006
533/667 MHz front-side bus
2 MB (Shared on Duo) L2 cache
SSE3 SIMD instructions
31 W TDP (T versions)
Family 6, Model 14
Variants:
Intel Core Duo T2700 2.33 GHz
Intel Core Duo T2600 2.16 GHz
Intel Core Duo T2500 2 GHz
Intel Core Duo T2450 2 GHz
Intel Core Duo T2400 1.83 GHz
Intel Core Duo T2300 1.66 GHz
Intel Core Duo T2050 1.6 GHz
Intel Core Duo T2300e 1.66 GHz
Intel Core Duo T2080 1.73 GHz
Intel Core Duo L2500 1.83 GHz (low voltage, 15 W TDP)
Intel Core Duo L2400 1.66 GHz (low voltage, 15 W TDP)
Intel Core Duo L2300 1.5 GHz (low voltage, 15 W TDP)
Intel Core Duo U2500 1.2 GHz (ultra low voltage, 9 W TDP)
Intel Core Solo T1350 1.86 GHz (533 FSB)
Intel Core Solo T1300 1.66 GHz
Intel Core Solo T1200 1.5 GHz
Dual-Core Xeon LV
Sossaman 0.065 μm (65 nm) process technology
Introduced March 2006
Based on Yonah core, with SSE3 SIMD instructions
667 MHz frontside bus
2 MB Shared L2 cache
Variants
2.0 GHz
32-bit processors: NetBurst microarchitecture
Pentium 4
0.18 μm process technology (1.40 and 1.50 GHz)
Introduced November 20, 2000
L2 cache was 256 KB Advanced Transfer Cache (Integrated)
Processor Package Style was PGA423, PGA478
System Bus clock rate 400 MHz
SSE2 SIMD Extensions
42 million transistors
Used in desktops and entry-level workstations
0.18 μm process technology (1.7 GHz)
Introduced April 23, 2001
See the 1.4 and 1.5 chips for details
0.18 μm process technology (1.6 and 1.8 GHz)
Introduced July 2, 2001
See 1.4 and 1.5 chips for details
Core Voltage is 1.15 volts in Maximum Performance Mode; 1.05 volts in Battery Optimized Mode
Power <1 watt in Battery Optimized Mode
Used in full-size and then light mobile PCs
0.18 μm process technology Willamette (1.9 and 2.0 GHz)
Introduced August 27, 2001
See 1.4 and 1.5 chips for details
Family 15 model 1
Pentium 4 (2 GHz, 2.20 GHz)
Introduced January 7, 2002
Pentium 4 (2.4 GHz)
Introduced April 2, 2002
0.13 μm process technology Northwood A (1.7, 1.8, 1.9, 2, 2.2, 2.4, 2.5, 2.6, 2.8 (OEM), 3.0 (OEM) GHz)
Improved branch prediction and other microcodes tweaks
512 KB integrated L2 cache
55 million transistors
400 MHz system bus
Family 15 model 2
0.13 μm process technology Northwood B (2.26, 2.4, 2.53, 2.66, 2.8, 3.06 GHz)
533 MHz system bus. (3.06 includes Intel's Hyper-Threading technology)
0.13 μm process technology Northwood C (2.4, 2.6, 2.8, 3.0, 3.2, 3.4 GHz)
800 MHz system bus (all versions include Hyper-Threading)
6500 to 10,000 MIPS
Itanium (chronological entry – new non-x86 architecture)
Introduced 2001
See main entry
Xeon (32-bit NetBurst)
Official designation now Xeon, i.e. not "Pentium 4 Xeon"
Xeon 1.4, 1.5, 1.7 GHz
Introduced May 21, 2001
L2 cache was 256 KB Advanced Transfer Cache (Integrated)
Processor Package Style was Organic Land Grid Array 603 (OLGA 603)
System Bus clock rate 400 MHz
SSE2 SIMD Extensions
Used in high-performance and mid-range dual processor enabled workstations
Xeon 2.0 GHz and up to 3.6 GHz
Introduced September 25, 2001
Itanium 2 (chronological entry – new non-x86 architecture)
Introduced July 2002
See main entry
Mobile Pentium 4-M
0.13 μm process technology
55 million transistors
512 KB L2 cache
400 MHz system bus
Supports up to 1 GB of DDR 266 MHz memory
Supports ACPI 2.0 and APM 1.2 System Power Management
1.3–1.2 V (SpeedStep)
Power: 1.2 GHz 20.8 W, 1.6 GHz 30 W, 2.6 GHz 35 W
Sleep Power 5 W (1.2 V)
Deeper Sleep Power = 2.9 W (1.0 V)
1.40 GHz – 23 April 2002
1.50 GHz – 23 April 2002
1.60 GHz – 4 March 2002
1.70 GHz – 4 March 2002
1.80 GHz – 23 April 2002
1.90 GHz – 24 June 2002
2.00 GHz – 24 June 2002
2.20 GHz – 16 September 2002
2.40 GHz – 14 January 2003
2.50 GHz – 16 April 2003
2.60 GHz – 11 June 2003
Pentium 4 EE
Introduced September 2003
EE = "Extreme Edition"
Built from the Xeon's "Gallatin" core, but with 2 MB cache
Pentium 4E
Introduced February 2004
Built on 0.09 μm (90 nm) process technology Prescott (2.4A, 2.8, 2.8A, 3.0, 3.2, 3.4, 3.6, 3.8 GHz); 1 MB L2 cache
533 MHz system bus (2.4A and 2.8A only)
800 MHz system bus (all other models)
125 million transistors in 1 MB Models
169 million transistors in 2 MB Models
Hyper-Threading support is only available on CPUs using the 800 MHz system bus.
The processor's integer instruction pipeline was lengthened from 20 stages to 31 stages, which theoretically allows for even higher clock rates
7500 to 11,000 MIPS
LGA 775 versions are in the 5xx series (32-bit) and 5x1 series (with Intel 64)
The 6xx series has 2 MB L2 cache and Intel 64
64-bit processors: IA-64
New instruction set, not at all related to x86
Before the feature was eliminated (Montecito, July 2006) IA-64 processors supported 32-bit x86 in hardware, but slowly (see its 2001 market reception and 2006 architectural changes)
Itanium
Code name Merced
Family 7
Released May 29, 2001
733 MHz and 800 MHz
2 MB cache
All recalled and replaced by Itanium 2
Itanium 2
Family 0x1F
Released July 2002
900 MHz – 1.6 GHz
McKinley 900 MHz 1.5 MB cache, Model 0x0
McKinley 1 GHz, 3 MB cache, Model 0x0
Deerfield 1 GHz, 1.5 MB cache, Model 0x1
Madison 1.3 GHz, 3 MB cache, Model 0x1
Madison 1.4 GHz, 4 MB cache, Model 0x1
Madison 1.5 GHz, 6 MB cache, Model 0x1
Madison 1.67 GHz, 9 MB cache, Model 0x1
Hondo 1.4 GHz, 4 MB cache, dual-core MCM, Model 0x1
64-bit processors: Intel 64 – NetBurst microarchitecture
Intel Extended Memory 64 Technology
Mostly compatible with AMD's AMD64 architecture
Introduced Spring 2004, with the Pentium 4F (D0 and later P4 steppings)
Pentium 4F
Prescott-2M built on 0.09 μm (90 nm) process technology
2.8–3.8 GHz (model numbers 6x0)
Introduced February 20, 2005
Same features as Prescott with the addition of:
2 MB cache
Intel 64-bit
Enhanced Intel SpeedStep Technology (EIST)
Cedar Mill built on 0.065 μm (65 nm) process technology
3.0–3.6 GHz (model numbers 6x1)
Introduced January 16, 2006
Die shrink of Prescott-2M
Same features as Prescott-2M
Family 15 Model 4
Pentium D
Dual-core microprocessor
No Hyper-Threading
800 (4×200) MHz front-side bus
LGA 775 (Socket T)
Smithfield (Pentium D) – 90 nm process technology (2.66–3.2 GHz)
Introduced May 26, 2005
2.66–3.2 GHz (model numbers 805–840)
230 million transistors
1 MB × 2 (non-shared, 2 MB total) L2 cache
Cache coherency between cores requires communication over the FSB
Performance increase of 60% over similarly clocked Prescott
2.66 GHz (533 MHz FSB) Pentium D 805 introduced December 2005
Contains 2x Prescott dies in one package
Family 15 Model 4
Presler (Pentium D) – 65 nm process technology (2.8–3.6 GHz)
Introduced January 16, 2006
2.8–3.6 GHz (model numbers 915–960)
376 million transistors
2× 2 MB (non-shared, 4 MB total) L2 cache
Contains 2x Cedar Mill dies in one package
Variants
Pentium D 945
Pentium Extreme Edition
Dual-core microprocessor
Enabled Hyper-Threading
800 (4×200) MHz front-side bus
Smithfield (Pentium Extreme Edition) – 90 nm process technology (3.2 GHz)
Variants
Pentium 840 EE – 3.20 GHz (2 × 1 MB L2)
Presler (Pentium Extreme Edition) – 65 nm process technology (3.46, 3.73)
2 MB × 2 (non-shared, 4 MB total) L2 cache
Variants
Pentium 955 EE – 3.46 GHz, 1066 MHz front-side bus
Pentium 965 EE – 3.73 GHz, 1066 MHz front-side bus
Pentium 969 EE – 3.73 GHz, 1066 MHz front-side bus
Xeon (64-bit NetBurst)
Nocona
Introduced 2004
Irwindale
Introduced 2004
Cranford
Introduced April 2005
MP version of Nocona
Potomac
Introduced April 2005
Cranford with 8 MB of L3 cache
Paxville DP (2.8 GHz)
Introduced October 10, 2005
Dual-core version of Irwindale, with 4 MB of L2 cache (2 MB per core)
2.8 GHz
800 MT/s front-side bus
Paxville MP – 90 nm process (2.67 – 3.0 GHz)
Introduced November 1, 2005
Dual-core Xeon 7000 series
MP-capable version of Paxville DP
2 MB of L2 cache (1 MB per core) or 4 MB of L2 (2 MB per core)
667 MT/s FSB or 800 MT/s FSB
Dempsey – 65 nm process (2.67 – 3.73 GHz)
Introduced May 23, 2006
Dual-core Xeon 5000 series
MP version of Presler
667 MT/s or 1066 MT/s FSB
4 MB of L2 cache (2 MB per core)
LGA 771 (Socket J)
Tulsa – 65 nm process (2.5 – 3.4 GHz)
Introduced August 29, 2006
Dual-core Xeon 7100-series
Improved version of Paxville MP
667 MT/s or 800 MT/s FSB
64-bit processors: Intel 64 – Core microarchitecture
Xeon (64-bit Core microarchitecture)
Woodcrest – 65 nm process technology
Server and Workstation CPU (SMP support for dual CPU system)
Introduced June 26, 2006
Intel VT-x, multiple OS support
EIST (Enhanced Intel SpeedStep Technology) in 5140, 5148LV, 5150, 5160
Execute Disable Bit
TXT, enhanced security hardware extensions
SSSE3 SIMD instructions
iAMT2 (Intel Active Management Technology), remotely manage computers
Variants
Xeon 5160 – 3.00 GHz (4 MB L2, 1333 MHz FSB, 80 W)
Xeon 5150 – 2.66 GHz (4 MB L2, 1333 MHz FSB, 65 W)
Xeon 5140 – 2.33 GHz (4 MB L2, 1333 MHz FSB, 65 W)
Xeon 5130 – 2.00 GHz (4 MB L2, 1333 MHz FSB, 65 W)
Xeon 5120 – 1.86 GHz (4 MB L2, 1066 MHz FSB, 65 W)
Xeon 5110 – 1.60 GHz (4 MB L2, 1066 MHz FSB, 65 W)
Xeon 5148LV – 2.33 GHz (4 MB L2, 1333 MHz FSB, 40 W) (low voltage edition)
Clovertown – 65 nm process technology
Server and Workstation CPU (SMP support for dual CPU system)
Introduced December 13, 2006
Quad-core
Intel VT-x, multiple OS support
EIST (Enhanced Intel SpeedStep Technology) in E5365, L5335
Execute Disable Bit
TXT, enhanced security hardware extensions
SSSE3 SIMD instructions
iAMT2 (Intel Active Management Technology), remotely manage computers
Variants
Xeon X5355 – 2.66 GHz (2×4 MB L2, 1333 MHz FSB, 105 W)
Xeon E5345 – 2.33 GHz (2×4 MB L2, 1333 MHz FSB, 80 W)
Xeon E5335 – 2.00 GHz (2×4 MB L2, 1333 MHz FSB, 80 W)
Xeon E5320 – 1.86 GHz (2×4 MB L2, 1066 MHz FSB, 65 W)
Xeon E5310 – 1.60 GHz (2×4 MB L2, 1066 MHz FSB, 65 W)
Xeon L5320 – 1.86 GHz (2×4 MB L2, 1066 MHz FSB, 50 W) (low voltage edition)
Intel Core 2
Conroe – 65 nm process technology
Desktop CPU (SMP support restricted to 2 CPUs)
Two cores on one die
Introduced July 27, 2006
SSSE3 SIMD instructions
291 million transistors
64 KB of L1 cache per core (32+32 KB 8-way)
Intel VT-x, multiple OS support
TXT, enhanced security hardware extensions
Execute Disable Bit
EIST (Enhanced Intel SpeedStep Technology)
iAMT2 (Intel Active Management Technology), remotely manage computers
Intel Management Engine introduced
LGA 775
Variants
Core 2 Duo E6850 – 3.00 GHz (4 MB L2, 1333 MHz FSB)
Core 2 Duo E6800 – 2.93 GHz (4 MB L2, 1066 MHz FSB)
Core 2 Duo E6750 – 2.67 GHz (4 MB L2, 1333 MHz FSB, 65W)
Core 2 Duo E6700 – 2.67 GHz (4 MB L2, 1066 MHz FSB)
Core 2 Duo E6600 – 2.40 GHz (4 MB L2, 1066 MHz FSB, 65W)
Core 2 Duo E6550 – 2.33 GHz (4 MB L2, 1333 MHz FSB)
Core 2 Duo E6420 – 2.13 GHz (4 MB L2, 1066 MHz FSB)
Core 2 Duo E6400 – 2.13 GHz (2 MB L2, 1066 MHz FSB)
Core 2 Duo E6320 – 1.86 GHz (4 MB L2, 1066 MHz FSB) Family 6, Model 15, Stepping 6
Core 2 Duo E6300 – 1.86 GHz (2 MB L2, 1066 MHz FSB)
Conroe XE – 65 nm process technology
Desktop Extreme Edition CPU (SMP support restricted to 2 CPUs)
Introduced July 27, 2006
Same features as Conroe
LGA 775
Variants
Core 2 Extreme X6800 – 2.93 GHz (4 MB L2, 1066 MHz FSB)
Allendale (Intel Core 2) – 65 nm process technology
Desktop CPU (SMP support restricted to 2 CPUs)
Two CPUs on one die
Introduced January 21, 2007
SSSE3 SIMD instructions
167 million transistors
TXT, enhanced security hardware extensions
Execute Disable Bit
EIST (Enhanced Intel SpeedStep Technology)
iAMT2 (Intel Active Management Technology), remotely manage computers
LGA 775
Variants
Core 2 Duo E4700 – 2.60 GHz (2 MB L2, 800 MHz FSB)
Core 2 Duo E4600 – 2.40 GHz (2 MB L2, 800 MHz FSB)
Core 2 Duo E4500 – 2.20 GHz (2 MB L2, 800 MHz FSB)
Core 2 Duo E4400 – 2.00 GHz (2 MB L2, 800 MHz FSB)
Core 2 Duo E4300 – 1.80 GHz (2 MB L2, 800 MHz FSB) Family 6, Model 15, Stepping 2
Merom – 65 nm process technology
Mobile CPU (SMP support restricted to 2 CPUs)
Introduced July 27, 2006
Family 6, Model 15
Same features as Conroe
Socket M / Socket P
Variants
Core 2 Duo T7800 – 2.60 GHz (4 MB L2, 800 MHz FSB) (Santa Rosa platform)
Core 2 Duo T7700 – 2.40 GHz (4 MB L2, 800 MHz FSB)
Core 2 Duo T7600 – 2.33 GHz (4 MB L2, 667 MHz FSB)
Core 2 Duo T7500 – 2.20 GHz (4 MB L2, 800 MHz FSB)
Core 2 Duo T7400 – 2.16 GHz (4 MB L2, 667 MHz FSB)
Core 2 Duo T7300 – 2.00 GHz (4 MB L2, 800 MHz FSB)
Core 2 Duo T7250 – 2.00 GHz (2 MB L2, 800 MHz FSB)
Core 2 Duo T7200 – 2.00 GHz (4 MB L2, 667 MHz FSB)
Core 2 Duo T7100 – 1.80 GHz (2 MB L2, 800 MHz FSB)
Core 2 Duo T5600 – 1.83 GHz (2 MB L2, 667 MHz FSB) Family 6, Model 15, Stepping 6
Core 2 Duo T5550 – 1.83 GHz (2 MB L2, 667 MHz FSB, no VT)
Core 2 Duo T5500 – 1.66 GHz (2 MB L2, 667 MHz FSB, no VT)
Core 2 Duo T5470 – 1.60 GHz (2 MB L2, 800 MHz FSB, no VT) Family 6, Model 15, Stepping 13
Core 2 Duo T5450 – 1.66 GHz (2 MB L2, 667 MHz FSB, no VT)
Core 2 Duo T5300 – 1.73 GHz (2 MB L2, 533 MHz FSB, no VT)
Core 2 Duo T5270 – 1.40 GHz (2 MB L2, 800 MHz FSB, no VT)
Core 2 Duo T5250 – 1.50 GHz (2 MB L2, 667 MHz FSB, no VT)
Core 2 Duo T5200 – 1.60 GHz (2 MB L2, 533 MHz FSB, no VT)
Core 2 Duo L7500 – 1.60 GHz (4 MB L2, 800 MHz FSB) (low voltage)
Core 2 Duo L7400 – 1.50 GHz (4 MB L2, 667 MHz FSB) (low voltage)
Core 2 Duo L7300 – 1.40 GHz (4 MB L2, 800 MHz FSB) (low voltage)
Core 2 Duo L7200 – 1.33 GHz (4 MB L2, 667 MHz FSB) (low voltage)
Core 2 Duo U7700 – 1.33 GHz (2 MB L2, 533 MHz FSB) (ultra low voltage)
Core 2 Duo U7600 – 1.20 GHz (2 MB L2, 533 MHz FSB) (ultra low voltage)
Core 2 Duo U7500 – 1.06 GHz (2 MB L2, 533 MHz FSB) (ultra low voltage)
Kentsfield – 65 nm process technology
Two dual-core CPU dies in one package
Desktop CPU quad-core (SMP support restricted to 4 CPUs)
Introduced December 13, 2006
Same features as Conroe but with 4 CPU cores
586 million transistors
LGA 775
Family 6, Model 15, Stepping 11
Variants
Core 2 Extreme QX6850 – 3 GHz (2×4 MB L2 cache, 1333 MHz FSB)
Core 2 Extreme QX6800 – 2.93 GHz (2×4 MB L2 cache, 1066 MHz FSB) (April 9, 2007)
Core 2 Extreme QX6700 – 2.66 GHz (2×4 MB L2 cache, 1066 MHz FSB) (November 14, 2006)
Core 2 Quad Q6700 – 2.66 GHz (2×4 MB L2 cache, 1066 MHz FSB) (July 22, 2007)
Core 2 Quad Q6600 – 2.40 GHz (2×4 MB L2 cache, 1066 MHz FSB) (January 7, 2007)
Wolfdale – 45 nm process technology
Die shrink of Conroe
Same features as Conroe with the addition of:
50% more cache, 6 MB as opposed to 4 MB
Intel Trusted Execution Technology
SSE4 SIMD instructions
410 million transistors
Variants
Core 2 Duo E8600 – 3.33 GHz (6 MB L2, 1333 MHz FSB)
Core 2 Duo E8500 – 3.16 GHz (6 MB L2, 1333 MHz FSB)
Core 2 Duo E8435 – 3.07 GHz (6 MB L2, 1066 MHz FSB)
Core 2 Duo E8400 – 3.00 GHz (6 MB L2, 1333 MHz FSB)
Core 2 Duo E8335 – 2.93 GHz (6 MB L2, 1066 MHz FSB)
Core 2 Duo E8300 – 2.83 GHz (6 MB L2, 1333 MHz FSB)
Core 2 Duo E8235 – 2.80 GHz (6 MB L2, 1066 MHz FSB)
Core 2 Duo E8200 – 2.66 GHz (6 MB L2, 1333 MHz FSB)
Core 2 Duo E8135 – 2.66 GHz (6 MB L2, 1066 MHz FSB)
Core 2 Duo E8190 – 2.66 GHz (6 MB L2, 1333 MHz FSB, no TXT, no VT)
Wolfdale-3M (Intel Core 2) – 45 nm process technology
Intel Trusted Execution Technology
Variants
Core 2 Duo E7600 – 3.06 GHz (3 MB L2, 1066 MHz FSB)
Core 2 Duo E7500 – 2.93 GHz (3 MB L2, 1066 MHz FSB)
Core 2 Duo E7400 – 2.80 GHz (3 MB L2, 1066 MHz FSB)
Core 2 Duo E7300 – 2.66 GHz (3 MB L2, 1066 MHz FSB)
Core 2 Duo E7200 – 2.53 GHz (3 MB L2, 1066 MHz FSB)
Yorkfield – 45 nm process technology
Quad-core CPU
Die shrink of Kentsfield
Contains 2x Wolfdale dual-core dies in one package
Same features as Wolfdale
820 million transistors
Variants
Core 2 Extreme QX9770 – 3.20 GHz (2×6 MB L2, 1600 MHz FSB)
Core 2 Extreme QX9650 – 3.00 GHz (2×6 MB L2, 1333 MHz FSB)
Core 2 Quad Q9705 – 3.16 GHz (2×3 MB L2, 1333 MHz FSB)
Core 2 Quad Q9700 – 3.16 GHz (2×3 MB L2, 1333 MHz FSB)
Core 2 Quad Q9650 – 3 GHz (2×6 MB L2, 1333 MHz FSB)
Core 2 Quad Q9550 – 2.83 GHz (2×6 MB L2, 1333 MHz FSB, 95 W TDP)
Core 2 Quad Q9550s – 2.83 GHz (2×6 MB L2, 1333 MHz FSB, 65 W TDP)
Core 2 Quad Q9450 – 2.66 GHz (2×6 MB L2, 1333 MHz FSB, 95 W TDP)
Core 2 Quad Q9505 – 2.83 GHz (2×3 MB L2, 1333 MHz FSB, 95 W TDP)
Core 2 Quad Q9505s – 2.83 GHz (2×3 MB L2, 1333 MHz FSB, 65 W TDP)
Core 2 Quad Q9500 – 2.83 GHz (2×3 MB L2, 1333 MHz FSB, 95 W TDP, no TXT)
Core 2 Quad Q9400 – 2.66 GHz (2×3 MB L2, 1333 MHz FSB, 95 W TDP)
Core 2 Quad Q9400s – 2.66 GHz (2×3 MB L2, 1333 MHz FSB, 65 W TDP)
Core 2 Quad Q9300 – 2.50 GHz (2×3 MB L2, 1333 MHz FSB, 95 W TDP)
Core 2 Quad Q8400 – 2.66 GHz (2×2 MB L2, 1333 MHz FSB, 95 W TDP)
Core 2 Quad Q8400s – 2.66 GHz (2×2 MB L2, 1333 MHz FSB, 65 W TDP)
Core 2 Quad Q8300 – 2.50 GHz (2×2 MB L2, 1333 MHz FSB, 95 W TDP)
Core 2 Quad Q8300s – 2.50 GHz (2×2 MB L2, 1333 MHz FSB, 65 W TDP)
Core 2 Quad Q8200 – 2.33 GHz (2×2 MB L2, 1333 MHz FSB, 95 W TDP)
Core 2 Quad Q8200s – 2.33 GHz (2×2 MB L2, 1333 MHz FSB, 65 W TDP)
Core 2 Quad Q7600 – 2.70 GHz (2×1 MB L2, 800 MHz FSB, no SSE4) (existence unconfirmed)
Intel Core2 Quad Mobile processor family – 45 nm process technology
Quad-core CPU
Variants
Core 2 Quad Q9100 – 2.26 GHz (2×6 MB L2, 1066 MHz FSB, 45 W TDP)
Core 2 Quad Q9000 – 2.00 GHz (2×3 MB L2, 1066 MHz FSB, 45 W TDP)
Pentium Dual-Core
Allendale (Pentium Dual-Core) – 65 nm process technology
Desktop CPU (SMP support restricted to 2 CPUs)
Two cores on one die
Introduced January 21, 2007
SSSE3 SIMD instructions
167 million transistors
TXT, enhanced security hardware extensions
Execute Disable Bit
EIST (Enhanced Intel SpeedStep Technology)
Variants
Intel Pentium E2220 – 2.40 GHz (1 MB L2, 800 MHz FSB)
Intel Pentium E2200 – 2.20 GHz (1 MB L2, 800 MHz FSB)
Intel Pentium E2180 – 2.00 GHz (1 MB L2, 800 MHz FSB)
Intel Pentium E2160 – 1.80 GHz (1 MB L2, 800 MHz FSB)
Intel Pentium E2140 – 1.60 GHz (1 MB L2, 800 MHz FSB)
Wolfdale-3M (Pentium Dual-Core) – 45 nm process technology
Variants
Intel Pentium E6800 – 3.33 GHz (2 MB L2, 1066 MHz FSB)
Intel Pentium E6700 – 3.20 GHz (2 MB L2, 1066 MHz FSB)
Intel Pentium E6600 – 3.06 GHz (2 MB L2, 1066 MHz FSB)
Intel Pentium E6500 – 2.93 GHz (2 MB L2, 1066 MHz FSB)
Intel Pentium E6300 – 2.80 GHz (2 MB L2, 1066 MHz FSB)
Intel Pentium E5800 – 3.20 GHz (2 MB L2, 800 MHz FSB)
Intel Pentium E5700 – 3.00 GHz (2 MB L2, 800 MHz FSB)
Intel Pentium E5500 – 2.80 GHz (2 MB L2, 800 MHz FSB)
Intel Pentium E5400 – 2.70 GHz (2 MB L2, 800 MHz FSB)
Intel Pentium E5300 – 2.60 GHz (2 MB L2, 800 MHz FSB)
Intel Pentium E5200 – 2.50 GHz (2 MB L2, 800 MHz FSB)
Intel Pentium E2210 – 2.20 GHz (1 MB L2, 800 MHz FSB)
Celeron (64-bit Core microarchitecture)
Allendale (Celeron, 64-bit Core microarchitecture) – 65 nm process technology
Variants
Intel Celeron E1600 – 2.40 GHz (512 KB L2, 800 MHz FSB)
Intel Celeron E1500 – 2.20 GHz (512 KB L2, 800 MHz FSB)
Intel Celeron E1400 – 2.00 GHz (512 KB L2, 800 MHz FSB)
Intel Celeron E1300 – 1.80 GHz (512 KB L2, 800 MHz FSB) (existence unconfirmed)
Intel Celeron E1200 – 1.60 GHz (512 KB L2, 800 MHz FSB)
Wolfdale-3M (Celeron, 64-bit Core microarchitecture) – 45 nm process technology
Variants
Intel Celeron E3500 – 2.70 GHz (1 MB L2, 800 MHz FSB)
Intel Celeron E3400 – 2.60 GHz (1 MB L2, 800 MHz FSB)
Intel Celeron E3300 – 2.50 GHz (1 MB L2, 800 MHz FSB)
Intel Celeron E3200 – 2.40 GHz (1 MB L2, 800 MHz FSB)
Conroe-L (Celeron, 64-bit Core microarchitecture) – 65 nm process technology
Variants
Intel Celeron 450 – 2.20 GHz (512 KB L2, 800 MHz FSB)
Intel Celeron 440 – 2.00 GHz (512 KB L2, 800 MHz FSB)
Intel Celeron 430 – 1.80 GHz (512 KB L2, 800 MHz FSB)
Intel Celeron 420 – 1.60 GHz (512 KB L2, 800 MHz FSB)
Intel Celeron 220 – 1.20 GHz (512 KB L2, 533 MHz FSB)
Conroe-CL (Celeron, 64-bit Core microarchitecture) – 65 nm process technology
LGA 771 package
Variants
Intel Celeron 445 – 1.87 GHz (512 KB L2, 1066 MHz FSB)
Celeron M (64-bit Core microarchitecture)
Merom-L – 65 nm process technology
64 KB L1 cache
1 MB L2 cache (integrated)
SSE3 SIMD instructions, 533 MHz/667 MHz front-side bus, execute-disable bit, 64-bit
No SpeedStep technology, is not part of the 'Centrino' package
Variants
520 – 1.60 GHz
530 – 1.73 GHz
540 – 1.86 GHz
550 – 2.00 GHz
560 – 2.13 GHz
570 – 2.26 GHz
667 MHz FSB variants
575 – 2.00 GHz
585 – 2.16 GHz
64-bit processors: Intel 64 – Nehalem microarchitecture
Intel Pentium (Nehalem)
Clarkdale (Pentium, Nehalem microarchitecture) – 32 nm process technology (manufacturing 7 Jan 2010)
2 physical cores/2 threads
32+32 KB L1 cache
256 KB L2 cache
3 MB L3 cache
Introduced January 2010
Socket 1156 LGA
2-channel DDR3
Integrated HD GPU
Variants
G6950 – 2.8 GHz (no Hyper-Threading)
G6960 – 2.933 GHz (no Hyper-Threading)
Core i3 (1st Generation)
Clarkdale (Core i3 1st Generation) – 32 nm process technology
2 physical cores/4 threads
32+32 KB L1 cache
256 KB L2 cache
4 MB L3 cache
Introduced on January 7, 2010
Socket 1156 LGA
2-channel DDR3
Integrated HD GPU
Variants
530 – 2.93 GHz Hyper-Threading
540 – 3.06 GHz Hyper-Threading
550 – 3.2 GHz Hyper-Threading
560 – 3.33 GHz Hyper-Threading
Core i5 (1st Generation)
Lynnfield (Core i5 1st Generation) – 45 nm process technology
4 physical cores/4 threads
32+32 KB L1 cache
256 KB L2 cache
8 MB L3 cache
Introduced September 8, 2009
Family 6, Model 1E (extended model 1, base model E)
Socket 1156 LGA
2-channel DDR3
Variants
750S – 2.40 GHz/3.20 GHz Turbo Boost
750 – 2.66 GHz/3.20 GHz Turbo Boost
760 – 2.80 GHz/3.33 GHz Turbo Boost
Clarkdale (Core i5 1st Generation) – 32 nm process technology
2 physical cores/4 threads
32+32 KB L1 cache
256 KB L2 cache
4 MB L3 cache
Introduced January 2010
Socket 1156 LGA
2-channel DDR3
Integrated HD GPU
AES Support
Variants
650/655K – 3.2 GHz Hyper-Threading Turbo Boost
660/661 – 3.33 GHz Hyper-Threading Turbo Boost
670 – 3.46 GHz Hyper-Threading Turbo Boost
680 – 3.60 GHz Hyper-Threading Turbo Boost
Core i7 (1st Generation)
Bloomfield (Core i7 1st Generation) – 45 nm process technology
4 physical cores/8 threads
256 KB L2 cache
8 MB L3 cache
Front-side bus replaced with QuickPath up to 6.4 GT/s
Hyper-Threading is again included; it had previously been removed with the introduction of the Core line
781 million transistors
Intel Turbo Boost Technology
TDP 130W
Introduced November 17, 2008
Socket 1366 LGA
3-channel DDR3
Variants
975 (extreme edition) – 3.33 GHz/3.60 GHz Turbo Boost
965 (extreme edition) – 3.20 GHz/3.46 GHz Turbo Boost
960 – 3.20 GHz/3.46 GHz Turbo Boost
950 – 3.06 GHz/3.33 GHz Turbo Boost
940 – 2.93 GHz/3.20 GHz Turbo Boost
930 – 2.80 GHz/3.06 GHz Turbo Boost
920 – 2.66 GHz/2.93 GHz Turbo Boost
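The QuickPath figure above (up to 6.4 GT/s) can be converted into peak link bandwidth. This back-of-the-envelope sketch assumes the standard QPI payload width of 2 bytes per transfer in each direction:

```python
def qpi_bandwidth_gb_s(gt_per_s: float, payload_bytes: int = 2) -> float:
    """Peak QuickPath bandwidth per direction in GB/s:
    billions of transfers/s times payload bytes per transfer."""
    return gt_per_s * payload_bytes

# A 6.4 GT/s QPI link with a 2-byte payload per transfer:
per_direction = qpi_bandwidth_gb_s(6.4)
print(per_direction)        # 12.8 GB/s in each direction
print(per_direction * 2)    # 25.6 GB/s aggregate, the commonly quoted figure
```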
Lynnfield (Core i7 1st Generation) – 45 nm process technology
4 physical cores/8 threads
32+32 KB L1 cache
256 KB L2 cache
8 MB L3 cache
No QuickPath, instead compatible with slower DMI interface
Hyper-Threading is included
Introduced September 8, 2009
Socket 1156 LGA
2-channel DDR3
Variants
880 – 3.06 GHz/3.73 GHz Turbo Boost (TDP 95W)
870/875K – 2.93 GHz/3.60 GHz Turbo Boost (TDP 95W)
870S – 2.67 GHz/3.60 GHz Turbo Boost (TDP 82W)
860 – 2.80 GHz/3.46 GHz Turbo Boost (TDP 95W)
860S – 2.53 GHz/3.46 GHz Turbo Boost (TDP 82W)
Westmere
Gulftown – 32 nm process technology
6 physical cores
256 KB L2 cache
12 MB L3 cache
Front-side bus replaced with QuickPath up to 6.4 GT/s
Hyper-Threading is included
Intel Turbo Boost Technology
Socket 1366 LGA
TDP 130W
Introduced 16 March 2010
Variants
990X Extreme Edition – 3.46 GHz/3.73 GHz Turbo Boost
980X Extreme Edition – 3.33 GHz/3.60 GHz Turbo Boost
970 – 3.20 GHz/3.46 GHz Turbo Boost
Clarksfield – Intel Core i7 Mobile processor family – 45 nm process technology
4 physical cores
Hyper-Threading is included
Intel Turbo Boost Technology
Variants
940XM Extreme Edition – 2.13 GHz/3.33 GHz Turbo Boost (8 MB L3, TDP 55W)
920XM Extreme Edition – 2.00 GHz/3.20 GHz Turbo Boost (8 MB L3, TDP 55W)
840QM – 1.86 GHz/3.20 GHz Turbo Boost (8 MB L3, TDP 45W)
820QM – 1.73 GHz/3.06 GHz Turbo Boost (8 MB L3, TDP 45W)
740QM – 1.73 GHz/2.93 GHz Turbo Boost (6 MB L3, TDP 45W)
720QM – 1.60 GHz/2.80 GHz Turbo Boost (6 MB L3, TDP 45W)
Xeon (Nehalem Microarchitecture)
Gainestown – 45 nm process technology
Same processor dies as Bloomfield
256 KB L2 cache
8 MB L3 cache, 4 MB may be disabled
QuickPath up to 6.4 GT/s
Hyper-Threading is included in some models
781 million transistors
Introduced March 29, 2009
Variants
W5590, X5570, X5560, X5550, E5540, E5530, L5530, E5520, L5520, L5518 – 4 cores, 8 MB L3 cache, HT
E5506, L5506, E5504 – 4 cores, 4 MB L3 cache, no HT
L5508, E5502 – 2 cores, 4 MB L3 cache, no HT
64-bit processors: Intel 64 – Sandy Bridge / Ivy Bridge microarchitecture
Celeron (Sandy Bridge/Ivy Bridge Microarchitecture)
Sandy Bridge (Celeron-branded) – 32 nm process technology
2 physical cores/2 threads (500 series), 1 physical core/1 thread (model G440) or 1 physical core/2 threads (models G460 & G465)
2 MB L3 cache (500 series), 1 MB (model G440) or 1.5 MB (models G460 & G465)
Introduced 3rd quarter, 2011
Socket 1155 LGA
2-channel DDR3-1066
400 series has max TDP of 35 W
500-series variants ending in 'T' have a peak TDP of 35 W, others – 65 W
Integrated GPU
All variants have peak GPU turbo frequencies of 1 GHz
Variants in the 400 series have GPUs running at a base frequency of 650 MHz
Variants in the 500 series ending in 'T' have GPUs running at a base frequency of 650 MHz; others at 850 MHz
All variants have 6 GPU execution units
Variants
G440 – 1.6 GHz
G460 – 1.8 GHz
G465 – 1.9 GHz
G470 – 2.0 GHz
G530T – 2.0 GHz
G540T – 2.1 GHz
G550T – 2.2 GHz
G530 – 2.4 GHz
G540 – 2.5 GHz
G550 – 2.6 GHz
G555 – 2.7 GHz
Pentium (Sandy Bridge/Ivy Bridge Microarchitecture)
Sandy Bridge (Pentium-branded) – 32 nm process technology
2 physical cores/2 threads
3 MB L3 cache
624 million transistors
Introduced May 2011
Socket 1155 LGA
2-channel DDR3-1333 (800 series) or DDR3-1066 (600 series)
Variants ending in 'T' have a peak TDP of 35 W, others 65 W
Integrated GPU (HD 2000)
All variants have peak GPU turbo frequencies of 1.1 GHz
Variants ending in 'T' have GPUs running at a base frequency of 650 MHz; others at 850 MHz
All variants have 6 GPU execution units
Variants
G620T – 2.2 GHz
G630T – 2.3 GHz
G640T – 2.4 GHz
G645T – 2.5 GHz
G860T – 2.6 GHz
G620 – 2.6 GHz
G622 – 2.6 GHz
G630 – 2.7 GHz
G632 – 2.7 GHz
G640 – 2.8 GHz
G840 – 2.8 GHz
G645 – 2.9 GHz
G850 – 2.9 GHz
G860 – 3.0 GHz
G870 – 3.1 GHz
Ivy Bridge (Pentium-branded) – 22 nm Tri-gate transistor process technology
2 physical cores/2 threads
32+32 KB (per core) L1 cache
256 KB (per core) L2 cache
3 MB L3 cache
Introduced September 2012
Socket 1155 LGA
2-channel DDR3-1333 for G2000 series
2-channel DDR3-1600 for G2100 series
All variants have GPU base frequencies of 650 MHz and peak GPU turbo frequencies of 1.05 GHz
Variants ending in 'T' have a peak TDP of 35 W, others – TDP of 55 W
Variants
G2020T – 2.5 GHz
G2030T – 2.6 GHz
G2100T – 2.6 GHz
G2120T – 2.7 GHz
G2010 – 2.8 GHz
G2020 – 2.9 GHz
G2030 – 3.0 GHz
G2120 – 3.1 GHz
G2130 – 3.2 GHz
G2140 – 3.3 GHz
Core i3 (2nd and 3rd Generation)
Sandy Bridge (Core i3 2nd Generation) – 32 nm process technology
2 physical cores/4 threads
32+32 KB (per core) L1 cache
256 KB (per core) L2 cache
3 MB L3 cache
624 million transistors
Introduced January 2011
Socket 1155 LGA
2-channel DDR3-1333
Variants ending in 'T' have a peak TDP of 35 W, others 65 W
Integrated GPU
All variants have peak GPU turbo frequencies of 1.1 GHz
Variants ending in 'T' have GPUs running at a base frequency of 650 MHz; others at 850 MHz
Variants ending in '5' have Intel HD Graphics 3000 (12 execution units); others have Intel HD Graphics 2000 (6 execution units)
Variants
i3-2100T – 2.5 GHz
i3-2120T – 2.6 GHz
i3-2100 – 3.1 GHz
i3-2102 – 3.1 GHz
i3-2105 – 3.1 GHz
i3-2120 – 3.3 GHz
i3-2125 – 3.3 GHz
i3-2130 – 3.4 GHz
Ivy Bridge (Core i3 3rd Generation) – 22 nm Tri-gate transistor process technology
2 physical cores/4 threads
32+32 KB (per core) L1 cache
256 KB (per core) L2 cache
3 MB L3 cache
Introduced September 2012
Socket 1155 LGA
2-channel DDR3-1600
Variants ending in '5' have Intel HD Graphics 4000; others have Intel HD Graphics 2500
All variants have GPU base frequencies of 650 MHz and peak GPU turbo frequencies of 1.05 GHz
TDP 55 W
Variants
i3-3220T – 2.8 GHz
i3-3240T – 2.9 GHz
i3-3210 – 3.2 GHz
i3-3220 – 3.3 GHz
i3-3225 – 3.3 GHz
i3-3240 – 3.4 GHz
i3-3250 – 3.5 GHz
Core i5 (2nd and 3rd Generation)
Sandy Bridge (Core i5 2nd Generation) – 32 nm process technology
4 physical cores/4 threads (except for i5-2390T which has 2 physical cores/4 threads)
32+32 KB (per core) L1 cache
256 KB (per core) L2 cache
6 MB L3 cache (except for i5-2390T which has 3 MB)
995 million transistors
Introduced January 2011
Socket 1155 LGA
2-channel DDR3-1333
Variants ending in 'S' have a peak TDP of 65 W, others – 95 W except where noted
Variants ending in 'K' have unlocked multipliers; others cannot be overclocked
Integrated GPU
i5-2500T has a peak GPU turbo frequency of 1.25 GHz, others 1.1 GHz
Variants ending in 'T' have GPUs running at a base frequency of 650 MHz; others at 850 MHz
Variants ending in '5' or 'K' have Intel HD Graphics 3000 (12 execution units), except i5-2550K which has no GPU; others have Intel HD Graphics 2000 (6 execution units)
Variants ending in 'P' and the i5-2550K have no GPU
Variants
i5-2390T – 2.7 GHz/3.5 GHz Turbo Boost (35 W max TDP)
i5-2500T – 2.3 GHz/3.3 GHz Turbo Boost (45 W max TDP)
i5-2400S – 2.5 GHz/3.3 GHz Turbo Boost
i5-2405S – 2.5 GHz/3.3 GHz Turbo Boost
i5-2500S – 2.7 GHz/3.7 GHz Turbo Boost
i5-2300 – 2.8 GHz/3.1 GHz Turbo Boost
i5-2310 – 2.9 GHz/3.2 GHz Turbo Boost
i5-2320 – 3.0 GHz/3.3 GHz Turbo Boost
i5-2380P – 3.1 GHz/3.4 GHz Turbo Boost
i5-2400 – 3.1 GHz/3.4 GHz Turbo Boost
i5-2450P – 3.2 GHz/3.5 GHz Turbo Boost
i5-2500 – 3.3 GHz/3.7 GHz Turbo Boost
i5-2500K – 3.3 GHz/3.7 GHz Turbo Boost
i5-2550K – 3.4 GHz/3.8 GHz Turbo Boost
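The memory-interface entries in this list (e.g. 2-channel DDR3-1333) imply a theoretical peak bandwidth. As a rough sketch (assuming the standard 64-bit, i.e. 8-byte, DDR3 channel width; the function name is ours):

```python
def ddr_peak_bandwidth_gb_s(mt_per_s: float, channels: int,
                            bus_bytes: int = 8) -> float:
    """Theoretical peak DRAM bandwidth in GB/s:
    megatransfers/s times bytes per transfer times channel count."""
    return mt_per_s * bus_bytes * channels / 1000.0

# 2-channel DDR3-1333, as on the Sandy Bridge desktop parts above:
print(ddr_peak_bandwidth_gb_s(1333, 2))   # ~21.3 GB/s
# The same arithmetic gives ~25.6 GB/s for Bloomfield's 3-channel DDR3-1066.
```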
Ivy Bridge (Core i5 3rd Generation) – 22 nm Tri-gate transistor process technology
4 physical cores/4 threads (except for i5-3470T which has 2 physical cores/4 threads)
32+32 KB (per core) L1 cache
256 KB (per core) L2 cache
6 MB L3 cache (except for i5-3470T which has 3 MB)
Introduced April 2012
Socket 1155 LGA
2-channel DDR3-1600
Variants ending in 'S' have a peak TDP of 65 W, Variants ending in 'T' have a peak TDP of 35 or 45 W (see variants), others – 77 W except where noted
Variants ending in 'K' have unlocked multipliers; others cannot be overclocked
Variants ending in 'P' have no integrated GPU; others have Intel HD Graphics 2500 or Intel HD Graphics 4000 (i5-3475S and i5-3570K only)
Variants
i5-3470T – 2.9 GHz/3.6 GHz max Turbo Boost (35 W TDP)
i5-3570T – 2.3 GHz/3.3 GHz max Turbo Boost (45 W TDP)
i5-3330S – 2.7 GHz/3.2 GHz max Turbo Boost
i5-3450S – 2.8 GHz/3.5 GHz max Turbo Boost
i5-3470S – 2.9 GHz/3.6 GHz max Turbo Boost
i5-3475S – 2.9 GHz/3.6 GHz max Turbo Boost
i5-3550S – 3.0 GHz/3.7 GHz max Turbo Boost
i5-3570S – 3.1 GHz/3.8 GHz max Turbo Boost
i5-3330 – 3.0 GHz/3.2 GHz max Turbo Boost
i5-3350P – 3.1 GHz/3.3 GHz max Turbo Boost (69 W TDP)
i5-3450 – 3.1 GHz/3.5 GHz max Turbo Boost
i5-3470 – 3.2 GHz/3.6 GHz max Turbo Boost
i5-3550 – 3.3 GHz/3.7 GHz max Turbo Boost
i5-3570 – 3.4 GHz/3.8 GHz max Turbo Boost
i5-3570K – 3.4 GHz/3.8 GHz max Turbo Boost
Core i7 (2nd and 3rd Generation)
Sandy Bridge (Core i7 2nd Generation) – 32 nm process technology
4 physical cores/8 threads
32+32 KB (per core) L1 cache
256 KB (per core) L2 cache
8 MB L3 cache
995 million transistors
Introduced January 2011
Socket 1155 LGA
2-channel DDR3-1333
Variants ending in 'S' have a peak TDP of 65 W, others – 95 W
Variants ending in 'K' have unlocked multipliers; others cannot be overclocked
Integrated GPU
All variants have base GPU frequencies of 850 MHz and peak GPU turbo frequencies of 1.35 GHz
Variants ending in 'K' have Intel HD Graphics 3000 (12 execution units); others have Intel HD Graphics 2000 (6 execution units)
Variants
i7-2600S – 2.8 GHz/3.8 GHz Turbo Boost
i7-2600 – 3.4 GHz/3.8 GHz Turbo Boost
i7-2600K – 3.4 GHz/3.8 GHz Turbo Boost
i7-2700K – 3.5 GHz/3.9 GHz Turbo Boost
Sandy Bridge-E (Core i7 3rd Generation X-Series) – 32 nm process technology
Up to 6 physical cores/12 threads depending on model number
32+32 KB (per core) L1 cache
256 KB (per core) L2 cache
Up to 20 MB L3 cache depending on model number
2.27 billion transistors
Introduced November 2011
Socket 2011 LGA
4-channel DDR3-1600
All variants have a peak TDP of 130 W
No integrated GPU
Variants (all marketed under "Intel Core X-series Processors")
i7-3820 – 3.6 GHz/3.8 GHz Turbo Boost, 4 cores, 10 MB L3 cache
i7-3930K – 3.2 GHz/3.8 GHz Turbo Boost, 6 cores, 12 MB L3 cache
i7-3960X – 3.3 GHz/3.9 GHz Turbo Boost, 6 cores, 15 MB L3 cache
i7-3970X – 3.5 GHz/4.0 GHz Turbo Boost, 6 cores, 15 MB L3 cache
Ivy Bridge (Core i7 3rd Generation) – 22 nm Tri-gate transistor process technology
4 physical cores/8 threads
32+32 KB (per core) L1 cache
256 KB (per core) L2 cache
8 MB L3 cache
Introduced April 2012
Socket 1155 LGA
2-channel DDR3-1600
Variants ending in 'S' have a peak TDP of 65 W, variants ending in 'T' have a peak TDP of 45 W, others – 77 W
Variants ending in 'K' have unlocked multipliers; others cannot be overclocked
Integrated GPU Intel HD Graphics 4000
Variants
i7-3770T – 2.5 GHz/3.7 GHz Turbo Boost
i7-3770S – 3.1 GHz/3.9 GHz Turbo Boost
i7-3770 – 3.4 GHz/3.9 GHz Turbo Boost
i7-3770K – 3.5 GHz/3.9 GHz Turbo Boost
64-bit processors: Intel 64 – Haswell microarchitecture
Core i3 (4th Generation)
Haswell (Core i3 4th Generation) – 22 nm process technology
64-bit processors: Intel 64 – Broadwell microarchitecture
Core i3 (5th Generation)
Broadwell (Core i3 5th Generation) – 14 nm process technology
Core i5 (5th Generation)
Broadwell (Core i5 5th Generation) – 14 nm process technology
4 physical cores/4 threads
4 MB L3 cache
Introduced Q2'15
Socket 1150 LGA
2-channel DDR3L-1333/1600
Integrated GPU
Variants
i5-5575R – 2.80 GHz/3.30 GHz Turbo Boost
i5-5675C – 3.10 GHz/3.60 GHz Turbo Boost
i5-5675R – 3.10 GHz/3.60 GHz Turbo Boost
Core i7 (5th Generation, Including Core-X Series)
Broadwell (Core i7 5th Generation) – 14 nm process technology
4 physical cores/8 threads
6 MB L3 cache
Introduced Q2'15
Socket 1150 LGA
2-channel DDR3L-1333/1600
Integrated GPU
Variants
i7-5775C – 3.30 GHz/3.70 GHz Turbo Boost
i7-5775R – 3.30 GHz/3.80 GHz Turbo Boost
Broadwell-E – 14 nm process technology
6–10 physical cores/12–20 threads
15–25 MB L3 cache
Introduced Q2'16
Socket 2011-v3 LGA
4-channel DDR4-2133/2400
No Integrated GPU
Variants (all marketed under "Intel Core X-series Processors")
i7-6800K – 3.40 GHz/3.60 GHz Turbo Boost/3.80 GHz Turbo Boost Max Technology 3.0 Frequency 15 MB L3 cache
i7-6850K – 3.60 GHz/3.80 GHz Turbo Boost/4.00 GHz Turbo Boost Max Technology 3.0 Frequency 15 MB L3 cache
i7-6900K – 3.20 GHz/3.70 GHz Turbo Boost/4.00 GHz Turbo Boost Max Technology 3.0 Frequency 20 MB L3 cache
i7-6950X – 3.00 GHz/3.50 GHz Turbo Boost/4.00 GHz Turbo Boost Max Technology 3.0 Frequency 25 MB L3 cache
Other Broadwell CPUs
Not listed (yet) are several Broadwell-based CPU models:
Server and workstation CPUs
single-CPU: Pentium D15nn, Xeon D-15nn, Xeon E3-12nn v4, Xeon E5-16nn v4
dual-CPU: Xeon E5-26nn v4
quad-CPU: Xeon E5-46nn v4, Xeon E7-48nn v4
octo-CPU: Xeon E7-88nn v4
Embedded CPUs
Core i7-57nnEQ, Core i7-58nnEQ
Mobile CPUs
Celeron 32nnU, Celeron 37nnU
Pentium 38nnU
Core M-5Ynn
Core i3-50nnU
Core i5-5nnnU
Core i7-55nnU, Core i7-56nnU, Core i7-57nnHQ, Core i7-59nnHQ
Note: this list does not imply that every processor matching these patterns is Broadwell-based or fits into this scheme. Model numbers may carry suffixes that are not shown here.
64-bit processors: Intel 64 – Skylake microarchitecture
Core i3 (6th Generation)
Skylake (Core i3 6th Generation) – 14 nm process technology
2 physical cores/4 threads
3–4 MB L3 cache
Introduced Q3'15
Socket 1151 LGA
2-channel DDR3L-1333/1600, DDR4-1866/2133
Integrated GPU Intel HD Graphics 530 (only the i3-6098P has HD Graphics 510)
Variants
i3-6098P – 3.60 GHz
i3-6100T – 3.20 GHz
i3-6100 – 3.70 GHz
i3-6300T – 3.30 GHz
i3-6300 – 3.80 GHz
i3-6320 – 3.90 GHz
Core i5 (6th Generation)
Skylake (Core i5 6th Generation) – 14 nm process technology
4 physical cores/4 threads
6 MB L3 cache
Introduced Q3'15
Socket 1151 LGA
2-channel DDR3L-1333/1600, DDR4-1866/2133
Integrated GPU Intel HD Graphics 530
Variants
i5-6300HQ – 2.30/3.20 GHz Turbo Boost
i5-6400T – 2.20 GHz/2.80 GHz Turbo Boost
i5-6400 – 2.70 GHz/3.30 GHz Turbo Boost
i5-6500T – 2.50 GHz/3.10 GHz Turbo Boost
i5-6500 – 3.20 GHz/3.60 GHz Turbo Boost
i5-6600T – 2.70 GHz/3.50 GHz Turbo Boost
i5-6600 – 3.30 GHz/3.90 GHz Turbo Boost
i5-6600K – 3.50 GHz/3.90 GHz Turbo Boost
Core i7 (6th Generation)
Skylake (Core i7 6th Generation) – 14 nm process technology
4 physical cores/8 threads
8 MB L3 cache
Introduced Q3'15
Socket 1151 LGA
2-channel DDR3L-1333/1600, DDR4-1866/2133
Integrated GPU Intel HD Graphics 530
Variants
i7-6700T – 2.80 GHz/3.60 GHz Turbo Boost
i7-6700 – 3.40 GHz/4.00 GHz Turbo Boost
i7-6700K – 4.00 GHz/4.20 GHz Turbo Boost
Other Skylake Processors
Many Skylake-based processors are not yet listed in this section: mobile i3/i5/i7 processors (U, H, and M suffixes), embedded i3/i5/i7 processors (E suffix), certain i7-67nn/i7-68nn/i7-69nn.
Skylake-based "Core X-series" processors (certain i7-78nn and i9-79nn models) can be found under current models.
64-bit processors: Intel 64 (7th Generation) – Kaby Lake microarchitecture
64-bit processors: Intel 64 (8th and 9th Generation) – Coffee Lake microarchitecture
64-bit processors: Intel 64 – Cannon Lake microarchitecture
64-bit processors: Intel 64 (10th Generation) – Ice Lake microarchitecture
64-bit processors: Intel 64 (10th Generation) – Comet Lake microarchitecture
64-bit processors: Intel 64 (11th Generation) – Tiger Lake microarchitecture
64-bit processors: Intel 64 (12th Generation) – Alder Lake microarchitecture
Intel Tera-Scale
2007: Teraflops Research Chip, an 80-core processor prototype.
2009: Single-chip Cloud Computer, a research microprocessor with 48 cores, at the time the most Intel Architecture cores ever integrated on a single silicon CPU chip.
Intel 805xx product codes
Intel discontinued the use of part numbers such as 80486 in the marketing of mainstream x86-architecture microprocessors with the introduction of the Pentium brand in 1993. However, numerical codes, in the 805xx range, continued to be assigned to these processors for internal and part numbering uses. The following is a list of such product codes in numerical order:
Intel 806xx product codes
Intel 807xx product codes
See also
List of AMD processors
List of PowerPC processors
List of Freescale products
List of Intel Atom microprocessors
List of Intel Xeon processors
List of Intel P6-based Xeon microprocessors
List of Intel NetBurst-based Xeon microprocessors
List of Intel Pentium M (Yonah)-based Xeon microprocessors
List of Intel Core-based Xeon microprocessors
List of Intel Nehalem-based Xeon microprocessors
List of Intel Sandy Bridge-based Xeon microprocessors
List of Intel Ivy Bridge-based Xeon microprocessors
List of Intel Haswell-based Xeon microprocessors
List of Intel Broadwell-based Xeon microprocessors
List of Intel Skylake-based Xeon microprocessors
List of Intel Kaby Lake-based Xeon microprocessors
List of Intel Coffee Lake-based Xeon microprocessors
List of Intel Cascade Lake-based Xeon microprocessors
List of Intel Comet Lake-based Xeon microprocessors
List of Intel Cooper Lake-based Xeon microprocessors
List of Intel Ice Lake-based Xeon microprocessors
List of Intel Rocket Lake-based Xeon microprocessors
List of Intel Tiger Lake-based Xeon microprocessors
List of Intel Itanium microprocessors
List of Intel Celeron microprocessors
List of Intel Pentium processors
List of Intel Pentium Pro microprocessors
List of Intel Pentium II microprocessors
List of Intel Pentium III microprocessors
List of Intel Pentium 4 processors
List of Intel Pentium D microprocessors
List of Intel Pentium M microprocessors
List of Intel Core processors
List of Intel Core M processors
List of Intel Core 2 microprocessors
List of Intel Core i3 microprocessors
List of Intel Core i5 processors
List of Intel Core i7 processors
List of Intel Core i9 processors
List of Intel CPU microarchitectures
List of quantum processors
References
External links
Intel Museum: History of the Microprocessor
Stealey A100 and A110
Intel Product Specifications
Intel Processors and Chipsets by Platform Code Name
Intel Processors information
Intel microprocessors
Intel |
ASK Group

ASK Group, Inc., formerly ASK Computer Systems, Inc., was a producer of business and manufacturing software. It is best remembered for its Manman enterprise resource planning (ERP) software and for Sandra Kurtzig, the company's founder and one of the early female pioneers in the computer industry. At its peak, ASK had 91 offices in 15 countries before Computer Associates acquired the company in 1994.
Beginning and growth (1972–1982)
ASK was started in 1972 by Sandra Kurtzig in California. She left her job as a marketing specialist at General Electric and invested $2,000 of her savings to start the company in the apartment she shared with her HP salesman husband.
At first, the firm built software for a variety of business applications. ASK was incorporated in 1974.
In 1978, Kurtzig came up with ASK's most significant product, named Manman (originally "MaMa"), a contraction of manufacturing management. Manman was an ERP program that ran on Hewlett-Packard HP-3000 minicomputers. Manman helped manufacturing companies plan materials purchases, production schedules, and other administrative functions on a scale that was previously possible only on large, costly mainframe computers. Manman initially had a five-figure software price and was aimed at small and medium-sized manufacturers. Small companies desiring the least expensive implementation could use the software on a time-sharing contract.
During the era when Manman was only running on HP-3000 systems, ASK would buy systems at a discount and resell them "with its programs for $125,000 to $300,000" as turnkey systems.
The "ASK" name initially stood for Arie and Sandra Kurtzig, although Arie was never an employee. Somewhat later, with her husband working for Hewlett-Packard (HP) and the software marketed both for HP's computers and those sold by Digital Equipment Corporation (DEC), Kurtzig said the "A" stood for Associates.
Manman was an enormous success and quickly came to dominate the market for manufacturing systems and software. ASK's fortunes rose as a result. The corporation went public in 1981. Two years later, Sandra Kurtzig's personal stake in the firm was worth more than $40 million.
Plateau (1983–1989)
Software Dimensions (March 1983 – June 1984)
In March 1983 ASK made its first acquisition, purchasing a privately held software company named Software Dimensions, Inc., publisher of Accounting Plus, for $6 million. After acquiring Software Dimensions, Kurtzig renamed it ASK Micro and launched an aggressive marketing program. ASK over-hired and mismanaged the sales channel for the product, angering existing sellers and ballooning the cash burn rate for the company; the product faltered. In June 1984, Kurtzig announced that she was shutting down ASK Micro, at a cost of $1 million, and auctioning off the rights to Accounting Plus. ASK also failed at rescaling Manman to run on personal computers. Of the company's failings in the emerging personal computer market, Kurtzig told BusinessWeek, "We have our fingerprints all over the murder weapon" that killed Software Dimensions. ASK never truly found its footing in the microcomputer market, and struggled to keep its market share from being eroded by competitors who offered similar solutions on smaller platforms.
Manman: lower prices, other declines (1984-1989)
By the fall of 1984, ASK planned to offer a version of its original product, Manman, for about one-third of its previous price. Lower-priced minicomputers from Hewlett-Packard and Digital Equipment Corporation (DEC), the product's two hardware platforms, made this possible. The company hoped to protect its market share with smaller companies and emergent middle-range manufacturers. However, by 1985, ASK declined as its customers reduced expenditures. Exacerbating the problem, Kurtzig and her family also began selling off large blocks of their stock holdings in the company, which triggered a shareholder lawsuit. Kurtzig also backed away from ASK's day-to-day operations. In 1984, Kurtzig named Ronald W. Branniff president of the company, and in 1985 he took over her post of chief executive officer as well. Kurtzig attributed her declining interest in the business to family pressures, along with other factors. Divorced from her husband, Kurtzig devoted more time to raising her two sons, who were aged 12 and 9 at the time.
Although the company remained profitable, ASK's earnings and sales declined in 1986, falling to $5.89 million on revenues of $76 million. In 1987, ASK acquired NCA Corporation for $43 million in cash, a significant premium for a competitor that had been beating ASK in two out of every three deals. Despite these small advances, ASK was losing ground to its competitors. In its research and development activities, ASK began to focus nearly all of its resources on upgrading and improving existing products instead of creating new ones. Salespeople had long been bedeviled by having to sell a primitive, conversational, scrolling user interface; not long afterwards, the problem became that although not everyone knew what a relational database was, everyone wanted one. ASK had lost its entrepreneurial edge.
In the meantime, Kurtzig had spent her time traveling, writing her autobiography, and investing in other technology companies, but this proved to be unfulfilling. In mid-1989 the ASK board approached Kurtzig and asked her to resume an active role in the company, and she accepted the invitation. Kurtzig spearheaded ASK's purchase of Data 3 Systems, a privately owned competitor, for $18.7 million. In addition to this complementary expansion, Kurtzig began to revamp the way her old company had been run, shifting organization and priorities to new products. She changed such minor, but important, details as the quality of the food and beer at the company's Friday evening celebrations in an effort to reconnect upper-level management with the company's employees. As part of this effort, Kurtzig instituted 360-degree reviews (where employees review bosses), hired entrepreneurial managers, spearheaded product entry onto IBM and Sun Microsystems platforms, and opened international offices in Europe and Asia. The improvements resulted in 1989 earnings of $13.5 million.
Decline and sale (1990–1994)
In 1990, ASK purchased the Ingres Corporation, a declining software company that developed the database management system called Ingres. The deal called for 30 percent of ASK to be sold to Hewlett-Packard and Electronic Data Systems (EDS) for a total of $60 million, which in turn enabled ASK to pay $110 million for Ingres. ASK's stockholders complained about this strange multi-way financing move. Shareholder James Lennane, who held ten percent of the company's shares, announced he would try to oust the company's board of directors at the next shareholders' meeting. Despite this, Kurtzig's deal proceeded as planned. ASK already made use of Ingres software in its own work, linking the accounting and manufacturing departments of its clients to its own database. Hewlett-Packard made the hardware upon which much of ASK's software ran, and ASK resold Hewlett-Packard products as part of its software packages. Both Hewlett-Packard and EDS had strong histories of involvement with manufacturing businesses, and this heritage promised to open more potential markets for ASK.
Although this seemed like good news, ASK had mediocre results over the next several quarters, due to a lull in business while the company tried to bring new products to market. With its new purchases, ASK had moved beyond its original scope to become a much larger, global, diversified company. The unified ASK and Ingres group had yearly revenues of $400 million.
In the early 1990s, ASK concentrated on the development and introduction of new products designed to provide communication between different computer systems and programs. In 1992 the company introduced Manman/X, an update of its flagship product. Manman/X was built on the code base of a product called Triton 2.2d from a little-known Dutch company called Baan; ASK had acquired the rights to the code base and its distribution in the 1990s.
In 1992 ASK was restructured to better reflect the nature of its operations. The company was renamed ASK Group, Inc., and comprised three business units: ASK Computer Systems, Data 3, and Ingres. With the merger of ASK and Ingres completed, Kurtzig replaced herself as CEO in 1991, but remained non-executive chairman until 1992. Although ASK appeared to be on solid footing to face the computer industry's challenging, competitive environment, its fortunes continued to decline. ASK's annual revenues reached nearly $1 billion before the company was acquired by Computer Associates in 1994.
ManMan product family
Manman was a family of Enterprise resource planning (ERP) software marketed for Hewlett Packard HP-3000 and Digital Equipment Corporation (DEC) minicomputers. Its vendor, ASK Group, founded by Sandra Kurtzig, sold the software from, at ASK's peak, 91 offices in 15 countries. By 1994 annual sales reached nearly $1 billion, and the company was acquired by Computer Associates (CA); both the software and CA subsequently declined.
The product family's name, Manman, was "short for manufacturing management." Its components included:
Manman/AP: an accounts payable program. Since both HP and DEC's computers were time-sharing systems, the entry of data was done interactively. Vendor names and supplier payables could be viewed and, if necessary, revised from a computer terminal.
Manman/MFG: to help plan and track the manufacturing process.
Manman/OMAR: order management/AR. Orders were tracked by this software "until payment is received."
Manman/GL: general ledger
Some of the ideas for these application programs came from founder Kurtzig's exposure to several areas within "General Electric, known to be synonymous with a well-run manufacturing operation." Modules for payroll, budgeting and other analysis were also sold by ASK.
During Manman's early era, when it ran only on HP-3000 systems, ASK would buy systems at a discount and resell them "with its programs for $125,000 to $300,000" as turnkey systems.
References
Defunct companies based in California
CA Technologies
History of software
Software companies established in 1972
Software companies disestablished in 1994
1994 disestablishments in California |
354015 | https://en.wikipedia.org/wiki/The%20Oregon%20Trail%20%281971%20video%20game%29 | The Oregon Trail (1971 video game) | The Oregon Trail is a text-based strategy video game developed by Don Rawitsch, Bill Heinemann, and Paul Dillenberger in 1971 and produced by the Minnesota Educational Computing Consortium (MECC) beginning in 1975. It was developed by the three as a computer game to teach school children about the realities of 19th-century pioneer life on the Oregon Trail. In the game, the player assumes the role of a wagon leader guiding a party of settlers from Independence, Missouri, to Oregon City, Oregon via a covered wagon in 1847. Along the way the player must purchase supplies, hunt for food, and make choices on how to proceed along the trail while encountering random events such as storms and wagon breakdowns. The original versions of the game contain no graphics, as they were developed for computers that used teleprinters instead of computer monitors. A later Apple II port added a graphical shooting minigame.
The first version of the game was developed over the course of two weeks for use by Rawitsch in a history unit in a Minneapolis junior high school. Despite its popularity with the students, it was deleted from the school district's mainframe computer at the end of the school semester. Rawitsch recreated the game in 1974 for the MECC, which distributed educational software for free in Minnesota and for sale elsewhere, and recalibrated the probabilities of events based on historical journals and diaries for the game's release the following year. After the rise of microcomputers in the 1970s, the MECC released several versions of the game over the next decade for the Apple II, Atari 8-bit family, and Commodore 64 computers, before redesigning it as a graphical commercial game for the Apple II under the same name in 1985.
The game is the first entry in The Oregon Trail series; games in the series have since been released in many editions by various developers and publishers, many titled The Oregon Trail. The multiple games in the series are often considered to be iterations on the same title, and have collectively sold over 65 million copies and have been inducted into the World Video Game Hall of Fame. The series has also inspired a number of spinoffs such as The Yukon Trail and The Amazon Trail.
Gameplay
The Oregon Trail is a text-based strategy video game in which the player, as the leader of a wagon train, controls a group journeying down the Oregon Trail from Independence, Missouri to Oregon City, Oregon in 1847. The player purchases supplies, then plays through approximately twelve rounds of decision making, each representing two weeks on the trail. Each round begins with the player being told their current distance along the trail and the date, along with their current supplies. Supplies consist of food, bullets, clothing, miscellaneous supplies, and cash, each given as a number. Players are given the option to hunt for food, and in some rounds to stop at a fort to purchase supplies, and then choose how much food to consume that round. The game closes the round by randomly selecting one or two events and weather conditions. The events include storms damaging supplies, wagons breaking down, and attacks by wild animals or "hostile riders"; weather conditions can slow down the rate of travel, which can result in additional rounds needed to reach Oregon.
When hunting, or when attacked, the game prompts the player to type a word—"BANG" in the original version, or a randomly selected word like "BANG" or "POW" in later versions—with misspellings resulting in no effect. When hunting, the faster the word is typed, the more food is gathered. The game ends when the player reaches Oregon, or if they die along the trail; death can occur due to an attack or by running out of supplies. Running out of food results in starvation, while lack of clothing in cold weather, low levels of food, or random events such as snakebite or a hunting accident lead to illness; this results in death if the player does not have miscellaneous supplies for minor or regular illnesses, or cannot afford a doctor in the case of serious illnesses.
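The round-and-minigame loop described above can be sketched in Python. This is an illustrative reconstruction rather than the original program: the event table, ration size, and food-per-second figures are invented for the example.

```python
import random

# Words the later versions chose from for the hunting/attack prompt.
SHOOT_WORDS = ["BANG", "POW", "BLAM", "WHAM"]

def hunt_food(typed, expected, seconds):
    """Food gained from one hunt. A misspelling yields nothing; otherwise
    the faster the word was typed, the more food is gathered. The numbers
    are illustrative, not the original program's."""
    if typed.strip().upper() != expected:
        return 0
    return max(0, int(52 - 6 * seconds))

def play_round(supplies, rng):
    """One roughly two-week round: consume rations, then apply a random
    event. The event table here is a stand-in for the game's."""
    supplies["food"] -= 30                      # rations eaten this round
    event = rng.choice(["storm", "breakdown", "riders", "none"])
    if event == "storm":
        supplies["misc"] -= 5                   # storm damages supplies
    elif event == "breakdown":
        supplies["cash"] -= 20                  # wagon repairs
    elif event == "riders":
        supplies["bullets"] -= 10               # fending off an attack
    supplies["miles"] += 170                    # progress toward Oregon
    return event
```

Roughly twelve such rounds, ending when the mileage covers the trail or the food counter reaches zero, give the overall shape of a play-through.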
Development
Original version
In 1971, Don Rawitsch, a history major and senior at Carleton College in Northfield, Minnesota, taught an 8th grade history class at a junior high school in Minneapolis as a student teacher. His supervising teacher assigned him to prepare a unit on "The Western Expansion of the Mid-19th Century", and Rawitsch decided to create a board game activity about the Oregon Trail for the students. After one week of planning the lessons, he was in the process of drawing out the trail on sheets of paper on the floor of his apartment when his roommates, fellow Carleton students Bill Heinemann and Paul Dillenberger, came in. Heinemann, who along with Dillenberger was a math student and student teacher with experience in programming, discussed the project with Rawitsch, and told him that it would be well-suited to a computer program, as it could keep track of the player's progress and calculate their chances of success based on their supplies instead of a dice roll. Rawitsch was initially hesitant, as the unit needed to be complete within two weeks, but Heinemann and Dillenberger felt it could be done if they worked long hours each day on it. The trio then spent the weekend designing and coding the game on paper.
The Minneapolis school district had recently purchased an HP 2100 minicomputer, and the schools the trio were teaching in, like the other schools in the district, were connected to it via a single teleprinter. These teleprinters could send and print messages from programs running on the central computer. The video game industry was in its infancy in 1971, and the three had no resources to draw on to develop the game software beyond their own programming knowledge; instead, they spent two weeks working and coding in HP Time-Shared BASIC on their own. Rawitsch focused on the design and historical portions of the game, while Heinemann and Dillenberger did the programming, working on the teleprinter kept in a small room that was formerly a janitor's closet at the school they taught at, Bryant Junior High School, as well as bringing it to the apartment to continue working. Heinemann focused on the overall programming flow, and came up with the hunting minigame, while Dillenberger made subroutines for the game to use, wrote much of the text displayed to the player, and tested for bugs in the code. As there was only one terminal, Heinemann wrote code on paper while Dillenberger entered it into the system along with his own.
They implemented the basics of the game in those two weeks, including purchasing supplies, making choices at specific points of the journey, and the hunting minigame. They also included the random events happening to the player, and Heinemann had the idea to make the random events tied to the geography of the trail, so that cold weather events would be more likely in the mountains and attacks more likely in the plains. They also added small randomization of outcomes such as the amount of food gained from hunting; they expected that in order for the children to be interested in playing the game multiple times there needed to be variations between plays. Prior to the start of Rawitsch's history unit, Heinemann and Dillenberger let some students at their school play it to test; the students were enthusiastic about the game, staying late at school to play. The other teachers were not as interested, but did recommend changes to the game, particularly removing negative depictions of Native Americans as they were based more on Western movies and television than history, and could be problematic towards the several students with Native American ancestry at the schools.
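Heinemann's idea of tying event likelihoods to the trail's geography amounts to switching between weighted probability tables as the party's mileage changes. The sketch below shows the mechanism; the weights and the two-zone split of the trail are invented for the example, not taken from the game.

```python
import random

# Invented weights: attacks dominate on the plains and cold weather in
# the mountains, echoing the shape of the design, not its actual values.
EVENT_WEIGHTS = {
    "plains":    {"riders": 4, "cold": 1, "breakdown": 2, "none": 5},
    "mountains": {"riders": 1, "cold": 4, "breakdown": 3, "none": 4},
}

def terrain_at(miles):
    # Crude two-zone split of the roughly 2,000-mile trail (an assumption
    # made for this sketch).
    return "plains" if miles < 1200 else "mountains"

def pick_event(miles, rng):
    table = EVENT_WEIGHTS[terrain_at(miles)]
    events = list(table)
    return rng.choices(events, weights=[table[e] for e in events])[0]
```

Replacing invented weights like these with frequencies drawn from historical records is essentially the recalibration Rawitsch performed for the 1975 MECC release.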
The Oregon Trail debuted to Rawitsch's classes on December 3, 1971. He was unsure how interested the students would be in the game, as they had limited exposure to computers and several seemed uninterested in history altogether, but after he showed them the game students would line up outside the door for their turn and stay after school for another chance. Rawitsch has recounted that, as only one student could use the teleprinter at one time, the students organized themselves into voting for responses and delegating students to handle hunting, following the map, and keeping track of supplies. Other teachers at the school came up with "flimsy excuses" for their students to try the game as well. The trio adjusted the game's code as the students played in response to bugs found, such as purchasing clothes for negative money. As the school district shared a single central minicomputer, schools across the city began to play the game as well. When the semester and their student teaching term ended, the team printed out copies of the source code—about 800 lines of code—and deleted the program from the computer.
MECC version
In 1974, Rawitsch was hired by the Minnesota Educational Computing Consortium (MECC), a state-funded organization that developed educational software for the classroom, as an entry-level liaison for local community colleges. The MECC had a similar system to the Minneapolis school district's setup in 1971, with a CDC Cyber 70/73-26 mainframe computer which schools across the state could connect to via terminals. The system contained several educational programs, and Rawitsch's boss let him know that it was open to submissions. Rawitsch, with permission from Heinemann and Dillenberger, spent the 1974 Thanksgiving weekend copying and adjusting the printed BASIC source code into the system. Rather than submit the recreated copy, he instead enhanced the game with research on the events of the Oregon Trail that he had not had time for with the original version, and changed the frequency and types of random events, such as bad weather or wagons breaking down, to be based on the actual historical probabilities for what happened to travelers on the trail at each location in the game. Rawitsch calculated the probabilities himself, basing them on historical diaries and narratives of people on the trail that he read. He also added in more positive depictions of Native Americans, as his research indicated that many settlers received assistance from them along the trail. He placed The Oregon Trail into the organization's time-sharing network in 1975, where it could be accessed by schools across Minnesota.
Legacy
The 1975 mainframe game was the most popular software in the system for Minnesota schools for five years, with thousands of players monthly. Rawitsch, Heinemann, and Dillenberger were not publicly acknowledged as the creators of the original game until 1995, when MECC honored them in a ceremony at the Mall of America. By then, several versions of the game had been created. Rawitsch published the source code of The Oregon Trail in Creative Computing's May–June 1978 issue, along with some of the historical information he had used to refine the statistics. That year MECC began encouraging schools to adopt the Apple II microcomputer, purchasing large amounts at a discount and reselling them to schools. MECC began converting several of their products to run on microcomputers, and John Cook adapted the game for the Apple II; though the text-based gameplay remained largely the same, he added a display of the player's position along the trail on a map between rounds, and replaced the typing in the hunting and attack minigame with a graphical version in which a deer or attacker moves across the screen and the player presses a key to fire at it. A version for the Atari 8-bit family, again titled The Oregon Trail, was released in 1982. The Apple II version was included under the name Oregon as part of MECC's Elementary series, distributed to Minnesota schools for free and for profit to schools outside of the state, on Elementary Volume 6 in 1980. Oregon was ported to the Commodore 64 in 1984 as part of a collection similar to Elementary Volume 6, titled Expeditions. By the mid-1980s, MECC was selling their educational software to schools around the country, and The Oregon Trail was their most popular product by far.
In 1985, MECC produced a fully-graphical version of the game for Apple II computers, redesigned by R. Philip Bouchard as a greatly expanded product for home consumers under the same name. The Oregon Trail was extremely successful, and along with successive versions of the game it sold over 65 million copies. Several further games have been released in The Oregon Trail series, many under the title The Oregon Trail, as well as a number of spinoffs such as The Yukon Trail and The Amazon Trail.
The original Oregon Trail has been described in Serious Games and Edutainment Applications as "one of the most famous ancestors" of the serious game subgenre. The text-based and graphical versions of The Oregon Trail are often described as different iterations of the same game when discussing the game's legacy; Colin Campbell of Polygon, for example, has described it collectively as one of the most successful games of all time, calling it a cultural icon. Kevin Wong of Vice claimed that the collective game was "synonymous with edutainment". Due to its widespread popularity, The Oregon Trail, referring to all versions of the game released over 40 years, was inducted into the World Video Game Hall of Fame in 2016. Time named the game as one of the 100 greatest video games in 2012, and placed it 9th on its list of the 50 best games in 2016.
References
External links
The 1975 version of The Oregon Trail can be played for free in the browser at the Internet Archive
The Oregon Trail (series)
1971 video games
1975 video games
Apple II games
Children's educational video games
Commercial video games with freely available source code
Classic Mac OS games
History educational video games
Mainframe games
Survival video games
Video games developed in the United States
Video games set in the 19th century
Video games with textual graphics
Western (genre) video games |
5577839 | https://en.wikipedia.org/wiki/InVesalius | InVesalius | InVesalius is a free medical software used to generate virtual reconstructions of structures in the human body. Based on two-dimensional images, acquired using computed tomography or magnetic resonance imaging equipment, the software generates virtual three-dimensional models corresponding to anatomical parts of the human body. After reconstructing a three-dimensional model from the DICOM images, the software allows the generation of STL (stereolithography) files. These files can be used for rapid prototyping.
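The STL files mentioned above are a simple triangle-list format. As a rough illustration (not InVesalius's actual exporter, which builds its meshes through VTK), the ASCII variant of the format can be produced like this:

```python
def write_ascii_stl(triangles, name="model"):
    """Serialize triangles (each a sequence of three (x, y, z) vertices)
    into the ASCII STL format used for rapid prototyping. Normals are
    written as zero vectors here, which most downstream tools recompute."""
    lines = [f"solid {name}"]
    for tri in triangles:
        lines.append("  facet normal 0 0 0")
        lines.append("    outer loop")
        for x, y, z in tri:
            lines.append(f"      vertex {x} {y} {z}")
        lines.append("    endloop")
        lines.append("  endfacet")
    lines.append(f"endsolid {name}")
    return "\n".join(lines)
```

A surface extracted from a CT series can easily contain hundreds of thousands of such facets, which is why the more compact binary STL variant is usually preferred in practice.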
InVesalius was developed at CTI (Renato Archer Information Technology Center), a research institute of the Brazilian Science and Technology Center, and is available at no cost on the Public Software Portal homepage. The software license is CC-GPL 2. It is available in English, Japanese, Czech, Portuguese (Brazil), Russian, Spanish, Italian, German, Portuguese, Turkish (Turkey), Romanian, French, Korean, Catalan, Chinese (Taiwan) and Greek.
InVesalius was developed using Python and works under Linux, Windows and Mac OS X. It also uses graphic libraries VTK, wxPython, Numpy, Scipy and GDCM.
The software's name is a tribute to Belgian physician Andreas Vesalius (1514–1564), considered the "father of modern anatomy".
Developed since 2001 to meet the demands of Brazilian public hospitals, InVesalius was initially directed at promoting the social inclusion of individuals with severe facial deformities. Since then, however, it has been employed in various research areas of dentistry, medicine, veterinary medicine, paleontology and anthropology. It has been used not only in public hospitals, but also in private clinics and hospitals.
Until 2017, the software had already been used for generating more than 5000 rapid prototyping models of anatomical structures at Promed project.
External links
Official InVesalius website
Alternative InVesalius website
InVesalius source code
InVesalius Translation page at Transifex
InVesalius at Ohloh
InVesalius at Twitter
Public Software Portal (Portuguese)
Rapid Prototyping for Medicine(Portuguese)
Related works
Confex.com (in English)
Studierfenster
Free science software
Medical software
Neuroimaging software
Free health care software
Free bioimaging software
Free DICOM software
Software that uses VTK |
13613956 | https://en.wikipedia.org/wiki/Sri%20Lanka%20Police | Sri Lanka Police | The Sri Lanka Police is the civilian national police force of the Democratic Socialist Republic of Sri Lanka. The police force is responsible for enforcing criminal and traffic law, enhancing public safety, maintaining order and keeping the peace throughout Sri Lanka. The police force consists of 43 Territorial Divisions, 67 Functional Divisions and 432 Police Stations, with more than 84,000 personnel. The professional head of the police is the Inspector General of Police, who reports to the Minister of Law and Order as well as the National Police Commission. The current Inspector General of Police is C.D. Wickramaratna.
During the Sri Lankan civil war, the police service became an integral part of maintaining the nation's security, primarily focusing on internal security. Many police officers have been killed in the line of duty, mainly due to terrorist attacks. At the same time, the police (and military) were accused of being corrupt or too heavy-handed.
Specially trained commando/counter-terrorist units named Special Task Force are deployed in joint operations with the armed forces for counter-terrorism operations and VVIP protection. The police command structure in Northern and Eastern provinces is closely integrated with the other security organisations under the authority of the Joint Operations Command.
The Police service can be reached across Sri Lanka on the 119 emergency number.
Roles
Law enforcement
Fighting crime
Carrying out investigations
Drug enforcement
Security of police
Keeping public security
Maintaining public order
Counter-terrorism
Securing public events, rallies and holidays
Riot control / crowd control
Intelligence services
Providing VIP security (VVIP security is handled by the Special Task Force)
Handling suspicious objects and bomb disposal (EOD) (handled by the Special Task Force)
Handling the local command of the Home Guard
Assisting the Prison Service in prisoner transport and control of prison unrest
Traffic control
Coordinating emergency services
Police and community
Handling civilian complaints
Handling youth violence and crime
Educating the community and participating in educational campaigns
Providing ceremonial escorts to the President, the Prime Minister and foreign ambassadors on state functions
Assist and coordinate community policing
Offences investigated
Offences against the State.
Offences relating to the Navy, Army and Air Force.
Offences relating to the Elections.
Offences relating to Coins, Currency and Government Stamps.
Any Offence committed against the President.
Any Offence committed against a Public Officer, a Judicial Officer, or the Speaker, or the Prime Minister or a Minister, or a Member of the Judicial Service Commission, or a Member of the Public Service Commission or a Deputy Minister or a Member of Parliament or the Secretary General of Parliament or a Member of the President's Staff or a Member of the Staff of the Secretary General of Parliament.
Any Offence relating to property belonging to the State or a State Corporation or Company or Establishment, the whole or part of the capital whereof has been provided by the State.
Any Offence prejudicial to National Security or the maintenance of Essential Services.
Any Offence under any law relating to any matter in the Reserve List other than such offences as the President may, by order published in the Gazette, exclude.
Any Offence in respect of which Courts in more than one Province have jurisdiction.
International Crimes.
History
Timeline of significant events:
1797: The office of Fiscal was created. Fredric Barron Mylius was appointed as Fiscal of Colombo and entrusted with responsibility of policing Colombo.
1806: The regulation No. 6 of 1806 appointed a Vidane Arachchi to each town or village, for prevention and detection of crime in rural areas.
1832: A committee appointed by the governor was instructed to form a police force. It was decided by this committee that the new police force was to be funded by a tax to be paid by the public. It consisted of one Superintendent, one Chief Constable, five Constables, ten Sergeants and 150 Peons. They were responsible for maintaining law and order in the capital city of Colombo.
1844: As the police force was restricted to coastal areas only, a second police force was created to cater to the country's interior.
1858: The police force in the coastal area and the police force in the hill country were unified and amalgamated.
1864: The first death of a police officer whilst on duty occurred when he attempted to apprehend a brigand by the name of "Saradiel", who was subsequently compared to Robin Hood.
1865: The Police Ordinance was enacted to stipulate the powers and responsibilities of policemen.
1866: William Robert Campbell, then the chief of police in the Indian province of Rathnageri, was appointed as Chief Superintendent of Police in Ceylon on 3 September 1866. This date is considered as the beginning of the Sri Lanka Police Service.
1867: The Chief of Police was designated as the Inspector General of Police. William Robert Campbell became the first Inspector General of Police. The Police Headquarters was founded at Maradana, in the City of Colombo.
1870: Muslim rioters attacked the Police Headquarters. The police were successful in repulsing the attack, but the building was damaged. This year, the Criminal Investigations Department (CID) was formed.
1879: The strength of the police force had tripled from 585 when IGP Campbell was appointed, to a force of 1528. The first police firing range, training college and the publishing of the annual administration report emerged during this year.
1892: The Depot Police presently known as the Field Force Headquarters was formed. Uniforms and housing were made free for police officers. The payment of a Good Conduct Allowance was initiated.
1908: Fingerprinting and photographing of criminals were initiated, along with the direct recruitment to the rank of Assistant Superintendents of Police.
1913: Herbert Layard Dowbiggin was appointed as the 8th Inspector General of Police. 119 police stations were in operation with a total strength of 2306.
1915: For the first time two officers were appointed as Deputy Inspectors General of Police.
1916: 0.22-caliber rifles were issued in place of shotguns.
1920: For the first time, police officers were deployed for the purpose of controlling traffic.
1923: A book containing comprehensive details regarding all aspects of the police, the Departmental Order Book, was formulated.
1926: The Sport Division was established.
1930: A handbook of traffic rules and regulations was issued for traffic duties.
1932: The Police Headquarters was moved from Maradana to its present location in Colombo Fort.
1938: Police telephone boxes were deployed throughout the city of Colombo.
1942: Temporary forces were employed, known as Temporary Police Constables.
1945: Police units were deployed at all hospitals. Additional units were also deployed for railway security. However, in the following year, the railway police force was discontinued as a necessity for it did not arise.
1952: Women were enrolled to the police force for the first time. VHF radios were introduced for communication. It was decided that in honour of police officers killed in the line of duty, state funerals with full police honours would be held. In addition the police flag would be flown at half mast throughout the country.
1954: Police stations were graded into five classifications, Grades "E" to "A". The grading of police stations was considered depending on the workload, population, locality, crimes, important institutions, etc., in the area.
1963: Divisions in the police were made as North, Central, South, Administration, and Criminal Investigation Department. D. B. I. P. S. Siriwardane, a civil servant, was the first civilian to be appointed as the Deputy Inspector of Police in charge of Administration.
1966: The Police Public Relations Division was established on 1 October 1966, at Police Headquarters, Colombo.
1969: The Tourist Police and the Illicit Immigration sector were established in March 1969.
1972: The Crime Detective Bureau was started on 1 August 1972.
1973: On 15 August 1973 the Police Narcotics Bureau was started. The Colombo Fraud Investigation Bureau was also established.
1974: The uniforms for constables and sergeants were changed.
1976: The rank of Woman Police Sub Inspector was introduced. Two women police officers were promoted to the rank of Sub Inspector.
1978: The Police Higher Training Institute was established.
1979: The Children & Women Bureau was established.
1983: The Police Special Task Force was established.
1985: A new promotion scheme was introduced from the rank of Police Constable up to the rank of Inspector of Police.
1988: A Woman Police Inspector was promoted to the rank of Assistant Superintendent of Police.
1989: Women were recruited and enlisted as Sub Inspectors.
1991: The Sri Lanka Police celebrated 125 years of policing in Sri Lanka.
1993: The Police Information Technology Division was established.
1998: The Marine Division was established.
1999: The Ombudsman Division was established.
2000: The Police Examination Division was established.
2002: Human Rights Division and Disappearances Investigation Unit established.
2004: The Judicial Security Division was established.
2005: The Colombo Crime Division was established.
2006: The Reserve Police Force was abolished and its officers were transferred to the regular police force.
2008: The Police Academy was established in 2008 with the amalgamation of the Police Higher Training Institute and the In-Service Training Division, which are now divisions of the Sri Lanka Police Academy.
Organisation
The Sri Lanka Police is headed by the Inspector General of Police, who has, in theory, autonomy in commanding the service from the Police Headquarters in Colombo, supported by the Police Field Force Headquarters. However, in the recent past the Police Service has come under the purview of the Ministry of Defence (MoD), with the exception of several years when it came under the Ministry of Internal Affairs before being transferred back to the MoD. In the last few years there have been calls to reestablish the independent National Police Commission to oversee transfers and promotions, thereby making the service autonomous and free from any influence.
The police service is organised into five primary geographic commands, known as ranges (Range I, II, III, IV, V), covering the northern, western, eastern and southern sectors of the island, each under the command of a Senior Deputy Inspector General of Police (SDIG). The ranges are subdivided into divisions, districts, and police stations; Colombo was designated as a special range. Each police division, headed by a Deputy Inspector General of Police (DIG), covers a single province, and each police district, headed by a Senior Superintendent of Police (SSP), covers a single district of the country. In 1913 there were a total of 119 police stations throughout the country; by 2020 that number had increased to 432.
With the escalation of the Sri Lankan Civil War, the strength of the service and the number of stations increased. Since 1971 the police service has suffered a large number of casualties, with officers and constables killed and wounded in attacks by terrorists and insurgents. In more remote rural areas beyond the immediate range of existing police stations, enforcement against minor crimes is carried out by the Grama Seva Niladhari (village service officers), but this has now become rare, with most villages covered by new police stations.
In addition to its regular forces, the police service operated a reserve contingent until 2007 when the Reserve Police Force was disbanded and its personnel transferred to the regular police force. The police service has a number of specialised units responsible for investigative, protective, counter-terrorism and paramilitary functions.
Investigation of organised criminal activity and detective work are handled by the Criminal Investigation Department (CID) under the command of a Deputy Inspector General of Police (DIG). More coordinated threats to internal security, such as that posed by the radical Sinhalese JVP in the 1980s, were the responsibility of the Counter Subversive Division, which was primarily an investigative division and has since been replaced by the Terrorist Investigation Department (TID). The TID carries out investigations into terrorism and threats to internal security, including those posed by the LTTE.
Protective security units include the Ministerial Security Division (for elected public figures), the Diplomatic Security Division (for foreign diplomats) and the Judicial Security Division (for judges). The President's Security Division and the Prime Minister's Security Division function independently but consist mostly of police personnel.
Other specialised units include the Information Technology Division, the Mounted Division, the Anti-riot Squad, the Traffic Police, K9 units, the Marine Division, the Police Narcotic Bureau, and the Children & Women Bureau. The police service also operates the Sri Lanka Police College for personnel training and the Police Hospital.
Special Task Force
The Special Task Force is one of the special operational units of the Police Service. This police paramilitary force was set up on 1 March 1983 with the assistance of foreign advisers (primarily former British Special Air Service personnel under the auspices of Keeny Meeny Services). Its 1,100-member force was organised into seven companies and trained in counterinsurgency techniques. It played a major role in the government's combined-force operations against the Tamil Tigers in Eastern Province before July 1987. Following the signing of the Indo-Sri Lankan Accord, the Special Task Force was redesignated the Police Special Force and deployed in the Southern Province, where it immediately went into action against the JVP insurgency. Companies of the force also served in rotation as part of the presidential security guard.
Internal intelligence
Until 1984 the police were responsible for national (local) intelligence functions, first under the Special Branch (est. 1966 as part of the CID), and later under the Intelligence Services Division. The perceived failure of the Intelligence Services Division during the riots of July 1983 led the J.R. Jayawardene government to reevaluate the nation's intelligence network, and in 1984 the president set up a National Intelligence Bureau. The new organisation combined intelligence units from the army, navy, air force, and police. It was headed by a deputy inspector general of police who reported directly to the Ministry of Defence.
Specialised units and divisions
Protective units
President's Security Division
Prime Minister's Security Division
Ministerial Security Division
Parliament Police Division
Judicial Security Division
Diplomatic Security Division
Counter-terrorist units
Special Task Force (STF)
Terrorist Investigation Division (TID)
Crime-investigation units
Criminal Investigation Department (CID)
Colombo Crime Division
Police Narcotic Bureau
Financial Crimes Investigation Division (FCID)
Children & Women Bureau
Disappearances Division
Human Rights Division
Law enforcement
Traffic Police
Tourist Police
Anti-Riot Squad
Police Kennels (K9 units)
Ombudsman Division
Strategic Development Division (community policing)
Support units
Mounted Division
Marine Division
Sri Lanka Police Academy
Police Examination Division
Police Hospital, Colombo
Technology infrastructure
Police Information Technology Division
Police Communication Division
Police CCTV Division
Police Public Relations Division
Police Tell IGP Unit
Police 119 Call Center
Peacekeeping and international deployments
In recent years members of the Sri Lanka Police have taken part in international deployments either as advisers, observers or seconded police officers for United Nations missions. These include:
Since 2002, Sri Lankan Police personnel have taken part in several United Nations peacekeeping missions worldwide:
United Nations Mission of Support to East Timor
United Nations Stabilization Mission in Haiti
United Nations Mission in Sudan
United Nations Mission in Liberia
Special Task Force personnel have been assisting the Chinese police for the 2008 Beijing Olympics in dealing with possible terrorist threats.
Ranks
Senior Officers
Other Ranks
Recruitment
Recruitment to the police service is carried out at four stages. These stages are based upon the entry ranks and educational qualifications of the recruits.
Probationary Assistant Superintendent of Police - Male/female graduates (aged 22–26 years) may apply and must face an entrance exam.
Probationary Sub Inspector of Police - Males/females who have passed GCE Advanced Levels (aged 18–25 years) may apply and must face an endurance test and a written exam.
Police Constable - Males who have passed GCE Ordinary Levels (aged 18–25 years) may apply and must face an endurance test and a written exam.
Women Police Constable - Females who have passed GCE Ordinary Levels (aged 18–25 years) may apply and must face an endurance test and a written exam.
Police Constable Drivers - Those who complete up to grade 7 at school or higher with valid driving license (aged 19–35 years) may apply and must face an endurance test and a written exam.
Composition of the police service
Since its establishment in the 19th century, the police service has been a centrally controlled national police force; its personnel are therefore not recruited or deployed provincially. During the colonial period most of its senior officers were British, with the lower ranks made up of locals. However, this composition did not mirror the racial composition of the island: many of the locals in the Ceylon Police Force were Burghers, followed by Sinhalese and Tamils. This was common across the government sector and continued until the mid-1950s. Following political efforts to balance the racial composition of the police service to mirror that of society, and due to the civil war, the composition has become imbalanced once again, with the majority of officers being Sinhalese. Currently steps are being taken to address this, and personnel at all entry levels are recruited from all racial groups of the island.
Uniforms
Historical
With the establishment of the Ceylon Police in 1866, standard uniforms based on those of the British police forces were adopted. Officers of the grade of Inspector and above, who were mostly British, wore white colonial uniforms, which are still used today for ceremonial occasions. Constables wore dark blue tunics, shorts and a black round cap with a collar number. Khaki uniforms were adopted by the beginning of the 20th century for practical reasons, in line with other military and police units of the British Empire. This was common for all ranks, with constables wearing a khaki tunic, shorts and hat, and carrying a baton at all times until 1974.
Current
The current standard uniform derives from the last major changes made in 1974, although several additions have been made since then for practical reasons. The old white uniform remains the full-dress uniform of gazetted officers above the rank of sub inspector (SI), worn only for ceremonial occasions and weddings. It consists of a white tunic, trousers (or skirt) and medals, and is adorned with black epaulettes bearing rank insignia, a black leather cross belt with the lion-head badge, whistle and chain, a black leather pouch affixed with the police badge, a sword, and a white pith helmet. Senior gazetted officers (of and above the rank of ASP) may wear a gold waist sash instead of the cross belt. Mounted officers wear a red tunic for ceremonial occasions with a gold cross belt and a black custodian helmet. Gazetted officers above the rank of sub inspector (SI) carry swords, and constables carry lances with a police pennant.
The No.01 khaki uniform is worn for most formal occasions. It consists of a khaki jacket adorned with black epaulettes (for gazetted officers above the rank of sub inspector (SI)), a white shirt, a black tie, khaki trousers or a skirt, a black peaked cap and medals.
The No.02 khaki uniform is the normal working uniform of all police officers. It consists of a khaki shirt (long or short sleeved), khaki trousers or a skirt, a black peaked cap, and medal ribbons. Gazetted officers of and above the grade of superintendent wear black gorget patches on all types of uniform. Officers above the rank of sub inspector (SI) tend to wear a short-sleeved tunic resembling a bush jacket as part of their No.02 khaki uniform, along with a black Sam Browne belt; traffic policemen wear white peaked caps and white Sam Browne belts with their khaki uniforms. Constables and sergeants wear their service numbers on their uniforms. For practical reasons, overalls in green or black may be worn with boots when necessary.
Special Task Force personnel usually wear khaki uniforms which are slightly lighter in colour. They tend to wear DPM camouflage uniforms with boots and bright green berets.
Awards and decorations
The Sri Lanka Police has its own awards and decorations that are awarded to its officers for services in the line of duty.
Weapons
Sri Lanka Police officers normally do not carry weapons unless advised to. The Special Task Force, with its wide range of duties, is equipped with a greater variety of firearms and a higher degree of firepower to carry out military-type counter-terrorism operations.
Handguns
Glock 17
Beretta 92
Glock 19
Beretta M9 series pistols
Browning 9mm
Assault rifles
Type 56 assault rifles for ceremonial purposes
Type 56-2 assault rifles
M4 Carbine assault rifles
Sub-machine guns
H&K MP5 submachine guns
Uzi submachine gun
Sniper rifles
Heckler & Koch PSG1 sniper rifles
Grenade launchers
HK 69 breech-loading grenade launcher (to fire tear gas for riot control)
Vehicles
Hyundai Elantra, Volkswagen, Mazda and Subaru patrol cars
Mitsubishi Galant cars
Proton cars
Mazda BT-50 pick-ups
Tata Safari SUVs
Kawasaki 750cc motorcycles
Hero Honda 200cc motorcycles
Tata Sumo SUVs
Suzuki 500cc motorcycles
Mahindra Scorpio SUVs
Yamaha 600cc patrol bikes
Bicycles
Notable officers killed in the line of duty or assassinated
SDIG T.N. De Silva - Senior DIG Colombo Range, killed by a LTTE suicide bomb attack on 18 December 1999
DIG Bennet Perera - Director, Criminal Investigation Department (CID); shot dead on 1 May 1989 in Mount Lavinia; JVP suspected.
SSP Ranwalage Sirimal Perera - Superintendent of Police; killed along with President Premadasa in an LTTE suicide bomb attack on 1 May 1993
DIG Terrance Perera - Director, Counter Subversive Division; shot dead on 12 December 1987 in Talangama; JVP suspected.
DIG Upul Seneviratne - Director of Training, Special Task Force; killed in a roadside bombing on 7 August 2006, LTTE suspected
DIG Charles Wijewardene - Superintendent of Police, Jaffna; abducted and killed in Jaffna on 5 August 2005, LTTE suspected
Constable Sabhan - The origin of the annual Police Day commemoration dates back to 21 March 1864, when Constable Sabhan died of gunshot injuries received during a police raid to apprehend the notorious bandit Utuwankande Sura Saradiel.
See also
Law enforcement in Sri Lanka
Awards and decorations of the Sri Lanka Police
Home Guard Service
Department of Prisons
List of Sri Lankan mobsters
Vidane
Police community support officer
References
External links
Official website of Sri Lanka Police
Official website of National Police Commission
Official website of Ministry of Law and Order
Police Service
Gentoo Linux

Gentoo Linux is a Linux distribution built using the Portage package management system. Unlike a binary software distribution, the source code is compiled locally according to the user's preferences and is often optimized for the specific type of computer. Precompiled binaries are available for some larger packages or those with no available source code.
Gentoo Linux was named after the gentoo penguin, the fastest swimming species of penguin. The name was chosen to reflect the potential speed improvements of machine-specific optimization, which is a major feature of Gentoo. Gentoo package management is designed to be modular, portable, easy to maintain, and flexible. Gentoo describes itself as a meta-distribution because of its adaptability, in that the majority of users have configurations and sets of installed programs which are unique to the system and the applications they use.
History
Gentoo Linux was initially created by Daniel Robbins as the Enoch Linux distribution. The goal was to create a distribution without precompiled binaries that was tuned to the hardware and only included required programs. At least one version of Enoch was distributed: version 0.75, in December 1999.
Daniel Robbins and the other contributors experimented with a fork of GCC known as EGCS, developed by Cygnus Solutions. At this point, "Enoch" was renamed "Gentoo" Linux (the gentoo species is the fastest-swimming penguin). The modifications to EGCS eventually became part of the official GCC (version 2.95), and other Linux distributions experienced similar speed increases.
After problems with a bug on his own system, Robbins halted Gentoo development and switched to FreeBSD for several months, later saying, "I decided to add several FreeBSD features to make our autobuild system (now called Portage) a true next-generation ports system."
Gentoo Linux 1.0 was released on March 31, 2002. In 2004, Robbins set up the non-profit Gentoo Foundation, transferred all copyrights and trademarks to it, and stepped down as chief architect of the project.
The current board of trustees is composed of five members who were announced (following an election) on March 2, 2008. There is also a seven-member Gentoo Council that oversees the technical issues and policies of Gentoo. The Gentoo Council members are elected annually, for a period of one year, by the active Gentoo developers. When a member of the Council retires, the successor is voted into place by the existing Council members.
The Gentoo Foundation is a domestic non-profit corporation, registered in the State of New Mexico. In late 2007, the Foundation's charter was revoked, but by May 2008 the State of New Mexico declared that the Gentoo Foundation, Inc. had returned to good standing and was free to do business.
Features
Gentoo appeals to Linux users who want full control of the software that is installed and running on their computer. People who are prepared to invest the time required to configure and tune a Gentoo system can build very efficient desktops and servers. Gentoo encourages users to build a Linux kernel tailored to their particular hardware. It allows very fine control of which services are installed and running. Memory usage can also be reduced compared to other distributions by omitting unnecessary kernel features and services.
Gentoo's package repositories provide a large collection of software. Each package contains details of any dependencies, so only the minimum set of packages need to be installed. Optional features of individual packages, such as whether they require LDAP or Qt support, can be selected by the user and any resulting package requirements are automatically included in the set of dependencies.
As Gentoo does not impose a standard look and feel, installed packages usually appear as their authors intended.
Portage
Portage is Gentoo's software distribution and package management system. The original design was based on the ports system used by the Berkeley Software Distribution (BSD) operating systems. The Gentoo repository contains over 19,000 packages ready for installation in a Gentoo system.
A single invocation of portage's command can update the local copy of the Gentoo repository, search for a package, or download, compile, and install one or more packages and their dependencies. The built-in features can be set for individual packages, or globally, with so-called "USE flags".
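As an illustration of these single-command operations, a typical Portage session looks like the following sketch (the package chosen is just an example):

```shell
# Update the local copy of the Gentoo repository
emerge --sync

# Search the repository for a package by name
emerge --search vim

# Download, compile, and install a package and its dependencies,
# asking for confirmation (-a) and showing details (-v)
emerge -av app-editors/vim
```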
Pre-compiled binaries are provided for some applications with long build times, such as LibreOffice and Mozilla Firefox, but users lose the ability to customize optional features. There are configuration options to reduce compilation times, such as by enabling parallel compilation or using pipes instead of temporary files. Package compilation may also be distributed over multiple computers. Additionally, the user may be able to mount a large filesystem in memory to further speed up the process of building packages. Some approaches have drawbacks and are not enabled by default. When installing the same package on multiple computers with sufficiently similar hardware, the package may be compiled once and a binary package created for quick installation on the other computers.
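These build-time options are typically collected in Portage's main configuration file, /etc/portage/make.conf. A sketch of such a file follows; the specific flag values are illustrative, not recommendations:

```shell
# /etc/portage/make.conf (illustrative fragment)

# Global USE flags: enable Qt support, disable LDAP support
USE="qt5 -ldap"

# Pass -j4 to make for parallel compilation, reducing build times
MAKEOPTS="-j4"

# Create a binary package from every build, so machines with
# similar hardware can install it quickly with `emerge --usepkg`
FEATURES="buildpkg"
```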
Portability
As Gentoo is a source-based distribution with a repository describing how to build the packages, adding instructions to build on different machine architectures is particularly easy.
Originally built on the IA-32 architecture, Gentoo has since been ported to many others. It is officially supported and considered stable on IA-32, x86-64, IA-64, PA-RISC, 32-bit and 64-bit PowerPC, 64-bit SPARC, DEC Alpha, and both 32- and 64-bit ARM architectures. It is also officially supported, but considered to be in a development state, on MIPS, the PS3 Cell processor, and System Z/s390. Official support for 32-bit SPARC hardware and SuperH has been dropped.
Portability towards other operating systems, such as those derived from BSD, including macOS, is under active development by the Gentoo/Alt project. The Gentoo/FreeBSD project already has a working guide based on FreeSBIE, while Gentoo/NetBSD, Gentoo/OpenBSD and Gentoo/DragonFly are being developed. There is also a project to get Portage working on OpenSolaris. There was an unofficial project to create a Gentoo port to GNU Hurd, but it has been inactive since 2006.
It is also possible to install a Gentoo Prefix (provided by a project that maintains alternative installation methods for Gentoo) in a Cygwin environment on Windows, but this configuration is experimental.
Installation
Gentoo may be installed in several ways. The most common way is to use the Gentoo minimal CD with a stage3 tarball (explained below). As with many Linux distributions, Gentoo may be installed from almost any Linux environment, such as another Linux distribution's Live CD, Live USB, or Network Booting using the "Gentoo Alternative Install Guide". A normal install requires a connection to the Internet, but there is also a guide for a network-less install.
Previously, Gentoo supported installation from stage1 and stage2 tarballs; however, the Gentoo Foundation no longer recommends them. Stage1 and stage2 are meant only for Gentoo developers.
Following the initial install steps, the Gentoo Linux install process in the Gentoo Handbook describes compiling a new Linux kernel. This process is generally not required by other Linux distributions. Although this is widely regarded as a complex task, Gentoo provides documentation and tools such as Genkernel to simplify the process. In addition, users may also use an existing kernel known to work on their system by simply copying it to the boot directory, or installing one of the provided pre-compiled kernel packages, and updating their bootloader. Support for installation is provided on the Gentoo forums, Reddit, and IRC.
A Live USB of Gentoo Linux can be created manually, by using UNetbootin or with dd as described in the handbook.
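As a sketch of the dd method mentioned above (the ISO filename and target device are placeholders; writing to the wrong device destroys its contents):

```shell
# Write a Gentoo minimal installation ISO to a USB stick.
# /dev/sdX is a placeholder -- verify the actual device with `lsblk` first.
dd if=install-amd64-minimal.iso of=/dev/sdX bs=4M status=progress

# Flush write buffers before removing the drive
sync
```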
Stages
Before October 2005, installation could be started from any of three base stages:
Stage1 begins with only what is necessary to build a toolchain (the various compilers, linkers, and language libraries necessary to compile other software) for the target system; compiling this target toolchain from another, pre-existing host system is known as bootstrapping the target system.
Stage2 begins with a self-hosting (bootstrapped) toolchain for the target system, which is then used to compile all other core userland software for the target.
Stage3 begins with a minimal set of compiled user software, with which the kernel and any other additional software are then configured and compiled.
Since October 2005, only the stage3 installations have been officially supported, due to the inherent complexities of bootstrapping from earlier stages (which requires resolving and then breaking numerous circular dependencies). Tarballs for stage1 and stage2 were distributed for some time after this, although the instructions for installing from these stages had been removed from the handbook and moved into the Gentoo FAQ. Currently, only the supported stage3 tarballs are publicly available; stage1 and stage2 tarballs are only "officially" generated and used internally by Gentoo development teams. However, if so desired, a user may still rebuild the toolchain or reinstall the base system software during or after a normal stage3 installation, effectively simulating the old bootstrap process.
Gentoo Reference Platform
From 2003 until 2008, the Gentoo Reference Platform (GRP) was a snapshot of prebuilt packages that users could quickly install during the Gentoo installation process, to give faster access to a fully functional Gentoo installation. These packages included KDE, X Window System, OpenOffice, GNOME, and Mozilla. Once the installation was complete, the packages installed as part of the GRP were intended to be replaced by the user with the same or newer versions built through Portage that would be built using the user's system configuration rather than the generic builds provided by the GRP. As of 2011, the GRP is discontinued, the final reference to it appearing in the 2008.0 handbook.
Versions
Gentoo follows a rolling release model.
Like other Linux distributions, Gentoo systems have an /etc/gentoo-release file, but this contains the version of the installed sys-apps/baselayout package.
In 2004, Gentoo began to version its Live media by year rather than numerically. This continued until 2008, when it was announced that the 2008.1 Live CD release had been cancelled in favour of weekly automated builds of both Stages 3 and Minimal CDs. On December 20, 2008, the first weekly builds were published. In 2009, a special Live DVD was created to celebrate the Gentoo 10-year anniversary.
Release media version history
Special releases
In 2009, a special Live DVD was released to celebrate Gentoo's tenth anniversary. Initially planned as a one-off, the Live DVD was updated to the latest package versions in 2011 due to its popularity among new users.
Profiles
Although Gentoo does not have a concept of versioning the entire system, it does make use of "profiles", which define build configuration for all packages in the system. Major changes, such as changing the layout of how files are installed across the entire system, typically involve a profile upgrade and may require rebuilding all installed software. These profiles are versioned based on the year they were released, and include several variants for each release targeted towards different types of systems (such as servers and desktops). Profiles formerly tracked the versioning of install media, and switched to two-digit year naming after the discontinuation of versioned media. The following new profile versions have been released after 2008.0:
Hardened Gentoo
Hardened Gentoo is a project designed to develop and designate a set of add-ons that are useful when a more security focused installation is required. Previously, the project included patches to produce a hardened kernel, but these were discontinued. Other parts of the hardened set, such as SELinux, and userspace hardening remain.
Incidents
In June 2018 the Gentoo GitHub code repository mirror used mainly by developers was hacked after an attacker gained access to an organization administrator's account via deducing the password. Gentoo promptly responded by containing the attack and improving security practices. No Gentoo cryptography keys or signed packages were compromised, and the repository was restored after five days.
Logo and mascots
The gentoo penguin is thought to be the fastest underwater-swimming penguin. The name "Gentoo Linux" acknowledges both the Linux mascot, a penguin called Tux, and the project's aim to produce a high-performance operating system.
The official Gentoo logo is a stylized 'g' resembling a silver magatama. Unofficial mascots include Larry The Cow and Znurt the Flying Saucer.
Derived distributions
There are a number of independently developed variants of Gentoo Linux, including Chromium OS and Container Linux.
See also
GoboLinux
Linux From Scratch
Lunar Linux
Source Mage
References
External links
Official documentation
, allowing collaboration of developers and users
, lists all packages currently available in the Gentoo repository
2002 software
PowerPC operating systems
Source-based Linux distributions
X86-64 Linux distributions
Linux distributions without systemd
PowerPC Linux distributions
Rolling Release Linux distributions
Linux distributions
Secure Shell

The Secure Shell Protocol (SSH) is a cryptographic network protocol for operating network services securely over an unsecured network. Its most notable applications are remote login and command-line execution.
SSH applications are based on a client–server architecture, connecting an SSH client instance with an SSH server. SSH operates as a layered protocol suite comprising three principal hierarchical components: the transport layer provides server authentication, confidentiality, and integrity; the user authentication protocol validates the user to the server; and the connection protocol multiplexes the encrypted tunnel into multiple logical communication channels.
SSH was designed on Unix-like operating systems, as a replacement for Telnet and for unsecured remote Unix shell protocols, such as the Berkeley Remote Shell (rsh) and the related rlogin and rexec protocols, which all use insecure, plaintext transmission of authentication tokens.
SSH was first designed in 1995 by Finnish computer scientist Tatu Ylönen. Subsequent development of the protocol suite proceeded in several developer groups, producing several variants of implementation. The protocol specification distinguishes two major versions, referred to as SSH-1 and SSH-2. The most commonly implemented software track is OpenSSH, released in 1999 as open-source software by the OpenBSD developers. Implementations are distributed for all types of operating systems in common use, including embedded systems.
Definition
SSH uses public-key cryptography to authenticate the remote computer and allow it to authenticate the user, if necessary.
SSH may be used in several methodologies. In the simplest manner, both ends of a communication channel use automatically generated public-private key pairs to encrypt a network connection, and then use a password to authenticate the user.
When the public-private key pair is generated by the user manually, the authentication is essentially performed when the key pair is created, and a session may then be opened automatically without a password prompt. In this scenario, the public key is placed on all computers that must allow access to the owner of the matching private key, which the owner keeps private. While authentication is based on the private key, the key is never transferred through the network during authentication. SSH only verifies that the same person offering the public key also owns the matching private key.
In all versions of SSH it is important to verify unknown public keys, i.e. associate the public keys with identities, before accepting them as valid. Accepting an attacker's public key without validation will authorize an unauthorized attacker as a valid user.
Authentication: OpenSSH key management
On Unix-like systems, the list of authorized public keys is typically stored in the home directory of the user that is allowed to log in remotely, in the file ~/.ssh/authorized_keys. This file is respected by SSH only if it is not writable by anything apart from the owner and root. When the public key is present on the remote end and the matching private key is present on the local end, typing in the password is no longer required. However, for additional security the private key itself can be locked with a passphrase.
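The permission rule described above is, in effect, OpenSSH's StrictModes check. It can be sketched in Python; the function below is a simplification of what sshd actually enforces (sshd also checks the containing directories):

```python
import os
import stat
import tempfile

def authorized_keys_trusted(path):
    """Simplified StrictModes-style check: the file must not be
    writable by anyone other than its owner, i.e. the group-write
    and other-write permission bits must both be clear."""
    mode = os.stat(path).st_mode
    return not (mode & (stat.S_IWGRP | stat.S_IWOTH))

# Demonstration with a temporary stand-in for ~/.ssh/authorized_keys
fd, path = tempfile.mkstemp()
os.close(fd)

os.chmod(path, 0o600)                   # owner read/write only
print(authorized_keys_trusted(path))    # True

os.chmod(path, 0o666)                   # group/world writable: rejected
print(authorized_keys_trusted(path))    # False

os.remove(path)
```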
The private key can also be looked for in standard places, and its full path can be specified as a command line setting (the -i option for ssh). The ssh-keygen utility produces the public and private keys, always in pairs.
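With OpenSSH, generating such a key pair looks like the following sketch (the file path is a placeholder, and the empty passphrase is purely for illustration; a passphrase is recommended in practice):

```shell
# Generate an Ed25519 key pair non-interactively
ssh-keygen -t ed25519 -N "" -f /tmp/demo_key -q

# The utility always produces the keys in pairs:
# the private key and the matching .pub public key
ls /tmp/demo_key /tmp/demo_key.pub
```

The public key would then typically be installed on the remote host, for example with ssh-copy-id user@host, after which the private key remains on the local machine and is never sent over the network.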
SSH also supports password-based authentication that is encrypted by automatically generated keys. In this case, the attacker could imitate the legitimate server side, ask for the password, and obtain it (man-in-the-middle attack). However, this is possible only if the two sides have never authenticated before, as SSH remembers the key that the server side previously used. The SSH client raises a warning before accepting the key of a new, previously unknown server. Password authentication can be disabled from the server side.
Usage
SSH is typically used to log into a remote machine and execute commands, but it also supports tunneling, forwarding TCP ports and X11 connections; it can transfer files using the associated SSH file transfer (SFTP) or secure copy (SCP) protocols. SSH uses the client–server model.
An SSH client program is typically used for establishing connections to an SSH daemon accepting remote connections. Both are commonly present on most modern operating systems, including macOS, most distributions of Linux, OpenBSD, FreeBSD, NetBSD, Solaris and OpenVMS. Notably, versions of Windows prior to Windows 10 version 1709 do not include SSH by default. Proprietary, freeware and open source (e.g. PuTTY, and the version of OpenSSH which is part of Cygwin) versions of various levels of complexity and completeness exist. File managers for UNIX-like systems (e.g. Konqueror) can use the FISH protocol to provide a split-pane GUI with drag-and-drop. The open source Windows program WinSCP provides similar file management (synchronization, copy, remote delete) capability using PuTTY as a back-end. Both WinSCP and PuTTY are available packaged to run directly off a USB drive, without requiring installation on the client machine. Setting up an SSH server in Windows typically involves enabling a feature in the Settings app. In Windows 10 version 1709, an official Win32 port of OpenSSH is available.
SSH is important in cloud computing to solve connectivity problems, avoiding the security issues of exposing a cloud-based virtual machine directly on the Internet. An SSH tunnel can provide a secure path over the Internet, through a firewall to a virtual machine.
The IANA has assigned TCP port 22, UDP port 22 and SCTP port 22 for this protocol. IANA had listed the standard TCP port 22 for SSH servers as one of the well-known ports as early as 2001. SSH can also be run using SCTP rather than TCP as the connection-oriented transport layer protocol.
Historical development
Version 1
In 1995, Tatu Ylönen, a researcher at Helsinki University of Technology, Finland, designed the first version of the protocol (now called SSH-1) prompted by a password-sniffing attack at his university network. The goal of SSH was to replace the earlier rlogin, TELNET, FTP and rsh protocols, which did not provide strong authentication nor guarantee confidentiality. Ylönen released his implementation as freeware in July 1995, and the tool quickly gained in popularity. Towards the end of 1995, the SSH user base had grown to 20,000 users in fifty countries.
In December 1995, Ylönen founded SSH Communications Security to market and develop SSH. The original version of the SSH software used various pieces of free software, such as GNU libgmp, but later versions released by SSH Communications Security evolved into increasingly proprietary software.
It was estimated that by the year 2000 the number of users had grown to 2 million.
Version 2
"Secsh" was the official Internet Engineering Task Force's (IETF) name for the IETF working group responsible for version 2 of the SSH protocol. In 2006, a revised version of the protocol, SSH-2, was adopted as a standard. This version is incompatible with SSH-1. SSH-2 features both security and feature improvements over SSH-1. Better security, for example, comes through Diffie–Hellman key exchange and strong integrity checking via message authentication codes. New features of SSH-2 include the ability to run any number of shell sessions over a single SSH connection. Due to SSH-2's superiority and popularity over SSH-1, some implementations such as libssh (v0.8.0+), Lsh and Dropbear support only the SSH-2 protocol.
Version 1.99
In January 2006, well after version 2.1 was established, RFC 4253 specified that an SSH server supporting 2.0 as well as prior versions should identify its protocol version as 1.99. This version number does not reflect a historical software revision, but a method to identify backward compatibility.
OpenSSH and OSSH
In 1999, developers, desiring availability of a free software version, restarted software development from the 1.2.12 release of the original SSH program, which was the last released under an open source license. This served as a code base for Björn Grönvall's OSSH software. Shortly thereafter, OpenBSD developers forked Grönvall's code and created OpenSSH, which shipped with Release 2.6 of OpenBSD. From this version, a "portability" branch was formed to port OpenSSH to other operating systems.
OpenSSH became the single most popular SSH implementation, serving as the default version in a large number of operating system distributions. OSSH meanwhile has become obsolete. OpenSSH continues to be maintained and supports the SSH-2 protocol, having expunged SSH-1 support from the codebase in the OpenSSH 7.6 release.
Uses
SSH is a protocol that can be used for many applications across many platforms including most Unix variants (Linux, the BSDs including Apple's macOS, and Solaris), as well as Microsoft Windows. Some of the applications below may require features that are only available or compatible with specific SSH clients or servers. For example, using the SSH protocol to implement a VPN is possible, but presently only with the OpenSSH server and client implementation.
For login to a shell on a remote host (replacing Telnet and rlogin)
For executing a single command on a remote host (replacing rsh)
For setting up automatic (passwordless) login to a remote server (for example, using OpenSSH)
In combination with rsync to back up, copy and mirror files efficiently and securely
For forwarding a port
For tunneling (not to be confused with a VPN, which routes packets between different networks, or bridges two broadcast domains into one).
For use as a full-fledged encrypted VPN. Note that only the OpenSSH server and client support this feature.
For forwarding X from a remote host (possible through multiple intermediate hosts)
For browsing the web through an encrypted proxy connection with SSH clients that support the SOCKS protocol.
For securely mounting a directory on a remote server as a filesystem on a local computer using SSHFS.
For automated remote monitoring and management of servers through one or more of the mechanisms discussed above.
For development on a mobile or embedded device that supports SSH.
For securing file transfer protocols.
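Several of the uses above map directly onto OpenSSH command-line flags. A hedged sketch follows; the user and host names are illustrative, and -G makes ssh print its resolved configuration and exit rather than connect, so the examples can be checked without a server (drop -G to connect for real):

```shell
command -v ssh >/dev/null || { echo "openssh not installed"; exit 0; }

ssh -G alice@server.example.com >/dev/null                       # interactive shell login
ssh -G alice@server.example.com uptime >/dev/null                # single remote command
ssh -G -L 8080:localhost:80 alice@server.example.com >/dev/null  # forward local port 8080
ssh -G -D 1080 alice@server.example.com >/dev/null               # SOCKS proxy on port 1080
ssh -G -X alice@server.example.com >/dev/null                    # X11 forwarding
echo "all examples parsed"
```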
File transfer protocols
The Secure Shell protocols are used in several file transfer mechanisms.
Secure copy (SCP), which evolved from RCP protocol over SSH
rsync, intended to be more efficient than SCP. Generally runs over an SSH connection.
SSH File Transfer Protocol (SFTP), a secure alternative to FTP (not to be confused with FTP over SSH or FTPS)
Files transferred over shell protocol (a.k.a. FISH), released in 1998, which evolved from Unix shell commands over SSH
Fast and Secure Protocol (FASP), aka Aspera, uses SSH for control and UDP ports for data transfer.
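The typical command-line forms of these transfer tools look like the following sketch. The remote invocations need a reachable server and are shown as comments with made-up names; the live lines use local paths only, which scp and rsync also accept:

```shell
# Remote forms (illustrative; require a reachable SSH server):
#   scp report.pdf alice@server.example.com:backups/       # copy over SSH
#   sftp alice@server.example.com                          # interactive SFTP session
#   rsync -avz -e ssh src/ alice@server.example.com:dst/   # efficient mirror over SSH
command -v scp >/dev/null || { echo "openssh not installed"; exit 0; }
echo "payload" > /tmp/sftp_demo_src.txt
scp -q /tmp/sftp_demo_src.txt /tmp/sftp_demo_copy.txt      # local-to-local copy
cmp /tmp/sftp_demo_src.txt /tmp/sftp_demo_copy.txt && echo "copied"
```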
Architecture
The SSH protocol has a layered architecture with three separate components:
The transport layer (RFC 4253) typically uses the Transmission Control Protocol (TCP) of TCP/IP, reserving port number 22 as a server listening port. This layer handles initial key exchange as well as server authentication, and sets up encryption, compression, and integrity verification. It exposes to the upper layer an interface for sending and receiving plaintext packets with a size of up to 32,768 bytes each, but more can be allowed by each implementation. The transport layer also arranges for key re-exchange, usually after 1 GB of data has been transferred or after one hour has passed, whichever occurs first.
The user authentication layer (RFC 4252) handles client authentication, and provides a suite of authentication algorithms. Authentication is client-driven: when one is prompted for a password, it may be the SSH client prompting, not the server. The server merely responds to the client's authentication requests. Widely used user-authentication methods include the following:
password: a method for straightforward password authentication, including a facility allowing a password to be changed. Not all programs implement this method.
publickey: a method for public-key-based authentication, usually supporting at least DSA, ECDSA or RSA keypairs, with other implementations also supporting X.509 certificates.
keyboard-interactive (RFC 4256): a versatile method where the server sends one or more prompts to enter information and the client displays them and sends back responses keyed-in by the user. Used to provide one-time password authentication such as S/Key or SecurID. Used by some OpenSSH configurations when PAM is the underlying host-authentication provider to effectively provide password authentication, sometimes leading to inability to log in with a client that supports just the plain password authentication method.
GSSAPI authentication methods which provide an extensible scheme to perform SSH authentication using external mechanisms such as Kerberos 5 or NTLM, providing single sign-on capability to SSH sessions. These methods are usually implemented by commercial SSH implementations for use in organizations, though OpenSSH does have a working GSSAPI implementation.
The connection layer (RFC 4254) defines the concept of channels, channel requests, and global requests, which define the SSH services provided. A single SSH connection can be multiplexed into multiple logical channels simultaneously, each transferring data bidirectionally. Channel requests are used to relay out-of-band channel-specific data, such as the changed size of a terminal window, or the exit code of a server-side process. Additionally, each channel performs its own flow control using the receive window size. The SSH client requests a server-side port to be forwarded using a global request. Standard channel types include:
shell for terminal shells, SFTP and exec requests (including SCP transfers)
direct-tcpip for client-to-server forwarded connections
forwarded-tcpip for server-to-client forwarded connections
The SSHFP DNS record (RFC 4255) provides the public host key fingerprints in order to aid in verifying the authenticity of the host.
This open architecture provides considerable flexibility, allowing the use of SSH for a variety of purposes beyond a secure shell. The functionality of the transport layer alone is comparable to Transport Layer Security (TLS); the user-authentication layer is highly extensible with custom authentication methods; and the connection layer provides the ability to multiplex many secondary sessions into a single SSH connection, a feature comparable to BEEP and not available in TLS.
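In OpenSSH, the connection layer's multiplexing surfaces to users as connection sharing: with ControlMaster enabled, later sessions to the same host become additional channels over the first connection rather than new TCP connections. A hedged configuration sketch (host name and paths illustrative):

```shell
# Write a connection-sharing stanza to a standalone config file:
cat > /tmp/ssh_mux_config <<'EOF'
Host server.example.com
    ControlMaster auto
    ControlPath /tmp/cm-%r@%h-%p
    ControlPersist 10m
EOF
# Verify ssh accepts the options; -G resolves the config without connecting:
if command -v ssh >/dev/null; then
  ssh -G -F /tmp/ssh_mux_config alice@server.example.com | grep -i '^controlmaster'
else
  echo "openssh not installed"
fi
```

With this in place, the first `ssh server.example.com` opens the transport, and subsequent sessions within ten minutes reuse it as extra channels.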
Algorithms
EdDSA, ECDSA, RSA and DSA for public-key cryptography.
ECDH and Diffie–Hellman for key exchange.
HMAC, AEAD and UMAC for MAC.
AES (and deprecated RC4, 3DES, DES) for symmetric encryption.
AES-GCM and ChaCha20-Poly1305 for AEAD encryption.
SHA (and deprecated MD5) for key fingerprint.
Vulnerabilities
SSH-1
In 1998, a vulnerability was described in SSH 1.5 which allowed the unauthorized insertion of content into an encrypted SSH stream due to insufficient data integrity protection from CRC-32 used in this version of the protocol. A fix known as SSH Compensation Attack Detector was introduced into most implementations. Many of these updated implementations contained a new integer overflow vulnerability that allowed attackers to execute arbitrary code with the privileges of the SSH daemon, typically root.
In January 2001 a vulnerability was discovered that allows attackers to modify the last block of an IDEA-encrypted session. The same month, another vulnerability was discovered that allowed a malicious server to forward a client authentication to another server.
Since SSH-1 has inherent design flaws which make it vulnerable, it is now generally considered obsolete and should be avoided by explicitly disabling fallback to SSH-1. Most modern servers and clients support SSH-2.
CBC plaintext recovery
In November 2008, a theoretical vulnerability was discovered for all versions of SSH which allowed recovery of up to 32 bits of plaintext from a block of ciphertext that was encrypted using what was then the standard default encryption mode, CBC. The most straightforward solution is to use CTR, counter mode, instead of CBC mode, since this renders SSH resistant to the attack.
Suspected decryption by NSA
On December 28, 2014 Der Spiegel published classified information leaked by whistleblower Edward Snowden which suggests that the National Security Agency may be able to decrypt some SSH traffic. The technical details associated with such a process were not disclosed. A 2017 analysis of the CIA hacking tools BothanSpy and Gyrfalcon suggested that the SSH protocol was not compromised.
Standards documentation
The following RFC publications by the IETF "secsh" working group document SSH-2 as a proposed Internet standard.
– The Secure Shell (SSH) Protocol Assigned Numbers
– The Secure Shell (SSH) Protocol Architecture
– The Secure Shell (SSH) Authentication Protocol
– The Secure Shell (SSH) Transport Layer Protocol
– The Secure Shell (SSH) Connection Protocol
– Using DNS to Securely Publish Secure Shell (SSH) Key Fingerprints
– Generic Message Exchange Authentication for the Secure Shell Protocol (SSH)
– The Secure Shell (SSH) Session Channel Break Extension
– The Secure Shell (SSH) Transport Layer Encryption Modes
– Improved Arcfour Modes for the Secure Shell (SSH) Transport Layer Protocol
The protocol specifications were later updated by the following publications:
– Diffie-Hellman Group Exchange for the Secure Shell (SSH) Transport Layer Protocol (March 2006)
– RSA Key Exchange for the Secure Shell (SSH) Transport Layer Protocol (March 2006)
– Generic Security Service Application Program Interface (GSS-API) Authentication and Key Exchange for the Secure Shell (SSH) Protocol (May 2006)
– The Secure Shell (SSH) Public Key File Format (November 2006)
– Secure Shell Public Key Subsystem (March 2007)
– AES Galois Counter Mode for the Secure Shell Transport Layer Protocol (August 2009)
– Elliptic Curve Algorithm Integration in the Secure Shell Transport Layer (December 2009)
– X.509v3 Certificates for Secure Shell Authentication (March 2011)
– Suite B Cryptographic Suites for Secure Shell (SSH) (May 2011)
– Use of the SHA-256 Algorithm with RSA, Digital Signature Algorithm (DSA), and Elliptic Curve DSA (ECDSA) in SSHFP Resource Records (April 2012)
– SHA-2 Data Integrity Verification for the Secure Shell (SSH) Transport Layer Protocol (July 2012)
– Ed25519 SSHFP Resource Records (March 2015)
– Secure Shell Transport Model for the Simple Network Management Protocol (SNMP) (June 2009)
– Using the NETCONF Protocol over Secure Shell (SSH) (June 2011)
draft-gerhards-syslog-transport-ssh-00 – SSH transport mapping for SYSLOG (July 2006)
draft-ietf-secsh-filexfer-13 – SSH File Transfer Protocol (July 2006)
In addition, the OpenSSH project includes several vendor protocol specifications/extensions:
OpenSSH PROTOCOL overview
OpenSSH certificate/key overview
draft-miller-ssh-agent-04 - SSH Agent Protocol (December 2019)
See also
Brute-force attack
Comparison of SSH clients
Comparison of SSH servers
Corkscrew
Ident
OpenSSH
Secure Shell tunneling
Web-based SSH
References
Further reading
Original announcement of Ssh
External links
SSH Protocols
Application layer protocols
Finnish inventions |
32961506 | https://en.wikipedia.org/wiki/RDRAND | RDRAND | RDRAND (for "read random"; known as Intel Secure Key Technology, previously known as Bull Mountain) is an instruction for returning random numbers from an Intel on-chip hardware random number generator which has been seeded by an on-chip entropy source. RDRAND is available in Ivy Bridge processors and is part of the Intel 64 and IA-32 instruction set architectures. AMD added support for the instruction in June 2015.
The random number generator is compliant with security and cryptographic standards such as NIST SP 800-90A, FIPS 140-2, and ANSI X9.82. Intel also requested Cryptography Research Inc. to review the random number generator in 2012, which resulted in the paper Analysis of Intel's Ivy Bridge Digital Random Number Generator.
RDSEED is similar to RDRAND and provides lower-level access to the entropy-generating hardware. The RDSEED generator and processor instruction rdseed are available with Intel Broadwell CPUs and AMD Zen CPUs.
Overview
The CPUID instruction can be used on both AMD and Intel CPUs to check whether the RDRAND instruction is supported. If it is, bit 30 of the ECX register is set after calling CPUID standard function 01H. AMD processors are checked for the feature using the same test. RDSEED availability can be checked on Intel CPUs in a similar manner. If RDSEED is supported, bit 18 of the EBX register is set after calling CPUID standard function 07H.
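On Linux, the kernel performs these CPUID queries at boot and republishes the results as flag strings in /proc/cpuinfo, so support can be checked from a shell without writing CPUID code. A hedged, Linux-and-x86-only sketch:

```shell
# Each flag below is derived by the kernel from the CPUID bits described above.
if grep -qw rdrand /proc/cpuinfo 2>/dev/null; then echo "rdrand: yes"; else echo "rdrand: no"; fi
if grep -qw rdseed /proc/cpuinfo 2>/dev/null; then echo "rdseed: yes"; else echo "rdseed: no"; fi
```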
The opcode for RDRAND is 0x0F 0xC7, followed by a ModRM byte that specifies the destination register and optionally combined with a REX prefix in 64-bit mode.
Intel Secure Key is Intel's name for both the RDRAND instruction and the underlying random number generator (RNG) hardware implementation, which was codenamed "Bull Mountain" during development. Intel calls their RNG a "digital random number generator" or DRNG. The generator takes pairs of 256-bit raw entropy samples generated by the hardware entropy source and applies them to an Advanced Encryption Standard (AES) (in CBC-MAC mode) conditioner which reduces them to a single 256-bit conditioned entropy sample. A deterministic random-bit generator called CTR_DRBG defined in NIST SP 800-90A is seeded by the output from the conditioner, providing cryptographically secure random numbers to applications requesting them via the RDRAND instruction. The hardware will issue a maximum of 511 128-bit samples before changing the seed value. Using the RDSEED operation provides access to the conditioned 256-bit samples from the AES-CBC-MAC.
The RDSEED instruction was added to Intel Secure Key for seeding another pseudorandom number generator, available in Broadwell CPUs. The entropy source for the RDSEED instruction runs asynchronously on a self-timed circuit and uses thermal noise within the silicon to output a random stream of bits at the rate of 3 GHz, slower than the effective 6.4 Gbit/s obtainable from RDRAND (both rates are shared between all cores and threads). The RDSEED instruction is intended for seeding a software PRNG of arbitrary width, whereas the RDRAND is intended for applications that merely require high-quality random numbers. If cryptographic security is not required, a software PRNG such as Xorshift is usually faster.
Performance
On an Intel Core i7-7700K, 4500 MHz (45 × 100 MHz) processor (Kaby Lake-S microarchitecture), a single RDRAND or RDSEED instruction takes 110 ns, or 463 clock cycles, regardless of the operand size (16/32/64 bits). This number of clock cycles applies to all processors with Skylake or Kaby Lake microarchitecture. On the Silvermont microarchitecture processors, each of the instructions take around 1472 clock cycles, regardless of the operand size; and on Ivy Bridge processors RDRAND takes up to 117 clock cycles.
On an AMD Ryzen CPU, each of the instructions takes around 1200 clock cycles for 16-bit or 32-bit operand, and around 2500 clock cycles for a 64-bit operand.
An astrophysical Monte Carlo simulator examined the time to generate 10^7 64-bit random numbers using RDRAND on a quad-core Intel i7-3740QM processor. They found that a C implementation of RDRAND ran about 2× slower than the default random number generator in C, and about 20× slower than the Mersenne Twister. Although a Python module of RDRAND has been constructed, it was found to be 20× slower than the default random number generator in Python, though a direct performance comparison between a PRNG and a CSPRNG cannot be made.
A microcode update released by Intel in June 2020, designed to mitigate the CrossTalk vulnerability (see the security issues section below), negatively impacts the performance of RDRAND and RDSEED due to additional security controls. On processors with the mitigations applied, each affected instruction incurs additional latency and simultaneous execution of RDRAND or RDSEED across cores is effectively serialised. Intel introduced a mechanism to relax these security checks, thus reducing the performance impact in most scenarios, but Intel processors do not apply this security relaxation by default.
Compilers
Visual C++ 2015 provides intrinsic wrapper support for the RDRAND and RDSEED functions. GCC 4.6+ and Clang 3.2+ provide intrinsic functions for RDRAND when -mrdrnd is specified in the flags, also defining a preprocessor macro to allow conditional compilation. Newer versions additionally provide immintrin.h to wrap these built-ins into functions compatible with version 12.1+ of Intel's C Compiler. These functions write random data to the location pointed to by their parameter, and return 1 on success.
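As a hedged illustration of the GCC/Clang path described above, the following writes a minimal C program around the _rdrand64_step intrinsic, builds it with -mrdrnd, and runs it. All file paths are for demonstration only, and the build is skipped gracefully where gcc or the rdrand feature is unavailable:

```shell
cat > /tmp/rdrand_demo.c <<'EOF'
#include <stdio.h>
#include <immintrin.h>
int main(void) {
    unsigned long long r;
    /* _rdrand64_step returns 1 on success and stores the value in r */
    if (_rdrand64_step(&r)) printf("random: %llu\n", r);
    else puts("no value ready");   /* rare transient failure */
    return 0;
}
EOF
if command -v gcc >/dev/null && grep -qw rdrand /proc/cpuinfo 2>/dev/null; then
  gcc -mrdrnd -O2 -o /tmp/rdrand_demo /tmp/rdrand_demo.c && /tmp/rdrand_demo
else
  echo "gcc or rdrand unavailable; skipping build"
fi
```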
Applications
It is an option to generate cryptographically secure random numbers using RDRAND and RDSEED in OpenSSL, to help secure communications.
A scientific application of RDRAND can be found in astrophysics. Radio observations of low-mass stars and brown dwarfs have revealed that a number of them emit bursts of radio waves. These radio waves are caused by magnetic reconnection, the same process that causes solar flares on the Sun. RDRAND was used to generate large quantities of random numbers for a Monte Carlo simulator, to model physical properties of the brown dwarfs and the effects of the instruments that observe them. They found that about 5% of brown dwarfs are sufficiently magnetic to emit strong radio bursts. They also evaluated the performance of the RDRAND instruction in C and Python compared to other random number generators.
Reception
In September 2013, in response to a New York Times article revealing the NSA's effort to weaken encryption, Theodore Ts'o publicly posted concerning the use of RDRAND for /dev/random in the Linux kernel:
Linus Torvalds dismissed concerns about the use of RDRAND in the Linux kernel and pointed out that it is not used as the only source of entropy for /dev/random, but rather used to improve the entropy by combining the values received from RDRAND with other sources of randomness. However, Taylor Hornby of Defuse Security demonstrated that the Linux random number generator could become insecure if a backdoor is introduced into the RDRAND instruction that specifically targets the code using it. Hornby's proof-of-concept implementation works on an unmodified Linux kernel prior to version 3.13. The issue was mitigated in the Linux kernel in 2013.
Developers changed the FreeBSD kernel away from using RDRAND and VIA PadLock directly with the comment "For FreeBSD 10, we are going to backtrack and remove RDRAND and Padlock backends and feed them into Yarrow instead of delivering their output directly to /dev/random. It will still be possible to access hardware random number generators, that is, RDRAND, Padlock etc., directly by inline assembly or by using OpenSSL from userland, if required, but we cannot trust them any more." FreeBSD /dev/random uses Fortuna and RDRAND started from FreeBSD 11.
Security issues
On 9 June 2020, researchers from Vrije Universiteit Amsterdam published a side-channel attack named CrossTalk (CVE-2020-0543) that affected RDRAND on a number of Intel processors. They discovered that outputs from the hardware digital random number generator (DRNG) were stored in a staging buffer that was shared across all cores. The vulnerability allowed malicious code running on an affected processor to read RDRAND and RDSEED instruction results from a victim application running on another core of that same processor, including applications running inside Intel SGX enclaves. The researchers developed a proof-of-concept exploit which extracted a complete ECDSA key from an SGX enclave running on a separate CPU core after only one signature operation. The vulnerability affects scenarios where untrusted code runs alongside trusted code on the same processor, such as in a shared hosting environment.
Intel refers to the CrossTalk vulnerability as Special Register Buffer Data Sampling (SRBDS). In response to the research, Intel released microcode updates to mitigate the issue. The updated microcode ensures that off-core accesses are delayed until sensitive operations (specifically the RDRAND, RDSEED, and EGETKEY instructions) are completed and the staging buffer has been overwritten. The SRBDS attack also affects other instructions, such as those that read MSRs, but Intel did not apply additional security protections to them due to performance concerns and the reduced need for confidentiality of those instructions' results. A wide range of Intel processors released between 2012 and 2019 were affected, including desktop, mobile, and server processors. The mitigations themselves resulted in negative performance impacts when using the affected instructions, particularly when executed in parallel by multi-threaded applications, due to increased latency introduced by the security checks and the effective serialisation of affected instructions across cores. Intel introduced an opt-out option, configurable via the IA32_MCU_OPT_CTRL MSR on each logical processor, which improves performance by disabling the additional security checks for instructions executing outside of an SGX enclave.
See also
AES instruction set
Bullrun (decryption program)
OpenSSL
wolfSSL
Notes
References
External links
RdRand .NET Open Source Project
X86 microprocessors
X86 instructions
Machine code
Random number generation
X86 architecture |
43358530 | https://en.wikipedia.org/wiki/Zoom%20Video%20Communications | Zoom Video Communications | Zoom Video Communications, Inc. (commonly shortened to Zoom, and stylized as zoom) is an American communications technology company headquartered in San Jose, California. It provides videotelephony and online chat services through a cloud-based peer-to-peer software platform and is used for teleconferencing, telecommuting, distance education, and social relations.
Eric Yuan, a former Cisco engineer and executive, founded Zoom in 2011, and launched its software in 2013. Zoom's revenue growth, and perceived ease-of-use and reliability of its software, resulted in a $1 billion valuation in 2017, making it a "unicorn" company. The company first became profitable in 2019, and completed an initial public offering that year. The company joined the NASDAQ-100 stock index on April 30, 2020.
Beginning in early 2020, Zoom's software usage saw a remarkable global increase after quarantine measures were adopted in response to the COVID-19 pandemic. Its software products have faced public and media scrutiny related to security and privacy issues.
History
Early years
Zoom was founded by Eric Yuan, a former corporate vice president for Cisco Webex. He left Cisco in April 2011 with 40 engineers to start a new company, originally named Saasbee, Inc. The company had trouble finding investors because many people thought the videotelephony market was already saturated. In June 2011, the company raised $3 million of seed money from WebEx founder Subrah Iyar, former Cisco SVP and General Counsel Dan Scheinman, and venture capitalists Matt Ocko, TSVC, and Bill Tai.
In May 2012, the company changed its name to Zoom, influenced by Thacher Hurd's children's book Zoom City. In September 2012, Zoom launched a beta version that could host conferences with up to 15 video participants. In November 2012, the company signed Stanford University as its first customer. The service was launched in January 2013 after the company raised a $6 million Series A round from Qualcomm Ventures, Yahoo! founder Jerry Yang, WebEx founder Subrah Iyar, and former Cisco SVP and General Counsel Dan Scheinman. Zoom launched version 1.0 of the program, allowing up to 25 participants per conference. By the end of its first month, Zoom had 400,000 users and by May 2013 it had 1 million users.
Growth
In July 2013, Zoom established partnerships with B2B collaboration software providers, such as Redbooth (then Teambox), and also created a program named Works with Zoom, which established partnerships with Logitech, Vaddio, and InFocus. In September 2013, the company raised $6.5 million in a Series B round from Horizon Ventures, and existing investors. At that time, it had 3 million users. In April 2020, daily users increased to more than 200 million.
On February 4, 2015, the company received US$30 million in Series C funding from investors including Emergence Capital, Horizons Ventures (Li Ka-shing), Qualcomm Ventures, Jerry Yang, and Patrick Soon-Shiong. At that time, Zoom had 40 million users, with 65,000 organizations subscribed and a total of 1 billion meeting minutes since it was established. Over the course of 2015 and 2016, the company integrated its software with Slack, Salesforce, and Skype for Business. With version 2.5 in October 2015, Zoom increased the maximum number of participants allowed per conference to 50 and later to 1,000 for business customers. In November 2015, former president of RingCentral David Berman was named president of the company, and Peter Gassner, the founder and CEO of Veeva Systems, joined Zoom's board of directors.
In January 2017, the company raised US$100 million in Series D funding from Sequoia Capital at a US$1 billion valuation, making it a unicorn. In April 2017, Zoom launched a scalable telehealth product allowing doctors to host remote consultations with patients. In May, Zoom announced integration with Polycom's conferencing systems, enabling features such as multiple screen and device meetings, HD and wireless screen sharing, and calendar integration with Microsoft Outlook, Google Calendar, and iCal. From September 25–27, 2017, Zoom hosted Zoomtopia 2017, its first annual user conference. At this conference, Zoom announced a partnership with Meta to integrate Zoom with augmented reality, integration with Slack and Workplace by Facebook, and first steps towards an artificial intelligence speech recognition program.
IPO and onward
On April 18, 2019, the company became a public company via an initial public offering. After pricing at US$36 per share, the share price increased over 72% on the first day of trading. Prior to the IPO, Dropbox invested $5 million in Zoom.
During the COVID-19 pandemic, Zoom saw a major increase in usage for remote work, distance education, and online social relations. Thousands of educational institutions switched to online classes using Zoom during 2020 and 2021. The company offered its services free to K–12 schools in many countries. By February 2020, Zoom had gained 2.22 million users in 2020 – more users than it amassed in the entirety of 2019. On one day in March 2020, the Zoom app was downloaded 2.13 million times. Daily meeting participants rose from about 10 million in December 2019 to more than 300 million in April 2020.
On May 7, 2020, Zoom announced that it had acquired Keybase, a company specializing in end-to-end encryption. In June 2020, the company hired its first chief diversity officer, Damien Hooper-Campbell.
In July 2020, Zoom announced its first hardware as a service products, bundling its videoconferencing software with third-party hardware by DTEN, Neat, Poly, and Yealink, and running on the ServiceNow platform. It began with Zoom Rooms and Zoom Phone offerings, with those services available to US customers, who can acquire hardware from Zoom for a fixed monthly cost. On July 15, 2020, the company announced Zoom for Home, a line of products for home use, designed for remote workers. The first product, Zoom for Home - DTEN ME, includes software by Zoom and hardware by DTEN. It consists of a 27-inch screen with three wide-angle cameras and eight microphones, with Zoom software preloaded on the device. It became available in August 2020.
On July 3–4, using Zoom Webinar, the International Association of Constitutional Law organized the first "round-the-clock and round-the-globe" event that traveled through time zones, featuring 52 speakers from 28 countries. Soon after, a format of conferences which "virtually travel the globe with the sun from East to West", became common, some of them running for several days.
In June 2021, Zoom acquired Kites (Karlsruhe Information Technology Solutions), an artificial intelligence-based language translation company with an aim to reduce language barriers in video calls. In September 2021, Zoom's attempt to acquire contact center company Five9 for $14.7 billion was turned down by Five9's shareholders.
Privacy and security issues
Zoom has been criticized for "security lapses and poor design choices" that have resulted in heightened scrutiny of its software. The company has also been criticized for its privacy and corporate data sharing policies. Security researchers and reporters have criticized the company for its lack of transparency and poor encryption practices. Zoom initially claimed to use "end-to-end encryption" in its marketing materials, but later clarified it meant "from Zoom end point to Zoom end point" (meaning effectively between Zoom servers and Zoom clients), which The Intercept described as misleading and "dishonest".
In March 2020, New York State Attorney General Letitia James launched an inquiry into Zoom's privacy and security practices; the inquiry was closed on May 7, 2020, with Zoom not admitting wrongdoing, but agreeing to take added security measures. In the same month, a class-action lawsuit against Zoom was filed in the United States District Court for the Northern District of California. According to the lawsuit, Zoom violated the privacy of its users by sharing personal data with Facebook, Google, and LinkedIn, did not prevent hackers from disrupting Zoom sessions, and erroneously claimed to offer end-to-end encryption on Zoom sessions. Zoom settled this lawsuit for $86 million.
On April 1, 2020, Zoom announced a 90-day freeze on releasing new features, to focus on fixing privacy and security issues on Zoom. On July 1, 2020, Yuan wrote a blog post detailing efforts taken by the company to address security and privacy concerns, stating that they released 100 new safety features over the 90-day period. Those efforts include end-to-end encryption for all users, turning on meeting passwords by default, giving users the ability to choose which data centers calls are routed from, consulting with security experts, forming a CISO council, an improved bug bounty program, and working with third parties to help test security. Yuan also stated that Zoom would be releasing a transparency report later in 2020.
In May 2020, the Federal Trade Commission (FTC) announced that it was looking into Zoom's privacy practices. The FTC alleged that since at least 2016, "Zoom maintained the cryptographic keys that could allow Zoom to access the content of its customers’ meetings, and secured its Zoom Meetings, in part, with a lower level of encryption than promised." On November 9, 2020, a settlement was reached, requiring the company to implement additional security measures.
In December 2020, Zoom announced that it was under investigation by the U.S. Securities and Exchange Commission (SEC) and the United States Attorney for the Northern District of California and that it had received a subpoena in June 2020 from the United States Attorney for the Eastern District of New York requesting information on the company's interactions with foreign governments and political parties. Both federal prosecutors also sought information and documentation about security and privacy matters regarding Zoom's practices.
On December 19, 2020, a former Zoom executive was charged by the U.S. Department of Justice with conspiracy to commit interstate harassment and unlawful conspiracy to transfer a means of identification. The charges are related to the alleged disruptions to video meetings commemorating the 1989 Tiananmen Square massacre. Federal prosecutors in Brooklyn, New York, said that Xinjiang "Julien" Jin, then 39, was a San Jose, California–based company's main liaison with intelligence and law enforcement agencies of China. Zoom later acknowledged it was the company in question. It said in a statement that it had terminated Jin's employment for violating company policies and was cooperating with the prosecutors. Jin is not in custody because he is based in China.
In February 2021, Zoom announced a new feature called Kiosk Mode, which will allow people visiting offices to check in with a receptionist virtually on a kiosk, without any physical contact.
In March 2021, Zoom announced that from August 23, 2021, Zoom will stop selling new and upgraded products directly to customers in mainland China.
Censorship
In April 2020, Citizen Lab warned that having much of Zoom's research and development in China could "open up Zoom to pressure from Chinese authorities." In June 2020, Zoom was criticized for closing multiple accounts of U.S. and Hong Kong–based groups, including that of Zhou Fengsuo and two other human rights activists, who were commemorating the 1989 Tiananmen Square protests and massacre. The accounts were later re-opened, with the company stating that in the future it "will have a new process for handling similar situations." Zoom responded that it has to "comply with local laws," even "the laws of governments opposed to free speech." Zoom subsequently admitted to shutting down activist accounts at the request of the Chinese government. In response, a bipartisan group of U.S. senators requested clarification of the incident from the company. Partially in response to criticism of its blocking of the activists' accounts, as well as expressions of concern by the United States Justice Department, Zoom moved to cease direct sale of its product in mainland China in late August 2020.
In September 2020, following protests and legal concerns raised by the Jewish coalition group #EndJewHatred, Zoom prevented San Francisco State University from using its video conferencing software to host former Palestinian militant and hijacker Leila Khaled, a member of the Popular Front for the Liberation of Palestine (PFLP). In justifying its decision, Zoom cited the PFLP's designation as a terrorist organization by the United States Government and its efforts to comply with U.S. export control, sanctions, and anti-terrorism laws. Facebook and YouTube also joined Zoom in denying their platforms to the conference organizers. Professor Rabab Ibrahim Abdulhadi, one of the conference organizers, criticized Zoom, Google's YouTube and Facebook for censoring Palestinian voices.
Workforce
In January 2020, Zoom had over 2,500 employees, with 1,396 in the United States and 1,136 in international locations. It is reported that 700 employees within a subsidiary work in China and develop Zoom software. In May 2020, Zoom announced plans to open new research and development centers in Pittsburgh and Phoenix, with plans to hire up to 500 engineers between the two cities over the next few years. In July 2020, Zoom announced the opening of a new technology center in Bangalore, India, to host engineering, IT, and business operations roles. In August 2020, Zoom opened a new data center in Singapore. The company ranked second place in Glassdoor's 2019 "Best Places to Work" survey.
Part of Zoom's product development team is based in China, where an average entry-level tech salary is one-third of American salaries, which is a key driver of its profitability. Zoom's research and development costs are 10 percent of its total revenue and less than half of the median percentage among its peers.
See also
List of video telecommunication services and product brands
Impact of the COVID-19 pandemic on science and technology
References
External links
2011 establishments in California
2019 initial public offerings
American companies established in 2011
Companies based in San Jose, California
Companies listed on the Nasdaq
Impact of the COVID-19 pandemic in the United States
Professional networks
Impact of the COVID-19 pandemic on science and technology
Software associated with the COVID-19 pandemic
Software companies based in the San Francisco Bay Area
Software companies established in 2011
Software companies of the United States
Telecommunications companies established in 2011
Videotelephony
Web conferencing |
44572603 | https://en.wikipedia.org/wiki/Petit%20Computer | Petit Computer | Petit Computer is a software development application for the Nintendo DSi and later systems, developed by SmileBoom in Sapporo, Japan. The application is built around a custom dialect of BASIC known as SmileBASIC (not to be confused with the 3DS sequel with the same name). Users can write games and other software using the onscreen keyboard and run the applications from within Petit Computer. The platform supports text-based console applications, visual applications, and any combination of the two. Input is available via hardware buttons, touchscreen input, or the onscreen keyboard.
In addition to the code editor and interpreter, Petit Computer includes a simple shell for file management, as well as file sharing functionality. Files can be shared by a direct wireless connection between two DS systems, or by the use of QR codes.
The usage of QR codes enabled some users to develop desktop software that can be used to write SmileBASIC and generate a QR code for easy transfer to the DS.
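As a rough illustration of how such desktop tools might package a program for transfer, the sketch below compresses a source listing and splits it into checksummed chunks. The chunk size and the use of zlib and MD5 here are assumptions for illustration only; Petit Computer's actual QR container is a custom binary format that is not reproduced here.

```python
import hashlib
import zlib

def chunk_program(source, chunk_size=630):
    """Compress a program listing and split it into QR-sized chunks,
    each tagged with its index, the total count, and an MD5 digest."""
    data = zlib.compress(source.encode("utf-8"))
    pieces = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    return [
        {"index": n, "total": len(pieces),
         "md5": hashlib.md5(piece).hexdigest(), "data": piece}
        for n, piece in enumerate(pieces)
    ]

def reassemble(chunks):
    """Verify each chunk's digest, then decompress the joined payload."""
    ordered = sorted(chunks, key=lambda c: c["index"])
    for c in ordered:
        assert hashlib.md5(c["data"]).hexdigest() == c["md5"]
    return zlib.decompress(b"".join(c["data"] for c in ordered)).decode("utf-8")

listing = 'PRINT "HELLO"\n' * 200
assert reassemble(chunk_program(listing)) == listing
```

The round trip at the end shows the idea: any program that survives compress, split, and reassemble intact could in principle be carried across a series of QR images.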
Petit Computer comes with several simple sample applications, 5 sample games, and several graphics-editing applications, all written in SmileBASIC with viewable source code. The latter applications can be used to create sprites, backgrounds, and other resources that can then be used within user-created software. Hundreds of premade sprites and tiles are included with Petit Computer. An extensive manual is available from within Petit that describes the basic features and limitations of SmileBASIC, as well as brief descriptions of most of the commands and their syntax.
SmileBASIC language
Petit Computer uses a customized dialect of BASIC known as SmileBASIC designed specifically for the DSi. Applications written in SmileBASIC can read input from all of the DS's hardware buttons except the Select button (which is always used to terminate the current application) as well as the touch screen, draw graphics and sprites to both screens, and play music written in Music Macro Language. Standard console commands are provided for reading, writing, and manipulating strings. An exhaustive set of graphical commands exists for displaying and manipulating sprites, background graphics, panels, and more, with support for layering, translation, rotation, scaling, palette swapping, and other features, on both screens (some features are limited on the touch screen). Up to 16 channels can be used to play simultaneous audio, with support for fully featured user-defined software instruments and sequenced music.
Reception
Nintendo Life gave the application 7/10 stars, praising its power and potential, but criticizing the presentation as tailored towards seasoned programmers, as well as the "tedious" method of entering code via the touch-screen keyboard. Peter Willington at PocketGamer said the interface "puts you off experimenting" due to the difficulties in entering and navigating text, and complained that error messages weren't useful, but described himself as "massively proud" of his accomplishments with the software and wrote that "experienced hands will be able to make any kind of software they like".
Sequels
A sequel designed for the Nintendo 3DS, with new features and fewer limitations, was released on November 19, 2014 in Japan, October 15, 2015 in North America and August 17, 2017 in Europe. The sequel is titled SmileBASIC (the same name as the dialect of BASIC used in both applications). Nintendo Life gave the application 8/10 stars, praising the removal of QR codes and the power of the language, but again criticizing the cumbersome keyboard. Projects are shared online, with users getting the ability to share up to 10 files at 4 MB each, and can subscribe to a Gold Membership to increase the limit to 110 files at 20 MB each.
In 2015, it was announced that SmileBASIC would be ported to the Wii U. The program was removed from the North American eShop on July 11, 2016 due to an exploit that existed in versions 3.2.1 through 3.3.1 of the program. The exploit was fixed in version 3.3.2 of the application, and as a result SmileBASIC was put back up for sale on the North American eShop on August 10, 2016. As of system version 11.1.0-X, released September 12, 2016, the system does not allow users to launch the title until the software is updated to version 3.3.2 or higher, forcing users to download the patch and rendering the exploit unusable even if an older version of the game is still installed.
Another sequel designed for the Nintendo Switch, called SmileBASIC 4 or Petit Computer 4: SmileBASIC in Japan, was released in Japan on May 23, 2019 and released internationally on April 23, 2020. As well as taking advantage of the functions of the Nintendo Switch hardware, such as the JoyCon controllers and the USB ports on the dock for keyboard and mouse support, the online sharing function now utilises "server tickets" instead of an ongoing subscription for uploaded programs and games, where each ticket purchased increases the amount of online storage you have. SmileBASIC 4 can be purchased as a bundle with one server ticket, or just as the software alone. A free trial version of SmileBASIC 4 is also available in Japan. Shortly after releasing, the international version of SmileBASIC 4 was temporarily pulled from sale in some areas as SmileBoom discussed ratings with the IARC for two sample programs in the software, GAME_RPG and GAME_SHOOTER. The international version remained on sale in the United States, Canada, and Mexico during this period. SmileBASIC 4 was put back up for sale on June 18, 2020.
References
Programming languages
DSiWare games
Nintendo Switch games
BASIC programming language
Japanese brands |
9739 | https://en.wikipedia.org/wiki/Emoticon | Emoticon | An emoticon (, , rarely pronounced , ), short for "emotion icon", also known simply as an emote, is a pictorial representation of a facial expression using characters—usually punctuation marks, numbers, and letters—to express a person's feelings, mood or reaction, or as a time-saving method.
The first ASCII emoticons are generally credited to computer scientist Scott Fahlman, who proposed what came to be known as "smileys", :-) and :-(, in a message on the bulletin board system (BBS) of Carnegie Mellon University in 1982.
In Western countries, emoticons are usually written at a right angle to the direction of the text. Users from Japan popularized a kind of emoticon called kaomoji, utilizing the Katakana character set, that can be understood without tilting one's head to the left. This style arose on ASCII NET of Japan in 1986.
As SMS mobile text messaging and the Internet became widespread in the late 1990s, emoticons became increasingly popular and were commonly used in texting, Internet forums, and e-mails. Emoticons have played a significant role in communication through technology, and some devices and applications have provided stylized pictures that do not use text punctuation. They offer another range of "tone" and feeling through texting that portrays specific emotions through facial gestures while in the midst of text-based cyber communication. Emoticons were the precursors to modern emojis, which have been in a state of continuous development for a variety of digital platforms. Today over 90% of the world's online population uses emojis or emoticons.
History
Smiling faces in text & precursors (pre-1981)
Modern emoticons were not the first instances of characters such as :) being used in text. In 1648, poet Robert Herrick wrote, "Tumble me down, and I will sit Upon my ruins, (smiling yet:)." Herrick's work predated any other recorded use of brackets as a smiling face by around 200 years. However, experts have since weighed whether the inclusion of the colon in the poem was deliberate and if it was meant to represent a smiling face. English professor Alan Jacobs argued that "punctuation, in general, was unsettled in the seventeenth century ... Herrick was unlikely to have consistent punctuational practices himself, and even if he did he couldn't expect either his printers or his readers to share them."
Precursors to modern emoticons have existed since the 19th century.
The National Telegraphic Review and Operators Guide in April 1857 documented the use of the number 73 in Morse code to express "love and kisses" (later reduced to the more formal "best regards"). Dodge's Manual in 1908 documented the reintroduction of "love and kisses" as the number 88. New Zealand academics Joan Gajadhar and John Green comment that both Morse code abbreviations are more succinct than modern abbreviations such as LOL.
The transcript of one of Abraham Lincoln's speeches in 1862 recorded the audience's reaction as: "(applause and laughter ;)".
There has been some debate whether the glyph in Lincoln's speech was a typo, a legitimate punctuation construct, or the first emoticon.
Linguist Philip Seargeant argues that it was a simple typesetting error.
In the late 1800s, an example of "typographical art" appeared in the U.S. satirical magazine Puck, using punctuation to represent the emotions of joy, melancholy, indifference, and astonishment.
In a 1912 essay titled "For Brevity and Clarity", American author Ambrose Bierce suggested facetiously that a bracket could be used to represent a smiling face, proposing "an improvement in punctuation" with which writers could convey cachinnation, loud or immoderate laughter: "it is written thus ‿ and presents a smiling mouth. It is to be appended, with the full stop, to every jocular or ironical sentence".
In a 1936 Harvard Lampoon article, writer Alan Gregg proposed combining brackets with various other punctuation marks to represent various moods. Brackets were used for the sides of the mouth or cheeks, with other punctuation used between the brackets to display various emotions: for a smile, (showing more "teeth") for laughter, for a frown and for a wink.
The September 1962 issue of MAD magazine included an article titled "Typewri-toons". The piece, featuring typewriter-generated artwork credited to "Royal Portable", was entirely made up of repurposed typography, including a capital letter P having a bigger bust than a capital I, a lowercase b and d discussing their pregnancies, an asterisk on top of a letter to indicate the letter had just come inside from snowfall, and a classroom of lowercase n's interrupted by a lowercase h "raising its hand".
A further example attributed to a Baltimore Sunday Sun columnist appeared in a 1967 article in Reader's Digest, using a dash and right bracket to represent a tongue in one's cheek: -).
Prefiguring the modern "smiley" emoticon, writer Vladimir Nabokov told an interviewer from The New York Times in 1969, "I often think there should exist a special typographical sign for a smile – some sort of concave mark, a supine round bracket, which I would now like to trace in reply to your question."
In the 1970s, the PLATO IV computer system was launched. It was one of the first computers used throughout educational and professional institutions, but rarely used in a residential setting. On the computer system, a student at the University of Illinois developed pictograms that resembled different smiling faces. Mary Kalantzis and Bill Cope stated this likely took place in 1972, and they claimed these to be the first emoji or emoticon. The student's creations arguably belong to several overlapping histories: those of computer icons, digital pictograms, and emoticons. Since the pictograms were not focused on offering a means to communicate, they are not generally considered important in the history of the emoticon.
Use of :-) and :-( as communication (1982)
Carnegie Mellon computer scientist Scott Fahlman is generally credited with the invention of the digital text-based emoticon in 1982.
Carnegie Mellon's bulletin board system (BBS) was a forum used by students and teachers for discussing a variety of topics, where jokes often created misunderstandings.
As a response to the difficulty of conveying humor or sarcasm in plain text, Fahlman proposed colon–hyphen–right bracket as a label for "attempted humor".
The use of ASCII symbols, a standard set of codes representing typographical marks, was essential to allow the symbols to be displayed on any computer.
Fahlman sent the following message after an incident where a humorous warning about a mercury spill in an elevator was misunderstood as serious:
19-Sep-82 11:44 Scott E Fahlman :-)
From: Scott E Fahlman <Fahlman at Cmu-20c>
I propose that the following character sequence for joke markers:
:-)
Read it sideways. Actually, it is probably more economical to mark
things that are NOT jokes, given current trends. For this, use
:-(
Other suggestions on the forum included an asterisk (*) and an ampersand (&), the former meant to represent a person doubled over in laughter, as well as a percent sign (%) and a pound sign (#).
Within a few months, the smiley had spread to the ARPANET and Usenet.
Many of those that pre-dated Fahlman either drew faces using alphabetic symbols or created digital pictograms. Scott Fahlman took it a step further by suggesting that his emoticon could not only communicate emotion but also replace language. This use of emoticons as a form of communication is why Fahlman, rather than earlier claimants, is seen as the creator of the emoticon.
Later evolution
Emoticons have been in widespread use since the 1990s, and the "smiley" (colon, hyphen and bracket) has become integral to digital communication, inspiring a variety of other emoticons, including the "winking" face using a semicolon ;-), the "surprised" face with a letter o in place of a bracket :-o, and XD, a visual representation of the Face with Tears of Joy emoji or the acronym LOL.
The 1997 book Smileys by David Sanderson included over 650 different emoticons, and James Marshall's online dictionary of emoticons listed over two thousand in the early 2000s.
A researcher at Stanford University surveyed the emoticons used in four million Twitter messages and found that the smiling emoticon without a hyphen "nose" was much more common than the original version with the hyphen . Linguist Vyvyan Evans argues that this represents a shift in usage by younger users as a form of covert prestige: rejecting a standard usage in order to demonstrate in-group membership.
Inspired by Fahlman's idea of using faces in language, the Loufrani family established The Smiley Company in 1996. Nicolas Loufrani developed hundreds of different emoticons, including 3D versions. His designs were registered at the United States Copyright Office in 1997 and appeared online as .gif files in 1998. These were the first graphical representations of the originally text-based emoticon. He published his icons as well as emoticons created by others, along with their ASCII versions, in an online Smiley Dictionary in the early 2000s. This dictionary included over 3,000 different smileys and was published as a book called Dico Smileys in 2002.
Fahlman has stated that he sees emojis as "the remote descendants of this thing I did." The original smileys were sold by Fahlman as non-fungible tokens for $237,500 in 2021.
Styles
Western
Usually, emoticons in Western style have the eyes on the left, followed by the nose and the mouth. The two-character version :) which omits the nose is also very popular.
The most basic emoticons are relatively consistent in form, but each of them can be transformed by being rotated (making them tiny ambigrams), with or without a hyphen (nose).
There are also some possible variations to emoticons that produce new definitions, like changing a character to express a new feeling, or slightly change the mood of the emoticon. For example, :( equals sad and :(( equals very sad. Weeping can be written as :'(. A blush can be expressed as :">. Others include a wink ;), a grin :D, smug :->, and a tongue out ;P, such as when blowing a raspberry; these can be used to denote a flirting or joking tone, or may imply a second meaning in the sentence preceding them. An often used combination is also <3 for a heart, and </3 for a broken heart. :O is also sometimes used to depict shock. :/ is used to depict melancholy, disappointment, or disapproval. :| is used to depict a neutral face.
A broad grin is sometimes shown with crinkled eyes to express further amusement; XD and the addition of further "D" letters can suggest laughter or extreme amusement, e.g. XDDDD. The same is true for X3, but the three represents an animal's mouth. There are other variations including >:( for anger, or >:D for an evil grin, which can be, again, used in reverse, for an unhappy angry face, in the shape of D:<. Other variations include =K for vampire teeth, :s for a grimace, and :P with the tongue out.
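The common Western forms described above are regular enough to be matched mechanically. The following sketch finds sideways emoticons built from an optional eyebrow, colon/semicolon/equals eyes, an optional hyphen "nose", and a mouth character; the pattern is illustrative, not an exhaustive emoticon grammar.

```python
import re

# Optional eyebrow, eyes (colon, semicolon, or equals), optional
# hyphen "nose", then a mouth character.
EMOTICON = re.compile(r"[<>]?[:;=][-^o]?[)(\[\]DPpOo/\\|]")

def find_emoticons(text):
    """Return all emoticon-like substrings found in text."""
    return EMOTICON.findall(text)

print(find_emoticons("Great :-) but risky :( maybe ;) or :D"))
# [':-)', ':(', ';)', ':D']
```

The same pattern also picks up angry variants such as >:( thanks to the optional leading bracket.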
As computers offer increasing built-in support for non-Western writing systems, it has become possible to use other glyphs to build emoticons. The 'shrug' emoticon, ¯\_(ツ)_/¯, uses the glyph ツ from the Japanese katakana writing system.
An equal sign is often used for the eyes in place of the colon, seen as =), without changing the meaning of the emoticon. In these instances, the hyphen is almost always either omitted or, occasionally, replaced with an "o" as in =O). In most circles it has become acceptable to omit the hyphen, whether a colon or an equal sign is used for the eyes, but in some areas of usage people still prefer the larger, more traditional emoticon :-) or :^). One linguistic study has indicated that the use of a nose in an emoticon may be related to the user's age, with younger people less likely to use a nose. Similar-looking characters are commonly substituted for one another: for instance, o, O, and 0 can all be used interchangeably, sometimes for subtly different effect or, in some cases, one type of character may look better in a certain font and therefore be preferred over another. It is also common for the user to replace the rounded brackets used for the mouth with other, similar brackets, such as ] instead of ).
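The interchangeable characters described above lend themselves to a simple normalization pass. This hypothetical helper canonicalizes a few of the substitutions mentioned (equals-sign eyes, the optional nose, square-bracket mouths); real-world usage has many more variants.

```python
def normalize_emoticon(emo):
    """Canonicalize a few common substitutions: '=' eyes become ':',
    the optional hyphen nose is dropped, and square-bracket mouths
    become round ones. A sketch, not a complete variant map."""
    eyes = {"=": ":"}
    mouths = {"]": ")", "[": "("}
    out = []
    for ch in emo:
        if ch == "-":              # drop the optional "nose"
            continue
        out.append(mouths.get(ch, eyes.get(ch, ch)))
    return "".join(out)

assert normalize_emoticon("=-]") == ":)"
assert normalize_emoticon(":-)") == ":)"
```

Collapsing variants this way is a common first step when counting emoticon frequencies, as in the Twitter survey mentioned above.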
Some variants are also more common in certain countries due to keyboard layouts. For example, the smiley =) may occur in Scandinavia, where the keys for = and ) are placed right beside each other. However, the :) variant is without a doubt the dominant one in Scandinavia, making the =) version a rarity. Diacritical marks are sometimes used. The letters Ö and Ü can be seen as emoticons: the upright versions of :O (meaning that one is surprised) and :D (meaning that one is very happy), respectively.
Some emoticons may be read right to left instead, and in fact, can only be written using standard ASCII keyboard characters this way round; for example D: which refers to being shocked or anxious, opposite to the large grin of :D.
On the Russian-speaking Internet, the right parenthesis ) is used as a smiley. Multiple parentheses )))) are used to express greater happiness, amusement or laughter. It is commonly placed at the end of a sentence. The colon is omitted due to being in a lesser-known position on the ЙЦУКЕН keyboard layout.
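Since the signal here is simply the length of the trailing run of parentheses, it can be measured directly. A small sketch (the function name is illustrative):

```python
import re

def laughter_intensity(message):
    """Count the trailing right parentheses on a message; longer runs
    signal greater happiness or laughter in this convention."""
    run = re.search(r"\)+$", message)
    return len(run.group()) if run else 0

print(laughter_intensity("Отлично))))"))  # 4
```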
Japanese (kaomoji)
Users from Japan popularized a style of emoticons (顔文字, kaomoji, lit. 'face characters') that can be understood without tilting one's head. This style arose on ASCII NET, an early Japanese online service, in the 1980s. They often include Japanese typography (katakana) in addition to ASCII characters, and in contrast to Western-style emoticons, tend to emphasize the eyes, rather than the mouth.
Wakabayashi Yasushi is credited with inventing the original kaomoji in 1986.
Similar-looking emoticons were used on the Byte Information Exchange (BIX) around the same time.
Whereas Western emoticons were first used by US computer scientists, kaomoji were most commonly used by young girls and fans of Japanese comics (manga). Linguist Ilaria Moschini suggests this is partly due to the kawaii ('cuteness') aesthetic of kaomoji.
These emoticons are usually found in a format similar to (*_*). The asterisks indicate the eyes; the central character, commonly an underscore, the mouth; and the parentheses, the outline of the face.
Different emotions can be expressed by changing the character representing the eyes: for example, "T" can be used to express crying or sadness: (T_T). T_T may also be used to mean "unimpressed". The emphasis on the eyes in this style is reflected in the common usage of emoticons that use only the eyes, e.g. ^^. Looks of stress are represented by the likes of (x_x), while (-_-;) is a generic emoticon for nervousness, the semicolon representing an anxiety-induced sweat drop (discussed further below). /// can indicate embarrassment by symbolizing blushing. Characters like hyphens or periods can replace the underscore; the period is often used for a smaller, "cuter" mouth, or to represent a nose, e.g. (^.^). Alternatively, the mouth/nose can be left out entirely, e.g. (^^).
Parentheses are sometimes replaced with braces or square brackets, e.g. {^_^} or [o_0]. Many times, the parentheses are left out completely, e.g. ^^, >.< , o_O, O.O, e_e, or e.e. A quotation mark ", apostrophe ', or semicolon ; can be added to the emoticon to imply apprehension or embarrassment, in the same way that a sweat drop is used in manga and anime.
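Because kaomoji are assembled from an outline, eyes, and a mouth, they can be generated from parts. A toy constructor reflecting the component roles described above (the function and its defaults are illustrative assumptions, not a standard API):

```python
def kaomoji(eyes, mouth="_", left="(", right=")"):
    """Assemble a face from its parts: outline characters on the
    outside, matching eye characters, and a mouth in the middle."""
    return f"{left}{eyes}{mouth}{eyes}{right}"

print(kaomoji("*"))                 # (*_*)
print(kaomoji("T"))                 # (T_T)  crying
print(kaomoji("^", "."))            # (^.^)  smaller, "cuter" mouth
print(kaomoji("^", "_", "{", "}"))  # {^_^}  braces variant
```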
Microsoft IME 2000 (Japanese) or later supports the input of emoticons like the above by enabling the Microsoft IME Spoken Language/Emotion Dictionary. In IME 2007, this support was moved to the Emoticons dictionary. Such dictionaries allow users to call up emoticons by typing words that represent them.
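Such an emotion dictionary is essentially a word-to-emoticon lookup table. A minimal sketch in that spirit; the entries below are illustrative examples, not the contents of Microsoft's actual IME dictionaries.

```python
# Hypothetical word-to-emoticon mappings.
EMOTICON_DICT = {
    "smile": "(^_^)",
    "cry": "(T_T)",
    "nervous": "(-_-;)",
    "stress": "(x_x)",
}

def lookup(word):
    """Return the emoticon for a word, or the word itself if unknown."""
    return EMOTICON_DICT.get(word.lower(), word)

print(lookup("cry"))  # (T_T)
```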
Communication software allowing the use of Shift JIS encoded characters rather than just ASCII allowed for the development of more kaomoji using the extended character set, including hiragana, katakana, kanji, symbols, and the Greek and Cyrillic alphabets, such as (`Д´) or (益).
Modern communication software generally utilizes Unicode, which allows for the incorporation of characters from other languages and a variety of symbols into the kaomoji, as in (◕‿◕✿) (❤ω❤) (づ ◕‿◕ )づ (▰˘◡˘▰).
Further variations can be produced using Unicode combining characters, as in ٩(͡๏̯͡๏)۶ or ᶘᵒᴥᵒᶅ.
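Python's unicodedata module can show that these decorations really are combining characters: combining() returns a nonzero combining class for marks that attach to a preceding base character.

```python
import unicodedata

def combining_marks(s):
    """Return the combining characters (nonzero combining class) in s."""
    return [c for c in s if unicodedata.combining(c)]

# U+0361 COMBINING DOUBLE INVERTED BREVE draws the arch over the eyes.
assert unicodedata.combining("\u0361") != 0
print(combining_marks("٩(͡๏̯͡๏)۶"))
```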
Combination of Japanese and Western styles
English-language anime forums adopted those Japanese-style emoticons that could be used with the standard ASCII characters available on Western keyboards. Because of this, they are often called "anime style" emoticons in English. They have since seen use in more mainstream venues, including online gaming, instant-messaging, and non-anime-related discussion forums. Emoticons such as <( ^.^ )>, <(^_^<), <(o_o<), <( -'.'- )>, <('.'-^), or (>';..;')> which include the parentheses, mouth or nose, and arms (especially those represented by the inequality signs < or >) are also often referred to as "Kirbys" in reference to their likeness to Nintendo's video game character Kirby. The parentheses are sometimes dropped when used in the English language context, and the underscore of the mouth may be extended as an intensifier for the emoticon in question, e.g. ^_^ for very happy. The emoticon t(-_-t) uses the Eastern style, but incorporates a depiction of the Western "middle-finger flick-off" using a "t" as the arm, hand, and finger. Using a lateral click ʖ for the nose, as in ( ͡° ͜ʖ ͡°), is believed to originate from the Finnish image-based message board Ylilauta, and is called a "Lenny face". Another apparently Western invention is the use of emoticons like *,..,* or `;..;´ to indicate vampires or other mythical beasts with fangs.
Exposure to both Western and Japanese style emoticons or kaomoji through blogs, instant messaging, and forums featuring a blend of Western and Japanese pop culture has given rise to many emoticons that have an upright viewing format. The parentheses are often dropped, and these emoticons typically only use alphanumeric characters and the most commonly used English punctuation marks. Emoticons such as -O-, -3-, -w-, '_', ;_;, T_T, :>, and .V. are used to convey mixed emotions that are more difficult to convey with traditional emoticons. Characters are sometimes added to emoticons to convey an anime- or manga-styled sweat drop, for example ^_^', !>_<!, <@>_<@>;;, ;O;, and *u*. The equals sign can also be used for closed, anime-looking eyes, for example =0=, =3=, =w=, =A=, and =7=. The uwu face (and its variations UwU and OwO), is an emoticon of Japanese origin which denotes a cute expression or emotion felt by the user.
In Brazil, sometimes combining characters (accents) are added to emoticons to represent eyebrows, as in ò_ó, ó_ò, õ_o, ù_u, o_Ô, or ( •̀ ᴗ •́ ).
2channel
Users of the Japanese discussion board 2channel, in particular, have developed a wide variety of unique emoticons using characters from various scripts, such as Kannada, as in ಠ_ಠ (for a look of disapproval, disbelief, or confusion). These were quickly picked up by 4chan and spread to other Western sites soon after. Some have taken on a life of their own and become characters in their own right, like Monā.
Korean
In South Korea, emoticons use Korean Hangul letters, and the Western style is rarely used. The structures of Korean and Japanese emoticons are somewhat similar, but they have some differences. The Korean style uses Korean jamo (letters) instead of other characters, and countless emoticons can be formed with combinations of them. Consonant jamos such as ㅅ, ㅁ or ㅂ serve as the mouth/nose component, and ㅇ, ㅎ or ㅍ as the eyes. For example: ㅇㅅㅇ, ㅇㅂㅇ, ㅇㅁㅇ and -ㅅ-. Faces such as 'ㅅ', "ㅅ", 'ㅂ' and 'ㅇ', using quotation marks " and apostrophes ', are also commonly used combinations. Vowel jamos such as ㅜ and ㅠ depict a crying face, for example: ㅜㅜ, ㅠㅠ and 뉴뉴 (the same function as T in the Western style). Sometimes ㅡ (not an em-dash "—" but the vowel jamo), a comma or an underscore is added, and the two character sets can be mixed together, as in ㅜ.ㅜ, ㅠ.ㅜ, ㅠ.ㅡ, ㅜ_ㅠ, ㅡ^ㅜ and ㅜㅇㅡ. Semicolons and carets are also commonly used in Korean emoticons; semicolons mean sweating (embarrassment), and used with ㅡ or – they depict a bad feeling. Examples: -;/, --^, ㅡㅡ;;;, -_-;; and -_^. However, ^^ and ^오^ mean a smile (used by almost everyone, without distinction of sex or age). Others include: ~_~, --a, -6- and +0+.
Chinese ideographic
The character 囧 (U+56E7), which means "bright", may be combined with the posture emoticon Orz, as in 囧rz. The character existed in Oracle bone script, but its use as an emoticon was documented as early as January 20, 2005.
Other ideographic variants for 囧 include 崮 (king 囧), 莔 (queen 囧), 商 (囧 with hat), 囧興 (turtle), 卣 (Bomberman).
The character 槑 (U+69D1), which sounds like the word for "plum" (梅 (U+FA44)), is used to represent a doubling of 呆 (dull), that is, a greater magnitude of dullness. In Chinese, full characters (as opposed to the stylistic use of 槑) might normally be duplicated to express emphasis.
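As a quick sanity check, the code points cited in this section can be verified directly in Python (the characters and values come from the text above):

```python
# Verify the code points cited above; the characters and values
# come directly from the text.
assert chr(0x56E7) == "囧"   # "bright", combined with Orz as 囧rz
assert chr(0x69D1) == "槑"   # doubled 呆, expressing greater dullness

print(hex(ord("囧")))  # 0x56e7
```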
Posture emoticons
Orz
Orz (other forms include: ) is an emoticon representing a kneeling or bowing person (the Japanese version of which is called dogeza) with the "o" being the head, the "r" being the arms and part of the body, and the "z" being part of the body and the legs. This stick figure can represent respect or kowtowing, but commonly appears along a range of responses, including "frustration, despair, sarcasm, or grudging respect".
It was first used in late 2002 at the forum on Techside, a Japanese personal website. At the "Techside FAQ Forum" (TECHSIDE教えて君BBS(教えてBBS) ), a poster asked about a cable cover, typing _| ̄|○ to show a cable and its cover. Others commented that it looked like a kneeling person, and the symbol became popular. These comments were soon deleted as they were considered off-topic. By 2005, Orz spawned a subculture: blogs have been devoted to the emoticon, and URL shortening services have been named after it. In Taiwan, Orz is associated with the phrase "nice guy", that is, the concept of males being rejected for a date by females with a phrase like "You are a nice guy."
Orz should not be confused with m(_ _)m, which means "Thank you" or an apology.
Multimedia variations
A portmanteau of emotion and sound, an emotisound is a brief sound transmitted and played back during the viewing of a message, typically an IM message or e-mail message. The sound is intended to communicate an emotional subtext. Many instant messaging clients automatically trigger sound effects in response to specific emoticons.
Some services, such as MuzIcons, combine emoticons and music player in an Adobe Flash-based widget.
In 2004, the Trillian chat application introduced a feature called "emotiblips", which allows Trillian users to stream files to their instant message recipients "as the voice and video equivalent of an emoticon".
In 2007, MTV and Paramount Home Entertainment promoted the "emoticlip" as a form of viral marketing for the second season of the show The Hills. The emoticlips were twelve short snippets of dialogue from the show, uploaded to YouTube, which the advertisers hoped would be distributed between web users as a way of expressing feelings in a similar manner to emoticons. The emoticlip concept is credited to the Bradley & Montgomery advertising firm, which hopes they would be widely adopted as "greeting cards that just happen to be selling something".
In 2008, an emotion-sequence animation tool called FunIcons was created. The Adobe Flash- and Java-based application allows users to create a short animation. Users can then email or save their own animations to use them on similar social utility applications.
During the first half of the 2010s, different forms of small audiovisual pieces were sent through instant messaging systems to express one's emotion. These videos lack an established name, and there are several ways to designate them: "emoticlips" (named above), "emotivideos" or, more recently, "emoticon videos". These are tiny videos that can be easily transferred from one mobile phone to another. Current video compression codecs such as H.264 allow these pieces of video to be light in terms of file size and very portable. The popular computer and mobile app Skype uses these in a separate keyboard or by typing the code of the "emoticon videos" between parentheses.
Emoticons and intellectual property rights
In 2000, Despair, Inc. obtained a U.S. trademark registration for the "frowny" emoticon :-( when used on "greeting cards, posters and art prints". In 2001, they issued a satirical press release, announcing that they would sue Internet users who typed the frowny; the joke backfired and the company received a storm of protest when its mock release was posted on technology news website Slashdot.
A number of patent applications have been filed on inventions that assist in communicating with emoticons. A few of these have been issued as US patents. US 6987991, for example, discloses a method developed in 2001 to send emoticons over a cell phone using a drop-down menu. The stated advantage over the prior art was that the user saved keystrokes, though this may not address the non-obviousness criterion.
The emoticon :-) was also filed in 2006 and registered in 2008 as a European Community Trademark (CTM). In Finland, the Supreme Administrative Court ruled in 2012 that the emoticon cannot be trademarked, thus repealing a 2006 administrative decision trademarking the emoticons :-), =), =(, :) and :(.
In 2005, a Russian court rejected a legal claim against Siemens by a man who claimed to hold a trademark on the ;-) emoticon.
In 2008, Russian entrepreneur Oleg Teterin claimed to have been granted the trademark on the ;-) emoticon. A license would not "cost that much" (tens of thousands of dollars) for companies, but would be free of charge for individuals.
Unicode
A different, but related, use of the term "emoticon" is found in the Unicode Standard, referring to a subset of emoji which display facial expressions. The standard explains this usage with reference to existing systems, which provided functionality for substituting certain textual emoticons with images or emoji of the expressions in question.
Some smiley faces were present in Unicode since 1.1, including a white frowning face, a white smiling face, and a black smiling face. ("Black" refers to a glyph which is filled, "white" refers to a glyph which is unfilled).
The Emoticons block was introduced in Unicode Standard version 6.0 (published in October 2010) and extended by version 7.0. It fully covers the Unicode range from U+1F600 to U+1F64F.
After that block had been filled, Unicode 8.0 (2015), 9.0 (2016) and 10.0 (2017) added additional emoticons in the range from U+1F910 to U+1F9FF. As of Unicode 10.0, the code points U+1F90C–U+1F90F, U+1F93F, U+1F94D–U+1F94F, U+1F96C–U+1F97F, U+1F998–U+1F9CF (excluding U+1F9C0, which contains the 🧀 emoji) and U+1F9E7–U+1F9FF do not contain any emoticons.
For historic and compatibility reasons, some other heads and figures, which mostly represent different aspects like genders, activities, and professions instead of emotions, are also found in Miscellaneous Symbols and Pictographs (especially U+1F466–U+1F487) and Transport and Map Symbols. Body parts, mostly hands, are also encoded in the Dingbats and Miscellaneous Symbols blocks.
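A minimal Python sketch of the block boundary described above (the range and character names come from the Unicode Standard; the helper function itself is illustrative only):

```python
import unicodedata

# The Emoticons block spans U+1F600..U+1F64F.
EMOTICONS_BLOCK = range(0x1F600, 0x1F650)

def in_emoticons_block(ch: str) -> bool:
    """Return True if the single character lies in the Emoticons block."""
    return ord(ch) in EMOTICONS_BLOCK

print(in_emoticons_block("\U0001F600"))   # True
print(unicodedata.name("\U0001F600"))     # GRINNING FACE
# The older "white smiling face" (U+263A, in Unicode since 1.1) predates the block:
print(in_emoticons_block("\u263A"))       # False
```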
See also
ASCII art
Emotion Markup Language (EML)
Emotions in virtual communication
Henohenomoheji
Hieroglyph
iConji
Internet slang
Irony punctuation
Kaoani
List of emoticons
Martian language
Pixel art
Smiley
Tête à Toto
Text
Typographic alignment
Typographic approximation
Notes
References
Further reading
Bódi, Zoltán, and Veszelszki, Ágnes (2006). Emotikonok. Érzelemkifejezés az internetes kommunikációban (Emoticons. Expressing emotions in the internet communication). Budapest: Magyar Szemiotikai Társaság.
Dresner, Eli, and Herring, Susan C. (2010). "Functions of the non-verbal in CMC: Emoticons and illocutionary force." Communication Theory 20: 249–268. Preprint.
Veszelszki, Ágnes (2012). Connections of Image and Text in Digital and Handwritten Documents. In: Benedek, András, and Nyíri, Kristóf (eds.): The Iconic Turn in Education. Series Visual Learning Vol. 2. Frankfurt am Main et al.: Peter Lang, pp. 97−110.
Veszelszki, Ágnes (2015). Emoticons vs. Reaction-Gifs. Non-Verbal Communication on the Internet from the Aspects of Visuality, Verbality and Time. In: Benedek, András − Nyíri, Kristóf (eds.): Beyond Words. Pictures, Parables, Paradoxes (series Visual Learning, vol. 5). Frankfurt: Peter Lang. 131−145.
Wolf, Alecia (2000). "Emotional expression online: Gender differences in emoticon use." CyberPsychology & Behavior 3: 827–833.
Emoticons
ASCII art
Computer-related introductions in 1982
Email
Internet forum terminology
Internet memes
Internet slang
Online chat
Pictograms |
148742 | https://en.wikipedia.org/wiki/GeForce | GeForce | GeForce is a brand of graphics processing units (GPUs) designed by Nvidia. As of the GeForce 30 series, there have been seventeen iterations of the design. The first GeForce products were discrete GPUs designed for add-on graphics boards, intended for the high-margin PC gaming market, and later diversification of the product line covered all tiers of the PC graphics market, ranging from cost-sensitive GPUs integrated on motherboards, to mainstream add-in retail boards. Most recently, GeForce technology has been introduced into Nvidia's line of embedded application processors, designed for electronic handhelds and mobile handsets.
With respect to discrete GPUs, found in add-in graphics-boards, Nvidia's GeForce and AMD's Radeon GPUs are the only remaining competitors in the high-end market. GeForce GPUs are very dominant in the general-purpose graphics processor unit (GPGPU) market thanks to their proprietary CUDA architecture. GPGPU is expected to expand GPU functionality beyond the traditional rasterization of 3D graphics, to turn it into a high-performance computing device able to execute arbitrary programming code in the same way a CPU does, but with different strengths (highly parallel execution of straightforward calculations) and weaknesses (worse performance for complex branching code).
Name origin
The "GeForce" name originated from a contest held by Nvidia in early 1999 called "Name That Chip". The company called out to the public to name the successor to the RIVA TNT2 line of graphics boards. There were over 12,000 entries received and 7 winners received a RIVA TNT2 Ultra graphics card as a reward. Brian Burke, senior PR manager at Nvidia, told Maximum PC in 2002 that "GeForce" originally stood for "Geometry Force" since GeForce 256 was the first GPU for personal computers to calculate the transform-and-lighting geometry, offloading that function from the CPU.
Graphics processor generations
GeForce 256
Launched on September 1, 1999, the GeForce 256 (NV10) was the first consumer-level PC graphics chip shipped with hardware transform, lighting, and shading, although 3D games utilizing this feature did not appear until later. Initial GeForce 256 boards shipped with SDR SDRAM memory, and later boards shipped with faster DDR SDRAM memory.
GeForce 2 series
Launched in April 2000, the first GeForce2 (NV15) was another high-performance graphics chip. Nvidia moved to a twin texture processor per pipeline (4x2) design, doubling texture fillrate per clock compared to GeForce 256. Later, Nvidia released the GeForce2 MX (NV11), which offered performance similar to the GeForce 256 but at a fraction of the cost. The MX was a compelling value in the low/mid-range market segments and was popular with OEM PC manufacturers and users alike. The GeForce 2 Ultra was the high-end model in this series.
GeForce 3 series
Launched in February 2001, the GeForce3 (NV20) introduced programmable vertex and pixel shaders to the GeForce family and to consumer-level graphics accelerators. It had good overall performance and shader support, making it popular with enthusiasts although it never hit the midrange price point. The NV2A developed for the Microsoft Xbox game console is a derivative of the GeForce 3.
GeForce 4 series
Launched in February 2002, the then-high-end GeForce4 Ti (NV25) was mostly a refinement to the GeForce3. The biggest advancements included enhancements to anti-aliasing capabilities, an improved memory controller, a second vertex shader, and a manufacturing process size reduction to increase clock speeds. Another member of the GeForce 4 family, the budget GeForce4 MX, was based on the GeForce2, with the addition of some features from the GeForce4 Ti. It targeted the value segment of the market and lacked pixel shaders. Most of these models used the AGP 4× interface, but a few began the transition to AGP 8×.
GeForce FX series
Launched in 2003, the GeForce FX (NV30) was a huge change in architecture compared to its predecessors. The GPU was designed not only to support the new Shader Model 2 specification but also to perform well on older titles. However, initial models like the GeForce FX 5800 Ultra suffered from weak floating point shader performance and excessive heat which required infamously noisy two-slot cooling solutions. Products in this series carry the 5000 model number, as it is the fifth generation of the GeForce, though Nvidia marketed the cards as GeForce FX instead of GeForce 5 to show off "the dawn of cinematic rendering".
GeForce 6 series
Launched in April 2004, the GeForce 6 (NV40) added Shader Model 3.0 support to the GeForce family, while correcting the weak floating point shader performance of its predecessor. It also implemented high-dynamic-range imaging and introduced SLI (Scalable Link Interface) and PureVideo capability (integrated partial hardware MPEG-2, VC-1, Windows Media Video, and H.264 decoding and fully accelerated video post-processing).
GeForce 7 series
The seventh generation GeForce (G70/NV47) was launched in June 2005 and was the last Nvidia video card series that could support the AGP bus. The design was a refined version of GeForce 6, with the major improvements being a widened pipeline and an increase in clock speed. The GeForce 7 also offers new transparency supersampling and transparency multisampling anti-aliasing modes (TSAA and TMAA). These new anti-aliasing modes were later enabled for the GeForce 6 series as well. The GeForce 7950GT featured the highest performance GPU with an AGP interface in the Nvidia line. This era began the transition to the PCI-Express interface.
A 128-bit, 8 ROP variant of the 7950 GT, called the RSX 'Reality Synthesizer', is used as the main GPU in the Sony PlayStation 3.
GeForce 8 series
Released on November 8, 2006, the eighth-generation GeForce (originally called G80) was the first ever GPU to fully support Direct3D 10. Manufactured using a 90 nm process and built around the new Tesla microarchitecture, it implemented the unified shader model. Initially just the 8800GTX model was launched, while the GTS variant was released months into the product line's life, and it took nearly six months for mid-range and OEM/mainstream cards to be integrated into the 8 series. The die shrink down to 65 nm and a revision to the G80 design, codenamed G92, were implemented into the 8 series with the 8800GS, 8800GT and 8800GTS-512, first released on October 29, 2007, almost one whole year after the initial G80 release.
GeForce 9 series and 100 series
The first product was released on February 21, 2008. Not even four months after the initial G92 release, all 9-series designs were simply revisions to existing late 8-series products. The 9800GX2 uses two G92 GPUs, as used in later 8800 cards, in a dual-PCB configuration while still only requiring a single PCI-Express 16x slot. The 9800GX2 utilizes two separate 256-bit memory buses, one for each GPU and its respective 512 MB of memory, which equates to an overall 1 GB of memory on the card (although the SLI configuration of the chips necessitates mirroring the frame buffer between the two chips, thus effectively halving the memory performance of a 256-bit/512 MB configuration). The later 9800GTX features a single G92 GPU, a 256-bit data bus, and 512 MB of GDDR3 memory.
Prior to the release, no concrete information was known except that officials claimed the next-generation products had close to 1 TFLOPS of processing power, with the GPU cores still being manufactured in the 65 nm process, and reports about Nvidia downplaying the significance of Direct3D 10.1. In March 2009, several sources reported that Nvidia had quietly launched a new series of GeForce products, namely the GeForce 100 Series, which consists of rebadged 9 Series parts. GeForce 100 series products were not available for individual purchase.
GeForce 200 series and 300 series
Based on the GT200 graphics processor consisting of 1.4 billion transistors, codenamed Tesla, the 200 series was launched on June 16, 2008. The next generation of the GeForce series takes the card-naming scheme in a new direction, by replacing the series number (such as 8800 for 8-series cards) with the GTX or GTS suffix (which used to go at the end of card names, denoting their 'rank' among other similar models), and then adding model numbers such as 260 and 280 after that. The series features the new GT200 core on a 65 nm die. The first products were the GeForce GTX 260 and the more expensive GeForce GTX 280. The GeForce 310 was released on November 27, 2009, which is a rebrand of the GeForce 210. The 300 series cards are rebranded DirectX 10.1 compatible GPUs from the 200 series, which were not available for individual purchase.
GeForce 400 series and 500 series
On April 7, 2010, Nvidia released the GeForce GTX 470 and GTX 480, the first cards based on the new Fermi architecture, codenamed GF100; they were the first Nvidia GPUs to utilize 1 GB or more of GDDR5 memory. The GTX 470 and GTX 480 were heavily criticized due to high power use, high temperatures, and very loud noise that were not balanced by the performance offered, even though the GTX 480 was the fastest DirectX 11 card as of its introduction.
In November 2010, Nvidia released a new flagship GPU based on an enhanced GF100 architecture (GF110) called the GTX 580. It featured higher performance, less power utilization, heat and noise than the preceding GTX 480. This GPU received much better reviews than the GTX 480. Nvidia later also released the GTX 590, which packs two GF110 GPUs on a single card.
GeForce 600 series, 700 series and 800M series
In September 2010, Nvidia announced that the successor to the Fermi microarchitecture would be the Kepler microarchitecture, manufactured with the TSMC 28 nm fabrication process. Earlier, Nvidia had been contracted to supply their top-end GK110 cores for use in Oak Ridge National Laboratory's "Titan" supercomputer, leading to a shortage of GK110 cores. After AMD launched their own annual refresh in early 2012, the Radeon HD 7000 series, Nvidia began the release of the GeForce 600 series in March 2012. The GK104 core, originally intended for the mid-range segment of their lineup, became the flagship GTX 680. It introduced significant improvements in performance, heat, and power efficiency compared to the Fermi architecture and closely matched AMD's flagship Radeon HD 7970. It was quickly followed by the dual-GK104 GTX 690 and the GTX 670, which featured only a slightly cut-down GK104 core and was very close in performance to the GTX 680.
With the GTX Titan, Nvidia also released GPU Boost 2.0, which would allow the GPU clock speed to increase indefinitely until a user-set temperature limit was reached without passing a user-specified maximum fan speed. The final GeForce 600 series release was the GTX 650 Ti BOOST based on the GK106 core, in response to AMD's Radeon HD 7790 release. At the end of May 2013, Nvidia announced the 700 series, which was still based on the Kepler architecture, however it featured a GK110-based card at the top of the lineup. The GTX 780 was a slightly cut-down Titan that achieved nearly the same performance for two-thirds of the price. It featured the same advanced reference cooler design, but did not have the unlocked double-precision cores and was equipped with 3 GB of memory.
At the same time, Nvidia announced ShadowPlay, a screen capture solution that used an integrated H.264 encoder built into the Kepler architecture that Nvidia had not revealed previously. It could be used to record gameplay without a capture card, and with negligible performance decrease compared to software recording solutions, and was available even on the previous-generation GeForce 600 series cards. The software beta for ShadowPlay, however, experienced multiple delays and would not be released until the end of October 2013. A week after the release of the GTX 780, Nvidia announced the GTX 770 to be a rebrand of the GTX 680. It was followed by the GTX 760 shortly after, which was also based on the GK104 core and similar to the GTX 660 Ti. No more 700 series cards were set for release in 2013, although Nvidia announced G-Sync, another feature of the Kepler architecture that Nvidia had left unmentioned, which allowed the GPU to dynamically control the refresh rate of G-Sync-compatible monitors (which would release in 2014) to combat tearing and judder. However, in October, AMD released the R9 290X, which came in at $100 less than the GTX 780. In response, Nvidia slashed the price of the GTX 780 by $150 and released the GTX 780 Ti, which featured a full 2880-core GK110 chip even more powerful than the GTX Titan, along with enhancements to the power delivery system which improved overclocking, and managed to pull ahead of AMD's new release.
The GeForce 800M series consists of rebranded 700M series parts based on the Kepler architecture and some lower-end parts based on the newer Maxwell architecture.
GeForce 900 series
In March 2013, Nvidia announced that the successor to Kepler would be the Maxwell microarchitecture. It was released in February 2014 with the GM10x series chips, emphasizing the new architecture's power-efficiency improvements in OEM and low-TDP products: the desktop GTX 750/750 Ti and the mobile GTX 850M/860M. Later that year Nvidia pushed the TDP with the GM20x chips for power users, skipping the 800 series for desktop entirely, with the 900 series of GPUs.
This was the last GeForce series to support analog video output through DVI-I.
GeForce 10 series
In March 2014, Nvidia announced that the successor to Maxwell would be the Pascal microarchitecture; it was announced on 6 May 2016 and released on 27 May 2016. Architectural improvements include the following:
In Pascal, an SM (streaming multiprocessor) consists of 128 CUDA cores. Kepler packed 192, Fermi 32 and Tesla only 8 CUDA cores into an SM; the GP100 SM is partitioned into two processing blocks, each having 32 single-precision CUDA Cores, an instruction buffer, a warp scheduler, 2 texture mapping units and 2 dispatch units.
GDDR5X – A new memory standard supporting 10 Gbit/s data rates and an updated memory controller. Only the Nvidia Titan X (and Titan Xp), GTX 1080, GTX 1080 Ti, and GTX 1060 (6 GB version) support GDDR5X. The GTX 1070 Ti, GTX 1070, GTX 1060 (3 GB version), GTX 1050 Ti, and GTX 1050 use GDDR5.
Unified memory – A memory architecture where the CPU and GPU can access both main system memory and memory on the graphics card with the help of a technology called "Page Migration Engine".
NVLink – A high-bandwidth bus between the CPU and GPU, and between multiple GPUs. It allows much higher transfer speeds than those achievable by using PCI Express; estimated to provide between 80 and 200 GB/s.
16-bit (FP16) floating-point operations can be executed at twice the rate of 32-bit floating-point operations ("single precision") and 64-bit floating-point operations ("double precision") executed at half the rate of 32-bit floating point operations (Maxwell 1/32 rate).
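The per-SM CUDA core counts listed above can be condensed into a small lookup; a minimal sketch, where the SM counts passed to the example call are hypothetical illustrations, not figures from the text:

```python
# CUDA cores per streaming multiprocessor (SM), per the text above.
CORES_PER_SM = {"Tesla": 8, "Fermi": 32, "Kepler": 192, "Pascal": 128}

def total_cuda_cores(arch: str, sm_count: int) -> int:
    """Estimate total CUDA cores as SM count times cores per SM."""
    return CORES_PER_SM[arch] * sm_count

# Hypothetical example: a Pascal part with 20 SMs.
print(total_cuda_cores("Pascal", 20))  # 2560
```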
GeForce 20 series and 16 series
In August 2018, Nvidia announced the GeForce successor to Pascal. The new microarchitecture name was revealed as "Turing" at the SIGGRAPH 2018 conference. This new GPU microarchitecture is aimed at accelerating real-time ray tracing support and AI inferencing. It features a new ray-tracing unit (RT Core) which dedicates hardware to ray tracing, and supports the DXR extension in Microsoft DirectX 12. Nvidia claims the new architecture is up to 6 times faster than the older Pascal architecture. A whole new Tensor core design since Volta introduces AI deep-learning acceleration, which allows the utilization of DLSS (Deep Learning Super Sampling), a new form of anti-aliasing that uses AI to provide crisper imagery with less impact on performance. Turing also changes the integer execution unit, which can execute in parallel with the floating-point data path. A new unified cache architecture which doubles its bandwidth compared with previous generations was also announced.
The new GPUs were revealed as the Quadro RTX 8000, Quadro RTX 6000 and Quadro RTX 5000. The high-end Quadro RTX 8000 features 4,608 CUDA cores and 576 Tensor cores with 48 GB of VRAM. Later, during the Gamescom press conference, Nvidia's CEO Jensen Huang unveiled the new GeForce RTX series, with the RTX 2080 Ti, 2080, and 2070, which would use the Turing architecture. The first Turing cards were slated to ship to consumers on September 20, 2018. Nvidia announced the RTX 2060 on January 6, 2019 at CES 2019.
On July 2, 2019, Nvidia announced the GeForce RTX Super line of cards, a 20 series refresh which comprises higher-spec versions of the RTX 2060, 2070 and 2080. The RTX 2070 and 2080 were discontinued.
In February 2019, Nvidia announced the GeForce 16 series. It is based on the same Turing architecture used in the GeForce 20 series, but omitting the Tensor (AI) and RT (ray tracing) cores unique to the latter in favour of providing a more affordable graphics solution for gamers while still attaining a higher performance compared to respective cards of the previous GeForce generations.
Like the RTX Super refresh, Nvidia on October 29, 2019 announced the GTX 1650 Super and 1660 Super cards, which replaced their non-Super counterparts.
GeForce 30 series
Nvidia officially announced at the GeForce Special Event that the successor to the GeForce 20 series would be the 30 series. The GeForce Special Event took place on September 1, 2020 and set September 17 as the official release date for the 3080 GPU, September 24 as the release date for the 3090 GPU and October for the 3070 GPU.
Variants
Mobile GPUs
Since the GeForce 2 series, Nvidia has produced a number of graphics chipsets for notebook computers under the GeForce Go branding. Most of the features present in the desktop counterparts are present in the mobile ones. These GPUs are generally optimized for lower power consumption and less heat output in order to be used in notebook PCs and small desktops.
Beginning with the GeForce 8 series, the GeForce Go brand was discontinued and the mobile GPUs were integrated with the main line of GeForce GPUs, but their name suffixed with an M. This ended in 2016 with the launch of the laptop GeForce 10 series – Nvidia dropped the M suffix, opting to unify the branding between their desktop and laptop GPU offerings, as notebook Pascal GPUs are almost as powerful as their desktop counterparts (something Nvidia tested with their "desktop-class" notebook GTX 980 GPU back in 2015).
The GeForce MX brand, previously used by Nvidia for their entry-level desktop GPUs, was revived in 2017 with the release of the GeForce MX150 for notebooks. The MX150 is based on the same Pascal GP108 GPU as used on the desktop GT 1030, and was quietly released in June 2017.
Small form factor GPUs
Similar to the mobile GPUs, Nvidia also released a few GPUs in "small form factor" format, for use in all-in-one desktops. These GPUs are suffixed with an S, similar to the M used for mobile products.
Integrated desktop motherboard GPUs
Beginning with the nForce 4, Nvidia started including onboard graphics solutions in their motherboard chipsets. These onboard graphics solutions were called mGPUs (motherboard GPUs). Nvidia discontinued the nForce range, including these mGPUs, in 2009.
After the nForce range was discontinued, Nvidia released their Ion line in 2009, which consisted of an Intel Atom CPU partnered with a low-end GeForce 9 series GPU, fixed on the motherboard. Nvidia released an upgraded Ion 2 in 2010, this time containing a low-end GeForce 300 series GPU.
Nomenclature
From the GeForce 4 series until the GeForce 9 series, the naming scheme below is used.
Since the release of the GeForce 100 series of GPUs, Nvidia changed their product naming scheme to the one below.
Earlier cards such as the GeForce4 follow a similar pattern.
Graphics device drivers
Proprietary
Nvidia develops and publishes GeForce drivers for Windows 10 x86/x86-64 and later, Linux x86/x86-64/ARMv7-A, OS X 10.5 and later, Solaris x86/x86-64 and FreeBSD x86/x86-64. A current version can be downloaded from Nvidia and most Linux distributions contain it in their own repositories. Nvidia GeForce driver 340.24 from 8 July 2014 supports the EGL interface enabling support for Wayland in conjunction with this driver. This may be different for the Nvidia Quadro brand, which is based on identical hardware but features OpenGL-certified graphics device drivers.
Basic support for the DRM mode-setting interface in the form of a new kernel module named nvidia-modeset.ko has been available since version 358.09 beta.
Support for Nvidia's display controller on the supported GPUs is centralized in nvidia-modeset.ko. Traditional display interactions (X11 modesets, OpenGL SwapBuffers, VDPAU presentation, SLI, stereo, framelock, G-Sync, etc.) initiate from the various user-mode driver components and flow to nvidia-modeset.ko.
On the same day the Vulkan graphics API was publicly released, Nvidia released drivers that fully supported it.
Legacy driver:
GeForce driver 71.x provides support for RIVA TNT, RIVA TNT2, GeForce 256 and GeForce 2 series
GeForce driver 96.x provides support for GeForce 2 series, GeForce 3 series and GeForce 4 series
GeForce driver 173.x provides support for GeForce FX series
GeForce driver 304.x provides support for GeForce 6 series and GeForce 7 series
GeForce driver 340.x provides support for Tesla 1 and 2-based, i.e. GeForce 8 series – GeForce 300 series
GeForce driver 390.x provides support for Fermi, i.e. GeForce 400 series – GeForce 500 series
GeForce driver 47x.x provides support for Kepler, i.e. GeForce 600 series – GeForce 700 series
A legacy driver usually features support for newer GPUs as well, but since newer GPUs are supported by newer GeForce driver numbers, which regularly provide more features and better support, the end user is encouraged to always use the highest possible driver number.
Current driver:
GeForce driver latest provides support for Maxwell, Pascal, Turing and Ampere-based GPUs.
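The branch list above can be summarized in a small lookup table. This is an illustrative sketch: the `last_supported_branch` helper and the series keys are hypothetical, with the branch numbers taken from the list.

```python
# Final legacy driver branch per GPU family, per the list above.
LEGACY_BRANCHES = {
    "GeForce 256": "71.x",
    "GeForce 4": "96.x",
    "GeForce FX": "173.x",
    "GeForce 7": "304.x",
    "GeForce 8": "340.x",    # Tesla 1/2-based, through GeForce 300
    "GeForce 400": "390.x",  # Fermi, through GeForce 500
    "GeForce 600": "47x.x",  # Kepler, through GeForce 700
}

def last_supported_branch(series: str):
    """Return the final driver branch for a legacy series, or None if still current."""
    return LEGACY_BRANCHES.get(series)

print(last_supported_branch("GeForce 7"))       # 304.x
print(last_supported_branch("GeForce RTX 30"))  # None: supported by current drivers
```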
Free and open-source
Community-created, free and open-source drivers exist as an alternative to the drivers released by Nvidia. Open-source drivers are developed primarily for Linux, however there may be ports to other operating systems. The most prominent alternative driver is the reverse-engineered free and open-source nouveau graphics device driver. Nvidia has publicly announced to not provide any support for such additional device drivers for their products, although Nvidia has contributed code to the Nouveau driver.
Free and open-source drivers support a large portion (but not all) of the features available in GeForce-branded cards. For example, the nouveau driver lacks support for GPU and memory clock frequency adjustments and for the associated dynamic power management. Also, Nvidia's proprietary drivers consistently perform better than nouveau in various benchmarks. However, as of version 3.16 of the mainline Linux kernel, contributions by Nvidia allowed partial support for GPU and memory clock frequency adjustments to be implemented.
Licensing and privacy issues
The license has common terms against reverse engineering and copying, and it disclaims warranties and liability.
Starting in 2016 the GeForce license says Nvidia "SOFTWARE may access, collect non-personally identifiable information about, update, and configure Customer's system in order to properly optimize such system for use with the SOFTWARE." The privacy notice goes on to say, "We are not able to respond to "Do Not Track" signals set by a browser at this time. We also permit third party online advertising networks and social media companies to collect information... We may combine personal information that we collect about you with the browsing and tracking information collected by these [cookies and beacons] technologies."
The software configures the user's system to optimize its use, and the license says, "NVIDIA will have no responsibility for any damage or loss to such system (including loss of data or access) arising from or relating to (a) any changes to the configuration, application settings, environment variables, registry, drivers, BIOS, or other attributes of the system (or any part of such system) initiated through the SOFTWARE".
GeForce Experience
Until the March 26, 2019 update, users of GeForce Experience were vulnerable to code execution, denial of service and escalation of privilege attacks.
References
External links
GeForce product page on Nvidia's website
GeForce powered games on Nvidia's website
techPowerUp! GPU Database
Nvidia
Nvidia graphics processors
Graphics cards
Companies' terms of service