30058902
https://en.wikipedia.org/wiki/Acer%20beTouch%20E140
Acer beTouch E140
The Acer beTouch E140 is a smartphone manufactured by Acer Inc. running the Android 2.2 (Froyo) operating system.
Main specifications
Operating system: Android 2.2
Display: 2.8-inch touch screen
Processor: 600 MHz
Connectivity: Wi-Fi 802.11 b/g, Bluetooth 2.1, IR, FM radio
Camera: 3.2 MP
Battery: 1300 mAh
Weight: 115 grams
Release The Acer beTouch E140 was unveiled in December 2010. The device was to be released in the UK, though the exact date was not known. The price had not been announced, but was expected to be around €199.
See also Galaxy Nexus List of Android devices
References beTouch E140 Android (operating system) devices Mobile phones introduced in 2010
504848
https://en.wikipedia.org/wiki/Quantum%20Corporation
Quantum Corporation
Quantum Corporation is a data storage and management company headquartered in San Jose, California. The company works with a network of distributors, VARs, DMRs, OEMs and other suppliers. From its founding in 1980 until 2001, it was also a major disk storage manufacturer (usually second-place in market share behind Seagate), and was based in Milpitas, California. Quantum sold its hard disk drive business to Maxtor in 2001 and now focuses on integrated storage systems. History Plus Development Corporation Quantum was originally a market leader in 8-inch hard disk drives, but missed the industry's transition to 5.25-inch drives. In 1984, a subsidiary was launched called "Plus Development" to focus on new technology development. Plus Development became a successful designer of 3.5-inch drives with Matsushita Kotobuki Electronics (now Panasonic) as the contract manufacturer. Quantum later reacquired Plus Development and was the largest drive producer worldwide in 1994. DEC storage group acquisition In July 1994, Quantum purchased DEC's data storage division. Quantum–Maxtor merger By 2000, the hard drive market was becoming less profitable. Quantum decided to sell its hard drive division to Maxtor at this time. The transfer took effect on April 1, 2001. Although Maxtor systematically eliminated much of the staff of Quantum's former hard drive division during the following year, it continued most of Quantum's disk storage products and brands until it was acquired by Seagate Technology on December 21, 2005. Loans during Pandemic CARES Act During the COVID-19 pandemic, Quantum was a recipient of a government loan of US$10 million as part of the Paycheck Protection Program (PPP). With a head count of ~800 employees, this would be equivalent of receiving over $12,000 per employee. The SBA sets its size standards for qualification based on the North American Industry Classification System (NAICS) industry code, and the size standards for the Computer Storage Device Manufacturing Industry (NAICS code 334112) is 1,250 employees. Quantum qualifies for the PPP which allows businesses in the Computer Storage Device Manufacturing industry with fewer than 1,250 employees to obtain loans of up to $10 million to incentivize companies to maintain their workers as they manage the business disruptions caused by the COVID-19 pandemic. Acquisitions Prior to the 2000 merger of the hard drive division, Quantum began a series of tape technology acquisitions: 1998 – ATL Products, a manufacturer of automated tape libraries. 2001 – M4 Data (Holdings) Ltd., a manufacturer of tape libraries. 2002 – Benchmark Storage Innovations, who manufactured the VStape product line under a Quantum license. 2005 – Certance, the former tape business of Seagate Technology, becoming a member of the LTO consortium. 2006 – Advanced Digital Information Corporation (ADIC), Scalar brand tape libraries, StorNext filesystem and De-Duplication technology. 2011 – Pancetera Software, a specialist in data management and protection for virtual environments, for $12 million. 2014 – SymForm, a cloud storage company. 2020 – ActiveScale object storage business acquired from Western Digital. 2020 – UK-based Square Box Systems Ltd, maker of CATDV and a specialist in data cataloging, user collaboration, and digital asset management software. 2021 – Video surveillance assets from Pivot3, a hyperconverged infrastructure company. Products Fireball The Fireball brand of hard drives were manufactured between 1995 and 2001. 
In 1995, 540 MB Fireball hard drives using ATA and SCSI were available. In 1997, the Fireball ST, available in 1.6 GB to 6.4 GB capacities, was considered a top performer, while the Fireball TM was significantly slower. High-performance file system software StorNext At the core of Quantum's high-performance shared storage product line is Quantum StorNext software which enables video editing and management of large video and image datasets. StorNext software is a parallel file processing system that provides fast streaming performance and data access, a shared file storage environment for Apple Macintosh, Microsoft Windows, and Linux workstations, and intelligent data management to protect data across its lifecycle. StorNext runs on standard servers and is sold with storage arrays that are used within the StorNext environment. These storage arrays include Quantum QXS-Series, a line of high performance, reliable hybrid storage arrays, offered with either HDDs, SSDs, or some combination of the two. StorNext software can also manage data across different types, or pools, of storage, such as public cloud object stores and disk-based object storage systems. StorNext supports a broad range of both private and public object stores. For customers that archive video and image data for years, StorNext is also integrated with tape storage, and can assign infrequently-used but important data to tape to create a large-scale active archive. In 2011, the company added the StorNext appliance offerings to its product family. In addition to the StorNext Archive Enabled Library (AEL), the company added a metadata controller (StorNext M330), a scale-out gateway appliance (G300), and several scalable storage systems (QM1200, QS1200 and QD6000). In February 2012, the company bolstered the StorNext appliance family with the addition of the QS2400 Storage System, followed in May by the M660 metadata appliance. F-Series NVMe In April 2019, Quantum introduced F-Series, a new line of NVMe storage arrays “designed for performance, availability and reliability.” Non-volatile memory express (NVMe) flash drives allow for massive parallel processing, while the latest Remote Direct Memory Access (RDMA) networking technology provides direct access between workstations and the NVMe storage devices. These hardware features are combined with Quantum Cloud Storage Platform and the StorNext file system to provide storage capabilities for post production houses, broadcasters and other rich media environments. Tape storage Since 1994, when it acquired the Digital Linear Tape product line from Digital, Quantum has sold tape storage products, including tape drives, media and automation. In 2007, Quantum discontinued development of the DLT line in favor of Linear Tape-Open, which it began selling in 2005 following its acquisition of Certance. In 2012, Quantum introduced its Scalar LTFS (Linear Tape File System) appliance, which offers new modes of portability and user accessibility for archived content on LTO tape. In 2016, Quantum refreshed its Scalar LTO tape library family and added an appliance for rich media archiving. The three new systems are part of the Quantum Scalar Storage Platform aimed at handling large-scale unstructured data. The Scalar i3 and i6 support LTO-6 and LTO-7 tapes. The Quantum Scalar i3 is designed for small to medium-sized businesses and departmental configurations. It scales up to 3 PB in a 12U rack space. The Quantum Scalar i6 is a midrange library for small enterprises. 
It scales up to 12 PB within a single 48U rack. The StorNext AEL6 archiving appliance combines the Quantum Scalar i6 library with Quantum's StorNext data management software for archive storage. It has self-healing auto-migration and targets rich media use cases. Backup appliances Quantum introduced its first disk-based backup and recovery product, the DX30, in 2002 and has continued to build out this product line. At the end of 2006, shortly after its acquisition of Advanced Digital Information Corporation (ADIC), Quantum announced the first of its DXi-Series products incorporating data deduplication technology, which ADIC had acquired from a small Australian company called Rocksoft earlier that year. Quantum expanded and enhanced this product line. DXi-Series products incorporate Quantum's data deduplication technology, providing typical data reduction ratios of 15:1, or 93%. The company offers both target and source-based deduplication as well as integrated path-to-tape capability. DXi works with all major backup applications, including Symantec's OpenStorage (OST) API, Oracle SBT API, and Veeam DMS, and supports everything from remote offices to corporate data centers. Quantum includes almost all software licenses for each model in the base price. In 2011, in addition to its DXi-Series of disk backup products, Quantum offered its RDX removable disk libraries and NDX-8 NAS appliances for data protection in small business environments. In 2012, Quantum announced a virtual deduplication appliance, the DXi V1000. In January 2019, Quantum refreshed its DXi series with the addition of the DXi9000 and DXi4800. The DXi9000 targets the enterprise market, scaling from 51 TB to 1 petabyte of usable capacity. The 12 TB hard drives allow for more storage using less physical space. The DXi4800 is a smaller-scale appliance targeting midmarket organizations and remote sites. Virtual machine data protection Quantum's vmPRO software and appliances are used for protecting virtual machine (VM) data. vmPRO software works with DXi appliances and users' existing backup applications to integrate VM backup and recovery into their existing data protection processes. It auto-discovers VMs and presents a file system view, allowing users to back up VMs or files within VMs without adding VM-specific agents. When data is read through the vmPRO software, inactive data is filtered out, reducing backup volumes by up to 75% and boosting deduplication rates. To support fast recovery, vmPRO software augments traditional backup with a simple VM snapshot utility that creates native-format VM copies on secondary disk, allowing restores at the VM or single-file level. In March 2012, Quantum announced that its vmPRO technology and DXi V1000 virtual appliance had been selected by Xerox as a key component of Xerox's cloud backup and disaster recovery (DR) services. In August 2012, Quantum announced Q-Cloud, its own branded cloud-based data protection service, which is also based on vmPRO and DXi technology. Media asset management software CatDV is a media management and workflow automation platform that helps organizations manage large volumes of unstructured data such as video, images, audio files, and PDF digital assets. The software catalogs and analyzes these files and can be used with Quantum's StorNext file storage.
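For reference, the two figures quoted above for the DXi series are the same claim stated two ways: a reduction ratio of r:1 means only 1/r of the original data is stored, so 15:1 corresponds to roughly a 93% reduction. A one-line check (Python, purely illustrative):

    ratio = 15
    print(f"reduction at {ratio}:1 = {1 - 1/ratio:.1%}")   # -> reduction at 15:1 = 93.3%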
Object storage In late 2012, Quantum introduced the Lattus product family, OEMed from Amplidata, an object storage system composed of storage nodes, access nodes and controller nodes for large data stores. Lattus-X was the first disk-based archive in the Lattus family and includes a native HTTP REST interface as well as CIFS and NFS access for applications. In 2020, Quantum entered into an agreement with Western Digital Technologies, Inc. to acquire its ActiveScale object storage business. ActiveScale allows companies to manage, protect, and preserve unstructured data, from a few hundred terabytes to tens of petabytes. It is used in industries such as media and entertainment, surveillance, big data, genomics, HPC, telecom, and medical imaging. Video surveillance storage Quantum has a number of products that capture and store video data, including video recording servers and hyperconverged storage systems. The VS-HCI series provides hyperconverged infrastructure for surveillance recording and video management. In July 2021, Quantum acquired assets and intellectual property from Pivot3, a hyperconverged infrastructure technology company. Pivot3's video surveillance appliances, NVRs, management applications, and scale-out hyperconverged software are sold under Quantum's VS-Series product portfolio. Autonomous vehicle edge storage Quantum's R-Series Edge Storage is a ruggedized storage system that captures large amounts of data generated by vehicles. The removable storage allows the data to be quickly transferred to a centralized data center. References Manufacturing companies based in San Jose, California Companies listed on the New York Stock Exchange Computer storage companies Computer companies established in 1980 1980 establishments in California
23094504
https://en.wikipedia.org/wiki/IEEE%20802.1aq
IEEE 802.1aq
Shortest Path Bridging (SPB), specified in the IEEE 802.1aq standard, is a computer networking technology intended to simplify the creation and configuration of networks, while enabling multipath routing. It is the replacement for the older spanning tree protocols: IEEE 802.1D, IEEE 802.1w, IEEE 802.1s. These blocked any redundant paths that could result in a layer 2 loop, whereas SPB allows all paths to be active with multiple equal cost paths, provides much larger layer 2 topologies, supports faster convergence times, and improves the efficiency by allowing traffic to load share across all paths of a mesh network. It is designed to virtually eliminate human error during configuration and preserves the plug-and-play nature that established Ethernet as the de facto protocol at layer 2. The technology provides logical Ethernet networks on native Ethernet infrastructures using a link state protocol to advertise both topology and logical network membership. Packets are encapsulated at the edge either in media access control-in-media access control (MAC-in-MAC) 802.1ah or tagged 802.1Q/802.1ad frames and transported only to other members of the logical network. Unicast, multicast, and broadcast are supported and all routing is on symmetric shortest paths. The control plane is based on the Intermediate System to Intermediate System (IS-IS) routing protocol, leveraging a small number of extensions defined in RFC 6329. History On 4 March 2006 the working group posted 802.1aq draft 0.1. In December 2011, SPB was evaluated by the Joint Interoperability Test Command (JITC) and approved for deployment within the US Department of Defense (DoD) because of the ease in integrated OA&M and interoperability with current protocols. In March 2012 the IEEE approved the 802.1aq standard. In 2012, David Allan and Nigel Bragg said in 802.1aq Shortest Path Bridging Design and Evolution: The Architect's Perspective that shortest path bridging is one of the most significant enhancements in Ethernet's history. In May 2013, the first public multi-vendor interoperability was demonstrated as SPB served as the backbone for Interop 2013 in Las Vegas. The 2014 Winter Olympics were the first "fabric-enabled" Games using SPB "IEEE 802.1aq" technology. During the games this fabric network could handle up to 54 Tbit/s of traffic. In 2013 and 2014 SPB was used to build the InteropNet backbone with only one-tenth the resources of prior years. During Interop 2014 SPB was used as the backbone protocol which can enable software-defined networking (SDN) functionalities. Associated protocols IEEE 802.1Q-2014 - Bridges and Bridged Networks - This standard incorporates Shortest Path Bridging (IEEE 802.1aq) with the following: IEEE Std 802.1Q-2011, IEEE Std 802.1Qbe-2011, IEEE Std 802.1Qbc-2011, IEEE Std 802.1Qbb-2011, IEEE Std 802.1Qaz-2011, IEEE Std 802.1Qbf-2011, IEEE Std 802.1Qbg-2012, IEEE Std 802.1Q-2011/Cor 2–2012, and IEEE Std 802.1Qbp-2014, and much functionality previously specified in 802.1D. IEEE 802.1ag - Connectivity Fault Management (CFM) IEEE 802.1Qbp - Equal Cost Multiple Paths in Shortest Path Bridging IEEE P802.1Qcj - Automatic Attachment to Provider Backbone Bridging (PBB) services RFC 6329 - IS-IS Extensions Supporting IEEE 802.1aq Shortest Path Bridging RFC 6329 The Intermediate System to Intermediate System protocol (IS-IS), is defined in the IETF proposed standard RFC 6329, is used as the control plane for SPB. 
SPB requires no state machine or other substantive changes to IS-IS, and simply requires a new Network Layer Protocol Identifier (NLPID) and set of TLVs. SPB allows for shortest-path forwarding in an Ethernet mesh network context utilizing multiple equal cost paths. This permits SPB to support large Layer 2 topologies, with faster convergence, and vastly improved use of the mesh topology. Combined with this is single point provisioning for logical connectivity membership. IS-IS is therefore augmented with a small number of TLVs and sub-TLVs, and supports two Ethernet encapsulating data paths, 802.1ad Provider Bridges (PB) and 802.1ah Provider Backbone Bridges (PBB). SPB is designed to run in parallel with other network layer protocols such as IPv4 and IPv6. Standards mandate that the failure of two nodes to establish an SPB adjacency will not have collateral impact, such as the rejection of an adjacency for other network layer protocols (e.g. OSPF). Protocol Extensions The IS-IS extensions defined in RFC 6329 to deliver standardized support for 802.1aq SPB are: IS-IS Hello (IIH) Protocol Extensions Node Information Extensions Adjacency Information Extensions Service Information Extensions IS-IS Hello (IIH) Protocol Extensions 802.1aq has been designed to operate in parallel with other network layer protocols such as IPv4 and IPv6; therefore, failure of two nodes to establish an SPB adjacency will not cause network layer protocols to also reject an adjacency. 802.1aq has been assigned the Network Layer Protocol ID (NLPID) value 0xC1, as per RFC 6328, and is used by SPB Bridges to indicate their ability to form adjacencies and operate as part of a 802.1aq domain. 802.1aq frames flow on adjacencies that advertise this NLPID in both directions, and nodes regard an adjacency that has not advertised in both directions as non-existent (infinite link metric). 802.1aq augments the normal IIH PDU with three new TLVs, which like all other SPB TLVs, travel within Multi-Topology TLVs, therefore allowing multiple logical instances of SPB within a single IS-IS protocol instance. SPB can use many VIDs, agreeing on which VIDs are used for which purposes. The IIH PDUs carry a digest of all the used VIDs, referred to as the Multiple Spanning Tree Configuration TLV which uses a common and compact encoding reused from 802.1Q. For the purposes of loop prevention SPB neighbors may also support a mechanism to verify that the contents of their topology databases are synchronized. Exchanging digests of SPB topology information, using the optional SPB-Digest sub-TLV, allows nodes to compare information and take specific action where a mismatch in topology is indicated. Finally, SPB needs to know which Shortest Path Tree (SPT) sets are being used by which VIDs, and this is carried in the Base VLAN Identifiers TLV. Node Information Extensions All SPB nodal information extensions travel within a new Multi-Topology (MT) capability TLV. There can be one or many MT-Capability TLVs present, depending on the amount of information that needs to be carried. The SPB Instance sub-TLV gives the Shortest Path Source ID (SPSourceID) for this node/topology instance. This used in the formation of Multicast Destination Addresses (DAs) for frames originating from this node/instance. There are multiple ECT algorithms defined for SPB; however, for the future, additional algorithms may be defined, including but not limited to ECMP- or hash-based behaviors and (*,G) Multicast trees. 
These algorithms will use this optional TLV to define new algorithm parametric data. For tie-breaking parameters, there are two broad classes of algorithm, one that uses nodal data to break ties and one that uses link data to break ties. The SPB Instance Opaque Equal cost Tree Algorithm TLV is used to associate opaque tie-breaking data with a node. Adjacency Information Extensions The SPB Link Metric sub-TLV occurs within the Multi-Topology Intermediate System Neighbor TLV or within the Extended IS Reachability TLV. Where this sub-TLV is not present for an IS-IS adjacency, then that adjacency will not carry SPB traffic for the given topology instance. There are multiple ECT algorithms defined for SPB; however, for the future, additional algorithms may be defined; similarly the SPB Adjacency Opaque Equal Cost Tree Algorithm TLV also occurs within the Multi-Topology Intermediate System TLV or the Extended IS Reachability TLV. Service Information Extensions The SPBM Service Identifier and Unicast Address TLV is used to introduce service group membership on the originating node and/or to advertise an additional B-MAC Unicast Address present on, or reachable by the node. The SPBV MAC Address TLV is the IS-IS sub-TLV used for advertisement of Group MAC Addresses in SPBV mode. Benefits Shortest Path Bridging - VID (SPBV) and Shortest Path Bridging - MAC (SPBM) are two operating modes of 802.1aq, and are described in more detail below. Both inherit key benefits of link state routing: the ability to use all available physical connectivity, because loop avoidance uses a Control Plane with a global view of network topology fast restoration of connectivity after failure, again because of Link State routing's global view of network topology under failure, the property that only directly affected traffic is impacted during restoration; all unaffected traffic just continues rapid restoration of broadcast and multicast connectivity, because IS-IS floods all of the required information in the SPB extensions to IS-IS, thereby allowing unicast and multicast connectivity to be installed in parallel, with no need for a second phase signaling process to run over the converged unicast topology to compute and install multicast trees Virtualisation is becoming an increasingly important aspect of a number of key applications, in both carrier and enterprise space, and SPBM, with its MAC-in-MAC datapath providing complete separation between client and server layers, is uniquely suitable for these. "Data Centre virtualisation" articulates the desire to flexibly and efficiently harness available compute resources in a way that may rapidly be modified to respond to varying application demands, without the need to dedicate physical resources to a specific application. One aspect of this is server virtualisation. The other is connectivity virtualisation, because a physically distributed set of server resources must be attached to a single IP subnet, and modifiable in an operationally simple and robust way. SPBM delivers this; because of its client-server model, it offers a perfect emulation of a transparent Ethernet LAN segment, which is the IP subnet seen at layer 3. A key component of how it does this is implementing VLANs with scoped multicast trees, which means no egress discard of broadcast/unknown traffic, a feature common to approaches that use a small number of shared trees, hence the network does not simply degrade with size as the percentage of frames discarded goes up. 
It also supports "single touch" provisioning, so that configuration is simple and robust; the port of a virtual server must simply be bound locally to the SPBM I-SID identifying the LAN segment, after which IS-IS for SPB floods this binding, and all nodes that need to install forwarding state to implement the LAN segment do so automatically. The carrier-space equivalent of this application is the delivery of Ethernet VPN services to enterprises over common carrier infrastructure. The required attributes are fundamentally the same: complete transparency for customer Ethernet services (both point-to-point and LAN), and complete isolation between one customer's traffic and that of all other customers. The multiple virtual LAN segment model provides this, and the single-touch provisioning model eases carrier operations. Furthermore, the MAC-in-MAC datapath allows the carrier to deploy the "best in class" Ethernet OAM suite (IEEE 802.1ag, etc.), entirely transparently and independently from any OAM which a customer may choose to run. A further consequence of SPBM's transparency in both dataplane and control plane is that it provides a perfect, "no compromise" delivery of the complete MEF 6.1 service set. This includes not only E-LINE and E-LAN constructs, but also E-TREE (hub-and-spoke) connectivity. The latter is clearly very relevant to enterprise customers of carrier VPN/MPLS services which have this network structure internally. It also provides the carrier with the toolkit to support geo-redundant broadband backhaul; in this application, many DSLAMs or other access equipment must be backhauled to multiple Broadband Remote Access Server (BRAS) sites, with application-determined binding of sessions to a BRAS. However, DSLAMs must not be allowed to communicate with each other, because carriers then lose the ability to control peer-to-peer connectivity. MEF E-TREE does just this, and further provides an efficient multicast fabric for the distribution of IPTV. SPBM offers both the ideal multicast replication model, where packets are replicated only at fork points in the shortest path tree that connects members, and also the less state-intensive head-end replication model, where in essence serial unicast packets are sent to all other members along the same shortest path first tree. These two models are selected by specifying properties of the service at the edge which affect the transit node decisions on multicast state installation. This allows for a trade-off to be made between optimum transit replication points (with their larger state costs) vs. the reduced core state (but much more traffic) of the head-end replication model. These selections can be different for different members of the same Individual Service ID (I-SID), allowing different trade-offs to be made for different members. Figure 5 below is a quick way to understand what SPBM is doing on the scale of the entire network. Figure 5 shows how a 7-member E-LAN is created from the edge membership information and the deterministic distributed calculation of per-source, per-service trees with transit replication. Head-end replication is not shown as it is trivial and simply uses the existing unicast FIBs to forward copies serially to the known other receivers. Operations and management 802.1aq builds on all existing Ethernet operations, administration and management (OA&M).
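The trade-off between the two replication models described above can be made concrete with a small sketch (Python; the topology, node numbers and helper name are invented for illustration). It counts how many copies of a frame cross each link when the head end replicates serially along each member's shortest path, versus transit replication where each tree link carries exactly one copy.

    from collections import Counter

    def link_copy_counts(paths_to_members):
        # paths_to_members: one shortest path (a node sequence) from the source
        # to each other member of the I-SID.
        head_end = Counter()   # serial unicast: every path carries its own copy
        tree_links = set()     # transit replication: one copy per link of the tree
        for path in paths_to_members:
            for link in zip(path, path[1:]):
                head_end[link] += 1
                tree_links.add(link)
        return head_end, {link: 1 for link in tree_links}

    # Hypothetical 4-member service rooted at node 7.
    head_end, transit = link_copy_counts([[7, 0, 4], [7, 0, 1, 5], [7, 2, 6]])
    print(head_end[(7, 0)], transit[(7, 0)])   # 2 copies vs. 1 copy on link 7-0

The head-end model needs no multicast state in the core but pushes more traffic onto the links nearest the source; the transit model is the reverse, which is exactly the trade-off noted above.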
Since 802.1aq ensures that its unicast and multicast packets for a given virtual LAN (VLAN) follow the same forward and reverse path and use completely standard 802 encapsulations, all of the methods of 802.1ag and Y.1731 operate unchanged on an 802.1aq network. See IEEE 802.1ag and ITU-recommendation Y.1731. High level 802.1aq is the Institute of Electrical and Electronics Engineers (IEEE) sanctioned link state Ethernet control plane for all IEEE VLANs covered in IEEE 802.1Q. Shortest Path Bridging virtual local area network identifier (VLAN ID) or Shortest Path Bridging VID (SPBV) provides capability that is backwards compatible with spanning tree technologies. Shortest Path Bridging Media Access Control (MAC) or (SPBM), (previously known as Provider Backbone Bridge PBB) provides additional values which capitalize on Provider Backbone Bridge (PBB) capabilities. SPB (the generic term for both) combines an Ethernet data path (either IEEE 802.1Q in the case of SPBV, or Provider Backbone Bridges (PBBs) IEEE 802.1ah in the case of SPBM) with an IS-IS link state control protocol running between Shortest Path bridges (network-to-network interface (NNI) links). The link state protocol is used to discover and advertise the network topology and compute shortest path trees (SPT) from all bridges in the SPT Region. In SPBM, the Backbone MAC (B-MAC) addresses of the participating nodes and also the service membership information for interfaces to non-participating devices (user network interface (UNI) ports) is distributed. Topology data is then input to a calculation engine which computes symmetric shortest path trees based on minimum cost from each participating node to all other participating nodes. In SPBV these trees provide a shortest path tree where individual MAC address can be learned and Group Address membership can be distributed. In SPBM the shortest path trees are then used to populate forwarding tables for each participating node's individual B-MAC addresses and for Group addresses; Group multicast trees are sub trees of the default shortest path tree formed by (Source, Group) pairing. Depending on the topology several different equal cost multi path trees are possible and SPB supports multiple algorithms per IS-IS instance. In SPB as with other link state based protocols, the computations are done in a distributed fashion. Each node computes the Ethernet compliant forwarding behavior independently based on a normally synchronized common view of the network (at scales of about 1000 nodes or less) and the service attachment points (user network interface (UNI) ports). Ethernet filtering Database (or forwarding) tables are populated locally to independently and deterministically implement its portion of the network forwarding behavior. The two different flavors of data path give rise to two slightly different versions of this protocol. One (SPBM) is intended where complete isolation of many separate instances of client LANs and their associated device MAC addresses is desired, and it therefore uses a full encapsulation (MAC-in-MAC a.k.a. IEEE 802.1ah). The other (SPBV) is intended where such isolation of client device MAC addresses is not necessary, and it reuses only the existing VLAN tag a.k.a. IEEE 802.1Q on participating network-to-network interface (NNI) links. Chronologically SPBV came first, with the project originally being conceived to address scalability and convergence of MSTP. 
At the time the specification of Provider Backbone bridging was progressing and it became apparent that leveraging both the PBB data plane and a link state control plane would significantly extend Ethernet's capabilities and applications. Provider Link State Bridging (PLSB) was a strawman proposal brought to the IEEE 802.1aq Shortest Path Bridging Working Group, in order to provide a concrete example of such a system. As IEEE 802.1aq standardization has progressed, some of the detailed mechanisms proposed by PLSB have been replaced by functional equivalents, but all of the key concepts embodied in PLSB are being carried forward into the standard. The two flavors (SPBV and SPBM) will be described separately although the differences are almost entirely in the data plane. Shortest Path Bridging-VID Shortest Path bridging enables shortest path trees for VLAN Bridges all IEEE 802.1 data planes and SPB is the term used in general. Recently there has been a lot of focus on SPBM as explained due to its ability to control the new PBB data plane and leverage certain capabilities such as removing the need to do B-MAC learning and automatically creating individual (unicast) and group (multicast) Trees. SPBV was actually the original project that endeavored to enable Ethernet VLANs to better utilize mesh networks. A primary feature of Shortest Path bridging is the ability to use Link State IS-IS to learn network topology. In SPBV the mechanism used to identify the tree is to use a different Shortest Path VLAN ID (VID) for each source bridge. The IS-IS topology is leveraged both to allocate unique SPVIDs and to enable shortest path forwarding for individual and group addresses. Originally targeted for small low configuration networks SPB grew into a larger project encompassing the latest provider control plane for SPBV and harmonizing the concepts of Ethernet data plane. Proponents of SPB believe that Ethernet can leverage link state and maintain the attributes that have made Ethernet one of the most encompassing data plane transport technologies. When we refer to Ethernet it is the layer 2 frame format defined by IEEE 802.3 and IEEE 802.1. Ethernet VLAN bridging IEEE 802.1Q is the frame forwarding paradigm that fully supports higher level protocols such as IP. SPB defines a shortest path Region which is the boundary of the shortest path topology and the rest of the VLAN topology (which may be any number of legacy bridges.) SPB operates by learning the SPB capable bridges and growing the Region to include the SPB capable bridges that have the same Base VID and MSTID configuration digest (Allocation of VIDs for SPB purposes). SPBV builds shortest path trees that support Loop Prevention and optionally support loop mitigation on the SPVID. SPBV still allows learning of Ethernet MAC addresses but it can distribute multicast address that can be used to prune the shortest path trees according to the multicast membership either through Multiple MAC Registration Protocol (MMRP) or directly using IS-IS distribution of multicast membership. SPBV builds shortest path trees but also interworks with legacy bridges running Rapid Spanning Tree Protocol and Multiple Spanning Tree Protocol. SPBV uses techniques from MSTP Regions to interwork with non-SPT regions behaving logically as a large distributed bridge as viewed from outside the region. SPBV supports shortest path trees but SPBV also builds a spanning tree which is computed from the link state database and uses the Base VID. 
This means that SPBV can use this traditional spanning tree for computation of the Common and Internal Spanning Tree (CIST). The CIST is the default tree used to interwork with other legacy bridges. It also serves as a fall back spanning tree if there are configuration problems with SPBV. SPBV has been designed to manage a moderate number of bridges. SPBV differs from SPBM in that MAC addresses are learned on all bridges that lie on the shortest path and a shared VLAN learning is used since destination MACs may be associated with multiple SPVIDs. SPBV learns all MACs it forwards even outside the SPBV region. Shortest Path Bridging-MAC SPBM reuses the PBB data plane which does not require that the Backbone Core Bridges (BCB) learn encapsulated client addresses. At the edge of the network the C-MAC (client) addresses are learned. SPBM is very similar to PLSB (Provider Link State Bridging) using the same data and control planes but the format and contents of the control messages in PLSB are not compatible. Individual MAC frames (unicast traffic) from an Ethernet attached device that are received at the SPBM edge are encapsulated in a PBB (mac-in-mac) IEEE 802.1ah header and then traverse the IEEE 802.1aq network unchanged until they are stripped of the encapsulation as they egress back to the non-participating attached network at the far side of the participating network. Ethernet destination addresses (from UNI port attached devices) perform learning over the logical LAN and are forwarded to the appropriate participating B-MAC address to reach the far end Ethernet destination. In this manner Ethernet MAC addresses are never looked up in the core of an IEEE 802.1aq network. When comparing SPBM to PBB, the behavior is almost identical to a PBB IEEE 802.1ah network. PBB does not specify how B-MAC addresses are learned and PBB may use a spanning tree to control the B-VLAN. In SPBM the main difference is that B-MAC address are distributed or computed in the control plane, eliminating the B-MAC learning in PBB. Also SPBM ensures that the route followed is shortest path tree. The forward and reverse paths used for unicast and multicast traffic in an IEEE 802.1aq network are symmetric. This symmetry permits the normal Ethernet Continuity Fault Messages (CFM) IEEE 802.1ag to operate unchanged for SPBV and SPBM and has desirable properties with respect to time distribution protocols such as Precision Time Protocol (PTP Version 2). Also existing Ethernet loop prevention is augmented by loop mitigation to provide fast data plane convergence. Group address and unknown destination individual frames are optimally transmitted to only members of the same Ethernet service. IEEE 802.1aq supports the creation of thousands of logical Ethernet services in the form of E-LINE, E-LAN or E-TREE constructs which are formed between non-participating logical ports of the IEEE 802.1aq network. These group address packets are encapsulated with a PBB header which indicates the source participating address in the SA while the DA indicates the locally significant group address this frame should be forwarded on and which source bridge originated the frame. The IEEE 802.1aq multicast forwarding tables are created based on computations such that every bridge which is on the shortest path between a pair of bridges which are members of the same service group will create proper forwarding database (FDB) state to forward or replicate frames it receives to that members of that service group. 
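The MAC-in-MAC encapsulation performed at the SPBM edge, as described in this section, can be sketched roughly as follows (Python; simplified field names for illustration, not the exact IEEE 802.1ah bit layout).

    from dataclasses import dataclass

    @dataclass
    class ClientFrame:
        c_da: str            # client (C-MAC) destination, learned only at the edge
        c_sa: str            # client (C-MAC) source
        payload: bytes

    @dataclass
    class PBBFrame:
        b_da: str            # backbone destination: egress bridge B-MAC or a group DA
        b_sa: str            # backbone source: ingress bridge B-MAC
        b_vid: int           # backbone VLAN, selects the ECMT shortest-path set
        i_sid: int           # 24-bit service identifier
        inner: ClientFrame   # the client frame, carried unchanged across the core

    def encapsulate(frame, ingress_bmac, egress_bmac, b_vid, i_sid):
        # Core (BCB) bridges forward purely on b_da/b_vid and never inspect 'inner'.
        return PBBFrame(egress_bmac, ingress_bmac, b_vid, i_sid, frame)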
Since the group address computation produce shortest path trees, there is only ever one copy of a multicast packet on any given link. Since only bridges on a shortest path between participating logical ports create forwarding database (FDB) state the multicast makes the efficient use of network resources. The actual group address forwarding operation operates more or less identically to classical Ethernet, the backbone destination address (B-DA)+ backbone VLAN identifier (B-VID) combination are looked up to find the egress set of next hops. The only difference compared with classical Ethernet is that reverse learning is disabled for participating bridge backbone media access control (B-MAC) addresses and is replaced with an ingress check and discard (when the frame arrives on an incoming interface from an unexpected source). Learning is however implemented at the edges of the SPBM multicast tree to learn the B-MAC to MAC address relationship for correct individual frame encapsulation in the reverse direction (as packets arrive over the Interface). Properly implemented an IEEE 802.1aq network can support up to 1000 participating bridges and provide tens of thousands of layer 2 E-LAN services to Ethernet devices. This can be done by simply configuring the ports facing the Ethernet devices to indicate they are members of a given service. As new members come and go, the IS-IS protocol will advertise the I-SID membership changes and the computations will grow or shrink the trees in the participating node network as necessary to maintain the efficient multicast property for that service. IEEE 802.1aq has the property that only the point of attachment of a service needs configuration when a new attachment point comes or goes. The trees produced by the computations will automatically be extended or pruned as necessary to maintain connectivity. In some existing implementations this property is used to automatically (as opposed to through configuration) add or remove attachment points for dual-homed technologies such as rings to maintain optimum packet flow between a nonparticipating ring protocol and the IEEE 802.1aq network by activating a secondary attachment point and deactivating a primary attachment point. Failure recovery Failure recovery is as per normal IS-IS with the link failure being advertised and new computations being performed, resulting in new FDB tables. Since no Ethernet addresses are advertised or known by this protocol, there is no re-learning required by the SPBM core and its learned encapsulations are unaffected by a transit node or link failure. Fast link failure detection may be performed using IEEE 802.1ag Continuity Check Messages (CCMs) which test link status and report a failure to the IS-IS protocol. This allows much faster failure detection than is possible using the IS-IS hello message loss mechanisms. Both SPBV and SPBM inherit the rapid convergence of a link state control plane. A special attribute of SPBM is its ability to rebuild multicast trees in a similar time to unicast convergence, because it substitutes computation for signaling. When an SPBM bridge has performed the computations on a topology database, it knows whether it is on the shortest path between a root and one or more leaves of the SPT and can install state accordingly. Convergence is not gated by incremental discovery of a bridge's place on a multicast tree by the use of separate signaling transactions. 
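The forwarding behaviour just described, a lookup on backbone DA plus B-VID with an ingress check in place of reverse learning, can be sketched as follows (Python; the table layouts are hypothetical).

    def forward(fib, expected_in, frame, in_port):
        # fib:         {(b_da, b_vid): set of out ports}, built from the SPT computation
        # expected_in: {(b_sa, b_vid): port}, the port facing the source on the same tree
        #
        # The ingress check replaces reverse learning for participating B-MACs:
        # discard frames that arrive from an unexpected direction.
        want = expected_in.get((frame["b_sa"], frame["b_vid"]))
        if want is not None and want != in_port:
            return set()                                     # discard
        return fib.get((frame["b_da"], frame["b_vid"]), set())

    fib = {("00:00:00:00:05:00", 101): {"interface/2"}}
    expected_in = {("00:00:00:00:07:00", 101): "interface/5"}
    frame = {"b_da": "00:00:00:00:05:00", "b_sa": "00:00:00:00:07:00", "b_vid": 101}
    print(forward(fib, expected_in, frame, "interface/5"))   # -> {'interface/2'}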
However, SPBM on a node does not operate completely independently of its peers, and enforces agreement on the current network topology with its peers. This very efficient mechanism uses exchange of a single digest of link state covering the entire network view, and does not need agreement on each path to each root individually. The result is that the volume of messaging exchanged to converge the network is in proportion to the incremental change in topology and not the number of multicast trees in the network. A simple link event that may change many trees is communicated by signaling the link event only; the consequent tree construction is performed by local computation at each node. The addition of a single service access point to a service instance involves only the announcement of the I-SID, regardless of the number of trees. Similarly the removal of a bridge, which might involve the rebuilding of hundreds to thousands of trees, is signaled only with a few link state updates. Commercial offerings will likely offer SPB over multi-chassis lag. In this environment multiple switch chassis appear as a single switch to the SPB control plane, and multiple links between pairs of chassis appear as an aggregate link. In this context a single link or node failure is not seen by the control plane and is handled locally resulting in sub 50ms recovery times. Animations Following are three animated GIFs which help to show the behavior of 802.1aq. The first of these gifs, shown in Figure 5, demonstrates the routing in a 66 node network where we have created a 7-member E-LAN using ISID 100. In this example we show the equal cost tree (ECT) created from each member to reach all of the other members. We cycle through each member to show the full set of trees created for this service. We pause at one point to show the symmetry of routing between two of the nodes and emphasize it with a red line. In each case the source of the tree is highlighted with a small purple V. The second of these animated gifs, shown in Figure 6, demonstrates 8 ECT paths in the same 66 node network as Figure 4. In each subsequent animated frame the same source is used (in purple) but a different destination is shown (in yellow). For each frame, all of the shortest paths are shown superimposed between the source and destination. When two shortest paths traverse the same hop, the thickness of the lines being drawn is increased. In addition to the 66 node network, a small multi level Data Center style network is also shown with sources and destinations both within the servers (at the bottom) and from servers to the router layer at the top. This animation helps to show the diversity of the ECT being produced. The last of these animated gifs, shown in Figure 7, demonstrates source destination ECT paths using all 16 of the standard algorithms currently defined. Details Equal cost multi tree Sixteen equal cost multi tree (ECMT) paths are initially defined, however there are many more possible. ECMT in an IEEE 802.1aq network is more predictable than with internet protocol (IP) or multiprotocol label switching (MPLS) because of symmetry between the forward and reverse paths. The choice as to which ECMT path will be used is therefore an operator assigned head end decision while it is a local / hashing decision with IP/MPLS. 
IEEE 802.1aq, when faced with a choice between two equal link cost paths, uses the following logic for its first ECMT tie breaking algorithm: first, if one path is shorter than the other in terms of hops, the shorter path is chosen, otherwise, the path with the minimum Bridge Identifier { BridgePriority concatenated with (IS-IS SysID) } is chosen. Other ECMT algorithms are created by simply using known permutations of the BridgePriority||SysIds. For example, the second defined ECMT algorithm uses the path with the minimum of the inverse of the BridgeIdentifier and can be thought of as taking the path with the maximum node identifier. For SPBM, each permutation is instantiated as a distinct B-VID. The upper limit of multipath permutations is gated by the number of B-VIDs delegated to 802.1aq operation, a maximum of 4094, although the number of useful path permutations would only require a fraction of the available B-VID space. Fourteen additional ECMT algorithms are defined with different bit masks applied to the BridgeIdentifiers. Since the BridgeIdentfier includes a priority field, it is possible to adjust the ECMT behavior by changing the BridgePriority up or down. A service is assigned to a given ECMT B-VID at the edge of the network by configuration. As a result, non-participating packets associated with that service are encapsulated with the VID associated with the desired ECMT end to end path. All individual and group address traffic associated with this service will therefore use the proper ECMT B-VID and be carried symmetrically end to end on the proper equal cost multi path. Essentially the operator decides which services go in which ECMT paths, unlike a hashing solution used in other systems such as IP/MPLS. Trees can support link aggregation (LAG) groups within a tree "branch" segment where some form of hashing occurs. This symmetric and end to end ECMT behavior gives IEEE 802.1aq a highly predictable behavior and off line engineering tools can accurately model exact data flows. The behavior is also advantageous to networks where one way delay measurements are important. This is because the one way delay can be accurately computed as 1/2 the round trip delay. Such computations are used by time distribution protocols such as IEEE 1588 for frequency and time of day synchronization as required between precision clock sources and wireless base stations. Shown above are three figures [5,6,7] which show 8 and 16 equal cost tree (ECT) behavior in different network topologies. These are composites of screen captures of an 802.1aq network emulator and show the source in purple, the destination in yellow, and then all the computed and available shortest paths in pink. The thicker the line, the more shortest paths use that link. The animations shows three different networks and a variety of source and destination pairs which continually change to help visualize what is happening. The equal cost tree (ECT) algorithms can be almost extended through the use of OPAQUE data which allows extensions beyond the base 16 algorithms more or less infinitely. It is expected that other standards groups or vendors will produce variations on the currently defined algorithms with behaviors suited for different networks styles. It is expected that numerous shared tree models will also be defined, as will hop by hop hash based equal-cost multi-path (ECMP) style behaviors .. all defined by a VID and an algorithm that every node agrees to run. 
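One way to picture the tie-breaking just described is the sketch below (Python). It ranks equal-cost paths by hop count first and then by the sorted, masked Bridge Identifiers of the transit nodes; a zero mask gives the "low" behaviour and an all-ones mask gives the "inverse"/"high" behaviour mentioned above. The toy identifiers and the use of a plain XOR mask are illustrative assumptions, not the standard's exact encoding.

    def ect_key(path, bridge_id, mask):
        # bridge_id: {node: BridgePriority||SysID as an integer}
        transit = [bridge_id[n] ^ mask for n in path[1:-1]]
        return (len(path), sorted(transit))

    def pick_path(equal_cost_paths, bridge_id, mask=0x0):
        # Fewer hops wins; ties are broken on the masked identifiers.
        return min(equal_cost_paths, key=lambda p: ect_key(p, bridge_id, mask))

    bridge_id = {n: n for n in range(8)}            # toy: identifier equals node number
    paths = [[7, 0, 1, 5], [7, 2, 3, 5]]
    print(pick_path(paths, bridge_id, mask=0x0))        # -> [7, 0, 1, 5]  (low)
    print(pick_path(paths, bridge_id, mask=0xFFFFF))    # -> [7, 2, 3, 5]  (high)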
Traffic engineering 802.1aq does not spread traffic on a hop by hop basis. Instead, 802.1aq allows assignment of a Service ID (ISID) to a VLAN ID (VID) at the edge of the network. A VID will correspond to exactly one of the possible sets of shortest path nodes in the network and will never stray from that routing. If there are 10 or so shortest paths between different nodes, it is possible to assign different services to different paths and to know that the traffic for a given service will follow exactly the given path. In this manner traffic can easily be assigned to the desired shortest path. In the event that one of the paths becomes overloaded it is possible to move some services off that shortest path by reassigning those service's ISID to a different, less loaded, VID at the edges of the network. The deterministic nature of the routing makes offline prediction/computation/experimentation of the network loading much simpler since actual routes are not dependent on the contents of the packet headers with the exception of the VLAN identifier. Figure 4 shows four different equal-cost paths between nodes 7 and 5. An operator can achieve a relatively good balance of traffic across the cut between nodes [0 and 2] and [1 and 3] by assigning the services at nodes 7 and 5 to one of the four desired VIDs. Using more than 4 equal cost tree (ECT) paths in the network will likely allow all 4 of these paths to be used. Balance can also be achieved between nodes 6 and 4 in a similar manner. In the event that an operator does not wish to manually assign services to shortest paths it is a simple matter for a switch vendor to allow a simple hash of the ISID to one of the available VIDS to give a degree of non-engineered spreading. For example, the ISID modulo the number of ECT-VIDs could be used to decide on the actual relative VID to use. In the event that the ECT paths are not sufficiently diverse the operator has the option of adjusting the inputs to the distributed ECT algorithms to apply attraction or repulsion from a given node by adjusting that node's Bridge Priority. This can be experimented with via offline tools until the desired routes are achieved at which point the bias can be applied to the real network and then ISIDs can be moved to the resulting routes. Looking at the animations in Figure 6 shows the diversity available for traffic engineering in a 66 node network. In this animation, there are 8 ECT paths available from each highlighted source to destination and therefore services could be assigned to 8 different pools based on the VID. One such initial assignment in Figure 6 could therefore be (ISID modulo 8) with subsequent fine tuning as required. Example We will work through SPBM behavior on a small example, with emphasis on the shortest-path trees for unicast and multicast. The network shown in Figure 1 consists of 8 participating nodes numbered 0 through 7. These would be switches or routers running the IEEE 802.1aq protocol. Each of the 8 participating nodes has a number of adjacencies numbered 1..5. These would likely correspond to interface indexes, or possibly port numbers. Since 802.1aq does not support parallel interfaces each interface corresponds to an adjacency. 
The port / interface index numbers are of course local and are shown because the output of the computations produce an interface index (in the case of unicast) or a set of interface indexes (in the case of multicast) which are part of the forwarding information base (FIB) together with a destination MAC address and backbone VID. The network has a fully meshed inner core of four nodes (0..3) and then four outer nodes (4,5,6 and 7), each dual-homed onto a pair of inner core nodes. Normally when nodes come from the factory they have a MAC address assigned which becomes a node identifier but for the purpose of this example we will assume that the nodes have MAC addresses of the form 00:00:00:00:N:00 where N is the node id (0..7) from Figure 1. Therefore, node 2 has a MAC address of 00:00:00:00:02:00. Node 2 is connected to node 7 (00:00:00:00:07:00) via node 2's interface/5. The IS-IS protocol runs on all the links shown since they are between participating nodes. The IS-IS hello protocol has a few additions for 802.1aq including information about backbone VIDs to be used by the protocol. We will assume that the operator has chosen to use backbone VIDs 101 and 102 for this instance of 802.1aq on this network. The node will use their MAC addresses as the IS-IS SysId and join a single IS-IS level and exchange link-state packets (LSPs in IS-IS terminology). The LSPs will contain node information and link information such that every node will learn the full topology of the network. Since we have not specified any link weights in this example, the IS-IS protocol will pick a default link metric for all links, therefore all routing will be minimum hop count. After topology discovery the next step is distributed calculation of the unicast routes for both ECMP VIDs and population of the unicast forwarding tables (FIBs). Consider the route from Node 7 to Node 5: there are a number of equal-cost paths. 802.1aq specifies how to choose two of them: the first is referred to as the Low PATH ID path. This is the path which has the minimum node id on it. In this case the Low PATH ID path is the 7->0->1->5 path (as shown in red in Figure 2). Therefore, each node on that path will create a forwarding entry toward the MAC address of node five using the first ECMP VID 101. Conversely, 802.1aq specifies a second ECMP tie-breaking algorithm called High PATH ID. This is the path with the maximum node identifier on it and in the example is the 7->2->3->5 path (shown in blue in Figure 2). Node 7 will therefore have a FIB that among other things indicates: MAC 00:00:00:05:00 / vid 101 the next hop is interface/1. MAC 00:00:00:05:00 / vid 102 the next hop is interface/2. Node 5 will have exactly the inverse in its FIB: MAC 00:00:00:07:00 / vid 101 the next hop is interface/1. MAC 00:00:00:07:00 / vid 102 the next hop is interface/2. The intermediate nodes will also produce consistent results so for example node 1 will have the following entries. MAC 00:00:00:07:00 / vid 101 the next hop is interface/5. MAC 00:00:00:07:00 / vid 102 the next hop is interface/4. MAC 00:00:00:05:00 / vid 101 the next hop is interface/2. MAC 00:00:00:05:00 / vid 102 the next hop is interface/2. And Node 2 will have entries as follows: MAC 00:00:00:05:00 / vid 101 the next hop is interface/2. MAC 00:00:00:05:00 / vid 102 the next hop is interface/3. MAC 00:00:00:07:00 / vid 101 the next hop is interface/5. MAC 00:00:00:07:00 / vid 102 the next hop is interface/5. 
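The FIB entries listed above are easy to picture as a lookup table keyed by destination B-MAC and backbone VID; node 7's table might be represented as below (a sketch, using full six-octet addresses consistent with the 00:00:00:00:N:00 convention introduced for this example).

    node7_fib = {
        ("00:00:00:00:05:00", 101): "interface/1",   # low PATH ID path 7->0->1->5
        ("00:00:00:00:05:00", 102): "interface/2",   # high PATH ID path 7->2->3->5
    }

    def next_hop(fib, b_da, b_vid):
        return fib[(b_da, b_vid)]

    print(next_hop(node7_fib, "00:00:00:00:05:00", 101))   # -> interface/1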
If we had an attached non-participating device at Node 7 talking to a non-participating device at Node 5 (for example Device A talks to Device C in Figure 3), they would communicate over one of these shortest paths with a MAC-in-MAC encapsulated frame. The MAC header on any of the NNI links would show an outer source address of 00:00:00:70:00, an outer destination address of 00:00:00:50:00 and a BVID of either 101 or 102 depending on which has been chosen for this set of non-participating ports/vids. The header once inserted at node 7 when received from node A, would not change on any of the links until it egressed back to non-participating Device C at Node 5. All participating devices would do a simple DA+VID lookup to determine the outgoing interface, and would also check that incoming interface is the proper next hop for the packet's SA+VID. The addresses of the participating nodes 00:00:00:00:00:00 ... 00:00:00:07:00 are never learned but are advertised by IS-IS as the node's SysId. Unicast forwarding to a non-participating client (e.g. A, B, C, D from Figure 3) address is of course only possible when the first hop participating node (e.g. 7) is able to know which last hop participating node (e.g. 5) is attached to the desired non-participating node (e.g. C). Since this information is not advertised by IEEE 802.1aq it has to be learned. The mechanism for learning is identical to IEEE 802.1ah, in short, the corresponding outer MAC unicast DA, if not known is replaced by a multicast DA and when a response is received, the SA of that response now tells us the DA to use to reach the non-participating node that sourced the response. e.g. node 7 learns that C is reached by node 5. Since we wish to group/scope sets of non-participating ports into services and prevent them from multicasting to each other, IEEE 802.1aq provides mechanism for per source, per service multicast forwarding and defines a special multicast destination address format to provide this. Since the multicast address must uniquely identify the tree, and because there is a tree per source per unique service, the multicast address contains two components, a service component in the low order 24 bits and a network-wide unique identifier in the upper 22 bits. Since this is a multicast address the multicast bit is set, and since we are not using the standard OUI space for these manufactured addresses, the Local 'L' bit is set to disambiguate these addresses. In Figure 3 above, this is represented with the DA=[7,O] where the 7 represents packets originating from node 7 and the colored O represents the E-LAN service we are scoped within. Prior to creating multicast forwarding for a service, nodes with ports that face that service must be told they are members. For example, nodes 7,4,5 and 6 are told they are members of the given service, for example service 200, and further that they should be using BVID 101. This is advertised by ISIS and all nodes then do the SPBM computation to determine if they are participating either as a head end or tail end, or a tandem point between other head and tail ends in the service. Since node 0 is a tandem between nodes 7 and 5 it creates a forwarding entry for packets from node 7 on this service, to node 5. Likewise, since it is a tandem between nodes 7 and 4 it creates forwarding state from node 7 for packets in this service to node 4 this results in a true multicast entry where the DA/VID have outputs on two interfaces 1 and 2. 
Figure 3 only shows a single E-LAN service and only the tree from one of the members; however, very large numbers of E-LAN services with membership from 2 to every node in the network can be supported by advertising the membership, computing the tandem behaviors, manufacturing the known multicast addresses and populating the FIBs. The only real limiting factors are the FIB table sizes and the computational power of the individual devices, both of which are growing yearly in leaps and bounds.

Implementation notes 802.1aq takes IS-IS topology information augmented with service attachment (I-SID) information, does a series of computations and produces a forwarding table (filtering table) for unicast and multicast entries. The IS-IS extensions that carry the information required by 802.1aq are given in the isis-layer2 IETF document listed below. An implementation of 802.1aq will first modify the IS-IS hellos to include an NLPID (network layer protocol identifier) of 0xC1 in their Protocols-Supported type–length–value (TLV) (type 129), which has been reserved for 802.1aq. The hellos also must include an MSTID (which gives the purpose of each VID) and finally each ECMT behavior must be assigned to a VID and exchanged in the hellos. The hellos would normally run untagged. Note that an NLPID of IP is not required to form an adjacency for 802.1aq but also will not prevent an adjacency when present. The links are assigned 802.1aq-specific metrics which travel in their own TLV (type–length–value), which is more or less identical to the IP link metrics. The calculations will always use the maximum of the two unidirectional link metrics to enforce symmetric route weights.

The node is assigned a MAC address to identify it globally and this is used to form the IS-IS SYSID. A box MAC address would normally serve this purpose. The Area-Id is not directly used by 802.1aq but should, of course, be the same for nodes in the same 802.1aq network. Multiple areas/levels are not yet supported. The node is further assigned an SPSourceID which is a 20-bit network-wide unique identifier. This can often be the low 20 bits of the SYSID (if unique) or can be dynamically negotiated or manually configured. The SPSourceID and the ECMT assignments to B-VIDs are then advertised into the IS-IS network in their own 802.1aq TLV.

The 802.1aq computations are restricted to links between nodes that have an 802.1aq link weight and which support the NLPID 0xC1. As previously discussed the link weights are forced to be symmetric for the purpose of computation by taking the min of two dissimilar values. When a service is configured in the form of an I-SID assignment to an ECMT behavior, that I-SID is then advertised along with the desired ECMT behavior and an indication of its transmit and receive properties (a new TLV is used for this purpose of course). When an 802.1aq node receives an IS-IS update, it will compute the unique shortest path to all other IS-IS nodes that support 802.1aq. There will be one unique (symmetric) shortest path per ECMT behavior. The tie breaking used to enforce this uniqueness and ECMT is described below. The unicast FDB/FIB will be populated based on this first shortest path computation. There will be one entry per ECMT behavior/B-VID produced.
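The last few steps can be summarized with a short, hedged Python sketch: for every ECMT behavior (each mapped to a B-VID) one symmetric SPF is run from the local node, and one unicast entry per remote node is installed. The spf helper, its tie_breaker argument and the other names are assumptions of this sketch, not APIs defined by the standard.

    def populate_unicast_fib(self_node, topology, ect_to_bvid, spf):
        # One SPF per ECMT behavior; one FIB entry per destination per B-VID.
        # `spf` is assumed to return {remote_node: first_hop_port} computed
        # with the given tie-breaker (see the tie-breaking section below).
        fib = {}
        for ect, bvid in ect_to_bvid.items():
            first_hops = spf(topology, root=self_node, tie_breaker=ect)
            for dest, port in first_hops.items():
                fib[(dest.mac, bvid)] = port
        return fib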
The transit multicast computation (which only applies when transit replication is desired and is not applicable to services that have chosen head end replication) can be implemented in many ways; care must be taken to keep this efficient, but in general a series of shortest path computations must be done. The basic requirement is to decide 'am I on the shortest path between two nodes, one of which transmits an I-SID and the other of which receives that I-SID.' Rather poor-performing pseudo-code for this computation looks something like this:

    for each NODE in network which originates at least one transmit ISID do
        SPF = compute the shortest path trees from NODE for all ECMT B-VIDs.
        for each ECMT behavior do
            for each NEIGHBOR of NODE do
                if NEIGHBOR is on the SPF towards NODE for this ECMT then
                    T = NODE's transmit ISIDs unioned with all receive ISIDs below us on SPF
                    for each ISID in T do
                        create/modify multicast entry where [
                            MAC-DA   = NODE.SpsourceID:20||ISID:24||LocalBit:1||MulticastBit:1
                            B-VID    = VID associated with this ECMT
                            out port = interface to NEIGHBOR
                            in port  = port towards NODE on the SPF for this ECMT
                        ]

The above pseudo-code computes many more SPFs than strictly necessary in most cases, and better algorithms are known to decide if a node is on a shortest path between two other nodes. A reference to a paper presented at the IEEE which gives a much faster algorithm that drastically reduces the number of outer iterations required is given below. In general, though, even the exhaustive algorithm above is more than able to handle networks of several hundred nodes in a few tens of milliseconds on common CPUs of 1 GHz or greater when carefully crafted. For ISIDs that have chosen head end replication, the computation is trivial and involves simply finding the other attachment points that receive that ISID and creating a serial unicast table to replicate to them one by one.

Tie-breaking 802.1aq must produce deterministic, symmetric, downstream-congruent shortest paths. This means that not only must a given node compute the same path forward and reverse, but all the other nodes downstream (and upstream) on that path must also produce the same result. This downstream congruence is a consequence of the hop-by-hop forwarding nature of Ethernet, since only the destination address and VID are used to decide the next hop. It is important to keep this in mind when trying to design other ECMT algorithms for 802.1aq, as this is an easy trap to fall into. It begins by taking the unidirectional link metrics that are advertised by IS-IS for 802.1aq and ensuring that they are symmetric. This is done by simply taking the MIN of the two values at both ends prior to doing any computations. This alone does not guarantee symmetry, however. The 802.1aq standard describes a mechanism called a PATHID which is a network-wide unique identifier for a path. This is a useful logical way to understand how to deterministically break ties but is not how one would implement such a tie-breaker in practice. The PATHID is defined as just the sequence of SYSIDs that make up the path (not including the end points), sorted. Every path in the network therefore has a unique PATHID independent of where in the network the path is discovered. 802.1aq simply always picks the lowest PATHID path when a choice presents itself in the shortest path computations. This ensures that every node will make the same decision. For example, in Figure 7 above, there are four equal-cost paths between node 7 and node 5, as shown by the colors blue, green, pink and brown.
The PATHIDs for these paths are as follows:
PATHID[] = {0,1}
PATHID[] = {0,3}
PATHID[] = {1,2}
PATHID[] = {2,3}
The lowest PATHID is therefore the brown path {0,1}. This low PATHID algorithm has very desirable properties. The first is that it can be done progressively, by simply looking for the lowest SYSID along a path; the second is that an efficient implementation that operates stepwise is possible, by simply back-tracking two competing paths and looking for the minimum of the two paths' minimum SYSIDs. The low PATHID algorithm is the basis of all 802.1aq tie breaking. ECMT is also based on the low PATHID algorithm by simply feeding it different SYSID permutations – one per ECMT algorithm. The most obvious permutation to pass is a complete inversion of the SYSID by XOR-ing it with 0xfff... prior to looking for the min of two minimums. This algorithm is referred to as high PATHID because it logically chooses the largest PATHID path when presented with two equal-cost choices. In the example in figure 7, the path with the highest PATHID is therefore the blue path whose PATHID is {2,3}. Simply inverting all the SYSIDs and running the low PATHID algorithm will yield the same result.

The other 14 defined ECMT algorithms use different permutations of the SYSID by XOR-ing it with different bit masks which are designed to create relatively good distribution of bits. It should be clear that different permutations will result in the pink and green paths being lowest in turn. The 17 individual 64-bit masks used by the ECT algorithm are made up of the same byte value repeated eight times to fill each 64-bit mask. These 17 byte values are as follows:
ECT-MASK[17] = { 0x00, 0x00, 0xFF, 0x88, 0x77, 0x44, 0x33, 0xCC, 0xBB, 0x22, 0x11, 0x66, 0x55, 0xAA, 0x99, 0xDD, 0xEE };
ECT-MASK[0] is reserved for a common spanning tree algorithm, while ECT-MASK[1] creates the Low PATHID set of shortest path first trees, ECT-MASK[2] creates the High PATHID set of shortest path first trees and the other indexes create other relatively diverse permutations of shortest path first trees. In addition, the ECMT tie-breaking algorithms also permit some degree of human override or tweaking. This is accomplished by including a BridgePriority field together with the SYSID such that the combination, called a BridgeIdentifier, becomes the input to the ECT algorithm. By adjusting the BridgePriority up or down, a path's PATHID can be raised or lowered relative to others, and a substantial degree of tunability is afforded. The above description gives an easy-to-understand way to view the tie breaking; an actual implementation simply backtracks from the fork point to the join point in two competing equal-cost paths (usually during the Dijkstra shortest path computation) and picks the path traversing the lowest (after masking) BridgePriority|SysId.
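The masking and tie-breaking just described can be illustrated with a small Python sketch. It compares two equal-cost paths by the minimum of their XOR-masked identifiers, i.e. the stepwise low-PATHID comparison described above; the function names are illustrative and the small node ids stand in for full BridgePriority|SysId values, so this is a sketch of the idea rather than the normative procedure.

    # Per-byte mask values from the text; each is repeated eight times to form a 64-bit mask.
    ECT_MASKS = [0x00, 0x00, 0xFF, 0x88, 0x77, 0x44, 0x33, 0xCC, 0xBB,
                 0x22, 0x11, 0x66, 0x55, 0xAA, 0x99, 0xDD, 0xEE]

    def mask64(byte_value):
        return int.from_bytes(bytes([byte_value]) * 8, "big")

    def preferred_path(path_a, path_b, ect_index):
        # Choose between two equal-cost paths (intermediate identifiers only)
        # by picking the one whose minimum masked identifier is lowest.
        m = mask64(ECT_MASKS[ect_index])
        key = lambda path: min(node_id ^ m for node_id in path)
        return path_a if key(path_a) <= key(path_b) else path_b

    # Figure 7 example, with node ids standing in for BridgeIdentifiers:
    brown, blue = [0, 1], [2, 3]
    print(preferred_path(brown, blue, ect_index=1))  # low PATHID  -> [0, 1] (brown)
    print(preferred_path(brown, blue, ect_index=2))  # high PATHID -> [2, 3] (blue)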
Interoperability The first public interoperability tests of IEEE 802.1aq were held in Ottawa in October 2010. Two vendors provided SPBM implementations, and a total of 5 physical switches and 32 emulated switches were tested for control/data and OA&M. Further events were held in Ottawa in January 2011 with 5 vendors and 6 implementations, and at the 2013 Interop event in Las Vegas, where an SPBM network was used as the backbone.

Competitors MC-LAG, VXLAN, and QFabric have all been proposed, but the IETF TRILL standard (Transparent Interconnect of Lots of Links) is considered the major competitor of IEEE 802.1aq, and: "the evaluation of relative merits and difference of the two standards proposals is currently a hotly debated topic in the networking industry."

Deployments Deployment considerations and interoperability best practices are documented in an IETF document titled "SPB Deployment Considerations"
2013 Interop: Networking Leaders Demo Shortest Path Bridging
2014 Interop: InteropNet Goes IPv6, Includes Shortest Path Bridging
Extreme Networks, by virtue of their acquisition of the Avaya Networking business and assets, is currently the leading exponent of SPB-based deployments; their enhanced and extended implementation of SPB - including integrated Layer 3 IP Routing and IP Multicast functionality - is marketed under the banner of the "Fabric Connect" technology. Additionally, Extreme Networks is supporting an IETF Internet Draft that defines a means of automatically extending SPBM-based services to end-devices via conventional Ethernet switches, leveraging an 802.1AB LLDP-based communications protocol; this capability - marketed as "Fabric Attach" technology - allows for the automatic attachment of end-devices, and includes dynamic configuration of VLAN/I-SID (VSN) mappings. Avaya (acquired by Extreme Networks) has deployed SPB/Fabric Connect solutions for businesses operating across a number of industry verticals:
Education, examples include: Leeds Metropolitan University, Macquarie University, Pearland Independent School District, Ajman University of Science & Technology
Transportation, examples include: Schiphol Telematics, Rheinbahn, Sendai City Transportation Bureau, NSB
Banking & Finance, examples include: Fiducia, Sparebanken Vest
Major Events, examples include: 2013 & 2014 Interop (InteropNet Backbone), 2014 Sochi Winter Olympics, Dubai World Trade Center
Healthcare, examples include: Oslo University Hospital, Concord Hospital, Franciscan Alliance, Sydney Adventist Hospital
Manufacturing, examples include: Fujitsu Technology Solutions
Media, examples include: Schibsted, Medienhaus Lensing, Sanlih Entertainment Television
Government, examples include: City of Redondo Beach, City of Breda, Bezirksamt Neukölln
Product support
Alcatel-Lucent 7750-SR
Alcatel-Lucent Enterprise OmniSwitch 9900, OmniSwitch 6900, OmniSwitch 6860, OmniSwitch 6865
Extreme Networks VSP 9000 Series
Extreme Networks VSP 8400 Series
Extreme Networks VSP 8000 Series (VSP 8284XSQ, VSP 8404C)
Extreme Networks VSP 7200 Series
Extreme Networks VSP 4000 Series (VSP 4450GSX-PWR+, VSP 4450GSX-DC, VSP 4450GTX-HT-PWR+, VSP 4850GTS, VSP 4850GTS-PWR+, VSP 4850GTS-DC)
Extreme Networks ERS 5900 Series
Extreme Networks ERS 4900 Series
Extreme Networks ERS 4800 Series
Enterasys Networks S140 and S180
Extreme Networks K-Series
Huawei S9300 (prototype only at the moment)
Solana
Spirent
HP 5900, 5920, 5930, 11900, 12500, 12900
IP Infusion's ZebOS network platform
IXIA
JDSU
See also Connection-oriented Ethernet Provider Backbone Bridge Traffic Engineering (PBB-TE) Virtual Enterprise Network Architecture Notes References Avaya VSP Configuration Guide Further reading The Great Debate: TRILL Versus 802.1aq (SBP), NANOG 50 session (October 2010) External links 802 Committee website ITU-T Recommendation Y.1731 OAM functions and mechanisms for Ethernet based networks Avaya Alcatel-Lucent Huawei Solana and Spirent Showcase Shortest Path Bridging
Interoperability; Marketwatch, 7 September 2011 - Retrieved 7 September 2011 IEEE 802 Ethernet standards Link protocols Internet architecture Network layer protocols Routing Emerging technologies Network architecture Mesh networking
37705948
https://en.wikipedia.org/wiki/%28385250%29%202001%20DH47
(385250) 2001 DH47
(385250) 2001 DH47, provisional designation 2001 DH47, is a sub-kilometer asteroid and Mars trojan orbiting 60° behind the orbit of Mars near the L5 Lagrangian point. Discovery, orbit and physical properties 2001 DH47 was discovered on 20 February 2001 by the Spacewatch program, observing from Steward Observatory, Kitt Peak, and it is classified as a Mars-crosser by the Minor Planet Center. Its orbit is characterized by low eccentricity (0.035), moderate inclination (24.4°) and a semi-major axis of 1.52 AU. Its orbit is well determined, being based (as of March 2013) on 45 observations with a data-arc span of 3,148 days. It has an absolute magnitude of 19.7, which gives a characteristic diameter of 562 m. Mars trojan and orbital evolution It was identified as a Mars trojan by H. Scholl, F. Marzari and P. Tricarico in 2005, and its dynamical half-lifetime was found to be of the order of the age of the Solar System. Recent calculations confirm that it is indeed a stable Mars trojan with a libration period of 1365 yr and an amplitude of 11°. These values, as well as its short-term orbital evolution, are very similar to those of 5261 Eureka. Origin Long-term numerical integrations show that its orbit is very stable on Gyr time-scales (1 Gyr = 1 billion years). As in the case of Eureka, calculations in both directions of time (4.5 Gyr into the past and 4.5 Gyr into the future) indicate that it may be a primordial object, perhaps a survivor of the planetesimal population that formed in the terrestrial planets region early in the history of the Solar System. See also 5261 Eureka (1990 MB) References Further reading 2001 DH47 Ivashchenko, Y., Ostafijchuk, P., Spahr, T. B. 2007, Minor Planet Electronic Circular, 2007-P09. Dynamics of Mars Trojans Scholl, H., Marzari, F., Tricarico, P. 2005, Icarus, Volume 175, Issue 2, pp. 397–408. Three new stable L5 Mars Trojans de la Fuente Marcos, C., de la Fuente Marcos, R. 2013, Monthly Notices of the Royal Astronomical Society: Letters, Vol. 432, Issue 1, pp. 31–35. External links data at MPC 385250 385250 385250 20010220
20682
https://en.wikipedia.org/wiki/MIPS%20Technologies
MIPS Technologies
MIPS Technologies, Inc., formerly MIPS Computer Systems, Inc., was an American fabless semiconductor design company that is most widely known for developing the MIPS architecture and a series of RISC CPU chips based on it. MIPS provides processor architectures and cores for digital home, networking, embedded, Internet of things and mobile applications. MIPS was founded in 1984 to commercialize the work being carried out at Stanford University on the MIPS architecture, a pioneering RISC design. The company generated intense interest in the late 1980s, seeing design wins with Digital Equipment Corporation (DEC) and Silicon Graphics (SGI), among others. By the early 1990s the market was crowded with new RISC designs and further design wins were limited. The company was purchased by SGI in 1992, by that time its only major customer, and won several new designs in the game console space. In 1998, SGI announced they would be transitioning off MIPS and spun off the company. After several years operating as an independent design house, in 2013 the company was purchased by Imagination Technologies, best known for their PowerVR graphics processor family. They were sold to Tallwood Venture Capital in 2017 and then purchased soon after by Wave Computing in 2018. Wave declared bankruptcy in 2020, emerging in 2021 as MIPS and announcing that the MIPS architecture was being abandoned in favor of RISC-V designs. History MIPS Computer Systems Inc. was founded in 1984 by a group of researchers from Stanford University that included John L. Hennessy and Chris Rowen. These researchers had worked on a project called MIPS (for Microprocessor without Interlocked Pipeline Stages), one of the projects that pioneered the RISC concept. Other principal founders were Skip Stritter, formerly a Motorola technologist, and John Moussouris, formerly of IBM. The initial CEO was Vaemond Crane, formerly President and CEO of Computer Consoles Inc., who arrived in February 1985 and departed in June 1989. He was replaced by Bob Miller, a former senior IBM and Data General executive. Miller ran the company through its IPO and subsequent sale to Silicon Graphics. In 1988, MIPS Computer Systems designs were noticed by Silicon Graphics (SGI) and the company adopted the MIPS architecture for its computers. A year later, in December 1989, MIPS held its first IPO. That year, Digital Equipment Corporation (DEC) released a Unix workstation based on the MIPS design. After developing the R2000 and R3000 microprocessors, a management change brought along the larger dreams of being a computer vendor. The company found itself unable to compete in the computer market against much larger companies and was struggling to support the costs of developing both the chips and the systems (MIPS Magnum). To secure the supply of future generations of MIPS microprocessors (the 64-bit R4000), SGI acquired the company in 1992 for $333 million and renamed it as MIPS Technologies Inc., a wholly owned subsidiary of SGI. During SGI's ownership of MIPS, the company introduced the R8000 in 1994 and the R10000 in 1996 and a follow up the R12000 in 1997. During this time, two future microprocessors code-named The Beast and Capitan were in development; these were cancelled after SGI decided to migrate to the Itanium architecture in 1998. As a result, MIPS was spun out as an intellectual property licensing company, offering licences to the MIPS architecture as well as microprocessor core designs. 
On June 30, 1998, MIPS held an IPO after raising about $16.3 million with an offering price of $14 a share. In 1999, SGI announced it would overhaul its operations; it planned to continue introducing new MIPS processors until 2002, but its server business would include Intel's processor architectures as well. SGI spun MIPS out completely on June 20, 2000 by distributing all its interest as stock dividend to the stockholders. In early 2008 MIPS laid-off 28 employees from its processor business group. On August 13, 2008, MIPS announced a loss of $108.5 million for their fiscal fourth-quarter and that they would lay-off another 15% of their workforce. At the time MIPS had 512 employees. In May 2018, according to the company's presence on LinkedIn, there may be less than 50 employees. Some notable people who worked in MIPS: James Billmaier, Steve Blank, Joseph DiNucci, John L. Hennessy, David Hitz, Earl Killian, Dan Levin, John Mashey, John P. McCaskey, Bob Miller, Stratton Sclavos. and Skip Stritter. Board members included: Bill Davidow. In 2010, Sandeep Vij was named CEO of MIPS Technologies. Vij studied under Dr. John Hennessy as a Stanford University grad student. Prior to taking over at MIPS, Vij was an executive at Cavium Networks, Xilinx and Altera. EE Times reported that MIPS had 150 employees as of November 1, 2010. If the August 14, 2008 EDN article was accurate about MIPS having over 500 employees at the time, then MIPS reduced their total workforce by 70% between 2008 and 2010. In addition to its main R&D centre in Sunnyvale, California, MIPS has engineering facilities in Shanghai, China, Beaverton, Oregon, Bristol and Kings Langley, both in England. It also has offices in Hsin-chu, Taiwan; Tokyo, Japan; Remscheid, Germany and Haifa, Israel. During the first quarter of 2013, 498 out of 580 of MIPS patents were sold to Bridge Crossing which was created by Allied Security Trust, with all processor-specific patents and the other parts of the company sold to Imagination Technologies Group. Imagination had outbid Ceva Inc to buy MIPS with an offer of $100 million, and was investing to develop the architecture for the embedded processor market. Company timeline Products MIPS Technologies created the processor architecture that is licensed to chip makers. Before the acquisition, the company had 125+ licensees who ship more than 500 million MIPS-based processors each year. MIPS processor architectures and cores are used in home entertainment, networking and communications products. The company licensed its 32- and 64-bit architectures as well as 32-bit cores. The MIPS32 architecture is a high-performance 32-bit instruction set architecture (ISA) that is used in applications such as 32-bit microcontrollers, home entertainment, home networking devices and mobile designs. MIPS customers license the architecture to develop their own processors or license off-the-shelf cores from MIPS that are based on the architecture. The MIPS64 architecture is a high performance 64-bit instruction set architecture that is widely used in networking infrastructure equipment through MIPS licensees such as Cavium Networks and Broadcom. SmartCE (Connected Entertainment) is a reference platform that integrates Android, Adobe Flash platform for TV, Skype, the Home Jinni ConnecTV application and other applications. SmartCE lets OEM customers create integrated products more quickly. 
MIPS processor core families The MIPS processor cores are divided by Imagination into three major families: Warrior: hardware virtualization, hardware multi-threading, and SIMD M-class: M5100 and M5150, M6200 and M6250 I-class: I6400, I7200 P-class: P5600, P6600 Aptiv: microAptiv (compact, real-time embedded processor core), interAptiv (multiprocessor, multi-threaded core with a nine-stage pipeline), proAptiv (super-scalar, deeply out-of-order processor core with high CoreMark/MHz score) Classic. 4K, M14K, 24K, 34K, 74K, 1004K (multicore and multithreaded) and 1074K (superscalar and multithreaded) families. Licensees MIPS Technologies had a strong customer licensee base in home electronics and portable media players; for example, 75 percent of Blu-ray Disc players were running on MIPS Technologies processors. In the digital home, the company's processors were predominantly found in digital TVs and set-top boxes. The Sony PlayStation Portable used two processors based on the MIPS32 4K processor. Within the networking segment, licensees include Cavium Networks and Broadcom. Cavium has used up to 48 MIPS cores for its OCTEON family network reference designs. Broadcom ships Linux-ready MIPS64-based XLP, XLR, and XLS multicore, multithreaded processors. Licensees using MIPS to build smartphones and tablets include Actions Semiconductor and Ingenic Semiconductor. Tablets based on MIPS include the Cruz tablets from Velocity Micro. TCL Corporation is using MIPS processors for the development of smartphones. Companies can also obtain an MIPS architectural licence for designing their own CPU cores using the MIPS architecture. Distinct MIPS architecture implementations by licensees include Broadcom's BRCM 5000. Other licensees include Broadcom, which has developed MIPS-based CPUs for over a decade, Microchip Technology, which leverages MIPS processors for its 32-bit PIC32 microcontrollers, Qualcomm Atheros, MediaTek and Mobileye, whose EyeQ chips are based on cores licensed from MIPS. Operating systems MIPS is widely supported by Unix-like systems, including Linux, FreeBSD, NetBSD, and OpenBSD. Google's processor-agnostic Android operating system is built on the Linux kernel. MIPS originally ported Android to its architecture for embedded products beyond the mobile handset, where it was originally targeted by Google but MIPS support was dropped in 2018. In 2010, MIPS and its licensee Sigma Designs announced the world's first Android set-top boxes. By porting to Android, MIPS processors power smartphones and tablets running on the Android operating system. OpenWrt is an embedded operating system based on the Linux kernel. While it currently runs on a variety of processor architectures, it was originally developed for the Linksys WRT54G, which used a 32-bit MIPS processor from Broadcom. The OpenWrt Table of Hardware now includes MIPS-based devices from Atheros, Broadcom, Cavium, Lantiq, MediaTek, etc. Real-time operating systems that run on MIPS include CMX Systems, eCosCentric's eCos, ENEA OSE, Express Logic's ThreadX, FreeRTOS, Green Hills Software's Integrity, LynuxWorks' LynxOS, Mentor Graphics, Micrium's Micro-Controller Operating Systems (µC/OS), QNX Software Systems' QNX, Quadros Systems Inc.'s RTXC™ Quadros RTOS, Segger's embOS and Wind River's VxWorks. 
See also Prpl Foundation References Further reading 1984 establishments in California American companies established in 1984 Companies based in Sunnyvale, California Computer companies established in 1984 Electronics companies established in 1984 Manufacturing companies based in the San Francisco Bay Area Semiconductor companies of the United States Technology companies based in the San Francisco Bay Area 1980s initial public offerings 1998 initial public offerings 2013 mergers and acquisitions 2017 mergers and acquisitions Private equity portfolio companies
23108931
https://en.wikipedia.org/wiki/Forward%20air%20control%20during%20the%20Vietnam%20War
Forward air control during the Vietnam War
The forward air controller (FAC) played a significant part in the Vietnam War from the very start. Largely relegated to airborne duty by the constraints of jungled terrain, FACs began operations as early as 1962. Using makeshift propeller-driven aircraft and inadequate radio nets, they became so essential to air operations that the overall need for FACs would not be completely satisfied until 1969. The FAC's expertise as an air strike controller also made him an intelligence source, munitions expert, communication specialist, and above all, the on-scene commander of the strike forces and the start of any subsequent combat search and rescue if necessary. Present as advisors under Farm Gate, FACs grew even more important as American troops poured into Vietnam after the Gulf of Tonkin incident. The U.S. Air Force (USAF) would swell its FAC complement to as many as 668 FACs in Vietnam by 1968; there were also FACs from the U.S. Army, U.S. Navy, U.S. Marine Corps, and allied nations. For the early years of the war USAF manning levels were at about 70% of need; they finally reached 100% in December 1969. The FACs would be essential participants in close air support in South Vietnam, interdiction efforts against the Ho Chi Minh Trail, supporting a guerrilla war on the Plain of Jars in Laos, and probing home defenses in North Vietnam. As the war came to center on the Trail in 1969, the FAC role began to be marginalized. Anti-aircraft (AAA) defenses became steadily more aggressive and threatening along the Trail as the bombing of North Vietnam closed down. The communist enemy moved their supply activities to nighttime, quite literally leaving the FACs in the dark. The American response was twofold. They used fixed-wing gunships with electronic sensors to detect communist trucks, and onboard weaponry to destroy them. They also began putting FACs in jet aircraft and in flareships as a counter to the AAA threat. At about the same time, emplaced ground sensors began to complement and overshadow FAC reconnaissance as an intelligence source. FAC guidance of munitions also began to come into play in 1970. By the time the Vietnam War ended in 1975, the U.S. and its allies had dropped about six times as many tons of bombs as had been dropped in the entirety of World War II. A considerable proportion of this tonnage had been directed by forward air controllers. Operating environment Terrain The Forward Air Controller (FAC) fulfilled many duties during the Second Indochina War. In addition to the usual close air support strike missions to aid South Vietnamese ground forces in their struggle against insurgents backed by the Democratic Republic of Vietnam, he might direct combat search and rescue operations or air interdiction strikes on the Ho Chi Minh Trail. Other FAC duties included escort of supply truck convoys, or involvement in covert operations. The FAC also advised ground commanders on usage of air power, and trained indigenous personnel in forward air control. Most of all, he flew the core mission of visual reconnaissance, seeking information on the enemy. The airborne FAC flew the Cessna O-1 Bird Dog or other light aircraft slowly over the rough terrain at low altitude to maintain constant aerial surveillance. By patrolling the same area constantly, the FACs grew very familiar with the terrain, and they learned to detect any changes that could indicate enemy forces hiding below. The rugged jungle terrain of South East Asia easily hid enemy troop movements. 
However, the FAC looked for tracks on the ground, dust settling in foliage, roiled water in streams—all signs of furtive enemy movement. Both unexpected campfire plumes and rows of fresh vegetables growing near water in "uninhabited" areas were also tip-offs to communist camps. However, decoy camps were not unknown; ofttimes fires were kept underground, with the smoke plumes being redirected through a laterally extended chimney. The communist insurgents were proficient in both camouflage and disguise. Camouflage extended down to individual soldiers using green branches to garnish backpacks. As part of disguise, the insurgents would sometimes dress as civilians, even going so far as to dress as monks or as women carrying small children. Despite these evasions, by 1968 FAC visual recon had largely suppressed daytime communist activities. Flying low and slow over enemy forces was very dangerous for the FACs, although the enemy usually held his fire to avoid discovery. When the enemy did open fire, the FAC might be hit with anything from rifle bullets to 37mm antiaircraft cannon. A low pass for post-strike bomb damage assessment was another hazardous duty. American ground FACs began to supply on-the-job training to South Vietnamese counterparts in Tactical Air Control Parties, in an effort to improve poor performance by the local FACs. However, rough terrain, limited sight lines, and difficulty in communication always seriously hindered ground FAC efforts in Southeast Asia. FAC aircraft A wide variety of aircraft were used in the forward air control role. Propeller-driven Common propeller-driven FAC aircraft were:
Cessna O-1 Bird Dog: This two-seater served as the original FAC aircraft. Slow, unarmed and unarmored, its small size limited its payload and it did not have instrumentation for night operations. Although it carried three radios for air strike coordination—Frequency Modulation, High Frequency, and Very High Frequency—only one radio channel was available at a time.
Cessna O-2 Skymaster: 510 were modified for military service. A businessman's aircraft, it was adapted for interim use by FACs, to replace the O-1. It was a faster airplane, with two engines in a tractor/pusher arrangement, four hard points for ordnance, and a seven-hour linger time. Its side-by-side seating limited the pilot's line of vision to the right and to the rear.
OV-10 Bronco: The first American airplane designed for FAC work; it entered combat service on 6 July 1968. With double an O-1's speed, excellent all-around sight lines for observation, an armored cockpit, and an avionics suite that included eight secure radios along with the flight instruments, the OV-10's five ordnance hard points made it a potent combination of FAC and light strike aircraft. By 1972, the Bronco was responsible for laser-illuminating targets for about 60% of the "smart bombs" dropped in Vietnam.
Other propeller-driven aircraft were also used as FAC aircraft, usually in an interim, ad hoc, or specialized role:
Cessna U-17 Skywagon
North American T-28 Trojan
A-26 Invader
A-1 Skyraider
OV-1 Mohawk
Fairchild AC-119
C-123 Provider: Call sign "Candlestick"
C-130 Hercules: Call sign "Blindbat"
Douglas RC-47P
C-7 Caribou
Fast FAC jet Jet aircraft were also used for FAC duties:
Grumman F-9 Cougar: Used by the U.S. Marine Corps as the original Fast FAC experiment
F-100 Super Sabre: Call sign "Misty"
McDonnell Douglas F-4 Phantom II: Call signs "Stormy", "Wolf", "Night Owl", "Whiplash/Laredo"
Martin B-57 Canberra
Rules of engagement The Rules of Engagement (ROE) placed restrictions on the use and direction of air strikes. In 1961, when American pilots and South Vietnamese FACs began to fly combat missions together, the first ROE was established. The original requirement was that only the Vietnamese FACs could drop ordnance because all air strikes required the approval of the South Vietnamese government. Also, aircraft could return fire if fired upon, in what was dubbed "armed reconnaissance". On 25 January 1963, the ROE were updated to establish some free-fire zones containing only enemy troops; permission was not needed to place an air strike there. The requirement for Vietnamese approval was also waived for night missions supporting troops in contact, so long as they were supported by a Douglas C-47 flareship. By 1964, the ROEs had changed to allow U.S. Army aircraft to observe from as low as 50 feet, while the USAF and VNAF were held to a 500-foot minimum. As the war evolved, so did the ROE; they became more complex. Different branches of the military service—U.S. Air Force, U.S. Army, U.S. Marine Corps, U.S. Navy—flew under differing rules. For instance, the requirement for a Vietnamese FAC on-board was waived for U.S. Army FACs. On 9 March 1965, U.S. air power was cleared to strike in South Vietnam with airplanes stationed in-country; however, Thailand-based bombers were forbidden to hit South Vietnamese targets. The ROE changed according to the location of the action and the force involved. Only USAF FACs could support U.S. Army ground forces in South Vietnam, unless the Army was operating in a free fire zone. And while the military approved air strikes in Vietnam, approval of any target in Laos depended on the American ambassador. Common to all iterations of the ROE was insistence on aligning strike runs so that ordnance was dropped or fired heading away from friendly troops and innocent civilians, and toward the enemy troops. Extraordinary circumstances might find a FAC constrained to direct a drop parallel to friendly lines. Only in dire emergencies would the FAC approve strikes in a direction toward friendly forces. The ROE were also clear that, no matter how junior in rank the FAC, he was in complete control of the air strike. Inattentive or disobedient pilots were sometimes told to carry their bombs back to base. There is anecdotal evidence that "friendly fire" incidents were reported all the way to the U.S. President. An outstanding example of FAC salvation of the civilian populace occurred on 8 February 1968. Several hundred refugees moving on Route 9 from Khe Sanh to Lang Vei were spared an artillery barrage when Captain Charles Rushforth identified them as a non-military target. Staff personnel tested the FACs on the ROE on a monthly basis. The FAC might have to master more than one set of the ROEs. The complexity of the Rules, and the aggravation of conforming with them, were a prime recruiting incentive for the Raven FACs working undercover in Laos. System In many cases, the forward air control system began with the forward air controller being pre-briefed on a target. He then planned his attack mission. In other cases, an immediate air request came in requiring a rapid response; the FAC might have to divert from a pre-briefed target.
In any case, the FAC would rendezvous with the strike aircraft, preferably out of view of the targeted enemy. Once permission to strike was verified, the FAC marked the target, usually with a smoke rocket. Once the strike aircraft identified the marked target, they were directed by the FAC. Once the strike was complete, the FAC would make a bomb damage assessment and report it. The FAC was the most important link in any one of the air control systems; in any of them he served as a hub for the strike effort. He was in radio contact not just with the strike aircraft; he also talked to the Airborne Command and Control Center coordinating airstrike availability, to ground forces, and to the headquarters approving the strike. He was supported by Tactical Air Control Parties co-located with ground forces' headquarters ranging down to the regimental, brigade, or battalion level. However, the multiplicity of systems, their equipment shortages, and the inexperience of participants all handicapped the FAC in the Vietnam War. In summary, whether airborne or ground bound, a FAC's expertise as an air strike controller made him an intelligence source, munitions expert, communication specialist, and above all, the on-scene commander of the strike forces and the start of any subsequent combat search and rescue if necessary. Operations There were four focal points of anti-communist air operations during the Second Indochina War. Only two of these four focal points were located in Vietnam. South Vietnam Before the Tonkin Gulf Incident The U.S. Air Force had shut down FAC operations after the Korean War, in 1956. In 1961 it revived the doctrine and sent five fighter pilots as FACs to Bien Hoa Air Base in the Farm Gate contingent to advise and train the Republic of Vietnam Air Force (VNAF) in directing air strikes from O-1 Bird Dogs. In the process, the USAF reinvented both forward air control and the Air Commandos. The reinvention was complicated by the language and cultural difficulties between Americans and Vietnamese, the clash between the two countries' differing FAC procedures, and South Vietnamese policies toward FACs. Inadequate radios and a clash of four differing communications procedures—U.S. Army, U.S. Air Force, U.S. Marine Corps, and the VNAF—would plague attempts to standardize a forward air control system. Although all users agreed that strike aircraft should be diverted from preplanned missions to supply close air support, the U.S. Air Force, U.S. Army, and the Vietnamese military each followed its own new and complex communication procedures for the redirection. The U.S. Air Force believed in a centralized top-down control system. The U.S. Army opted for a decentralized one. The Vietnamese had a more complex centralized system, and trusted only a very few senior officers and officials to approve strikes. The U.S. Special Forces were sometimes forced to circumvent any system because of dire emergencies. The Marines continued their organic system of Marine fliers supporting Marine infantry. The FACs' situation was aggravated by shortages and maldistribution of the most basic supplies. A 1957 inter-service agreement placed supply responsibility for U.S. Air Force FAC efforts in support of the U.S. Army on the Army. The latter owned the O-1 Bird Dogs; both the USAF and the VNAF depended on transfers of the aircraft to them. Both radio jeeps and ordinary vehicles were in short supply. Supplies all around were scanty, and the logistics system was a nightmare.
With the Army doing such a poor job of supply, the USAF assumed the responsibility, but logistics problems would dog the FACs until war's end. In December, 1961, the Tactical Air Control System set up as part of the Farm Gate effort began handling air offensive operations, including airborne forward air control. On 8 December 1961, the U.S. Joint Chiefs of Staff granted the newly re-established 1st Air Commando Group authority to strike communist insurgents. On 8 February 1962, the Air Operations Center for Vietnam was set up at Tan Son Nhut on the outskirts of Saigon; it would be the command and control network for forward air control. In April 1962, a USAF study concluded only 32 American FACs were required for Vietnam service; by the time the last of the 32 had been assigned a year later, they were obviously insufficient. On 14 April 1962, the VNAF began training Forward Air Guides (FAGs) as ground personnel to aid airborne FACs. By 1 July 1962, 240 FAGs had been trained, but were authorized to direct air strikes only in an emergency. The FAGs were often misassigned upon return to duty, and seldom used in practice. The FAG training program dwindled away. At the same time, the Americans tried to "sell" the concept of a FAC stationed as an Air Liaison Officer at each Vietnamese headquarters as an advisor on air power. At night, the communist guerrillas would attack detachments of Army of the Republic of Vietnam (ARVN) troops in isolated hamlets. The Farm Gate air commandos improvised a night FAC procedure, using a C-47 to drop flares, with T-28s or A-26 Invaders dive bombing under the flares. The Viet Cong fled these strikes. Eventually, it took only the first flare for the communists to break off an assault. In 1962, elements of Marine Observation Squadron 2 landed at Soc Trang to join the forward air control effort. The squadron would later transition from the O-1 to UH-1 Huey helicopters. It was also in 1962 that the communists began to attack convoys moving supplies within South Vietnam. A program of shadowing truck convoys with FAC O-1s began; no escorted resupply column was ambushed during early 1963. As the war escalated, the Vietnamese military needed more FACs than could be trained. The U.S. Air Force responded by activating the 19th Tactical Air Support Squadron (19th TASS) at Bien Hoa Air Base on 17 June 1963. Despite chronic shortages of aircraft, vehicles, and radios, the 19th TASS would persevere into combat readiness. However, their effectiveness was constrained by the fact that the Vietnamese FACs were subject to prosecution for any "friendly fire" incidents. The U.S. Army's 73d Aviation Company also began FAC duties at this time; they were somewhat more successful than the 19th TASS because the Army allowed surveillance from a lower altitude than the USAF. After the Tonkin Gulf Incident After the Gulf of Tonkin incident served as the American casus belli in August 1964, the United States began to add large numbers of ground troops needing air support in South Vietnam. As of January, 1965, there were only 144 USAF airborne FACs to support them; 76 of these were assigned as advisers. There were also 68 VNAF FACs, but only 38 aircraft, in the four Vietnamese liaison squadrons. Yet the Rules of Engagement mandated a forward air controller direct all air strikes in South Vietnam. At this juncture, the overloaded air control mission began to metastasize in response to events. On 7 February 1965, Viet Cong guerrillas attacked Pleiku Air Base. On 2 March, the U.S. 
retaliated by beginning a campaign, Operation Rolling Thunder, to bomb North Vietnam. To streamline operations, the American FACs were relieved of the necessity of carrying Vietnamese observers to validate targets on 9 March. The Operation Steel Tiger interdiction campaign against the Ho Chi Minh Trail and the Demilitarized Zone was started on 3 April 1965. In September 1965, the USAF's 12th Tactical Air Control Party (TACP) landed in Vietnam to begin management of the FAC force. TACPs were slated to be assigned one per maneuver battalion, one per brigade headquarters, and four per divisional headquarters. Chairman of the U.S. Joint Chiefs of Staff General Earle G. Wheeler visited South Vietnam in the midst of all this, in March 1965. He saw a need for more FACs. Immediately after his return stateside, the JCS authorized three more Tactical Air Support Squadrons in June 1965. In the midst of this buildup in number of FACs, the first Airborne Command and Control Center was launched to serve as a relay between TACP and the FAC pilots. ABCCC would become the inflight nerve center of the Vietnam air war. It not only kept track of all other aircraft, it served "to assure proper execution of the fragged missions and to act as, a central control agency in diversion of the strike force to secondary and lucrative targets." ABCCC would expand into a twenty-four-hour-per-day program directing all air activity in the war. By early 1965, the USAF had realized that TACAN radar was a near-necessity for bombing operations, due to the lack of reliable maps and other navigation aids. As a result, Combat Skyspot radars were emplaced throughout South Vietnam and elsewhere in Southeast Asia. By October, 1965, the U. S. Air Force realized it still had an insufficient number of FACs. Although the Rules of Engagement were changed to lessen the workload on the FAC force, the USAF continued short of trained Forward Air Controllers until the U. S. drawdown of troops lessened demand. By April 1966, five Tactical Air Support Squadrons had filled out the Air Force combat units of the 504th Tactical Air Support Group. The squadrons were based thus: 19th Tactical Air Support Squadron: Bien Hoa Air Base, Republic of Vietnam (RVN) 20th Tactical Air Support Squadron: Da Nang Air Base, RVN 21st Tactical Air Support Squadron: Pleiku Air Base, RVN 22d Tactical Air Support Squadron: Binh Thuy Air Base, RVN 23d Tactical Air Support Squadron: Nakhon Phanom Royal Thai Air Force Base, Kingdom of Thailand The 504th Group served mostly for logistics, maintenance, and administrative functions. It comprised only 250 O-1 Bird Dog FACs for all South Vietnam. The FACs were supposed to be assigned two per maneuver battalion. However, the FACs were actually assigned to ground force brigades and lived and worked with the battalions on active operations. As of September 1966, in the wake of establishing the 23d TASS, the FAC effort was still short 245 O-1 Bird Dogs, with no suitable alternatives. From December 1965 onwards, close air support for the U.S. Navy's riverine forces in the Mekong Delta came from carrier aviation. On 3 January 1969, the U. S. Navy raised its own forward air control squadron, VAL-4, using OV-10 Broncos borrowed from the Marine Corps. VAL-4 was stationed at Binh Thuy and Vung Tau, and would fly 21,000 combat sorties before its disbandment on 10 April 1972. Those sorties would be a mix of light strike missions and forward air control. 
The 220th Reconnaissance Airplane Company, under operation control of 3rd MARDIV in I Corps, was the only Army company officially authorized to direct air strikes. Due to the Marine pilots of VMO-6 being overstretched by the intensity of combat operations in the DMZ, pilots of the 220th were, uniquely, given the Marine designation of Tactical Air Coordinator (Airborne). As airborne controllers, they were formally approved to run air strikes in addition to directing artillery and Naval gunfire. The Royal Australian Air Force sent 36 experienced and well-trained FACs to serve in Vietnam, either attached to USAF units or with No. 9 Squadron RAAF. One of them, Flight Lieutenant Garry Cooper (see Further reading section below), served with such distinction he was recommended for the Medal of Honor by Major General Julian Ewell. The Royal New Zealand Air Force placed 15 of its FACs under U. S. command during the war. By 1968, there were 668 Air Force FACs in country, scattered at 70 forward operating locations throughout South Vietnam. By November, a minimum of 736 FACs were deemed necessary for directing the air war, but only 612 were available. The USAF was scanting and diluting the requirement that all FACs be qualified fighter pilots by this time, in its effort to supply the demand. FAC manning levels from 1965 through 1968 averaged only about 70% of projected need. By this time, the cessation of enemy daytime activities in areas surveilled by FACs, as the communists changed to night operations, would lead to a shift to night FAC operations by some O-2s. One hundred percent manning of the FAC requirement effort would finally come in December, 1969, via lessened demand for the mission. The Mekong Delta and the Cambodian incursion Following preliminary trial against the Ho Chi Minh Trail, Operation Shed Light A-1 Skyraiders fitted with low-light-level television were tested in night operations over the Mekong Delta. Flying at 2,000 to 2,500 feet altitude, the A-1s found enemy targets on 83% of their sorties, and launched attacks in about half these sightings. The A-1s took more hits over the Delta than they had over the Trail, and the television did not work as well as expected. By the time the test was over on 1 December 1968, the USAF had decided to further develop the sensor. The cameras were stripped from the test A-1s, and the aircraft forwarded to the 56th Special Operations Wing. However, the low-light-level television would be further developed as part of a sensor package installed in Martin B-57 Canberra bombers. The communists used Cambodia as a sanctuary for their troops, flanking the South Vietnamese effort and venturing across the border into South Vietnam's Mekong Delta for operations and retreating into "neutral" territory to escape counterattacks. On 20 April 1970, the Cambodian government asked the U.S. for help with the problem of the border sanctuaries. On 30 April, the U.S. and South Vietnam sent ground forces into Cambodia to destroy communist supplies and sanctuaries. They were supported by a huge air campaign. Four Tactical Air Support Squadrons were committed to the effort—the 19th, 20th, 22d, and 23d. To handle such a massive effort, a TACP was committed, relaying its instructions to FACs through a central airborne FAC dubbed "Head Beagle". When he proved unequal to handling the volume of incoming air support, a Lockheed EC-121 Warning Star was assigned to the task in December 1970. 
Although American ground forces withdrew from Cambodia by 1 July, the air interdiction campaign continued. A detachment of the 19th TASS, the French-speaking "Rustic" FACs, remained to patrol in support of Cambodian troops. The American FACs would covertly support the Cambodian non-communists by directing massive U.S. air strikes until 15 August 1973. North Vietnam The U.S. military considered the Demilitarized Zone (DMZ) and the southern portion of North Vietnam as an extension of the South Vietnamese battleground. In 1966, the U.S. used FACs from the 20th TASS, flying O-1 Bird Dogs and later O-2 Skymasters, to direct air strikes in the Route Pack 1 portion of Rolling Thunder. Contained within Route Pack 1, Tally Ho took in the southern end of the Route Pack plus the DMZ. By August 1966, communist anti-aircraft fire made the eastern half of Tally Ho too hazardous for the O-1s. As ground fire made the Tally Ho mission increasingly hazardous for the slow prop planes, the Marines pioneered Fast FACs in Vietnam, using two-seat F-9 Cougar jets in this area, as well as on deep targets on the Ho Chi Minh Trail. The Marine Fast FACs also adjusted naval gunfire when they were north of the DMZ. There were also pioneering efforts by "Misty" Fast FACs. The greatest effect Rolling Thunder had on FAC usage was its demise. President Johnson halted bombing above 20 degrees north latitude in North Vietnam on 1 April 1968. On 1 November 1968, he entirely ended the bombing of North Vietnam, closing Operation Rolling Thunder. These halts would cause a drastic redirection of American air power toward the Ho Chi Minh Trail in Laos. Laos The basis for military operations in Laos was radically different from that in Vietnam. Laotian neutrality had been established by the international treaty of the 1954 Geneva Agreement which prohibited any foreign military except a small French military mission. In December 1960, General Phoumi Nosavan seized control of the Kingdom of Laos in the Battle of Vientiane. The Central Intelligence Agency (CIA) backed his rise to power and established themselves and their Thai mercenaries as the prime advisers to the Lao armed forces. On 29 May 1961, because there could be no military advisory group in Laos, U.S. President John F. Kennedy granted the Ambassador control of all American paramilitary activities within that country. Thus it was that the CIA gained charge of the ground war in Laos. They contracted out aerial supply missions in the country to the civilian pilots of the CIA's captive airline, Air America. U.S. Air Force FACs would be secretively imported by the ambassador to control air strikes under his supervision. The initial use of forward air control in northern Laos was a sub rosa effort by both airborne and ground FACs during 19–29 July 1964 for Operation Triangle. Heartened by this experience, the USAF initially used enlisted Combat Controllers garbed as civilians, with the call sign "Butterfly", to direct air strikes from civilian aircraft flown by Air America. After General William Momyer cancelled the "Butterfly" assignment, the Raven forward air control unit was created on 5 May 1966 for service in Laos as a successor to the Butterfly program. The U.S. Air Force's Project 404 began an organized FAC effort at the request of Ambassador William H. Sullivan. These Raven FACs were stationed throughout Laos. Two of their Air Operations Centers were in northern Laos, at Luang Prabang and Long Tieng. Two more AOCs edged the Ho Chi Minh Trail, at Pakxe and Savannakhet.
A fifth AOC was at Vientiane. Project 404 accepted veteran FACs in the Vietnam theater who volunteered for the Raven FAC assignment; they tended to be warriors frustrated with bureaucracy and Byzantine Rules of Engagement. Few in number, flying in civilian clothing in unmarked O-1 Bird Dogs or U-17s, the Ravens often faced overwhelming tasks. In one instance, a FAC flew 14 combat hours in a single day. In another, a FAC directed 1,000 air strikes in 280 combat hours within a month. Upon occasion, queues of up to six fighter-bomber flights awaited target marking by a Raven. By 1969, 60% of all tactical air strikes flown in Southeast Asia were expended in Laos. The ranks of the Ravens were greatly augmented to handle this stepped-up air offensive, though they never exceeded 22. Working as a Raven FAC was an exhausting, high-risk, high-stress job. By war's end, there had been 161 Butterflies and Ravens directing air strikes in Laos; 24 were lost in action. The overall casualty rate ran about 50%. By the end of his tour, Raven Craig Duehring calculated that 90% of their planes had been hit by ground fire at some point, and 60% had been downed. Nor were the Ravens the only FACs working in Laos. By mid-1969, about 91 FAC sorties per day were launched into Laos, about a third of them jet FACs. Northern Laos The Raven FACs were stationed throughout Laos. Two of their Air Operations Centers (AOCs) were in northern Laos, at Luang Prabang and Long Tieng. Two more AOCs edged the Ho Chi Minh Trail, at Pakxe and Savannakhet. A fifth AOC was located at Vientiane. Simultaneously, beginning in March 1966, TACAN units began to be emplaced within Laos. Lima Site 85 was sited on Phou Pha Thi, Laos, in the Annamese Cordillera, with its beam pointing over the nearby border toward Hanoi. It would be overrun in March 1968. Unlike the other bombing campaigns in Southeast Asia, the northern Laotian bombing campaigns within Barrel Roll would support a guerrilla force in action. U.S. Air Force and Royal Lao Air Force (RLAF) tactical air sorties directed by forward air controllers cleared the way for CIA-sponsored guerrillas commanded by General Vang Pao in their battle for the Plain of Jars. Among these campaigns were Operation Pigfat, Operation Raindance, Operation Off Balance, and Operation About Face. Joining the Ravens in this endeavor were the "Tiger" Fast FACs. Ho Chi Minh Trail In the beginning The Ho Chi Minh Trail—Vietnamese name Trường Sơn trail—consisted of a network of roads and transshipment points concealed by the jungle. It would eventually develop into an intricate system of over 3,000 miles of interweaving roadways, trails, and truck parks running down the eastern edge of the Vietnamese/Laotian border. Although the Trail was located in Laos, the materiel shipped along it supplied the communist troops in South Vietnam. Associated with it were other roads running through Dien Bien Phu to Xam Neua and further into northern Laos. FAC reconnaissance patrols over the Ho Chi Minh Trail in southern Laos began in May 1964, even as the Trail network began massive expansion. It became apparent that victory for the North Vietnamese war effort depended on keeping the Trail open. As noted above, the Operation Steel Tiger interdiction campaign began 3 April 1965. The communist counter to daytime air strikes was a switch to night movement of supplies. O-1 Bird Dogs were originally tried for night FAC operations to interdict those shipments.
Inadequate instrumentation and a small load of target markers so handicapped it that other aircraft came into use in the role. In July 1966, A-26 Invaders using the call sign "Nimrod" began night operations against the Ho Chi Minh Trail with support from multi-engine flareships. Though principally a strike aircraft, it served as a FAC on occasion. Also in 1966, the U.S. Air Force began experimenting with various night vision devices for FAC use, under the code name Operation Shed Light. Advent of sensor intelligence The Starlight Scope became a key tool for night FAC operations. Originally it was tried mounted in the O-2 Skymaster. By late 1966, the scope was being used for FACing along the southern Ho Chi Minh Trail from C-130 Hercules flareships under the call sign "Blind Bat". These FACs worked in conjunction with O-2s; after "Blind Bat" illuminated an area with a flare, the O-2 would mark the actual target. Strike aircraft were variously T-28 Trojans, A-1 Skyraiders, A-26 Invaders, or later in the war—A-37 Dragonflies or F-4 Phantoms. C-123 Providers of the 606th Special Operations Squadron, under the call sign "Candlestick", filled a similar role over the northern end of the Trail. From September to December 1967, a prototype AC-130 gunship tested various detection sensors' ability to locate trucks. The new gunship mounted improved night vision capabilities, an infrared detection unit, an array of radars, and a searchlight. The latter could be filtered to produce infrared or ultraviolet beams, as well as ordinary light. An ignition detector would be added later, to pinpoint the running engines of supply trucks. Testing of the sensors later incorporated into the Igloo White military intelligence system began in December 1967. At that time, FAC reconnaissance found five to ten times as many trucks as the prior December; as many as 250 trucks were spotted in single convoys, risking air strikes by running with lights on. However, the Battle of Khe Sanh sidetracked sensor usage from the Ho Chi Minh Trail to track the communists besieging the U.S. Marines in the embattled fire base. This distraction undercut a full testing of "people sensors"; as a result, the USAF favored the tested sensors, which detected trucks. In January 1968, four A-1 Skyraiders modified to carry two low-light-level television cameras were assigned to Nakhon Phanom Royal Thai Air Base. By February, they were flying test missions into Steel Tiger to direct strikes against trucks on the Trail. However, proper testing depended on flying a straight level course while not exposed to ground fire. The Ho Chi Minh Trail offered little chance of that. The sensor test was moved to South Vietnam. The war on trucks On 1 November 1968, President Lyndon Baines Johnson declared a halt to bombing in North Vietnam, thus suspending Operation Rolling Thunder. Immediately, with North Vietnamese targets off limits, the air power directed at the Ho Chi Minh Trail nearly quintupled, rising from 140 to 620 sorties per day. Operation Commando Hunt would concentrate on destroying so many communist supply trucks that the insurgency in the south would collapse from lack of supplies. Communist anti-aircraft weaponry, freed from defense duties by the bombing halt, also moved south to the Trail. At November 1968's end, U.S. intelligence estimated there were 166 anti-aircraft guns defending the Trail. Five months later, by the end of April 1969, 621 anti-aircraft guns had been reported. 
Although no fire control radar was detected, the optically aimed weaponry was still a potent force. A few 57mm guns could reach an aircraft at 12,500 feet altitude. Over half of the guns sited along the Trail were 37mm cannons, which could range up to 8,200 feet altitude. Dual-mounted 23mm guns constituted most of the other artillery; they had a range of 5,000 feet. Underlying the anti-aircraft artillery, a plethora of machine guns fired at lower-flying aircraft. Compounding the FACs' difficulties, the communists had more prepared sites than anti-aircraft guns, and could quickly shift guns from one site to another. They also set up decoy sites of dummy guns. By mid-1969, American strike pilots noted increased effectiveness in anti-aircraft defense due to the influx of experienced gunners. By the end of 1969, the growing hazards of enemy anti-aircraft fire caused the withdrawal of the "Candlesticks" from the Trail. In early 1970, the Paveway system came into action; it required the FAC to use a laser designator to guide so-called "smart bombs". Differing designators were mounted on both C-130 "Blind Bat" flareships and F-4 Phantoms; both were used successfully. In June 1970, the era of flareship FACs ended, as the "Blindbats" were withdrawn. The Martin B-57 Canberras of Operation Tropic Moon superseded them. However, the B-57's success was limited by its high fuel consumption and substandard sensors. The fixed-wing gunships working the Trail became the principal truck busters in Operation Commando Hunt. The AC-119s would prove marginal in performance, and too vulnerable to ground fire to continue campaigning against trucks on the Trail. However, the AC-130 would become the Americans' premier weapon against supply convoys. Campaign results Between November 1970 and May 1971, the 12 AC-130 Spectres of the 16th Special Operations Squadron (SOS) were credited with destroying 10,319 enemy trucks and damaging 2,733 others. When these spectacular results were added to those of other units interdicting the Trail, it became apparent that if the damage assessments were correct, the North Vietnamese were out of usable trucks. Continuing steady traffic on the Trail mocked that assumption. At the time, the 16th SOS had developed its own damage assessment criteria, given that visual observation of strike results was infrequent. According to the 16th: A vehicle that exploded and/or caught on fire was considered destroyed; A vehicle hit by a 40mm shell was considered destroyed; If a 40mm shell exploded within 10 feet of a vehicle, it was reported damaged; A vehicle hit by 20mm fire was considered damaged. According to Vietnam Magazine, on 12 May 1971, these criteria were tested with a staged firepower demonstration by an AC-130, directed against eight targeted trucks on a bombing range near Bien Hoa AB. By applying the criteria, these eight test targets would have been reported as five destroyed, three damaged. In actuality, a ground check proved only two trucks destroyed. Five more trucks were operable after repair. One that would have been reported destroyed under the criteria was still drivable. Unexploded 20mm shells littered the ground surrounding the trucks. As a result of this test, the damage criteria were changed to report only exploding or burning trucks as destroyed. Notable Forward Air Controllers Steven L. Bennett: Medal of Honor recipient Hilliard A. Wilbanks: Medal of Honor recipient Craig W. 
Duehring: Assistant Secretary of the Air Force in later years George Everette "Bud" Day: Medal of Honor recipient Legacy By the time Igloo White ended, it had cost in excess of two billion dollars in equipment costs, excluding the cost of lost aircraft. Operating expenses doubled that. Despite the expense, Igloo White's emphasis on interdicting supply trucks instead of enemy troops failed to deter continuing communist offensives in South Vietnam. The Vietnam War saw about 13 million tons of bombs dropped by the U.S. and its allies. This was approximately six times the tonnage dropped during World War II. However, unlike World War II, there was no mass dumping of ordnance on cities full of civilians. Instead, in close air support—and in many interdiction situations—forward air controllers were charged with following stringent Rules of Engagement in directing air strikes. The expertise went to little avail. When Raven FAC Greg Wilson called The Pentagon for a post-Laos assignment as a fighter pilot, he was told, "We're trying to purge the Vietnam FAC experience from the fighter corps because we have moved into an era of air combat where the low-threat, low speed, close air support you did in Southeast Asia is no longer valid. And we don't want these habits or these memories in our fighter force." This was symbolic of what was to come. Once again the United States Air Force abandoned forward air control at the war's end, just as it had following World War II and after the Korean War. See also Raven Forward Air Controllers 19th Tactical Air Support Squadron 20th Tactical Air Support Squadron 21st Tactical Air Support Squadron 22nd Tactical Air Support Squadron 23rd Tactical Air Support Squadron Misty Fast FAC's mistyvietnam.com External links List of Australian RACs who served in Vietnam at Australian FAC history at Notes References Ahern, Thomas L. Jr. (2006), Undercover Armies: CIA and Surrogate Warfare in Laos. Center for the Study of Intelligence. Classified control no. C05303949. Anthony, Victor B. and Richard R. Sexton (1993). The War in Northern Laos. Center for Air Force History, OCLC 232549943. Anthony, Victor B. (1973). The Air Force in Southeast Asia: Tactics and Techniques of Night Operations 1961-1970. Office of Air Force History. (2011 reprint). Military Bookshop. ISBNs 1780396570, 978-1780396576. Castle, Timothy N. (1993). At War in the Shadow of Vietnam: U.S. Military Aid to the Royal Lao Government 1955–1975. Columbia University Press. . Churchill, Jan (1997). Hit My Smoke!: Forward Air Controllers in Southeast Asia. Sunflower University Press. ISBNs 0-89745-215-1, 978-0-89745-215-1. Conboy, Kenneth and James Morrison (1995). Shadow War: The CIA's Secret War in Laos. Paladin Press, ISBNs 0-87364-825-0, 978-1-58160-535-8. Dunnigan, James F. and Albert A. Nofi (2000). Dirty Little Secrets of the Vietnam War: Military Information You're Not Supposed to Know. St. Martin's Griffin. ISBNs 031225282X, 978-0312252823. Gooderson, Ian (1998). Air Power at the Battlefront: Allied Close Air Support in Europe 1943-45 (Studies in Air Power). Routledge. ISBNs: 0714642118, 978-0714642116. Harrison, Marshall (1989). A Lonely Kind of War: Forward Air Controller Vietnam. Pocket Books. . Hooper, Jim (2009). A Hundred Feet Over Hell: Flying With the Men of the 220th Recon Airplane Company Over I Corps and the DMZ, Vietnam 1968-1969. Zenith Imprint. ISBNs 0-7603-3633-4, 978-0-7603-3633-5. Kelly, Orr (1996). From a Dark Sky: The Story of U.S. Air Force Special Operations. Pocket Books. 
ISBNs 0-671-00917-6, 767140059900917. LaFeber, Walter (2005). The Deadly Bet: LBJ, Vietnam, and the 1968 Election. Rowman & Littlefield Publishers. ISBNs 0742543927, 978-0742543928. Lester, Gary Robert (1987). Mosquitoes to Wolves: The Evolution of the Airborne Forward Air Controller. Air University Press. ISBNs 1-58566-033-7, 978-1-58566-033-9. Mahnken, Thomas G. (2010). Technology and the American Way of War Since 1945. Columbia University Press. ISBNs 0231517882, 9780231517881. Nalty, Bernard C. (2005). War Against Trucks: Aerial Interdiction in Southern Laos 1968- 1972. Air Force History and Museums Program, United States Air Force. . Prados, John (1998). The Hidden History of the Vietnam War. Ivan R. Dee. ISBNs 1566631971, 978-1566631976. Robbins, Christopher (1987) The Ravens: The Men Who Flew in America's Secret War in Laos. Crown, , Rowley, Ralph A. (1972). The Air Force in Southeast Asia: US FAC Operations in Southeast Asia 1961-1965. U.S. Office of Air Force History. (2011 reprint). Military Studies Press. ISBNs 1780399987, 9781780399980. — (1975). The Air Force in Southeast Asia: FAC Operations 1965-1970. U.S. Office of Air Force History. Military Bookshop (2011 reprint). ISBNs 1780396562, 978-1780396569. Schlight, John (1969). Project CHECO Report: JET FORWARD AIR CONTROLLERS IN SEASIA." Headquarters Pacific Air Force. ASIN B00ARRLMEY. Staff (1969). The ABCCC in Southeast Asia (Project CHECO Reports). U.S. Air Force. ASIN: B005E7KAQS. Summers, Col. (Ret) Harry G. Jr. and Stanley Karnow (1995). Historical Atlas of the Vietnam War. Houghton Mifflin Harcourt. ISBNs 0395722233, 978-0395722237. Walton, Andrew R. (2014). The History of the Airborne Forward Air Controller in Vietnam. CreateSpace Independent Publishing Platform reprint of U.S. Army Command and General Staff College publication. ISBNs: 1500830917, 978-1500830915. Warner, Roger (1995). Back Fire: The CIA's Secret War in Laos and Its Link to the War in Vietnam. Simon & Schuster. ISBNs 0-68480-292-9, 978-06848-0292-3. Further reading Cooper, Garry and Robert Hillier (2006). Sock It to ’Em, Baby: Forward Air Controller in Vietnam''. Allen & Unwin. ISBNs 1741148499, 978-1741148497. Military personnel of the Vietnam War
1516037
https://en.wikipedia.org/wiki/Adobe%20Encore
Adobe Encore
Adobe Encore (previously Adobe Encore DVD) is a DVD authoring software tool produced by Adobe Systems and targeted at professional video producers. Video and audio resources may be used in their current format for development, allowing the user to transcode them to MPEG-2 video and Dolby Digital audio upon project completion. DVD menus can be created and edited in Adobe Photoshop using special layering techniques. Adobe Encore does not support writing to a Blu-ray Disc using AVCHD 2.0. Encore is bundled with Adobe Premiere Pro CS6. Adobe Encore CS6 was the last release. While Premiere Pro CC has moved to the Creative Cloud, Encore has now been discontinued. Licensing All forms of Adobe Encore use a proprietary licensing system from its developer, Adobe Systems. Versions 1.0 and 1.5 required a separate license fee (rather than making 1.5 available as a free update). Version 3, also known as CS3, was sold only in bundle with Premiere CS3. Encore CS4, CS5, CS5.5 and CS6 were only sold in the Premiere Pro CS4, CS5, CS5.5 and CS6 bundles, respectively. Adobe CC subscribers no longer have access to Adobe Encore CS6. Adobe Encore is not included with Premiere Pro CC. See also Video editing software Adobe Creative Cloud References External links Adobe Encore DVD Version History/Changelog Adobe Encore 1.5.1 Download Page Adobe Encore Download Page Encore Encore Adobe Encore DVD Discontinued Adobe software MacOS multimedia software Windows multimedia software 2003 software
1813588
https://en.wikipedia.org/wiki/Digital%20distribution
Digital distribution
Digital distribution (also referred to as content delivery, online distribution, or electronic software distribution (ESD), among others) is the delivery or distribution of digital media content such as audio, video, e-books, video games, and other software. The term is generally used to describe distribution over an online delivery medium, such as the Internet, thus bypassing physical distribution methods, such as paper, optical discs, and VHS videocassettes. The term online distribution is typically applied to freestanding products; downloadable add-ons for other products are more commonly known as downloadable content. With the advancement of network bandwidth capabilities, online distribution became prominent in the 21st century, with prominent platforms such as Amazon Video, and Netflix's streaming service starting in 2007. Content distributed online may be streamed or downloaded, and often consists of books, films and television programs, music, software, and video games. Streaming involves downloading and using content at a user's request, or "on-demand", rather than allowing a user to store it permanently. In contrast, fully downloading content to a hard drive or other forms of storage media may allow offline access in the future. Specialist networks known as content delivery networks help distribute content over the Internet by ensuring both high availability and high performance. Alternative technologies for content delivery include peer-to-peer file sharing technologies. Alternatively, content delivery platforms create and syndicate content remotely, acting like hosted content management systems. Unrelated to the above, the term "Digital distribution" is also used in film distribution to describe the distribution of content through physical digital media, in opposition to distribution by analog media such as photographic film and magnetic tape (see digital cinema). Basis A primary characteristic of online distribution is its direct nature. To make a commercially successful work, artists usually must enter their industry's publishing chain. Publishers help artists advertise, fund and distribute their work to retail outlets. In some industries, particularly video games, artists find themselves bound to publishers, and in many cases unable to make the content they want; the publisher might not think it will profit well. This can quickly lead to the standardization of the content and to the stifling of new, potentially risky ideas. By opting for online distribution, an artist can get their work into the public sphere of interest easily with potentially minimum business overhead. This often leads to cheaper goods for the consumer, increased profits for the artists, as well as increased artistic freedom. Online distribution platforms often contain or act as a form of digital rights management. Online distribution also opens the door to new business models (e.g., the Open Music Model). For instance, an artist could release one track from an album or one chapter from a book at a time instead of waiting for them all to be completed. This either gives them a cash boost to help continue their projects or indicates that their work might not be financially viable. This is hopefully done before they have spent excessive money and time on a project deemed to remain unprofitable. Video games have increased flexibility in this area, demonstrated by micropayment models. 
A clear result of these new models is their accessibility to smaller artists or artist teams who do not have the time, funds, or expertise to make a new product in one go. An example of this can be found in the music industry. Indie artists may access the same distribution channels as major record labels, with potentially fewer restrictions and manufacturing costs. There is a growing collection of 'Internet labels' that distribute the work of unsigned or independent artists directly to online music stores, and in some cases offer marketing and promotion services. Further, many bands are able to bypass this completely and offer their music for sale via their own independently controlled websites. An issue is the large number of incompatible formats in which content is delivered, restricting the devices that may be used, or making data conversion necessary. Impact on traditional retail The rise of online distribution has created controversy around traditional business models and resulted in challenges as well as new opportunities for traditional retailers and publishers. Online distribution affects all of the traditional media markets including music, press, and broadcasting. In Britain, the iPlayer, a software application for streaming television and radio, accounts for 5% of all bandwidth used in the United Kingdom. Music The move towards online distribution led to a dip in sales in the 2000s; CD sales were nearly cut in half around this time. One example of online distribution taking its toll on a retailer is the Canadian music chain Sam the Record Man; the company blamed online distribution for having to close a number of its traditional retail venues in 2007–08. One main reason that sales took such a big hit was that unlicensed downloads of music were very accessible. With copyright infringement affecting sales, the music industry realized it needed to change its business model to keep up with the rapidly changing technology. The move of the music industry into the online space has been successful for several reasons. The development of lossy audio compression formats such as MP3 allows users to compress music files into a reasonably high-quality file of only about 3 megabytes (MB). The lossless FLAC format may require only a few megabytes more. In comparison, the same song might require 30–40 megabytes of storage on a CD. The smaller file size allows much faster transfer over the Internet. The transition into the online space has boosted sales and profits for some artists. It has also allowed for lower expenses, such as reduced coordination and distribution costs, as well as the possibility of redistributing total profits. These lower costs have aided new artists in breaking onto the scene and gaining recognition. In the past, some emerging artists have struggled to find a way to market themselves and compete in the various distribution channels. The Internet may give artists more control over their music in terms of ownership, rights, creative process, pricing, and more. In addition to providing global users with easier access to content, online stores allow users to choose the songs they wish instead of having to purchase an entire album from which there may only be one or two titles that the buyer enjoys. The number of downloaded single tracks rose from 160 million in 2004 to 795 million in 2006, which accounted for a revenue boost from US$397 million to US$2 billion. 
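A rough calculation illustrates the file sizes cited above, assuming uncompressed CD-quality audio (44,100 samples per second, 16 bits per sample, two channels) and a typical MP3 bit rate of 128 kbit/s: CD audio requires 44,100 × 16 × 2 = 1,411,200 bit/s, or roughly 10.6 MB per minute, so a three- to four-minute song occupies about 32–42 MB uncompressed; at 128 kbit/s (about 0.96 MB per minute), the same song compresses to roughly 3–4 MB, consistent with the figures given above.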
Videos Many traditional network television shows, movies and other video programs are now available online, either from the content owner directly or from third-party services. YouTube, Netflix, Hulu, Vudu, Amazon Prime Video, DirecTV, SlingTV and other Internet-based video services allow content owners to let users access their content on computers, smartphones and tablets, or through appliances such as video game consoles, set-top boxes or Smart TVs. Many film distributors also include a Digital Copy, also called Digital HD, with Blu-ray disc, Ultra HD Blu-ray, 3D Blu-ray or a DVD. Books Some companies, such as Bookmasters Distribution, which invested US$4.5 million in upgrading its equipment and operating systems, have had to direct capital toward keeping up with the changes in technology. The phenomenon of books going digital has given users the ability to access their books on handheld digital book readers. One benefit of electronic book readers is that they allow users to access additional content via hypertext links. These electronic book readers also give users portability for their books, since a reader can hold multiple books depending on the size of its storage. Companies that are able to adapt and make changes to capitalize on the digital media market have seen sales surge. The vice president of Perseus Books Group stated that since shifting to electronic books (e-books), the company saw sales rise by 68%. Independent Publishers Group experienced a sales boost of 23% in the first quarter of 2012 alone. Tor Books, a major publisher of science fiction and fantasy books, started to sell e-books DRM-free by July 2012. One year later, the publisher stated that it would keep this model, as removing DRM had not hurt its digital e-book business. Smaller e-book publishers such as O'Reilly Media, Carina Press and Baen Books had already forgone DRM. Video games Online distribution is changing the structure of the video game industry. Gabe Newell, creator of the digital distribution service Steam, has summarized the advantages of digital distribution over physical retail distribution. Since the 2000s, an increasing number of smaller and niche titles, such as remakes of classic games, have become available and commercially successful. Digital distribution has also stimulated the creation of titles from very small producers, such as independent game developers and modders (e.g. Garry's Mod), which previously were not commercially feasible. The years after 2004 saw the rise of many digital distribution services on the PC, such as Amazon Digital Services, Desura, GameStop, Games for Windows – Live, Impulse, Steam, Origin, Battle.net, Direct2Drive, GOG.com, Epic Games Store and GamersGate. These services differ significantly in what they offer: while most of these digital distributors do not allow reselling of bought games, Green Man Gaming does. Another example is GOG.com, which has a strict no-DRM policy, while most other services allow various (strict or less strict) forms of DRM. Digital distribution is also more eco-friendly than physical distribution. Optical discs are made of polycarbonate plastic and aluminum. The creation of 30 of them requires the use of 300 cubic feet of natural gas, two cups of oil and 24 gallons of water. The protective cases for optical discs are made from polyvinyl chloride (PVC), a known carcinogen. 
Challenges A general issue is the large number of incompatible data formats in which content is delivered, possibly restricting the devices that may be used, or making data conversion necessary. Streaming services can have several drawbacks: requiring a constant Internet connection to use content; the restriction of some content to never be stored locally; the restriction of content from being transferred to physical media; and the enabling of greater censorship at the discretion of owners of content, infrastructure, and consumer devices. Decades after the launch of the World Wide Web, in 2019 businesses were still adapting to the evolving world of distributing content digitally—even regarding the definition and understanding of basic terminology. See also Application store Online shopping Cloud gaming Comparison of online music stores Content delivery network Digital distribution in video games E-book Electronic publishing Electronic commerce Film distribution Film distributor Internet pornography List of Internet television providers List of mobile software distribution platforms Streaming media Video on demand Uberisation References Distribution Film distribution Non-store retailing Software delivery methods
9838
https://en.wikipedia.org/wiki/Eiffel%20%28programming%20language%29
Eiffel (programming language)
Eiffel is an object-oriented programming language designed by Bertrand Meyer (an object-orientation proponent and author of Object-Oriented Software Construction) and Eiffel Software. Meyer conceived the language in 1985 with the goal of increasing the reliability of commercial software development; the first version becoming available in 1986. In 2005, Eiffel became an ISO-standardized language. The design of the language is closely connected with the Eiffel programming method. Both are based on a set of principles, including design by contract, command–query separation, the uniform-access principle, the single-choice principle, the open–closed principle, and option–operand separation. Many concepts initially introduced by Eiffel later found their way into Java, C#, and other languages. New language design ideas, particularly through the Ecma/ISO standardization process, continue to be incorporated into the Eiffel language. Characteristics The key characteristics of the Eiffel language include: An object-oriented program structure in which a class serves as the basic unit of decomposition. Design by contract tightly integrated with other language constructs. Automatic memory management, typically implemented by garbage collection. Inheritance, including multiple inheritance, renaming, redefinition, "select", non-conforming inheritance, and other mechanisms intended to make inheritance safe. Constrained and unconstrained generic programming A uniform type system handling both value and reference semantics in which all types, including basic types such as INTEGER, are class-based. Static typing Void safety, or static protection against calls on null references, through the attached-types mechanism. Agents, or objects that wrap computations, closely connected with closures and lambda calculus. Once routines, or routines evaluated only once, for object sharing and decentralized initialization. Keyword-based syntax in the ALGOL/Pascal tradition but separator-free, insofar as semicolons are optional, with operator syntax available for routines. Case insensitivity Simple Concurrent Object-Oriented Programming (SCOOP) facilitates creation of multiple, concurrently active execution vehicles at a level of abstraction above the specific details of these vehicles (e.g. multiple threads without specific mutex management). Design goals Eiffel emphasizes declarative statements over procedural code and attempts to eliminate the need for bookkeeping instructions. Eiffel shuns coding tricks or coding techniques intended as optimization hints to the compiler. The aim is not only to make the code more readable, but also to allow programmers to concentrate on the important aspects of a program without getting bogged down in implementation details. Eiffel's simplicity is intended to promote simple, extensible, reusable, and reliable answers to computing problems. Compilers for computer programs written in Eiffel provide extensive optimization techniques, such as automatic in-lining, that relieve the programmer of part of the optimization burden. Background Eiffel was originally developed by Eiffel Software, a company founded by Bertrand Meyer. Object-Oriented Software Construction contains a detailed treatment of the concepts and theory of the object technology that led to Eiffel's design. The design goal behind the Eiffel language, libraries, and programming methods is to enable programmers to create reliable, reusable software modules. 
Eiffel supports multiple inheritance, genericity, polymorphism, encapsulation, type-safe conversions, and parameter covariance. Eiffel's most important contribution to software engineering is design by contract (DbC), in which assertions, preconditions, postconditions, and class invariants are employed to help ensure program correctness without sacrificing efficiency. Eiffel's design is based on object-oriented programming theory, with only minor influence of other paradigms or concern for support of legacy code. Eiffel formally supports abstract data types. Under Eiffel's design, a software text should be able to reproduce its design documentation from the text itself, using a formalized implementation of the "Abstract Data Type". Implementations and environments EiffelStudio is an integrated development environment available under either an open source or a commercial license. It offers an object-oriented environment for software engineering. EiffelEnvision is a plug-in for Microsoft Visual Studio that allows users to edit, compile, and debug Eiffel projects from within the Microsoft Visual Studio IDE. Five other open source implementations are available: "The Eiffel Compiler" tecomp; Gobo Eiffel; SmartEiffel, the GNU implementation, based on an older version of the language; LibertyEiffel, based on the SmartEiffel compiler; and Visual Eiffel. Several other programming languages incorporate elements first introduced in Eiffel. Sather, for example, was originally based on Eiffel but has since diverged, and now includes several functional programming features. The interactive-teaching language Blue, forerunner of BlueJ, is also Eiffel-based. The Apple Media Tool includes an Eiffel-based Apple Media Language. Specifications and standards The Eiffel language definition is an international standard of the ISO. The standard was developed by ECMA International, which first approved the standard on 21 June 2005 as Standard ECMA-367, Eiffel: Analysis, Design and Programming Language. In June 2006, ECMA and ISO adopted the second version. In November 2006, ISO first published that version. The standard can be found and used free of charge on the ECMA site. The ISO version is identical in all respects except formatting. Eiffel Software, "The Eiffel Compiler" tecomp and Eiffel-library-developer Gobo have committed to implementing the standard; Eiffel Software's EiffelStudio 6.1 and "The Eiffel Compiler" tecomp implement some of the major new mechanisms—in particular, inline agents, assigner commands, bracket notation, non-conforming inheritance, and attached types. The SmartEiffel team has turned away from this standard to create its own version of the language, which they believe to be closer to the original style of Eiffel. Object Tools has not disclosed whether future versions of its Eiffel compiler will comply with the standard. LibertyEiffel implements a dialect somewhere in between the SmartEiffel language and the standard. The standard cites the following, predecessor Eiffel-language specifications: Bertrand Meyer: Eiffel: The Language, Prentice Hall, second printing, 1992 (first printing: 1991) Bertrand Meyer: Standard Eiffel (revision of preceding entry), ongoing, 1997–present, at Bertrand Meyer's ETL3 page, and Bertrand Meyer: Object-Oriented Software Construction, Prentice Hall: first edition, 1988; second edition, 1997. 
Bertrand Meyer: Touch of Class: Learning to Program Well with Objects and Contracts, Springer-Verlag, 2009, lxiv + 876 pages (full-color printing, numerous color photographs). The current version of the standard from June 2006 contains some inconsistencies (e.g. covariant redefinitions). The ECMA committee has not yet announced any timeline or direction for resolving the inconsistencies. Syntax and semantics Overall structure An Eiffel "system" or "program" is a collection of classes. Above the level of classes, Eiffel defines the cluster, which is essentially a group of classes, and possibly of subclusters (nested clusters). Clusters are not a syntactic language construct, but rather a standard organizational convention. Typically an Eiffel program will be organized with each class in a separate file, and each cluster in a directory containing class files. In this organization, subclusters are subdirectories. For example, under standard organizational and casing conventions, x.e might be the name of a file that defines a class called X. A class contains features, which are similar to "routines", "members", "attributes" or "methods" in other object-oriented programming languages. A class also defines its invariants, and contains other properties, such as a "notes" section for documentation and metadata. Eiffel's standard data types, such as INTEGER, STRING and ARRAY, are all themselves classes. Every system must have a class designated as "root", with one of its creation procedures designated as "root procedure". Executing a system consists of creating an instance of the root class and executing its root procedure. Generally, doing so creates new objects, calls new features, and so on. Eiffel has five basic executable instructions: assignment, object creation, routine call, condition, and iteration. Eiffel's control structures are strict in enforcing structured programming: every block has exactly one entry and exactly one exit. Scoping Unlike many object-oriented languages, but like Smalltalk, Eiffel does not permit any assignment into attributes of objects, except within the features of an object; this is the practical application of the principle of information hiding or data abstraction, requiring formal interfaces for data mutation. To put it in the language of other object-oriented programming languages, all Eiffel attributes are "protected", and "setters" are needed for client objects to modify values. An upshot of this is that "setters" can, and normally do, implement the invariants for which Eiffel provides syntax. While Eiffel does not allow direct access to the features of a class by a client of the class, it does allow for the definition of an "assigner command", such as: some_attribute: SOME_TYPE assign set_some_attribute set_some_attribute (v: VALUE_TYPE) -- Set value of some_attribute to `v'. do some_attribute := v end While this is a slight bow to the wider developer community, allowing something that looks like direct access (and thereby breaking the information-hiding principle), the practice is dangerous because it hides or obfuscates the fact that a "setter" is being used. In practice, it is better to call the setter explicitly rather than to imply direct access to a feature like some_attribute as in the example code above. Unlike other languages, which have notions of "public", "protected", "private" and so on, Eiffel uses an export mechanism to control the scoping between client and supplier classes more precisely. Feature visibility is checked statically at compile-time. 
For example, in the code below, "{NONE}" is similar to "protected" in other languages. Scope applied this way to a "feature set" (i.e. everything below the 'feature' keyword up to either the next feature keyword or the end of the class) can be changed in descendant classes using the "export" keyword. feature {NONE} -- Initialization default_create -- Initialize a new `zero' decimal instance. do make_zero end Alternatively, the lack of a {x} export declaration implies {ANY} and is similar to the "public" scoping of other languages. feature -- Constants Finally, scoping can be selectively and precisely controlled to any class in the Eiffel project universe, such as: feature {DECIMAL, DCM_MA_DECIMAL_PARSER, DCM_MA_DECIMAL_HANDLER} -- Access Here, the compiler will allow only the classes listed between the curly braces to access the features within the feature group. "Hello, world!" A programming language's look and feel is often conveyed using a "Hello, world!" program. Such a program written in Eiffel might be: class HELLO_WORLD create make feature make do print ("Hello, world!%N") end end This program contains the class HELLO_WORLD. The constructor (create routine) for the class, named make, invokes the print system library routine to write a "Hello, world!" message to the output. Design by contract The concept of Design by Contract is central to Eiffel. The contracts assert what must be true before a routine is executed (precondition) and what must be true after the routine finishes (postcondition). Class invariant contracts define what assertions must hold true both before and after any feature of a class is accessed (both routines and attributes). Moreover, contracts codify in executable form the assumptions of developers and designers about the operating environment of the features of a class, or of the class as a whole, by means of the invariant. The Eiffel compiler is designed to include the feature and class contracts at various levels. EiffelStudio, for example, executes all feature and class contracts during execution in "Workbench mode". When an executable is created, the compiler is instructed by way of the project settings file (e.g. the ECF file) to either include or exclude any set of contracts. Thus, an executable file can be compiled to either include or exclude any level of contract, thereby supporting successive levels of unit and integration testing. Contracts can also be continually and methodically exercised by way of the AutoTest feature found in EiffelStudio. The Design by Contract mechanisms are tightly integrated with the language and guide redefinition of features in inheritance: Routine precondition: The precondition may only be weakened by inheritance; any call that meets the requirements of the ancestor meets those of the descendant. Routine postcondition: The postcondition can only be strengthened by inheritance; any result guaranteed by the ancestor is still provided by the descendant. Class invariant: Conditions that must hold true after the object's creation and after any call to an exported class routine. Because the invariant is checked so often, it is simultaneously the most expensive and most powerful form of condition or contract. In addition, the language supports a "check instruction" (a kind of "assert"), loop invariants, and loop variants (which guarantee loop termination). Void-safe capability Void-safe capability, like static typing, is another facility for improving software quality. 
Void-safe software is protected from run-time errors caused by calls to void references, and therefore will be more reliable than software in which calls to void targets can occur. The analogy to static typing is a useful one. In fact, void-safe capability could be seen as an extension to the type system, or a step beyond static typing, because the mechanism for ensuring void safety is integrated into the type system. The guard against void target calls can be seen by way of the notion of attachment and (by extension) detachment (e.g. the detachable keyword). The void-safe facility can be seen in a short re-work of the example code used above: some_attribute: detachable SOME_TYPE use_some_attribute -- Use `some_attribute' if it is attached. do if attached some_attribute as l_attribute then do_something (l_attribute) end end do_something (a_value: SOME_TYPE) -- Do something with `a_value'. do ... doing something with `a_value' ... end The code example above shows how the compiler can statically address the reliability of whether some_attribute will be attached or detached at the point it is used. Notably, the attached keyword allows for an "attachment local" (e.g. l_attribute), which is scoped to only the block of code enclosed by the if-statement construct. Thus, within this small block of code, the local variable (e.g. l_attribute) can be statically guaranteed to be non-void (i.e. void-safe). Features: commands and queries The primary characteristic of a class is that it defines a set of features: as a class represents a set of run-time objects, or "instances", a feature is an operation on these objects. There are two kinds of features: queries and commands. A query provides information about an instance. A command modifies an instance. The command-query distinction is important to the Eiffel method. In particular: Uniform-Access Principle: from the point of view of a software client making a call to a class feature, whether a query is an attribute (field value) or a function (computed value) should not make any difference. For example, a_vehicle.speed could be an attribute accessed on the object a_vehicle, or it could be computed by a function that divides distance by time. The notation is the same in both cases, so that it is easy to change the class's implementation without affecting client software. Command-Query Separation Principle: Queries must not modify the instance. This is not a language rule but a methodological principle. So in good Eiffel style, one does not find "get" functions that change something and return a result; instead there are commands (procedures) to change objects, and queries to obtain information about the object, resulting from preceding changes. Overloading Eiffel does not allow argument overloading. Each feature name within a class always maps to a specific feature within the class. One name, within one class, means one thing. This design choice helps the readability of classes, by avoiding a cause of ambiguity about which routine will be invoked by a call. It also simplifies the language mechanism; in particular, this is what makes Eiffel's multiple inheritance mechanism possible. Names can, of course, be reused in different classes. For example, the feature plus (along with its infix alias "+") is defined in several basic classes: INTEGER, REAL, etc. Genericity A generic class is a class that varies by type (e.g. LIST [PHONE], a list of phone numbers, or ACCOUNT [G -> ACCOUNT_TYPE], allowing for ACCOUNT [SAVINGS] and ACCOUNT [CHECKING], etc.). 
Classes can be generic, to express that they are parameterized by types. Generic parameters appear in square brackets: class LIST [G] ... G is known as a "formal generic parameter". (Eiffel reserves "argument" for routines, and uses "parameter" only for generic classes.) With such a declaration G represents within the class an arbitrary type; so a function can return a value of type G, and a routine can take an argument of that type: item: G do ... end put (x: G) do ... end The LIST [INTEGER] and LIST [WORD] are "generic derivations" of this class. Permitted combinations (with n: INTEGER, w: WORD, il: LIST [INTEGER], wl: LIST [WORD]) are: n := il.item wl.put (w) INTEGER and WORD are the "actual generic parameters" in these generic derivations. It is also possible to have 'constrained' formal parameters, for which the actual parameter must inherit from a given class, the "constraint". For example, in class HASH_TABLE [G, KEY -> HASHABLE] a derivation HASH_TABLE [INTEGER, STRING] is valid only if STRING inherits from HASHABLE (as it indeed does in typical Eiffel libraries). Within the class, having KEY constrained by HASHABLE means that for x: KEY it is possible to apply to x all the features of HASHABLE, as in x.hash_code. Inheritance basics To inherit from one or more others, a class will include an inherit clause at the beginning: class C inherit A B -- ... Rest of class declaration ... The class may redefine (override) some or all of the inherited features. This must be explicitly announced at the beginning of the class through a redefine subclause of the inheritance clause, as in class C inherit A redefine f, g, h end B redefine u, v end See for a complete discussion of Eiffel inheritance. Deferred classes and features Classes may be defined with deferred class rather than with class to indicate that the class may not be directly instantiated. Non-instantiatable classes are called abstract classes in some other object-oriented programming languages. In Eiffel parlance, only an "effective" class can be instantiated (it may be a descendant of a deferred class). A feature can also be deferred by using the deferred keyword in place of a do clause. If a class has any deferred features it must be declared as deferred; however, a class with no deferred features may nonetheless itself be deferred. Deferred classes play some of the same role as interfaces in languages such as Java, though many object-oriented programming theorists believe interfaces are themselves largely an answer to Java's lack of multiple inheritance (which Eiffel has). Renaming A class that inherits from one or more others gets all its features, by default under their original names. It may, however, change their names through rename clauses. This is required in the case of multiple inheritance if there are name clashes between inherited features; without renaming, the resulting class would violate the no-overloading principle noted above and hence would be invalid. Tuples Tuples types may be viewed as a simple form of class, providing only attributes and the corresponding "setter" procedure. A typical tuple type reads TUPLE [name: STRING; weight: REAL; date: DATE] and could be used to describe a simple notion of birth record if a class is not needed. 
An instance of such a tuple is simply a sequence of values with the given types, given in brackets, such as ["Brigitte", 3.5, Last_night] Components of such a tuple can be accessed as if the tuple tags were attributes of a class, for example if t has been assigned the above tuple then t.weight has value 3.5. Thanks to the notion of assigner command (see below), dot notation can also be used to assign components of such a tuple, as in t.weight := t.weight + 0.5 The tuple tags are optional, so that it is also possible to write a tuple type as TUPLE [STRING, REAL, DATE]. (In some compilers this is the only form of tuple, as tags were introduced with the ECMA standard.) The precise specification of e.g. TUPLE [A, B, C] is that it describes sequences of at least three elements, the first three being of types A, B, C respectively. As a result, TUPLE [A, B, C] conforms to (may be assigned to) TUPLE [A, B], to TUPLE [A] and to TUPLE (without parameters), the topmost tuple type to which all tuple types conform. Agents Eiffel's "agent" mechanism wraps operations into objects. This mechanism can be used for iteration, event-driven programming, and other contexts in which it is useful to pass operations around the program structure. Other programming languages, especially ones that emphasize functional programming, allow a similar pattern using continuations, closures, or generators; Eiffel's agents emphasize the language's object-oriented paradigm, and use a syntax and semantics similar to code blocks in Smalltalk and Ruby. For example, to execute the my_action block for each element of my_list, one would write: my_list.do_all (agent my_action) To execute my_action only on elements satisfying my_condition, a limitation/filter can be added: my_list.do_if (agent my_action, agent my_condition) In these examples, my_action and my_condition are routines. Prefixing them with agent yields an object that represents the corresponding routine with all its properties, in particular the ability to be called with the appropriate arguments. So if a represents that object (for example because a is the argument to do_all), the instruction a.call ([x]) will call the original routine with the argument x, as if we had directly called the original routine: my_action (x). Arguments to call are passed as a tuple, here [x]. It is possible to keep some arguments to an agent open and make others closed. The open arguments are passed as arguments to call: they are provided at the time of agent use. The closed arguments are provided at the time of agent definition. For example, if action2 has two arguments, the iteration my_list.do_all (agent action2 (?, y)) iterates action2 (x, y) for successive values of x, where the second argument remains set to y. The question mark ? indicates an open argument; y is a closed argument of the agent. Note that the basic syntax agent f is a shorthand for agent f (?, ?, ...) with all arguments open. It is also possible to make the target of an agent open through the notation {T}? where T is the type of the target. The distinction between open and closed operands (operands = arguments + target) corresponds to the distinction between bound and free variables in lambda calculus. An agent expression such as action2 (?, y) with some operands closed and some open corresponds to a version of the original operation curried on the closed operands. 
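A brief sketch of the open-target form (assuming my_names is a LIST [STRING]; to_lower is the lower-casing procedure of the standard STRING class):

my_names.do_all (agent {STRING}.to_lower)
	-- Applies to_lower to every string in my_names: the open target is filled in
	-- with each list element in turn, so each string is converted in place.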
The agent mechanism also allows defining an agent without reference to an existing routine (such as my_action, my_condition, action2), through inline agents as in my_list.do_all (agent (s: STRING) require not_void: s /= Void do s.append_character (',') ensure appended: s.count = old s.count + 1 end) The inline agent passed here can have all the trappings of a normal routine, including precondition, postcondition, rescue clause (not used here), and a full signature. This avoids defining routines when all that's needed is a computation to be wrapped in an agent. This is useful in particular for contracts, as in an invariant clause that expresses that all elements of a list are positive: my_list.for_all (agent (x: INTEGER): BOOLEAN do Result := (x > 0) end) The current agent mechanism leaves a possibility of run-time type error (if a routine with n arguments is passed to an agent expecting m arguments with m < n). This can be avoided by a run-time check through the precondition valid_arguments of call. Several proposals for a purely static correction of this problem are available, including a language change proposal by Ribet et al. Once routines A routine's result can be cached using the once keyword in place of do. Non-first calls to a routine require no additional computation or resource allocation, but simply return a previously computed result. A common pattern for "once functions" is to provide shared objects; the first call will create the object, subsequent ones will return the reference to that object. The typical scheme is: shared_object: SOME_TYPE once create Result.make (args) -- This creates the object and returns a reference to it through `Result'. end The returned object—Result in the example—can itself be mutable, but its reference remains the same. Often "once routines" perform a required initialization: multiple calls to a library can include a call to the initialization procedure, but only the first such call will perform the required actions. Using this pattern initialization can be decentralized, avoiding the need for a special initialization module. "Once routines" are similar in purpose and effect to the singleton pattern in many programming languages, and to the Borg pattern used in Python. By default, a "once routine" is called once per thread. The semantics can be adjusted to once per process or once per object by qualifying it with a "once key", e.g. once ("PROCESS"). Conversions Eiffel provides a mechanism to allow conversions between various types. The mechanisms coexists with inheritance and complements it. To avoid any confusion between the two mechanisms, the design enforces the following principle: (Conversion principle) A type may not both conform and convert to another. For example, NEWSPAPER may conform to PUBLICATION, but INTEGER converts to REAL (and does not inherit from it). The conversion mechanism simply generalizes the ad hoc conversion rules (such as indeed between INTEGER and REAL) that exist in most programming languages, making them applicable to any type as long as the above principle is observed. 
For example, a DATE class may be declared to convert to STRING; this makes it possible to create a string from a date simply through my_string := my_date as a shortcut for using an explicit object creation with a conversion procedure: create my_string.make_from_date (my_date) To make the first form possible as a synonym for the second, it suffices to list the creation procedure (constructor) make_from_date in a convert clause at the beginning of the class. As another example, if there is such a conversion procedure listed from TUPLE [day: INTEGER; month: STRING; year: INTEGER], then one can directly assign a tuple to a date, causing the appropriate conversion, as in Bastille_day := [14, "July", 1789] Exception handling Exception handling in Eiffel is based on the principles of design by contract. For example, an exception occurs when a routine's caller fails to satisfy a precondition, or when a routine cannot ensure a promised postcondition. In Eiffel, exception handling is not used for control flow or to correct data-input mistakes. An Eiffel exception handler is defined using the rescue keyword. Within the rescue section, the retry keyword executes the routine again. For example, the following routine tracks the number of attempts at executing the routine, and only retries a certain number of times: connect_to_server (server: SOCKET) -- Connect to a server or give up after 10 attempts. require server /= Void and then server.address /= Void local attempts: INTEGER do server.connect ensure connected: server.is_connected rescue if attempts < 10 then attempts := attempts + 1 retry end end This example is arguably flawed for anything but the simplest programs, however, because connection failure is to be expected. For most programs a routine name suggesting only an attempt to connect would be better, and the postcondition would not promise a connection, leaving it up to the caller to take appropriate steps if the connection was not opened. Concurrency A number of networking and threading libraries are available, such as EiffelNet and EiffelThreads. A concurrency model for Eiffel, based on the concepts of design by contract, is SCOOP, or Simple Concurrent Object-Oriented Programming, not yet part of the official language definition but available in EiffelStudio. CAMEO is an (unimplemented) variation of SCOOP for Eiffel. Concurrency also interacts with exceptions. Asynchronous exceptions can be troublesome (where a routine raises an exception after its caller has itself finished). Operator and bracket syntax, assigner commands Eiffel's view of computation is completely object-oriented in the sense that every operation is relative to an object, the "target". So for example an addition such as a + b is conceptually understood as if it were the method call a.plus (b) with target a, feature plus and argument b. Of course, the former is the conventional syntax and usually preferred. Operator syntax makes it possible to use either form by declaring the feature (for example in INTEGER, but this applies to other basic classes and can be used in any other class for which such an operator is appropriate): plus alias "+" (other: INTEGER): INTEGER -- ... Normal function declaration... end The range of operators that can be used as "alias" is quite broad; they include predefined operators such as "+" but also "free operators" made of non-alphanumeric symbols. This makes it possible to design special infix and prefix notations, for example in mathematics and physics applications. 
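A minimal sketch of such a free operator (the VECTOR class and the "|*|" alias are hypothetical, chosen only for illustration; they are not taken from the standard libraries):

dot_product alias "|*|" (other: VECTOR): REAL
		-- Scalar product of the current vector and `other'.
	do
		-- ... computation of the scalar product ...
	end

With this declaration, a client can write either a.dot_product (b) or the equivalent infix form a |*| b for two vectors a and b.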
Every class may in addition have one function aliased to "[]", the "bracket" operator, allowing the notation a [i, ...] as a synonym for a.f (i, ...) where f is the chosen function. This is particularly useful for container structures such as arrays, hash tables, lists etc. For example, access to an element of a hash table with string keys can be written number := phone_book ["JILL SMITH"] "Assigner commands" are a companion mechanism designed in the same spirit of allowing well-established, convenient notation reinterpreted in the framework of object-oriented programming. Assigner commands allow assignment-like syntax to call "setter" procedures. An assignment proper can never be of the form a.x := v as this violates information hiding; you have to go for a setter command (procedure). For example, the hash table class can have the function and the procedure item alias "[]" (key: STRING): ELEMENT [3] -- The element of key `key'. -- ("Getter" query) do ... end put (e: ELEMENT; key: STRING) -- Insert the element `e', associating it with the key `key'. -- ("Setter" command) do ... end Then to insert an element you have to use an explicit call to the setter command: [4] phone_book.put (New_person, "JILL SMITH") It is possible to write this equivalently as [5] phone_book ["JILL SMITH"] := New_person (in the same way that phone_book ["JILL SMITH"] is a synonym for number := phone_book.item ("JILL SMITH")), provided the declaration of item now starts (replacement for [3]) with item alias "[]" (key: STRING): ELEMENT assign put This declares put as the assigner command associated with item and, combined with the bracket alias, makes [5] legal and equivalent to [4]. (It could also be written, without taking advantage of the bracket, as phone_book.item ("JILL SMITH") := New_person. Note: The argument list of a's assigner is constrained to be: (a's return type;all of a's argument list...) Lexical and syntax properties Eiffel is not case-sensitive. The tokens make, maKe and MAKE all denote the same identifier. See, however, the "style rules" below. Comments are introduced by -- (two consecutive dashes) and extend to the end of the line. The semicolon, as instruction separator, is optional. Most of the time the semicolon is just omitted, except to separate multiple instructions on a line. This results in less clutter on the program page. There is no nesting of feature and class declarations. As a result, the structure of an Eiffel class is simple: some class-level clauses (inheritance, invariant) and a succession of feature declarations, all at the same level. It is customary to group features into separate "feature clauses" for more readability, with a standard set of basic feature tags appearing in a standard order, for example: class HASH_TABLE [ELEMENT, KEY -> HASHABLE] inherit TABLE [ELEMENT] feature -- Initialization -- ... Declarations of initialization commands (creation procedures/constructors) ... feature -- Access -- ... Declarations of non-boolean queries on the object state, e.g. item ... feature -- Status report -- ... Declarations of boolean queries on the object state, e.g. is_empty ... feature -- Element change -- ... Declarations of commands that change the structure, e.g. put ... -- etc. end In contrast to most curly bracket programming languages, Eiffel makes a clear distinction between expressions and instructions. This is in line with the Command-Query Separation principle of the Eiffel method. 
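A small sketch of the command-query style mentioned above (the COUNTER class is hypothetical, written only for illustration): state changes and state observations are kept in separate features, so a client writes my_counter.increment to change the object and my_counter.count to observe it, rather than a single "get-and-modify" call.

class COUNTER
feature -- Access
	count: INTEGER
			-- Value accumulated so far (a query: it returns information and changes nothing).

feature -- Element change
	increment
			-- Add one to `count' (a command: it changes state and returns no result).
		do
			count := count + 1
		ensure
			one_more: count = old count + 1
		end
end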
Style conventions Much of the documentation of Eiffel uses distinctive style conventions, designed to enforce a consistent look-and-feel. Some of these conventions apply to the code format itself, and others to the standard typographic rendering of Eiffel code in formats and publications where these conventions are possible. While the language is case-insensitive, the style standards prescribe the use of all-capitals for class names (LIST), all-lower-case for feature names (make), and initial capitals for constants (Avogadro). The recommended style also suggests underscore to separate components of a multi-word identifier, as in average_temperature. The specification of Eiffel includes guidelines for displaying software texts in typeset formats: keywords in bold, user-defined identifiers and constants in italics, comments, operators, and punctuation marks in roman, and program text in blue to distinguish it from explanatory text. For example, the "Hello, world!" program given above would be rendered as below in Eiffel documentation:
class HELLO_WORLD
create
	make
feature
	make
		do
			print ("Hello, world!")
		end
end
Interfaces to other tools and languages Eiffel is a purely object-oriented language but provides an open architecture for interfacing with "external" software in any other programming language. It is possible for example to program machine- and operating-system level operations in C. Eiffel provides a straightforward interface to C routines, including support for "inline C" (writing the body of an Eiffel routine in C, typically for short machine-level operations). Although there is no direct connection between Eiffel and C, many Eiffel compilers (Visual Eiffel is one exception) output C source code as an intermediate language, to submit to a C compiler, for optimization and portability. As such, they are examples of transcompilers. The Eiffel Compiler tecomp can execute Eiffel code directly (like an interpreter) without going via intermediate C code, or can emit C code which will be passed to a C compiler in order to obtain optimized native code. On .NET, the EiffelStudio compiler directly generates CIL (Common Intermediate Language) code. The SmartEiffel compiler can also output Java bytecode. References External links Eiffel Software, web site of the company that introduced Eiffel, formerly Interactive Software Engineering (ISE). Object-oriented programming languages Class-based programming languages Object-oriented programming Formal specification languages High Integrity Programming Language Programming languages created in 1986 Programming languages with an ISO standard
45621597
https://en.wikipedia.org/wiki/Donut%20County
Donut County
Donut County is an indie video game developed by American indie designer Ben Esposito and published by Annapurna Interactive. In the game, the player moves a hole to swallow objects, which makes the hole increase in size. The concept originated in a game jam that used video game pitches from a Twitter account parodying game designer Peter Molyneux, and later added a mechanic similar to that of Katamari Damacy. Other inspirations for the game included Hopi figurines (a theme Esposito later abandoned) and locations from Bruce Springsteen songs. Donut County was released in August 2018 for iOS, macOS, PlayStation 4, and Microsoft Windows platforms, while versions for Xbox One and Nintendo Switch were released in December 2018. It was also released for Android in December 2020. Gameplay In Donut County, the player controls a hole across several different levels, each of which is self-contained but has several areas that open up as the player progresses. The goal of each level is to swallow every object, essentially clearing out the level. As the player moves the hole to swallow objects, the hole increases in size. There are some puzzle aspects to this; to swallow objects floating on top of water, the player may need to maneuver the hole to drain part of the water and then have that water consumed by a bird, repeating this until the water is gone. Later in the game, the player gains access to a catapult that fits in the top of the hole. This can be used to fling items the hole has swallowed at outcroppings to dislodge objects or to trigger switches. Plot Human Mira works for her friend BK, a raccoon, at the local donut shop in Donut County. She finds BK more interested in a new mobile app, through which he is trying to earn enough points for a quadcopter drone by scheduling the delivery of donuts to the residents of Donut County. However, Mira discovers that the app is not sending donuts but actual holes, which have been consuming the homes and residents of Donut County. BK refuses to acknowledge he did anything wrong, and when he receives his quadcopter, Mira purposely destroys it and then orders a donut to the shop, swallowing it up as well. They join the other residents trapped underground, and they all try to convince BK that what he did was wrong. They come to learn that this was part of the plan of BK's boss, the Trash King: to acquire more trash, the Trash King and other raccoons developed the app to swallow all the trash it could find, ignoring the wishes of the people who lived there. Mira and BK launch a mission to stop the Trash King. As Mira uses a hole to wreak havoc within the raccoons' facilities, BK tries to convince the Trash King to stop what he is doing. The Trash King tempts BK with a lucrative position, while sending a giant quadcopter to fight off Mira. BK rejects the offer and races out to help hack the quadcopter and destroy it, destroying the raccoons' facilities as well. BK then agrees to help return all the people of Donut County to the surface. Development Indie game developer Ben Esposito worked on Donut County in his free time while developing The Unfinished Swan. The core game concept was prompted by a game jam based on video game pitches from a Peter Molyneux parody Twitter account, Peter Molydeux. Esposito made a game called The Pits from a pitch wherein the player moves a hole around an environment.
The game grew to work as a "reverse Katamari": instead of a ball that expands upon touching items, as in Katamari Damacy, the Donut County hole expands upon swallowing items. Esposito described the game as a "whimsical physics toy". Donut County was originally called Kachina, based on the Native American "spirit beings that personify nature". Esposito was inspired by the design of Hopi doll sculptures. Following a blog post that expressed criticism of his treatment of Hopi culture, and his subsequent effort to make an "authentic game" that incorporated the culture, he decided to change the title and abandon the theme. Donut County also took inspiration from the art style of the indie game Windosill, Los Angeles's high density of doughnut shops, and locations from Bruce Springsteen songs, such as Asbury Park and the New Jersey Turnpike. A demo of the game was featured at IndieCade in October 2012. Its goal was to knock the sun out of the sky by regurgitating objects once swallowed by the hole. Polygon's Michael McWherter noted that while some of its levels felt aimless or more like an "experiment" or "interactive toy", others "showed puzzle-like potential", including a level where chickens needed to cross a road. Rock, Paper, Shotgun reported that a presentation of the game at the GDC 2013 Experimental Gameplay Workshop made the audience "cheer and applaud in delight". The game was expected to show at the Austin, Texas, Fantastic Arcade event in September 2014. In March 2015, Esposito presented on the game's development at GDC 2015. Donut County was released on August 28, 2018 for iOS, macOS, PlayStation 4, and Windows platforms, and for Nintendo Switch and Xbox One on December 18, 2018. Annapurna Interactive has announced an upcoming physical edition of the game. Reception Critical reception Donut County received "generally favorable reviews" on all platforms according to review aggregator Metacritic. Cam Shea, writing for IGN, praised the charming narrative and world of the game. Ashley Oh of Polygon appreciated the novel challenges that each level brought to the table. Philippa Warr, writing for PC Gamer, praised the world design and the game's Trashopedia. Accolades During its development, the game was a 2015 Independent Games Festival finalist in the Excellence in Visual Art category, and an honorable mention in the Seumas McNally Grand Prize category. It won Apple's App Store "iPhone game of the year" award for 2018. References Further reading External links 2018 video games Game jam video games Single-player video games IOS games MacOS games Windows games Xbox Cloud Gaming games Xbox One games Indie video games IndieCade winners PlayStation 4 games Nintendo Switch games Annapurna Interactive games Video games about raccoons Video games developed in the United States Video games with cel-shaded animation Games Developed by Ben Esposito Video games designed by Ben Esposito
661849
https://en.wikipedia.org/wiki/Government%20Communications%20Security%20Bureau
Government Communications Security Bureau
The Government Communications Security Bureau (GCSB) is the public-service department of New Zealand charged with promoting New Zealand's national security by collecting and analysing information of an intelligence nature. According to the Bureau's official website, it has a mission of contributing to the national security of New Zealand by providing information assurance and cybersecurity, foreign intelligence, and assistance to other New Zealand government agencies. History The Government Communications Security Bureau was created in 1977 on the instructions of Robert Muldoon, the Prime Minister. Prior to this, the functions now handled by the GCSB were split between three organisations: Communications security was the responsibility of the Communications Security Committee, based around the Prime Minister's office and the Ministry of Foreign Affairs. Signals intelligence was the responsibility of the Combined Signals Organisation, run by the military. Anti-bugging measures were the responsibility of the Security Intelligence Service. Upon its establishment, the GCSB assumed responsibility for these three roles. Officially, the new organisation was part of the Ministry of Defence, and its functions and activities were highly secret; even Cabinet was not informed. In the 1980s, however, information was gradually released, first about the GCSB's security role, and then about its signals intelligence operations. Also in the 1980s, the GCSB was split away from the Ministry of Defence, becoming a separate organisation. It was not until 2000, however, that it was decided to make the GCSB a government department in its own right. This decision was implemented through the Government Communications Security Bureau Act 2003. In 2001, the Centre for Critical Infrastructure Protection was formed within the GCSB with a mandate to assist in the protection of national critical infrastructure from information-borne threats. The National Cyber Security Centre was established within the GCSB in September 2011, and it absorbed the functions of the Centre for Critical Infrastructure Protection. Staff and budget The GCSB is a government department in its own right, with its head office in Pipitea St, Wellington. Through its director, the GCSB reports to the minister holding the Intelligence portfolio, who, by convention, has usually been the Prime Minister. Its main functions are the collection and processing of intelligence, the distribution of intelligence, IT security, technology and administration. It has slightly over 400 employees with a range of disciplines including foreign language experts, communications and cryptography specialists, engineers, technicians and support staff. In 2015/16 the budget for the GCSB was $89.6 million. Former Green MP Keith Locke says that despite the attention the GCSB received as a result of its illegal surveillance of Kim Dotcom, there has been little public discussion about its value. Locke questions the GCSB's suitability for the task of protecting government computers, given its security failures. Cabinet Secretary Rebecca Kitteridge's report noted the Bureau's problems included "under-resourcing and a lack of legal staff". Oversight An Inspector-General has oversight of the GCSB (and other intelligence organisations). The current Inspector-General is Cheryl Gwyn, who began her three-year term on 5 May 2014. The office of the Inspector-General also consists of Deputy Inspector-General Ben Keith, and a number of investigating staff.
A statutory advisory panel of two members also provides advice to the Inspector-General. The Prime Minister appoints both the director of the GCSB and the Inspector-General. Associate Professor of law at Auckland University, Bill Hodge, says the watchdog should be appointed by Parliament rather than by the Prime Minister. Former prime minister Sir Geoffrey Palmer agrees: "There needs to be some separation between the inspector and the agency he oversees." Operations The functions of the GCSB include signals intelligence, communications security, anti-bugging measures, and computer security. The GCSB does not publicly disclose the nature of the communications which it intercepts. It has frequently been described by authors such as Nicky Hager as part of ECHELON. In 2006, after the death of former Prime Minister David Lange, a 1985–86 report given to Lange was found among his papers, having been mistakenly released. The report listed a number of countries as targets of GCSB efforts, including Japan, the Philippines, Argentina, France, Vietnam, and many small Pacific island states. It also mentioned United Nations diplomatic traffic. In his book on the GCSB, Nicky Hager says that during the Cold War, the locations and activities of Soviet ships (including civilian craft such as fishing trawlers) were a major focus of the organisation's activities. For the purposes of its signals intelligence activities, the GCSB maintains two "listening stations": a satellite communications interception station at GCSB Waihopai near Blenheim and a radio communications interception station at GCSB Tangimoana near Palmerston North. On 16 March 2015, the former National Security Agency contractor and whistleblower Edward Snowden disclosed that New Zealand's GCSB agency had a secret listening post, codenamed "Caprica", at the New Zealand High Commission in Honiara, the capital of the Solomon Islands. The "Caprica" outpost was reportedly modeled after the American National Security Agency's Stateroom outposts at selected United States Embassies across the world. The GCSB is characterised by its focus on foreign intelligence gathering and is unable to collect intelligence on New Zealand citizens. Because of this, the agency is reliant on the New Zealand Security Intelligence Service for domestic intelligence gathering. If the GCSB were to collect data on New Zealanders, this would be in violation of the GCSB Amendment Bill. GCSB strategic plan, 2016–2020 The 2016–2020 strategic plan sets out what the GCSB aims to achieve in the years until 2020. Its two main focuses are "impenetrable infrastructure" and "indispensable intelligence": "New Zealand's most important information infrastructures are impenetrable to technology-borne compromise. We call this aim impenetrable infrastructure; and New Zealand's intelligence generates unique policy and operational impacts for New Zealand. We call this aim indispensable intelligence." The GCSB plans to do this through eight priority objectives, including recruiting and retaining the best employees, replacing high-grade infrastructure and continuing to modernise the GCSB's access and tradecraft. Waihopai station The Waihopai Station has been operating since 1989. It is described as a satellite communications monitoring facility in the Waihopai Valley, near Blenheim. The facility has been identified by MP Keith Locke as part of ECHELON.
Few details of the facility are known, but it is believed that it intercepts and processes all phone calls, faxes, e-mail and computer data communications. The site is a regular target for protesters and activists who are attempting to have the base closed down. The Anti-Bases Campaign has held regular yearly protests at the base. In October 2021, the GCSB announced that Waihopai Station's two dishes and radomes would be decommissioned as the technology had become obsolete. However, other data collection and information gathering will continue at the station. Tangimoana station The Tangimoana Station was opened in 1982, replacing an earlier facility at Irirangi, near Waiouru. According to the Federation of American Scientists (FAS), the facility is part of ECHELON; its role in this capacity was first identified publicly by peace researcher Owen Wilkes in 1984, and investigated in detail by peace activist and independent journalist Nicky Hager. Notable activities and controversies Appointment of Ian Fletcher Ian Fletcher was appointed as director of the GCSB in February 2012. Fletcher is a former diplomat. He was interviewed by the appointment panel after an earlier short-list of four candidates had been rejected by the Prime Minister on the recommendation of the State Services Commissioner. In March 2013, Key admitted he had known Fletcher since they were in school, but denied they were friends. Answering questions in parliament about Fletcher's appointment, Key said he hadn't "seen the guy in a long time" and hadn't mentioned he had made a phone call to Fletcher when the question first came up in parliament because he had "forgotten" about it. Former GCSB director Sir Bruce Ferguson said the way Key had intervened in the selection process was "disturbing". The Labour Party called for an inquiry into the matter. Illegal spying Shortly before Fletcher was appointed, the GCSB was found to have illegally spied on Kim Dotcom, a German national but New Zealand resident. By law the agency cannot spy on New Zealand residents. The GCSB admitted that Hugh Wolfensohn, acting director at the time, knew the organisation was spying on Dotcom. It is believed Wolfensohn was placed on "gardening leave" after it became clear the GCSB had made a mistake in spying on Dotcom. In December, the High Court of New Zealand ruled Kim Dotcom could sue the GCSB for damages. The attorney-general appealed the ruling, but was unsuccessful. In March 2013, the NZ Herald reported that Wolfensohn "no longer works for the GCSB intelligence agency as it braces for fresh exposure of its failings". Kitteridge report As a result of the Dotcom saga, a review into the bureau's compliance with legislation and its internal systems and processes was conducted by Cabinet Secretary Rebecca Kitteridge. In April 2013, Kitteridge's report was leaked to the media. It contradicted GCSB head Ian Fletcher's comments that the bureau had not unlawfully spied on anyone other than Dotcom, showing that the GCSB may have unlawfully spied on up to 85 people between April 2003 and September 2012. Fairfax reported: "The review noted a series of failings had led to the illegal spying, including under-resourcing and a lack of legal staff." It found "the GCSB structure was overly complex and top heavy, while staff who performed poorly were tolerated, rather than dismissed or disciplined, so they would not pose a security risk upon leaving the bureau." The Green Party asked police to investigate the illegal spying.
Kitteridge also said she had trouble accessing a number of "basic files". Prime Minister John Key said there was no "cover-up", and the files were probably either misfiled or never existed in the first place. GCSB Amendment Bill On 8 May 2013, Prime Minister John Key introduced the Government Communications Security Bureau and Related Legislation Amendment Bill, which would extend the powers of the GCSB to enable it to collect information from all New Zealanders for the use of other government departments including the New Zealand Police, Defence Force and the Security Intelligence Service. Under the bill, the GCSB would have three main functions. First, it would continue to collect foreign intelligence but would not be allowed to spy on New Zealanders. Second, the bill would give the GCSB a legal mandate to assist the police, Defence Force and the Security Intelligence Service. Third, it would extend the GCSB's cyber-security functions to encompass protecting private-sector cyber systems. While this Bill was supported by the ruling National Party and its coalition partners ACT New Zealand and the United Future MP Peter Dunne, it was opposed by the opposition Labour and Green parties, several left-wing groups, the internet millionaire Kim Dotcom, the NZ Law Society, and the Human Rights Tribunal. On 27 July, opponents of the GCSB Amendment Bill staged nationwide protests in eleven major towns and cities; thousands attended. Critics of the GCSB Amendment Bill claimed that the Bill would turn New Zealand into a police state like the former German Democratic Republic and made references to George Orwell's novel 1984 and the ongoing Edward Snowden NSA leaks scandal. In response, Prime Minister Key acknowledged that the protests were part of a "healthy democracy" with people being "allowed" to make their voices heard for the moment. On 14 August 2013 the Prime Minister of New Zealand John Key addressed what he identified as "misinformation" surrounding the GCSB Amendment Bill, claiming that the actions of the Government Communications Security Bureau were analogous to Norton AntiVirus. On 21 August, the House of Representatives voted to pass the GCSB Amendment Bill by 61 votes to 59. The bill passed its third reading despite protests from the opposition parties, human rights groups, legal advocates, and technology groups. John Key defended the GCSB Amendment Bill by arguing that it did not authorize "wholesale spying" on New Zealanders and that its opponents were misinformed. Southern Cross Cable mass surveillance In 2013 the New Zealand Herald reported that the owners of the Southern Cross Cable, which carries the large majority (about 95%) of New Zealand's international internet traffic, had asked the United States National Security Agency (NSA) to pay them for mass surveillance of New Zealand internet activity through the cable. In May 2014, John Minto, vice-president of the New Zealand Mana Party, alleged that the NSA was carrying out mass surveillance on all meta-data and content that went out of New Zealand through the cable. In August 2014, New Zealand Green Party co-leader Russel Norman stated that an interception point was being established on the Southern Cross Cable. Norman said that as the cable is the only point of telecommunications access from New Zealand, this would allow the Government to spy on all phone calls and internet traffic from New Zealand.
Norman's claims followed the revelation that an engineer from the NSA had visited New Zealand earlier in the year to discuss how to intercept traffic on the Southern Cross cable. The office of New Zealand Prime Minister John Key denied the claims, but admitted that it was negotiating a "cable access programme" with the NSA, while refusing to clarify what that was or why the NSA was involved. 2015 Edward Snowden surveillance disclosures On 5 March 2015, The Intercept website and The New Zealand Herald newspaper disclosed that the Government Communications Security Bureau had been spying on New Zealand's South Pacific neighbours including Tuvalu, Nauru, Kiribati, Samoa, Vanuatu, the Solomon Islands, Fiji, Tonga, and the French overseas territories of New Caledonia and French Polynesia. The Intercept provided documents supplied by the US whistleblower Edward Snowden, who had earlier released leaked documents relating to the surveillance activities of other Five Eyes partners including the United States, Australia, Canada and the United Kingdom. The Snowden documents show that information collected by the GCSB is sent to the American National Security Agency to plug holes in the global intelligence network. Most of the surveillance was carried out from the GCSB's Waihopai Station in the South Island. Under the premiership of Prime Minister John Key, the GCSB had expanded its intelligence-gathering activities in support of the Five Eyes. According to investigative journalist and peace activist Nicky Hager, the GCSB had gone from selectively targeting individual South Pacific targets to collecting a broad sweep of email messages and telephone calls. He added that the spy agency had upgraded its Waihopai spy base in 2009 to collect both the content and metadata of all communications, rather than those of specific individuals and agencies. According to leaked documents supplied by Snowden, the GCSB collected a broad trove of electronic information including emails, mobile and fixed-line phone calls, and social media messages from various South Pacific countries. In addition, Snowden alleged that a GCSB officer had also worked with the Australian Signals Directorate to spy on the Indonesian cellphone company Telkomsel. The GCSB's mass surveillance program was criticized by opposition parties including the Green Party co-leader Russel Norman and the Labour Party leader Andrew Little, who told the press that New Zealand risked damaging its relationship with the South Pacific and that the GCSB's actions amounted to an invasion of people's privacy. In 2014, New Zealand had secured a seat on the United Nations Security Council with the support of the entire Pacific region on the platform that "New Zealand stands up for small states." The Green Party also laid a complaint with the Inspector-General of Intelligence and Security, alleging that the GCSB had broken the law by spying on New Zealanders who were holidaying in the South Pacific. In response, Bruce Ferguson, a former director of the GCSB, acknowledged that the spy agency did collect emails and other electronic communications but said that it did not use material about New Zealanders captured inadvertently. The Tongan Prime Minister ʻAkilisi Pohiva denounced New Zealand's espionage activities as a "breach of trust." He also expressed concerns about similar surveillance activities carried out by China.
The Samoan Prime Minister Tuilaepa Sa'ilele, by contrast, dismissed allegations of New Zealand espionage against Samoa, commenting that "it would be far fetched to think that a spy agency in any country would waste their resources doing that kind of thing to Samoa." In response to these disclosures, Prime Minister John Key issued a statement on 5 March 2015 saying that he would "neither confirm nor deny" whether New Zealand's spy agencies were spying on the South Pacific. Key had earlier acknowledged that New Zealand was a member of the Five Eyes club, which included the United States, Britain, Canada, Australia, and New Zealand, during a speech calling for New Zealand to deploy troops to Iraq to combat the Islamic State of Iraq and the Levant. On 11 March 2015, Edward Snowden disclosed that the Government Communications Security Bureau was also using the Waihopai Station to intercept transmissions from several Pacific Rim and Asian countries including Vietnam, China, India, Pakistan, and several unspecified South American nations. He added that the GCSB was helping the National Security Agency to fill gaps in its worldwide surveillance data collection. In response to Snowden's disclosures, Una Jagose, the acting director of the GCSB, told a session of the New Zealand Parliament's Intelligence and Security Committee that the spy agency was collecting less information than it had been seven years earlier. According to the GCSB's most recent annual report at the time, however, the volume of phone and electronic surveillance carried out on New Zealanders had surged throughout 2014. On 13 March 2015, the Fijian military commander Brigadier-General Mosese Tikoitoga confirmed that the Fijian Military Forces were aware of the GCSB's intelligence-gathering activities in Fiji. On 15 March 2015, the journalists Nicky Hager and Ryan Gallagher reported in the New Zealand Herald that the GCSB was using the NSA's internet mass surveillance system XKeyscore to intercept email communications from several leading Solomon Islands government ministers, the Solomon Islands Truth and Reconciliation Commission, and the Solomons anti-corruption campaigner Benjamin Afuga. In response, the New Zealand Minister of Foreign Affairs Murray McCully downplayed reports of the spying disclosures by asserting that Pacific Islands ministers "were smart enough not to believe what they read in New Zealand newspapers." He also offered to discuss their concerns about the mass surveillance program in private. The Solomons Chief of Staff, Robert Iroga, condemned the New Zealand Government's actions for damaging New Zealand's image as a "friendly government" in the South Pacific. He added that communications within the inner circle of the Solomons Government were "highly secret information" that rightfully belonged to the Solomon Islanders. On 16 March 2015, Snowden released more documents which revealed that the GCSB had a secret listening post, codenamed "Caprica", at the New Zealand High Commission in the Solomon Islands capital of Honiara. The "Caprica" outpost was reportedly modeled after the NSA's Stateroom outposts at selected United States Embassies across the world. On 22 March 2015, The Intercept released a new document which showed that the GCSB had monitored the email and internet communications of several foreign diplomats vying for the position of Director-General of the World Trade Organization. This surveillance was carried out on behalf of the New Zealand Trade Minister Tim Groser, who was also competing for that position.
Known targets included candidates from Brazil, Costa Rica, Ghana, Jordan, Indonesia, Kenya, Mexico, and South Korea. Ultimately, Groser's candidature was unsuccessful and the Brazilian diplomat Roberto Azevêdo was elected as the Director-General of the WTO in May 2013. In response to these disclosures, Sergio Danese, the Secretary-General of the Brazilian Ministry of External Relations, summoned the New Zealand Ambassador Caroline Bilkey to explain the actions of her government. On 26 March 2015, the Inspector-General of Intelligence and Security Cheryl Gwyn announced that she would lead an inquiry into the allegations that the GCSB had spied on New Zealanders working in the Pacific. Prime Minister John Key welcomed the inquiry. On 16 April 2015, The Intercept and New Zealand Herald disclosed that the GCSB had been both spying on and sharing intelligence with the Bangladesh government, using a leaked National Security Agency document entitled "NSA Intelligence Relationship with New Zealand." The Bangladeshi security forces have been implicated in various human rights abuses including extrajudicial killings and torture. The New Zealand Government refused to respond to these disclosures, but opposition parties criticized the GCSB for cooperating with the Bangladeshi security forces. On 19 April 2015, The Intercept and the New Zealand Herald revealed that the GCSB and the National Security Agency had worked together to tap into a data link between the Chinese Consulate-General and the Chinese Visa Office in Auckland, New Zealand's largest city. According to a leaked secret report entitled "NSA activities in progress 2013", the GCSB was providing additional technical data on the data link to the NSA's "Tailored Access Operations", a powerful unit that hacks into computer systems and networks to intercept communications. Other leaked documents also indicated that the GCSB codenamed their Auckland tapping operation "Frostbite" while their American counterparts called it "Basilhayden", after a Kentucky bourbon that was once regarded as the fictional spy James Bond's favourite alcoholic beverage. In response, a Chinese Embassy spokesman told the New Zealand Herald that China was concerned about the report and attached great importance to the cybersecurity issue. On 5 May 2015, the Department of the Prime Minister and Cabinet acknowledged that Snowden's leaked documents on the GCSB and NSA were authentic but accused Snowden's associates, particularly the journalist Glenn Greenwald, of "misrepresenting, misinterpreting, and misunderstanding" the leaked information. 2018–2019 Huawei ban In late November 2018, the Government Communications Security Bureau prevented national telecommunications provider Spark New Zealand from using Chinese telecommunication giant Huawei's equipment in its 5G mobile tower expansion, with the agency's Director-General Andrew Hampton citing "a significant network security risk." New Zealand's decision to ban Huawei from its 5G expansion program accompanied moves by several Western governments, including the United States, United Kingdom, and Australia, to exclude Huawei from participating in their 5G mobile network expansion programs, and came amid the ongoing China–United States trade war.
2019 Christchurch mosque shootings Following the Christchurch mosque shootings in March 2019, the GCSB assembled a 24-hour operation response team which worked with domestic agencies and foreign partners to support the New Zealand Police and its domestic intelligence counterpart, the New Zealand Security Intelligence Service. In December 2020, a Royal Commission of Inquiry into the mosque shootings criticised the GCSB and other security services and intelligence agencies for focusing on Islamic extremism at the expense of other threats including White supremacy. In response to the Royal Commission, the GCSB's Director-General Andrew Hampton stated that the agency was committed to making its role and capabilities "more widely understood and utilised by domestic partner agencies". Hampton also claimed that the GCSB did not distinguish between different forms of violent extremism "before and after" the Christchurch attacks and vowed to support national and global efforts against the "full spectrum of violent extremism." 2021 Chinese cyber attacks On 20 July 2021, the Minister in charge of GCSB Andrew Little confirmed that the spy agency had established links between Chinese state-sponsored actors known as "Advanced Persistent Threat 40" (APT40) and malicious cyber activity in New Zealand. In addition, Little confirmed that New Zealand was joining other Western governments including the United States, United Kingdom, Australia and the European Union in condemning the Chinese Ministry of State Security and other Chinese state-sponsored actors for their involvement in the 2021 Microsoft Exchange Server data breach. In response, the Chinese Embassy in New Zealand described the New Zealand Government's statement as "groundless and irresponsible" and lodged a "solemn representation" with the New Zealand Government. The Embassy claimed that China was a staunch defender of cybersecurity and firmly opposed all forms of cyber attacks and crimes. On 21 July, Foreign Minister Nanaia Mahuta confirmed that New Zealand Foreign Ministry officials had met with Chinese Embassy officials at the request of the Chinese Embassy in response to the cyber attack allegations. The Embassy urged the New Zealand Government to abandon its so-called "Cold War mentality." New Zealand exporters have expressed concerns that an escalation of diplomatic tensions could affect Sino-New Zealand trade. Directors The GCSB is administered by a Director. The directors have been: Group Captain Colin Hanson OBE (1977–1988) Ray Parker (1988–1999) Dr Warren Tucker (1999–2006) Air Marshal Sir Bruce Ferguson KNZM OBE AFC (2006–2010) Simon Murdoch CNZM (acting November 2010 – February 2011) Lieutenant General Sir Jerry Mateparae GNZM QSO (7 February – 30 June 2011) Simon Murdoch CNZM (acting 1 July – 19 December 2011) Ian Fletcher (29 January 2012 – 27 February 2015) Una Jagose (acting 28 February 2015 – February 2016) Lisa Fong (acting February–April 2016) Andrew Hampton (May 2016 – present) Jerry Mateparae was appointed by Prime Minister John Key on 26 August 2010 taking up the role on 7 February 2011. On 8 March 2011 Mateparae was announced as the next Governor-General. He continued as Director until June 2011. Ian Fletcher (who had been appointed for five years) unexpectedly announced his resignation for family reasons in January 2015, with an acting director to take over at the end of the month. 
See also New Zealand intelligence agencies New Zealand Security Intelligence Service Anti-Bases Campaign References Further reading Hager, Nicky (1996). Secret Power: New Zealand's Role in the International Spy Network. Nelson, NZ: Craig Potton Publishing. External links Government Communications Security Bureau Government Communications Security Bureau Act 2003 Anti Bases Campaign New Zealand and XKEYSCORE: not much evidence for mass surveillance New Zealand intelligence agencies Signals intelligence agencies New Zealand Public Service departments
13398581
https://en.wikipedia.org/wiki/Indic%20computing
Indic computing
Indic computing means "computing in Indic", i.e., in Indian scripts and languages. It involves developing software in Indic scripts and languages, input methods, localization of computer applications, web development, database management, spell checkers, speech-to-text and text-to-speech applications, and OCR in Indian languages. Most of the widely used Indic scripts are encoded in Unicode for working on computers and the Internet. As of Unicode version 10.0, the Bengali, Devanagari, Gujarati, Gurmukhi, Kannada, Limbu, Malayalam, Masaram Gondi, Newari, Ol Chiki, Oriya, Sinhala, Tamil and Telugu scripts are encoded and supported. Historically used writing systems like Arwi, the Ahom alphabet, Grantha, Khudabadi, Mahajani, the Modi alphabet, the Siddham script, Syloti Nagri and Tirhuta are also included. Some more Indic scripts, for instance the Tulu script, are in development and will be included in Unicode. Many Indic computing projects are under way, involving government-sector organisations, volunteer groups and individual people. Government sector The Indian Union Government made it mandatory for mobile phone handsets manufactured, stored, sold and distributed in India to support the reading of text in all 22 scheduled Indian languages. This move has seen a rise in the use of Indian languages by millions of users. TDIL The Department of Electronics and Information Technology, India initiated TDIL (Technology Development for Indian Languages) with the objective of developing information processing tools and techniques to facilitate human-machine interaction without a language barrier; creating and accessing multilingual knowledge resources; and integrating them to develop innovative user products and services. In 2005, it started distributing language software tools developed by government, academic and private organisations in the form of CDs for non-commercial use. Some of the outcomes of the TDIL programme have been deployed on the Indian Language Technology Proliferation & Deployment Centre, which disseminates the linguistic resources, tools and applications developed under TDIL funding. The programme expanded rapidly under the leadership of Dr. Swaran Lata, who also created an international footprint for it; she has since retired. C-DAC C-DAC is an Indian government software organisation which is involved in developing language-related software. It is best known for developing the InScript keyboard, the standard keyboard layout for Indian languages. It has also developed many Indic language solutions, including word processors, typing tools, text-to-speech software and OCR in Indian languages. BharateeyaOO.org The work developed out of C-DAC, Bangalore (earlier known as NCST, Bangalore) became BharateeyaOO; OpenOffice 2.1 had support for over 10 Indian languages. BOSS BOSS is developed by the National Resource Centre for Free/Open Source Software, an initiative of DIT. Its activities are coordinated by C-DAC Chennai and the Anna University KBC Research Center. Support centres have been established in several cities in India to provide support to users. NGO and volunteer groups Indlinux The Indlinux organisation helped organise the individual volunteers working on different Indic language versions of Linux and its applications. Sarovar Sarovar.org is India's first portal to host projects under free/open source licenses. It is located in Trivandrum, India and hosted at the Asianet data center. Sarovar.org is customised, installed and maintained by Linuxense as part of their community services and sponsored by River Valley Technologies.
Sarovar.org is built on Debian Etch and GForge and runs off METTLE. Pinaak Pinaak is a non-governmental charitable society devoted to Indic language computing. It works on software localization, developing language software, localizing open source software, and enriching online encyclopedias. In addition to this, Pinaak works to educate people about computing, the ethical use of the Internet and the use of Indian languages on the Internet. Ankur Group Ankur Group is working toward supporting the Bengali language on the Linux operating system, including a localized Bengali GUI, a live CD, an English-to-Bengali translator, Bengali OCR and a Bengali dictionary. BhashaIndia SMC SMC is a free software group working to bridge the language divide in Kerala on the technology front, and is today the biggest language computing community in India. Input methods Full size keyboards With the advent of Unicode, inputting Indic text on a computer has become very easy. A number of methods exist for this purpose, but the main ones are: InScript InScript is the standard keyboard layout for Indian languages, developed by C-DAC and standardized by the Government of India. Nowadays it comes built in to all major operating systems including Microsoft Windows (2000, XP, Vista, 7), Linux and Macintosh. Phonetic transliteration This is a typing method in which, for instance, the user types text in an Indian language using Roman characters and it is phonetically converted to equivalent text in Indian script in real time. This type of conversion is done by phonetic text editors, word processors and software plugins. Building on the idea, one can use phonetic IME tools that allow Indic text to be input in any application. Some examples of phonetic transliterators are Xlit, Google Indic Transliteration, BarahaIME, Indic IME, Rupantar, SMC's Indic Keyboard and Microsoft Indic Language Input Tool. SMC's Indic Keyboard has support for as many as 23 languages whereas Google Indic Keyboard only supports 11 Indian languages. They can be broadly classified as: Fixed transliteration scheme based tools – They work using a fixed transliteration scheme to convert text. Some examples are Indic IME, Rupantar and BarahaIME. Intelligent/Learning based transliteration tools – They compare the word with a dictionary and then convert it to the equivalent words in the target language. Some of the popular ones are Google Indic Transliteration, Xlit, Microsoft Indic Language Input Tool and QuillPad. Remington (typewriter) This layout was developed before computers could handle Indic languages, when typewriters were the only means of typing text in Indic scripts. Since typewriters were mechanical and could not include a script processor engine, each character had to be placed on the keyboard separately, which resulted in a very complex and difficult-to-learn keyboard layout. With the advent of Unicode, the Remington layout was added to various typing tools for the sake of backward compatibility, so that old typists did not have to learn a new keyboard layout. Nowadays this layout is used mainly by typists who are accustomed to it after several years of use. One tool to include the Remington layout is Indic IME. A font that is based on the Remington keyboard layout is Kruti Dev. Another online tool that very closely supports the old Remington keyboard layout using Kruti Dev is the Remington Typing tool. Braille IBus Sharada Braille, which supports seven Indian languages, was developed by SMC.
Mobile phones with Numeric keyboards Basic mobile phone models have 12 keys, like the plain old telephone keypad. Each key is mapped to 3 or 4 English letters to facilitate data entry in English. There are two ways of inputting Indian languages with this kind of keypad: the multi-tap method, and methods that use visual help from the screen, such as the Panini Keypad. The primary usage is SMS. A message that can hold 140 characters in English/Roman script can accommodate only about 70 characters when Unicode is used, so proprietary compression is sometimes used to increase the effective size of a single message for complex-script languages like Hindi. A research study of the available methods, with recommendations for a proposed standard, was released by the Broadband Wireless Consortium of India (BWCI). Transliteration/Phonetic methods English letters are used to type in Indian languages. QuillPad IndiSMS Native methods In native methods, the letters of the language are displayed on the screen corresponding to the numeral keys, based on the probabilities of those letters for that language. Additional letters can be accessed by using a special key. When a word is partially typed, options are presented from which the user can make a selection. Smart phones with Qwerty keyboards Most smart phones have about 35 keys, catering primarily to the English language. Numerals and some symbols are accessed with a special key called Alt. Indic input methods are yet to evolve for these types of phones, as support of Unicode for rendering is not widely available. For Smart Phones with Soft/Virtual keyboards InScript is being adopted for smart phone usage. For Android phones which can render Indic languages, the Swalekh Multilingual Keypad and Multiling Keyboard apps are available. Gboard offers support for several Indian languages. Localization Localization means translating software, operating systems, websites and other applications into Indian languages. Various volunteer groups are working in this direction. Mandrake Tamil Version A notable example is the Tamil version of Mandrake Linux (defunct since 2011). Tamil speakers in Toronto, Canada, released a Tamil version of the Mandrake Linux distribution, in which all the features can be accessed in Tamil. By this, the prerequisite of English knowledge for using computers has been eliminated for those who know Tamil. IndLinux IndLinux is a volunteer group aiming to translate the Linux operating system into Indian languages. Through the efforts of this group, Linux has been localized almost completely in Hindi and other Indian languages. Nipun Nipun is an online translation system aimed at translating various applications into Hindi. It is part of the Akshargram Network. Localising Websites GoDaddy has localised its website in Hindi, Marathi and Tamil, and has noted that 40% of its IVR call volume is in Indian languages. Indic blogging Indic blogging refers to blogging in Indic languages. Various efforts have been made to promote blogging in Indian languages. Social Networks Some social networks have been started in Indian languages. Programming Indic programming languages BangaBhasha – programming in Bangla Programming using the Hindi language Ezhil, a programming language in Tamil Frameworks Gherkin, a popular domain-specific language, has support for Gujarati, Hindi, Kannada, Punjabi, Tamil, Telugu and Urdu Libraries Natural language processing in Indian languages is on the rise. Several libraries, such as iNLTK and StanfordNLP, are available.
Translation Google offers an improved translation feature for Hindi, Bengali, Marathi, Tamil, Telugu, Gujarati, Punjabi, Malayalam and Kannada, with offline support as well. Microsoft also offers translation for some of these languages. Software Indic Language Stack In a symposium jointly organized by FICCI and TDIL, Mr. Ajay Prakash Sawhney, Secretary, Ministry of Electronics and IT, Government of India, said that an India Language Stack can help overcome the barriers of communication. Spell Checkers Transliteration tools Transliteration tools allow users to read a text in a different script. As of now, Aksharamukha is the tool that supports the most Indian scripts; text in any of these scripts can be converted to any other script and vice versa. Google also offers Indic transliteration, and both Google and Microsoft allow transliteration from Latin letters to Indic scripts. Text-to-Speech Carnegie Mellon University, in collaboration with the Hear2Read project, has developed text-to-speech (TTS) software that helps the visually impaired listen to text in native Indian languages. Tamil was offered first, with releases in Hindi, Bengali, Gujarati, Marathi, Kannada, Punjabi and Telugu expected over the remainder of 2016. Speech-to-Text Voice Recognition Apple Inc. added support for major Indian languages in Siri. Amazon's Alexa has support for Hindi and partially recognises other major Indian languages. Google Assistant also has support for major Indian languages. Internationalized Domain Names Operating Systems Indus OS Virtual Assistants AI based Virtual Assistants Google Assistant provides support for various Indian languages. Usage and Growth According to GoDaddy, the Hindi, Marathi and Tamil languages accounted for 61% of India's internet traffic. Less than 1% of online content is in Indian languages. Newly created top apps have support for multiple Indian languages and/or promote Indian-language content. 61% of the Indian users of WhatsApp primarily use their native languages to communicate on it. A recent study revealed that adoption of the Internet is highest among speakers of local languages such as Tamil, Hindi, Kannada, Bengali, Marathi, Telugu, Gujarati and Malayalam. It estimates that Marathi, Bengali, Tamil, and Telugu speakers will form 30% of the total local-language user base in the country. Currently, Tamil at 42% has the highest Internet adoption levels, followed by Hindi at 39% and Kannada at 37%. Intex also reported that 87% of its regional language usage came from Hindi, Bengali, Tamil, Gujarati and Marathi speakers. Lava mobiles reported that Tamil and Malayalam are the most popular languages on their phones, even more than Hindi. See also Indic Unicode Hindi Blogosphere Indian Blogosphere Clip font References Indic
164430
https://en.wikipedia.org/wiki/Documentation
Documentation
Documentation is any communicable material that is used to describe, explain or instruct regarding some attributes of an object, system or procedure, such as its parts, assembly, installation, maintenance and use. Documentation can be provided on paper, online, or on digital or analog media, such as audio tape or CDs. Examples are user guides, white papers, online help, and quick-reference guides. Paper or hard-copy documentation has become less common. Documentation is often distributed via websites, software products, and other online applications. Documentation as a set of instructional materials should not be confused with documentation science, the study of the recording and retrieval of information. Principles for producing documentation While associated ISO standards are not easily available publicly, a guide from other sources for this topic may serve the purpose. Documentation development may involve document drafting, formatting, submitting, reviewing, approving, distributing, reposting and tracking, etc., and is governed by associated SOPs in regulated industries. It could also involve creating content from scratch. Documentation should be easy to read and understand. If it is too long and too wordy, it may be misunderstood or ignored. Clear, concise words should be used, and sentences should be limited to a maximum of 15 words. Documentation intended for a general audience should avoid gender-specific terms and cultural biases. In a series of procedures, steps should be clearly numbered. Producing documentation Technical writers and corporate communicators are professionals whose field and work is documentation. Ideally, technical writers have a background both in the subject matter and in writing, managing content, and information architecture. Technical writers more commonly collaborate with subject matter experts (SMEs), such as engineers, technical experts and medical professionals, to define and then create documentation that meets the user's needs. Corporate communications includes other types of written documentation, for example: Market communications (MarCom): MarCom writers endeavor to convey the company's value proposition through a variety of print, electronic, and social media. This area of corporate writing is often engaged in responding to proposals. Technical communication (TechCom): Technical writers document a company's product or service. Technical publications can include user guides, installation and configuration manuals, and troubleshooting and repair procedures. Legal writing: This type of documentation is often prepared by attorneys or paralegals. Compliance documentation: This type of documentation codifies Standard Operating Procedures (SOPs) for regulatory compliance needs, such as safety approval, taxation, financing and technical approval. Healthcare documentation: This field of documentation encompasses the timely recording and validation of events that have occurred during the course of providing health care. Documentation in computer science Types The following are typical software documentation types: Request for Proposal (RFP) Requirements / Statement of Work / Scope of Work (SOW) Software Design and Functional Specification System Design and Functional Specifications Change Management, Error and Enhancement Tracking User Acceptance Testing Manpages The following are typical hardware and service documentation types: Network diagrams Network maps Datasheet for IT systems (e.g. server, switch)
Service Catalog and Service Portfolio (ITIL) Software Documentation Folder (SDF) tool A common type of software document written in the simulation industry is the SDF. When developing software for a simulator, which can range from embedded avionics devices to 3D terrain databases by way of full motion control systems, the engineer keeps a notebook detailing the development (the "build") of the project or module. The document can be a wiki page, a Microsoft Word document or another format. It should contain a requirements section and an interface section detailing the communication interface of the software. Often a notes section is used to detail the proof of concept and then track errors and enhancements. Finally, a testing section documents how the software was tested. This documents conformance to the client's requirements. The result is a detailed description of how the software is designed, how to build and install the software on the target device, and any known defects and work-arounds. This build document enables future developers and maintainers to come up to speed on the software in a timely manner, and also provides a roadmap to modifying code or searching for bugs. Software tools for Network Inventory and Configuration These software tools can automatically collect data about network equipment. The data could be for inventory and for configuration information. The ITIL Library calls for the creation of such a database as a basis for all information for those responsible for IT. It is also the basis for IT documentation. Examples include XIA Configuration. Documentation in criminal justice "Documentation" is the preferred term for the process of populating criminal databases. Examples include the National Counterterrorism Center's Terrorist Identities Datamart Environment ("TIDE"), sex offender registries, and gang databases. Documentation in early childhood education Documentation, as it pertains to the early childhood education field, is "when we notice and value children's ideas, thinking, questions, and theories about the world and then collect traces of their work (drawings, photographs of the children in action, and transcripts of their words) to share with a wider community". Thus, documentation is a process used to link the educator's knowledge and learning of the child/children with the families, other collaborators, and even the children themselves. Documentation is an integral part of the cycle of inquiry: observing, reflecting, documenting, sharing and responding. Pedagogical documentation, in terms of the teacher's documentation, is the "teacher's story of the movement in children's understanding". According to Stephanie Cox Suarez in 'Documentation - Transforming our Perspectives', "teachers are considered researchers, and documentation is a research tool to support knowledge building among children and adults". Documentation can take many different styles in the classroom. The following exemplifies ways in which documentation can make the "research", or learning, visible: Documentation Panels (bulletin-board-like presentation with multiple pictures and descriptions about the project or event).
Daily Log (a log kept every day that records the play and learning in the classroom)
Documentation developed by or with the children (when observing children during documentation, the child's lens of the observation is used in the actual documentation)
Individual Portfolios (documentation used to track and highlight the development of each child)
Electronic Documentation (using apps and devices to share documentation with families and collaborators)
Transcripts or Recordings of Conversations (using recording in documentation can bring about deeper reflections for both the educator and the child)
Learning Stories (a narrative used to "describe learning and help children see themselves as powerful learners")
The Classroom as Documentation (reflections and documentation of the physical environment of a classroom)

Documentation is certainly a process in and of itself, and it is also a process within the educator. The following is the development of documentation as it progresses for and in the educator themselves:

Develop(s) habits of documentation
Become(s) comfortable with going public with recounting of activities
Develop(s) visual literacy skills
Conceptualize(s) the purpose of documentation as making learning styles visible, and
Share(s) visible theories for interpretation purposes and further design of curriculum.

See also
Authoring
Bibliographic control
Change control
Citation Index
Copyright
Description
Document
Documentation (field)
Documentation science
Document identifier
Document management system
Documentary
Freedom of information
Glossary
Historical document
Index (publishing)
ISO 2384:1977
ISO 259:1984
ISO 5123:1984
ISO 3602:1989
ISO 6357:1985
ISO 690
ISO 5964
ISO 9001
IEC 61355
International Standard Bibliographic Description
Journal of Documentation
Licensing
Letterhead
List of Contents
Technical documentation
User guide
Medical certificate
Publishing
Records management
Software documentation
Style guide
Technical communication

References

External links
IEEE Professional Communication Society
Documentation Definition by The Linux Information Project (LINFO)
Information & Documentation
List of selected tools
Library of articles on documentation: Technical writing and documentation articles

Technical communication
Information science
713412
https://en.wikipedia.org/wiki/Sophos
Sophos
Sophos Group plc is a British based security software and hardware company. Sophos develops products for communication endpoint, encryption, network security, email security, mobile security and unified threat management. Sophos is primarily focused on providing security software to 100- to 5,000-seat organizations. While not a primary focus, Sophos also protects home users, through free and paid antivirus solutions (Sophos Home/Home Premium) intended to demonstrate product functionality. It was listed on the London Stock Exchange until it was acquired by Thoma Bravo in February 2020. History Sophos was founded by Jan Hruska and Peter Lammer and began producing its first antivirus and encryption products in 1985. During the late 1980s and into the 1990s, Sophos primarily developed and sold a range of security technologies in the UK, including encryption tools available for most users (private or business). In the late 1990s, Sophos concentrated its efforts on the development and sale of antivirus technology, and embarked on a program of international expansion. In 2003, Sophos acquired ActiveState, a North American software company that developed anti-spam software. At that time viruses were being spread primarily through email spam and this allowed Sophos to produce a combined anti-spam and antivirus solution. In 2006, Peter Gyenes and Steve Munford were named chairman and CEO of Sophos, respectively. Jan Hruska and Peter Lammer remain as members of the board of directors. In 2010, the majority interest of Sophos was sold to Apax. In 2010, Nick Bray, formerly Group CFO at Micro Focus International, was named CFO of Sophos. In 2011, Utimaco Safeware AG (acquired by Sophos in 2008–9) were accused of supplying data monitoring and tracking software to partners that have sold to governments such as Syria: Sophos issued a statement of apology and confirmed that they had suspended their relationship with the partners in question and launched an investigation. In 2012, Kris Hagerman, formerly CEO at Corel Corporation, was named CEO of Sophos and joined the company's board. Former CEO Steve Munford became non-executive chairman of the board. In February 2014, Sophos announced that it had acquired Cyberoam Technologies, a provider of network security products. In June 2015, Sophos announced plans to raise $US100 million on the London Stock Exchange. Sophos was floated on the FTSE in September 2015. On 14 October 2019 Sophos announced that Thoma Bravo, a US-based private equity firm, made an offer to acquire Sophos for US$7.40 per share, representing an enterprise value of approximately $3.9 billion. The board of directors of Sophos stated their intention to unanimously recommend the offer to the company's shareholders. On 2 March 2020 Sophos announced the completion of the acquisition. Acquisitions and partnerships From September 2003 to February 2006, Sophos served as the parent company of ActiveState, a developer of programming tools for dynamic programming languages: in February 2006, ActiveState became an independent company when it was sold to Vancouver-based venture capitalist firm Pender Financial. In 2007, Sophos acquired ENDFORCE, a company based in Ohio, United States, which developed and sold security policy compliance and Network Access Control (NAC) software. In May 2011, Sophos announced the acquisition of Astaro, a privately held provider of network security solutions, headquartered in Wilmington, Massachusetts, USA and Karlsruhe, Germany. 
At the time, Astaro was the fourth-largest UTM vendor; although the deal made sense, Forbes questioned its viability. Sophos subsequently renamed the Astaro UTM to Sophos UTM. In November 2016, Sophos acquired Barricade, a pioneering start-up with a powerful behavior-based analytics engine built on machine learning techniques, to strengthen synchronized security capabilities and next-generation network and endpoint protection. In February 2017, Sophos acquired Invincea, a software company that provides malware threat detection, prevention, and pre-breach forensic intelligence. In March 2020, Thoma Bravo acquired Sophos for $3.9 billion.

See also
Antivirus software
Comparison of antivirus software
Comparison of computer viruses
Comparison of firewalls
Cryptography
Identity-based security

References

External links

1985 establishments in England
British companies established in 1985
Antivirus software
English brands
Companies based in Oxfordshire
Computer security companies
Computer security software companies
Companies formerly listed on the London Stock Exchange
Security software
Software companies of England
Software companies established in 1985
Windows security software
2020 mergers and acquisitions
Private equity portfolio companies
Abingdon-on-Thames
3129075
https://en.wikipedia.org/wiki/Coving%20%28urban%20planning%29
Coving (urban planning)
Coving is a method of urban planning used in the subdivision and redevelopment of cities, characterized by non-uniform lot shapes and home placement. When combined with winding roads, lot area is increased and road area reduced. Coving is used as an alternative to conventional urban "grid" and suburban zoning-driven land development layouts in order to enhance curb appeal, eliminate monotony and reduce costs (such as road surfacing) by reducing street length, while increasing the amount of land available for construction.

History

Coving was pioneered by Minneapolis-based urban designer Rick Harrison. His design intent was that no two houses look directly into each other's windows. The name comes from coves of green spaces among the homes which are made possible by winding roads and meandering setbacks.

Coving was first discovered by accident when Rick Harrison was experimenting with design options on a Chicago subdivision layout in 1990. After meandering the setbacks and eliminating pavement bubbles, then running the calculations through Land Innovation software, he found that street pavement was reduced by 20%. Assuming it was a software error, he checked the site manually. What he had discovered was that by a careful meandering of the homes to form curved shapes separate from the direction of the street, there could be a significant reduction in street length.

Coving has led to many new discoveries and pioneering design methods and techniques, as well as new software technologies and user interfaces. Currently, coving is in its fourth generation, and has demonstrated an average reduction of public street length of 25% while maintaining the density of conventional (curved-street) subdivision platting. More recently, in 2013, advancements in architecture were made possible by the lot shaping and the interconnection of open space with living spaces within the home – redefining architecture as well as land planning.

Another area where coved design has made advances is urban redevelopment. By abandoning excessive streets and rights-of-way, a demonstrated reduction in public street length of upwards of 60% can redefine how inner-core suburban areas (typically tightly gridded lots) are redeveloped. This model was first created at the beginning of the 2008 recession and is being proposed in blighted urban spaces to bring about housing affordability and increased quality of life.

Advantages and disadvantages

A coved layout reduces construction costs by reducing roadway length, thereby lowering paving and utility-line costs. The reduction in road surface adds usable land for lots and parks. Other benefits are increased pedestrian safety due to less roadway and fewer intersections. Individual properties also gain aesthetic value from the separate meandering setback lines, sidewalks, and roadways.

Very early (first-generation) coved designs were somewhat experimental and had potential problems. Coving has been cited as having several disadvantages: greater set-back from the street, larger lots, reduced usability for mixed application, decreased walkability, decreased street and pedestrian connectivity of a tract to its surroundings, increased suburban sprawl, leaving little or no public open space, and allowing more soil runoff and less communal open space than alternate development types such as new urbanism. In an effort to eliminate any negative elements of the design, research was done to visit early coved sites and query both residents and cities to revise and update the design methods.
Along with receiving comments from land developers and builders, coved design has gone through four iterations of evolution. Today's coved designs have better vehicular flow, reducing energy and time in transit; direct and connective pedestrian systems with safer, elegant meandering walks; and curve radii standards that reduce the excessive infrastructure of early designs. These comparisons hold against traditional curved subdivision designs as well as the New Urbanism or smart growth methods of planning. In one before-and-after comparison, the original TND, or Smart Growth, design has 54 street intersections and 38% more street length than the coved design of the same housing mix and density. The coved plan has much better walking connectivity and is far safer, with less interaction between vehicles and people. More important is that the average lot size increases 15% and monotony is eliminated. As with all coved development, the design methods accomplish all of this and more by exceeding every existing regulatory minimum. Coved development is unique in land planning in that it actually gains efficiency by exceeding existing regulatory minimums; it is the first design method to make such a claim. This means there is nothing special to request during approvals - the design already exceeds the regulatory minimums.

Design

An early external commentary on the coved design methods noted several practical constraints: designing coved developments is considered comparatively difficult. Specialized software is often used, and designers often need several years of experience to become proficient. The design also is not feasible for narrow tracts of land, and house footprints need to be less than 85% of the lot size. Existing CAD-based software is not intended to create plans that are not based upon replication; with coved design there is little, if any, replication. LandMentor, a software product, can produce plans using coving concepts within an acceptable time frame.

External links
Creating a New Concept in Subdivision Layouts - New York Times
A Better Arrangement - POB
Curve Control - Big Builder Magazine
Architectural Space as a Component of Site Design - iGreenBuild.com
Suburbs & Cul-De-Sacs: Is the Romance Over? - New Geography
Land Planning, Part 1 thru 4 - Professional Surveyor
Prefurbia, Part 1: A Prefurbia Development Solution - Environmental Protection Online

References

Planned municipal developments
Urban studies and planning terminology
12454670
https://en.wikipedia.org/wiki/Marcus%20Warren%20Hobbs
Marcus Warren Hobbs
Marcus Warren Hobbs (born 1970), known by his stage name Marcus Satellite, is an American composer, electronic musician, microtonalist, and computer graphics professional noted for creating microtonal electronic music and animated films using advanced computer software.

Early life

Hobbs was born in Fontana, California, to Vicki Jo Adams and Evan Kenneth Hobbs. He has three brothers, Sean, Daemon, and Alan, and a sister, Erin. His mother was a singer, pianist, and church organist. His father was a painter and guitarist. His parents exposed Marcus to art and music, especially rock and classical music, and informally taught him guitar and piano. In 1982 they bought a Commodore VIC-20 on which Marcus learned to program by creating personal computer games. Marcus graduated from the University of California, Riverside in 1991 with a bachelor's degree in computational mathematics.

Film credits

In 1992 Hobbs began working at Walt Disney Animation Studios. His first film credits were Aladdin, The Lion King, and Trail Mix-Up. He learned how to apply his programming skills to the synthesis of 3-dimensional computer imagery and in 1995 was promoted to Technical Director on Pocahontas and Hercules. He became a Supervisor for Atlantis: The Lost Empire, Mickey's PhilharMagic, Mickey's Twice Upon A Christmas, and Meet The Robinsons.

Music

In 1995 Hobbs began integrating his software expertise with music composition, purchasing numerous electronic instruments including a Devilfish (a modified Roland TB-303) and a Kyma (sound design language) from Symbolic Sound Corporation. He was introduced to microtonalist and music theorist Erv Wilson by Carla Scaletti and Kurt J. Hebel. He created many software instruments which implemented several of Wilson's tuning theories, such as Moments of Symmetry and Combination Product Sets.

In 1998 he released his first album From On High under the name Marcus Satellite. Each track features software instruments tuned to a different tuning of Wilson's, with lyrics and vocals by American singer and songwriter K Blu. The project was sequenced using Steinberg Cubase. In 2002 he completed his second album Way Beyond, Way Above, a collaboration with Russian singer and songwriter Masha d'Elephenden. The project featured Kyma software instruments, a Roland MC-505 and a Roland MC-303, sequenced on an Apple Power Mac G4 using Steinberg Cubase. The album was released in 2007. In 2004 he completed his third album My Silent Wings, a collaboration with English singer and songwriter Rocket Excelsior. This was Hobbs' first album to use only software instruments, rendered on an Apple Macintosh G4. Hobbs also switched sequencers to Ableton Live. The computation necessary to render the many layers of software instruments required many passes and motivated Hobbs to search for more efficient synthesis methods. The album has yet to be released.

In 2006 he released 9 microtonal singles: A Boy Named Peace, A Girl Named Love, The Exquisite Corpses Play Darwin, Full Moon Fire, I Love, I, Marcus, Scaling Lambdoma 6, Stormy Eyes, To The Moon, and Trading Tykes The Magick Words. Hobbs invested nearly 6 months programming software instruments in Native Instruments' Reaktor and purchased a Power Mac G5 to achieve real-time rendering of each of these works. In 2007 he released The Marcus Satellite Tribute To U2, consisting of 13 electronica covers of popular U2 songs. The work is faithful to the form of the original songs, and features dozens of layers of software synthesizers rendered in real-time.
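Wilson's Combination Product Sets, mentioned above among the tuning theories Hobbs implemented, can be illustrated with a short sketch. The Python fragment below is not Hobbs' Kyma or Reaktor code; it is a minimal, generic illustration assuming the standard definition of the 1-3-5-7 "hexany", the six-note scale obtained by octave-reducing the products of every pair of the factors 1, 3, 5 and 7.

```python
import math
from itertools import combinations

def combination_product_set(factors, choose):
    """Octave-reduced scale from the products of all `choose`-element subsets."""
    ratios = set()
    for combo in combinations(factors, choose):
        product = math.prod(combo)
        while product >= 2:        # octave-reduce the ratio into [1, 2)
            product /= 2
        ratios.add(product)
    return sorted(ratios)

# The 1-3-5-7 hexany: six pitches from the 2-out-of-4 products of 1, 3, 5, 7.
hexany = combination_product_set([1, 3, 5, 7], 2)
print(hexany)                                             # ratios within one octave
print([round(1200 * math.log2(r), 1) for r in hexany])    # the same scale in cents
```

Tuning a software instrument to ratios like these, relative to a chosen reference pitch, yields the kind of scale referred to above; each Wilson tuning gives a different set of ratios.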
Hobbs has also scored the forthcoming independent film, Desert Vows. Personal life Marcus and his wife Heather have two children, Lucas and Madeline. External links Marcus Satellite University of California, Riverside alumni Living people 1970 births
46406427
https://en.wikipedia.org/wiki/Aaron%20Fuller
Aaron Fuller
Aaron Craig Fuller (born December 3, 1989) is an American professional basketball player for the TNT Tropang Giga of the Philippine Basketball Association (PBA). He played college basketball for the University of Iowa and the University of Southern California before playing professionally in Portugal, New Zealand, Mexico, the Philippines, Luxembourg, Israel, and Uruguay.

High school career

Fuller attended Mesa High School in Mesa, Arizona, where he earned all-state and all-conference honors as a junior and senior. As a senior in 2007–08, he averaged 24.5 points, 10.8 rebounds and 2.1 blocked shots per game, as he led Mesa to a 17–11 record and an East Valley Region championship. He was named East Valley Region Player of the Year and East Valley Tribune Player of the Year, and was named Player of the Year for Class 4A-5A in Arizona by the Arizona Republic.

College career

As a freshman for Iowa in 2008–09, Fuller appeared in 32 games with 19 starts, while averaging 4.0 points and 2.7 rebounds per game. He scored a season-high 16 points against Penn State on January 24. As a sophomore in 2009–10, Fuller earned All-Big Ten Honorable Mention honors after averaging 9.7 points and a team-leading 6.2 rebounds in 30 games (22 starts). He posted a team-best six double-doubles and led Iowa in rebounding a team-best 14 times, including 12 of the last 16 games. On February 16, he recorded career highs of 30 points and 13 rebounds against Michigan. On April 9, 2010, Fuller announced his decision to leave the Hawkeyes basketball program in order to move closer to home and his family. Less than a month later, on May 4, he signed with USC and subsequently redshirted the 2010–11 season due to NCAA transfer regulations.

As a redshirted junior in 2011–12, Fuller suffered a labral tear in his left (shooting) shoulder in October, and in December he suffered one in his right shoulder. He was later ruled out for the rest of the season in January due to the injuries, opting to have season-ending surgery to deal with the labrum tear in his left shoulder. Shooting with his non-preferred right hand for most of the season, he was second on the Trojans team in scoring (10.6) and first in rebounding (5.9). As a senior in 2012–13, Fuller's role was dramatically reduced as he started just six games and averaged 14.7 minutes per game. He scored a season-high 15 points against UCLA and finished the season with averages of 4.1 points and 3.6 rebounds in 32 games. At the Trojans' annual awards banquet on April 9, he received the John Rudometkin Award for giving 110% effort throughout the season.

College statistics

Season    Team   GP   GS   MPG   FG%   3P%   FT%   RPG  APG  SPG  BPG  PPG
2008–09   Iowa   32   19   17.2  .364  .297  .440  2.7  .4   .4   .3   4.0
2009–10   Iowa   30   22   24.4  .477  .200  .676  6.2  .7   .4   .3   9.7
2011–12   USC    18   18   29.2  .515  .000  .631  5.9  .4   .9   .4   10.6
2012–13   USC    32    6   14.7  .505  .000  .667  3.6  .2   .4   .3   4.1
Career           112  65   20.3  .465  .269  .636  4.4  .4   .5   .3   6.6

Professional career

U.D. Oliveirense (2013–2014)

On September 20, 2013, Fuller signed with U.D. Oliveirense of Portugal for the 2013–14 LPB season.
He appeared in all 20 games for Oliveirense in 2013–14, averaging 18.2 points, 9.7 rebounds, 1.1 assists and 1.3 steals per game. Taranaki Mountainairs (2015) In January 2015, Fuller signed with the Taranaki Mountainairs for the 2015 New Zealand NBL season. On April 14, he was named co-Player of the Week for Round 2 alongside Southland Sharks forward Tai Wesley. In the Mountainairs' final game of the season on June 28 against the Super City Rangers, Fuller set an NBL record for points scored in a game with 54 on 25-of-34 shooting. He also recorded a season-high 19 rebounds in a two-point loss to the Rangers, and subsequently earned Round 13 Player of the Week honors. In 18 games for Taranaki, he averaged a league-leading 28.4 points, 9.5 rebounds and 1.3 steals per game, and earned All-Star Five honors. Despite his great season, the Mountainairs failed to win a game in what was just the fourth winless season for an NBL team in league history. Fuerza Regia (2015–2017) On September 22, 2015, Fuller signed with Fuerza Regia of Mexico for the 2015–16 LNBP season. On November 26, he had a season-best game with 22 points and 7 rebounds in a loss to Soles de Mexicali. He later scored 19 points against Abejas de Guanajuato on January 17, and had an 18-point game on February 11 against Panteras de Aguascalientes. In 35 games for Fuerza, he averaged 6.0 points and 3.0 rebounds per game. In August 2016, Fuller re-signed with Fuerza Regia for the 2016–17 season. In 25 games, he averaged 3.6 points and 2.2 rebounds per game. NLEX Road Warriors (2017) On May 19, 2017, Fuller signed with the NLEX Road Warriors as an import for the 2017 PBA Governors' Cup. In 11 games for the Road Warriors, he averaged 22.6 points, 17.7 rebounds, 1.5 assists, 1.6 steals and 1.5 blocks per game. Racing (2018) In January 2018, Fuller joined Luxembourgian club Racing of the Total League. In 10 games, he averaged 30.0 points, 14.4 rebounds, 1.3 assists, 1.0 steals and 1.2 blocks per game. Return to NLEX Road Warriors (2018) In August 2018, Fuller re-joined the NLEX Road Warriors for the 2018 PBA Governors' Cup as an injury replacement for Olu Ashaolu. Hapoel Galil Elyon (2018–2019) On November 29, 2018, Fuller signed with the Israeli team Hapoel Galil Elyon of the Liga Leumit, replacing Stephen Maxwell. In 8 games played for Galil Elyon, he averaged 13.9 points, 5.3 rebounds and 1.5 steals per game. Soles de Mexicali (2019) On January 22, 2019, Fuller signed with Mexican team Soles de Mexicali for the Liga Americas. In three games, he averaged 6.7 points, 4.0 rebounds and 1.0 assists per game. Return to Fuerza Regia (2019) In February 2019, Fuller re-joined Fuerza Regia for the rest of the LNBP season. In 18 games, he averaged 6.7 points, 3.8 rebounds and 1.2 assists per game. Blackwater Elite (2019) In August 2019, Fuller joined Blackwater Elite of the Philippine Basketball Association for the East Asia Super League Terrific 12 and the PBA Governors' Cup. He was replaced by Marqus Blakely for the Terrific 12 tournament due to an ankle injury. He made his debut for the team in the Governors' Cup, but was later replaced again by Blakely due to not being 100%. Fuerza Regia and Malvín (2019–2021) Between November 2019 and November 2020, Fuller played for Fuerza Regia in the LNBP. In March 2021, Fuller joined Malvín of the Liga Uruguaya de Básquetbol. Between September and November 2021, Fuller once again played for Fuerza Regia. 
TNT Tropang Giga (2021–present) On December 25, 2021, Fuller joined the TNT Tropang Giga for the 2021 PBA Governors' Cup as a replacement for the injured McKenzie Moore. References External links Iowa bio USC bio Q + A With Aaron Fuller 1989 births Living people American expatriate basketball people in Israel American expatriate basketball people in Luxembourg American expatriate basketball people in Mexico American expatriate basketball people in New Zealand American expatriate basketball people in the Philippines American expatriate basketball people in Portugal American expatriate basketball people in Uruguay American men's basketball players Basketball players from Arizona Blackwater Bossing players Club Malvín basketball players Fuerza Regia players Hapoel Galil Elyon players Iowa Hawkeyes men's basketball players Mesa High School alumni NLEX Road Warriors players Philippine Basketball Association imports Power forwards (basketball) Small forwards Soles de Mexicali players Sportspeople from Mesa, Arizona Taranaki Mountainairs players TNT Tropang Giga players USC Trojans men's basketball players
11056603
https://en.wikipedia.org/wiki/FLAIM
FLAIM
FLAIM (Framework for Log Anonymization and Information Management) is a modular tool designed to allow computer and network log sharing through application of complex data sanitization policies.

FLAIM is aimed at three different user communities. First, FLAIM can be used by the security engineer who is investigating a broad incident spanning multiple organizations. Because of the sensitivity inherent in security-relevant logs, many organizations are reluctant to share them. However, this reluctance inhibits the sharing necessary to investigate intrusions that commonly span organizational boundaries. Second, anyone designing log analysis or computer forensics tools needs data with which they can test their tools. The larger and more diverse the data set, the more robust they can make their tools. For many, this means they must gather many logs from outside sources, not just what they can generate in-house. Again, this requires log sharing. Third, researchers in many computer science disciplines (e.g., network measurements, computer security, etc.) need large and diverse data sets to study. Having data sanitization tools available makes organizations more willing to share their own logs with these researchers.

FLAIM is available under the Open Source Initiative-approved University of Illinois/NCSA Open Source License. This is a BSD-style license. It runs on Unix and Unix-like systems, including Linux, FreeBSD, NetBSD, OpenBSD and Mac OS X.

While FLAIM is not the only log anonymizer, it is unique in its flexibility to create complex XML policies and its support for multiple log types. More specifically, it is the only such tool to meet the following four goals. (1) FLAIM provides a diverse set of anonymization primitives. (2) FLAIM supports multiple log types, including Linux process accounting logs, netfilter alerts, tcpdump traces and NFDUMP NetFlows. (3) With a flexible anonymization policy language, complex policies that trade off information loss against security can be created. (4) FLAIM is modular and easily extensible to new types of logs and data. The anonymization engine is agnostic to the syntax of the actual log.

History

Work on log anonymization began in 2004 at the NCSA. At first this was for anonymizing logs in-house to share with the SIFT group. Soon there was a need for more powerful anonymization and anonymization of different types of logs. CANINE was created to anonymize and convert between multiple formats of NetFlows. This was a Java GUI-based tool. Later, Scrub-PA was created to anonymize Process Accounting logs. Scrub-PA was based on the Java code used for CANINE. The development of both of these tools was funded under the Office of Naval Research NCASSR research center through the SLAGEL project.

It was quickly realized that building one-off tools for each new log format was not the way to go. Also, the earlier tools were limited in that they could not be scripted from the command line. It was decided that a new, modular, command line-based UNIX tool was needed. Because speed was also a concern, this tool needed to be written in C++. With the successful acquisition of a Cyber Trust grant from the National Science Foundation, the LAIM Working Group was formed at the NCSA. From this project, headed by the PI, Adam Slagell, FLAIM was developed to overcome these limitations of CANINE and Scrub-PA. The first public version of FLAIM, 0.4, was released on July 23, 2006.
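The kind of field-level sanitization FLAIM automates can be sketched in a few lines. The example below is not FLAIM itself and does not use its actual XML policy format; it is a hypothetical Python illustration of one common anonymization primitive (truncating IPv4 addresses to their /24 network) applied to a netfilter-style log line, with a toy dictionary standing in for the policy.

```python
import re

# Toy stand-in for an anonymization policy: map a field type to a primitive.
# (FLAIM expresses policies in XML and supports many more primitives;
#  this dictionary is only an illustrative sketch, not FLAIM's format.)
POLICY = {"ipv4": "truncate_host_octet"}

IPV4 = re.compile(r"\b(\d{1,3})\.(\d{1,3})\.(\d{1,3})\.(\d{1,3})\b")

def truncate_host_octet(match):
    """Keep the /24 network prefix and zero out the host octet."""
    a, b, c, _ = match.groups()
    return f"{a}.{b}.{c}.0"

def anonymize_line(line, policy=POLICY):
    """Apply the configured primitive to every IPv4 address in one log line."""
    if policy.get("ipv4") == "truncate_host_octet":
        line = IPV4.sub(truncate_host_octet, line)
    return line

log = "Jan 12 10:33:01 fw kernel: DROP SRC=192.168.17.44 DST=10.0.0.9 PROTO=TCP DPT=22"
print(anonymize_line(log))
# -> Jan 12 10:33:01 fw kernel: DROP SRC=192.168.17.0 DST=10.0.0.0 PROTO=TCP DPT=22
```

A real policy would choose different primitives per field (hashing hostnames, shifting timestamps, prefix-preserving IP permutation, and so on), which is the trade-off between information loss and security that the policy language exists to express.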
Features
Flexible XML policy language
Modular to support simple plugins for new log types
Support for major UNIX-like operating systems
Built-in support for several anonymization primitives
Plugin for NFDUMP format NetFlows
Plugin for netfilter firewall logs
Plugin for pcap traces from tcpdump
Plugin for Linux process accounting logs

References
Luo, K., Li, Y., Slagell, A., and Yurcik, W., "CANINE: A NetFlow Converter/Anonymizer Tool for Format Interoperability and Secure Sharing," FLOCON — Network Flow Analysis Conference, Pittsburgh, PA, Sep. 2005.

External links
Official FLAIM Home Page
LAIM Working Group Official Home
USENIX LISA paper on FLAIM
CRAWDAD entry on FLAIM at Dartmouth

Anonymity
Internet privacy software
Free security software
Software using the BSD license
40272826
https://en.wikipedia.org/wiki/Samsung%20Galaxy%20Note%204
Samsung Galaxy Note 4
The Samsung Galaxy Note 4 is an Android smartphone developed and produced by Samsung Electronics. It was unveiled during a Samsung press conference at IFA Berlin on 3 September 2014 and was released globally in October 2014 as successor to the Samsung Galaxy Note 3. Improvements include expanded stylus-related functionality, an optically stabilized rear camera, significantly increased charging rate, revised multi-windowing, and fingerprint unlocking. It is the last in the Samsung Galaxy Note series with interchangeable battery. Its subsequent model, the Samsung Galaxy Note 5, was unveiled on 13 August 2015. Specifications Hardware Display The Samsung Galaxy Note 4 features a 2560×1440 Quad HD (“WQHD”) Super AMOLED 5.7-inch display with 2.5D damage-resistant Gorilla Glass 4 and provides a pixel density of 515 ppi (pixels-per-inch). Chipsets The Note 4 came in two variants, one powered by a 2.7 GHz quad-core Qualcomm Snapdragon 805 chipset with Adreno 420 GPU, the other powered by Samsung's ARMv8-A Exynos 7 Octa SoC with two clusters of four cores; four Cortex-A57 cores at 1.9 GHz, and four Cortex-A53 cores at 1.3 GHz, which is the same processor cluster sold for the Samsung Galaxy Note 3 in markets that mostly use or only have 3G (such as HSUPA and HSPA), and/or '2G', such as unaltered GSM and CDMA networks, similar to how the Galaxy Note 3 is sold. The phone has metal edges with a plastic, faux leather back. Connectivity Both devices that use 4G, LTE/LTE-A and Hybrid 4G-LTE Networks were only sold in Canada, Australia, the U.S., the United Kingdom (for some carriers), Sweden, Norway, Denmark, and South Korea, which have widespread 4G LTE Markets, or are solely 4G/LTE/LTE-A dependant such as Canada and Denmark, which did not use any 3G or older networks, except for HSUPA (Used as a fall back network should the signal strength be weak due to being underground or in the middle of a building), as well as HSPA+, which is a 3G network, though considered by some to be the Original 4G. The GPU in charge in the Exynos chipset is the Mali-T760. The Chinese variant utilizes the TD-LTE and TD-SCDMA Plus Network. Storage Both variants came with 3GB of LPDDR3 RAM and 32GB of internal memory that can be expanded using MicroSD-XC cards. The memory card slot has been relocated to allow hot swapping without physical blockage by the battery. Design The Note 4's back-cover has a strong resemblance to the Note 3, with a faux leather texture (although without the simulated stitching). Note 4 has a new aluminum frame design, bearing resemblance from the Samsung Galaxy Alpha. Criticism has been aimed at the lack of IP67 certification (water and dust resistance), which was present in Samsung's other flagship, the Galaxy S5, released half a year earlier. Stylus Like the predecessors, the Note 4 also includes a stylus pen, branded S-Pen, incorporated into the design. Samsung touted new S-Pen features including tilt and rotation recognition but these features were either not implemented or not supported. The WACOM digitizer has been upgraded to be able to distinguish between 2048 pressure sensitivity levels, twice as much as the predecessor. The Scrapbook feature introduced on the Galaxy Note 3 has been extended by a so-called Intelligent Selection feature that allows for optical character recognition of highlighted screen areas. Battery The Note 4 also incorporates a user-removable 3,220mAh lithium-ion battery for the global model and a 3,000mAh non-removable lithium battery variant for the model sold in China. 
The global model is the last Samsung Mobile flagship to be equipped with user-replaceable battery. The device is the first flagship phone by Samsung to support Qualcomm Quick Charge 2.0 for fast charging up to 15 Watts. The Note 4 features a USB 2.0 charging port instead of USB 3.0 (as was in the Note 3 and S5), in favor of a new feature called Fast Charge, which Samsung claims can charge the phone from 0% to 50% in about 30 minutes and from 0% to 100% in less than 100 minutes. With the wireless charging rear cover accessory, which exists in both a plain rear cover form factor and an S View flip cover (or S Charger View) form factor, the battery can be charged wirelessly using Qi technology. Miscellaneous The Galaxy Note 4 uniquely features an ultraviolet ray measurement sensor. Like the Galaxy S5, it is also equipped with heart-rate monitor, oximeter, among other, more common sensors (barometer, digital compass, front-facing proximity sensor, accelerometer, gyroscope). However, the Note 4 lacks the thermometer and hygrometer sensors which the Galaxy S4 and Galaxy Note 3 from 2013 were equipped with. The Air View feature is no longer useable with fingers like it is on the S4, Note 3, S5 and Alpha. However, it is still useable with the stylus. The capacitive key on the left side of the home button is now a task key, whereas it has been a menu key for previous Galaxy Note series models. However, holding the task key for one second simulates a press of the menu key. Unlike its successor, the Galaxy Note 4 supports Mobile High-Definition Link (MHL), which can be used to connect the mobile phone to an HDMI display. Next to the wireless charging S View cover, another accessory for the device is the LED flip wallet (or LED dot case), which allows displaying the clock time through barely visible small pin holes on the front cover while closed. S-View cover The S-View cover, a horizontal flip case with preview window, shares the functionality of the predecessor's, except the ability to create Action Memos (digital post-it notes) without unfolding the cover. A flashlight shortcut, optional analogue clock designs (next to the default, digital clock), as well as the ability to record video, a shortcut to dial contacts (phone numbers) marked as favourite, and access to the heart rate monitor, all without opening the cover, have been added. Software The Samsung Galaxy Note 4 originally shipped with Google's mobile operating system, Android, specifically KitKat 4.4.4, with its user interface modified with Samsung's custom skin named TouchWiz Nature UX 3.0. The Note 4 contains most of the original Note's software features and functions, but also adds more significant upgrades from the predecessors, such as a new multitasking interface, expanded S-Pen functions, gestures, and refreshed menus and icons. However, some Samsung Smart Screen and air gesture control functionality which was present on the Galaxy S4, Galaxy Note 3, Galaxy S5 and Galaxy Alpha, including Air Browse, Smart Pause, Smart Scroll and Air Call Accept, has been removed. Multi-window The new multitasking interface merges the Galaxy Note 3's “S-Pen window” feature and the split-screen feature known from the Galaxy Note 2, Galaxy S4, Galaxy Note 3 and Galaxy S5, into one feature. Applications (including the camera application) can be transitioned from floating pop-up view to flexible split-screen view and vice versa, and can be put from normal into pop-up view by dragging diagonally from an upper corner. 
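The display and charging figures quoted in the Hardware section above can be sanity-checked with a little arithmetic. The sketch below is only an illustrative back-of-the-envelope calculation; in particular, the 3.85 V nominal cell voltage is an assumption for the illustration, not a published Samsung specification.

```python
import math

# Pixel density: diagonal resolution in pixels divided by diagonal size in inches.
width_px, height_px, diagonal_in = 2560, 1440, 5.7
ppi = math.hypot(width_px, height_px) / diagonal_in
print(f"{ppi:.0f} ppi")                 # ~515 ppi, matching the quoted figure

# Fast charging: energy for half the battery at an assumed 3.85 V nominal voltage.
capacity_mah, nominal_v, charge_w = 3220, 3.85, 15
half_energy_wh = (capacity_mah / 1000) * nominal_v / 2
minutes = half_energy_wh / charge_w * 60
print(f"~{minutes:.0f} min to 50% (ignoring losses)")   # ~25 min, consistent with "about 30 minutes"
```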
Gallery software The Galaxy Note 4 uses a gallery software very similar to the one of the Galaxy S5, with support for functionality such as "Shot & More" and "Selective Focus". Additional camera modes can be downloaded from a store provided by Samsung. The gallery software is compatible with the Air View feature that allows previewing photos from albums when hovering the stylus above it. User reports suggest that the Exif (meta data) viewer has been removed from Galaxy Note 4. Software updates The device can be updated to Android 5.0.1 Lollipop in many regions, bringing a new, refined UI, and new runtime. This version has been criticized for poor battery life. A further update to 5.1.1 is available, depending on the wireless carrier. Most Note 4 devices can also be updated to Android 6.0.1 Marshmallow, bringing Android features like Android Doze (a feature introduced in Marshmallow that saves battery life) and greater control over app permissions to the device. The Note 4 TouchWiz UI was also evolved featuring the home screen icon pack known from the Galaxy S6 and also features new S Pen features known from the Galaxy Note 5 such as the new Air command menu design with custom shortcuts and Screen-off memo. However, the UI is still very similar to the previous UI and slightly similar to the S6 UI, but most of the TouchWiz UI resembles the original UI for the Note 4. Camera The main (rear-facing) camera is a 16 Megapixel (5312×2988) autofocus camera with 16:9 aspect ratio image sensor (Sony Exmor RS IMX240), featuring Smart OIS (Optical Image Stabilization + software image stabilization), being the first mobile phone of the Samsung Galaxy Note series and the first original variant Samsung flagship phone to feature an optically stabilized rear camera. It allows optically stabilized 4K (2160p) video recording at 30 fps, 1080p video recording with 30 fps and 60 fps (Smooth Motion) options and also 120 fps slow-motion video recording in 720p resolution. An option for 1440p video has been added in the camera software. Like the Galaxy S5, its rear camera supports high dynamic range (HDR) video recording with up to 1080p at 30fps. Digital zoom is allowed up to eight times, twice as much as on the S5, Note 3 and earlier. The option to separately preserve the standard dynamic range version from high dynamic range (HDR) photographs has been removed. The secondary (front-facing) camera is a 3.7 MP camera with an f1.9 aperture that can record 2560×1440 QHD videos and capture wide-angle pictures. The Galaxy Note 4's front camera is the first front camera in any mobile phone that is able to record videos at 1440p (WQHD) resolution. After the LG G3, the Galaxy Note 4 is the second mobile phone to be able to record optically stabilized 2160p (4K) video. Sales Samsung Galaxy Note 4 was released around the start of October 2014 and was available in most major markets by the middle of October. The first regions to receive the device were South Korea and China where it gained huge popularity. In the first month only, the Galaxy Note 4 reportedly sold 4.5 million units, which is a little less than its predecessor, the Galaxy Note 3, which was able to report 5 million sales in the first month after release. Samsung says that sales of the Note 4 were lower than those of the Note 3 at launch because the Note 4 was initially unavailable in some major international markets due to manufacturing issues, delaying release until early November in markets including the United Kingdom and India. 
Plug-in for Samsung Gear VR Only Snapdragon variants of the Samsung Galaxy Note 4, sold by US and European mobile carriers, may be plugged into the Samsung Gear VR headset, which was created in partnership with Oculus VR. Reception The phone was met with critical acclaim. When the Note 4 was released in late 2014, DisplayMate measured the performance of the display and said it was the best performing smartphone display ever tested and raised the bar for display performance. Note 4s were used to film Cai Lan Gong, the world's first feature film shot with a smartphone at 4K resolution. Successors The Samsung Galaxy Note 5 (branded and marketed as Samsung Galaxy Note5) is an Android phablet smartphone developed and produced by Samsung Electronics. The Galaxy Note 5, along with the Galaxy S6 Edge+, was unveiled during a Samsung press conference in New York City on 13 August 2015. It is the successor to the Samsung Galaxy Note 4. The phone became available in the U.S. on 21 August 2015. In Europe, the successor to the Note 4 is the Samsung Galaxy Note 8, because the Note 5 was never released there, there never was a Note 6, and the Note 7 was withdrawn due to battery hazard. However, the Note 8 has neither a replaceable battery, nor a custom colour case window, nor a 16 Megapixel camera. With the S Charger View case, user-replaceable battery, Micro SD-XC-expandable storage and 15 Watts of charging performance, the Note 4 still performs well 7 years after release. See also Comparison of smartphones References External links Samsung Galaxy Note mobile phones Phablets Mobile phones introduced in 2014 Mobile phones with stylus Mobile phones with user-replaceable battery Mobile phones with infrared transmitter Mobile phones with 4K video recording Discontinued smartphones
54230166
https://en.wikipedia.org/wiki/Cyber%20PHA
Cyber PHA
A cyber PHA (also styled cyber security PHA) is a safety-oriented methodology to conduct a cybersecurity risk assessment for an Industrial Control System (ICS) or Safety Instrumented System (SIS). It is a systematic, consequence-driven approach that is based upon industry standards such as ISA 62443-3-2, ISA TR84.00.09, ISO/IEC 27005:2018, ISO 31000:2009 and NIST Special Publication (SP) 800-39.

The name, cyber PHA, was given to this method because it is similar to the Process Hazards Analysis (PHA) or the hazard and operability study (HAZOP) methodology that is popular in process safety management, particularly in industries that operate highly hazardous industrial processes (e.g. oil and gas, chemical, etc.).

The cyber PHA methodology reconciles the process safety and cybersecurity approaches and allows IT, Operations and Engineering to collaborate in a way that is already familiar to facility operations management and personnel. Modeled on the process safety PHA/HAZOP methodology, a cyber PHA enables cyber risks to be identified and analyzed in the same manner as any other process risk, and, because it can be conducted as a separate follow-on activity to a traditional HAZOP, it can be used in both existing brownfield sites and newly constructed greenfield sites without unduly interfering with well-established process safety processes.

The method is typically conducted as a workshop that includes a facilitator and a scribe with expertise in the cyber PHA process as well as multiple subject matter experts who are familiar with the industrial process, the industrial automation and control system (IACS) and related IT systems. For example, the workshop team typically includes representatives from operations, engineering, IT and health and safety as well as an independent facilitator and scribe. A multidisciplinary team is important in developing realistic threat scenarios, assessing the impact of compromise and achieving consensus on realistic likelihood values given the threat environment, the known vulnerabilities and existing countermeasures. The facilitator and scribe are typically responsible for gathering and organizing all of the information required to conduct the workshop (e.g. system architecture diagrams, vulnerability assessments, and PHAs) and training the workshop team on the method, if necessary.

A worksheet is commonly used to document the cyber PHA assessment. Various spreadsheet templates, databases and commercial software tools have been developed to support the cyber PHA method. The organization's risk matrix is typically integrated directly into the worksheet to facilitate assessment of severity and likelihood and to look up the resulting risk score. The workshop facilitator guides the team through the process and strives to gather all input, reach consensus and keep the process proceeding smoothly. The workshop proceeds until all zones and conduits have been assessed. The results are then consolidated and reported to the workshop team and appropriate stakeholders.
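The worksheet and risk-matrix mechanics described above can be illustrated with a small sketch. The example below is hypothetical - it is not any particular organization's matrix or a cyber PHA software tool - and simply scores one threat scenario by combining a severity rating and a likelihood rating the way a workshop team would.

```python
# A hypothetical 5x5 risk matrix: likelihood and severity are each rated 1-5,
# and their product is mapped to a qualitative band. Real organizations define
# their own scales and bands; this is only an illustrative sketch.
RISK_BANDS = {range(1, 5): "Low", range(5, 10): "Medium",
              range(10, 17): "High", range(17, 26): "Critical"}

def risk_level(likelihood, severity):
    """Look up the qualitative risk band for a likelihood/severity pair."""
    score = likelihood * severity
    for band, label in RISK_BANDS.items():
        if score in band:
            return score, label
    raise ValueError("likelihood and severity must each be between 1 and 5")

# One worksheet row for a threat scenario against a zone or conduit:
scenario = {
    "zone": "SIS network",
    "threat": "Malware reaches engineering workstation via removable media",
    "severity": 5,      # e.g. potential loss of a safety function
    "likelihood": 3,    # given current countermeasures
}
score, label = risk_level(scenario["likelihood"], scenario["severity"])
print(f"{scenario['zone']}: score {score} -> {label}")    # score 15 -> High
```

In a real workshop the team would also record the existing countermeasures and recommended actions for each scenario, and would repeat the exercise zone by zone and conduit by conduit until the whole system has been covered.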
References External links Safety requires cybersecurity Security process hazard analysis review Integrating ICS Cybersecurity with PSM Cyber Security Risk Analysis for Process Control Systems Using Rings of Protection Analysis aeCyberPHA Risk Assessment Methodology Building Cybersecurity into a Greenfield ICS Project Intro to Cyber PHA Video: Cyber Process Hazards Analysis (PHA) to Assess ICS Cybersecurity Risk presentation at S4x17 Video: Consequence Based ICS Risk Management presentation at S4x19 Unsolicited Response Podcast Cyber PHA for SIS Security How Secure are your Process Safety Systems? Process Safety & Cybersecurity Securing ICS Safety Requires Cybersecurity The Familial Relationship between Cybersecurity and Safety Cybersecurity Depends on Up-to-Date Intelligence Cybersecurity Risk Assessment Impact assessment Evaluation methods Risk analysis methodologies
21792651
https://en.wikipedia.org/wiki/Pietro%20Grossi
Pietro Grossi
Pietro Grossi (15 April 1917, in Venice – 21 February 2002, in Florence) was an Italian composer, a pioneer of computer music, a visual artist, and a hacker ahead of his time. He began experimenting with electronic techniques in Italy in the early sixties.

Biography

Pietro Grossi was born in Venice, and he studied in Bologna, eventually taking a diploma in composition and violoncello. In the sixties Grossi taught at the Conservatorio Luigi Cherubini and began to research and experiment with electroacoustic music. From 1936 to 1966 he was the first cellist of the Maggio Musicale Fiorentino orchestra.

Grossi began to experiment with electroacoustic music in the 1950s. By 1962, he had become the first Italian to carry out successful research in the field of computer music. In 1963, he turned his interest to electronic music and founded the S 2F M (Studio di Fonologia Musicale di Firenze), which had its headquarters at the Conservatorio in Florence, and he also became a lecturer in this subject. In 1964 he organized events with the association Contemporary Musical Life that introduced the work of John Cage to Italy. In 1965 he secured the establishment of the first professorship of electronic music in Italy. In 1967 he carried out his first experiments in computer music. In 1970 he made his first approaches to musical telematics, organizing a performance with a link between Rimini (Pio Manzù Foundation) and Pisa (CNUCE). By invitation of Iannis Xenakis, he presented another telematic concert between Pisa and Paris in 1974. His contributions to the development of new technological musical instruments and to the creation of software packages for music-processing design have been fundamental.

He has not limited his work to the musical world, but also engaged in contemporary art. In the eighties he was working on new forms of artistic production oriented toward the use of personal computers in the visual arts. Grossi started to develop visual elaborations created on a personal computer with programs endowed with "self-decision making", working out the concept of HomeArt (1986): by way of the personal computer, it raises the artistic aspirations and potential latent in each one of us to the highest level of autonomous decision making conceivable today, along with the idea of personal artistic expression: "a piece is not only a work (of art), but also one of the many 'works' one can freely transform: everything is temporary, everything can change at any time, ideas are not personal anymore, they are open to every solution, everybody could use them".

Grossi has always been interested in every form of artistic expression. The last step of his HomeArt is the creation of a series of unicum books, electronically produced and symbolically called HomeBooks (1991): each work is completely different from the others, thanks to the strong flexibility of the digital means. Sergio Maltagliati later continued this project, creating the autom@tedVisuaL software in 2012, which generates ever-different graphical variations. It is based on HomeArt's BBC BASIC source code. Its first release, autom@tedVisuaL 1.0, produced 45 single graphical samples, which have been collected and published.

He collaborated with the computer music division of CNUCE (an institute of the National Research Council in Pisa) to experiment with electronic sound and composition. Grossi's latest multimedia experiments were with interactive sound and graphics.
His later works involved automated and generative visual music software, autom@tedVisualMusiC 1.0, which he extended beyond the realm of music into interactive work for the Internet: in 1997, conceiving and collaborating with Sergio Maltagliati, he created the first Italian interactive work for the web, netOper@, hosting the first on-line performance from his own home studio. However, NeXtOper@, a project to integrate new media such as the mobile phone and GPS, remains unfinished.

Selected works

1961 Progretto 2-3: this piece, consisting of several different high monotones that follow one another, is extremely minimal and ambient, controlled by a computer algorithm.
1965 Battimenti: an electronic work composed and realized from "working material" for the electronic Studio di Fonologia Musicale (S 2F M), made from the 94 combinations of near frequencies.
1969 Collage: develops the concept of music as an open process, in which no work of music is a finished piece but rather something to be manipulated.
1980 Computer Music: transcriptions and elaborations (with the TAUMUS software, the TAU2 synthesizer and an IBM 370/168 at the CNR institutes CNUCE and IEI) of the following authors: Bach, Scarlatti, Paganini, Brahms, Chopin, Strawinskij, Debussy, Joplin, Satie, Webern, Hindemith, Stockhausen.
1985–90 Mixed Unicum: another ambient drone piece, similar to Progretto 2-3 and yet far more varied and rewarding, as the shifting tones create an alien topography of sound.
1986 HomeArt: Grossi developed the concept of HomeArt. It consists of completely automated and generative visual processes, based on simple Qbasic computer programs.

Bibliography

Sergio Maltagliati, HomeBook 45 unicum graphics, 2012.
Girolamo De Simone, Il dito nella marmellata. Musica d'arte a Firenze, ed. Nardini, 2005.
Francesco Giomi; Marco Ligabue, L'istante zero. Conversazioni e riflessioni con Pietro Grossi, ed. Sismel, 1999.
Lelio Camilleri, Pietro Grossi. Musica senza musicisti, scritti 1966-1996, ed. CNUCE CNR, Pisa.
Lelio Camilleri, Computational Musicology in Italy, Leonardo, Journal of the International Society for the Arts, Sciences, and Technology (Leonardo/ISAST), The MIT Press, Cambridge, U.S., vol. 21 n. 4, 1988, pp. 454-456.
Francesco Giomi, The Italian Artist Pietro Grossi. From Early Electronic Music to Computer Art, in Leonardo, Journal of the International Society for the Arts, Sciences, and Technology (Leonardo/ISAST), The MIT Press, Cambridge, U.S., vol. 28 n. 1, 1995, pp. 35-39.
Discography Visioni di vita spaziale (Edizioni Leonardi Srl under licence to Pirames International Srl, 1967) Elettrogreca (Edizioni Leonardi Srl under licence to Pirames International Srl, 1967) GE-115 Computer Concerto (General Electrics, 1968) Elettro musica N.1 (Edizioni Leonardi Srl under licence to Pirames International Srl, 1971) Elettro musica N.2 (Edizioni Leonardi Srl under licence to Pirames International Srl, 1971) Computer Music (CNUCE/CNR, 1972) Atmosfera & elettronica (Edizioni Leonardi Srl under licence to Pirames International Srl, 1972) Computer Music - Bach/Grossi (LP, Ayma, 1980) Paganini al computer (LP, Edipan, 1982) Computer Music - Satie/Joplin/Grossi (LP, Edipan, 1983) Sound Life (LP, Edipan, 1985) Battimenti (CD, ants records, 2003) Suono Segno Gesto Visione a Firenze 2 -Grossi, Chiari, Cardini, Mayr, Lombardi, Aitiani, Maltagliati (Atopos 2005) Musicautomatica (CD, Die Schachtel, 2008) BATTIMENTI 2.5 audio Cd - numbered copy of limited edition (2019) DVD video CIRCUS_8 DVD video Quantum Bit Limited Edition (2008) CIRCUS_5.1 DVD (digital edition) Quantum Bit Netlabel (2012) References External links Photo Album #1: Pietro Grossi Associazione Pietro Grossi official web site Pietro Grossi autom@tedVisualMusic da Home@rt netOper@ Music Academy "Luigi Cherubini" 1917 births 2002 deaths Italian classical composers Italian male classical composers 20th-century classical composers Net.artists Italian digital artists Italian contemporary artists Italian performance artists New media artists Italian multimedia artists 20th-century Italian composers 20th-century Italian male musicians
383972
https://en.wikipedia.org/wiki/Eben%20Moglen
Eben Moglen
Eben Moglen (born 1959) is an American legal scholar who is professor of law and legal history at Columbia University, and is the founder, Director-Counsel and Chairman of Software Freedom Law Center. Professional biography Moglen started out as a computer programming language designer and then received his bachelor's degree from Swarthmore College in 1980. In 1985, he received a Master of Philosophy in history and a JD from Yale University. He has held visiting appointments at Harvard University, Tel Aviv University and the University of Virginia since 1987. He was a law clerk to Justice Thurgood Marshall (1986–87 term). He joined the faculty of Columbia Law School in 1987, and was admitted to the New York bar in 1988. He received a Ph.D. in history from Yale University in 1993. Moglen serves as a director of the Public Patent Foundation. Moglen was part of Philip Zimmermann's defense team, when Zimmermann was being investigated over the export of Pretty Good Privacy, a public key encryption system, under US export laws. In 2003 he received the EFF Pioneer Award. In February 2005, he founded the Software Freedom Law Center. Moglen was closely involved with the Free Software Foundation, serving as general counsel from 1994-2016 and board member from 2000 to 2007. As counsel, Moglen was tasked with enforcing the GNU General Public License (GPL) on behalf of the FSF, and later became heavily involved with drafting version 3 of the GPL. On April 23, 2007 he announced in a blog post that he would be stepping down from the board of directors of the Free Software Foundation. Moglen stated that after the GPLv3 Discussion Draft 3 had been released, he wanted to devote more time to writing, teaching, and the Software Freedom Law Center. Freedom Box Foundation In February 2011, Moglen created the Freedom Box Foundation to design software for a very small server called the FreedomBox. The FreedomBox aims to be an affordable personal server which runs only free software, with a focus on anonymous and secure communication. FreedomBox launched version 0.1 in 2012. Stances on free software Moglen says that free software is a fundamental requirement for a democratic and free society in which we are surrounded by and dependent upon technical devices. Only if controlling these devices is open to all via free software, can we balance power equally. Moglen's Metaphorical Corollary to Faraday's Law is the idea that the information appearance and flow between the human minds connected via the Internet works like electromagnetic induction. Hence Moglen's phrase "Resist the resistance!" (i.e. remove anything that inhibits the flow of information). Statements and perspectives Moglen believes the idea of proprietary software is as ludicrous as having "proprietary mathematics" or "proprietary geometry." This would convert the subjects from "something you can learn" into "something you must buy", he has argued. He points out that software is among the "things which can be copied infinitely over and over again, without any further costs." Moglen has criticized what he calls the "reification of selfishness." He has said, "A world full of computers which you can't understand, can't fix and can't use (because it is controlled by inaccessible proprietary software) is a world controlled by machines." He has called on lawyers to help the Free Software movement, saying: "Those who want to share their code can make products and share their work without additional legal risks." 
He urged his legal colleagues, "It's worth giving up a little in order to produce a sounder ecology for all. Think kindly about the idea of sharing." Moglen has criticized trends which result in "excluding people from knowledge." On the issue of Free Software versus proprietary software, he has argued that "much has been said by the few who stand to lose." Moglen calls for a "sensible respect for both the creators and users" of software code. In general, this concept is a part of what Moglen has termed a "revolution" against the privileged owners of media, distribution channels, and software. On March 13, 2009, in a speech given at Seattle University, Moglen said of the free software movement that, "'When everybody owns the press, then freedom of the press belongs to everybody' seems to be the inevitable inference, and that’s where we are moving, and when the publishers get used to that, they’ll become us, and we’ll become them, and the first amendment will mean: 'Congress shall make no law [...] abridging freedom of speech, or of the press [...].', not – as they have tended to argue in the course of the 20th century – 'Congress shall make no law infringing the sacred right of the Sulzbergers to be different.'" On the subject of Digital Rights Management, Moglen once said, "We also live in a world in which the right to tinker is under some very substantial threat. This is said to be because movie and record companies must eat. I will concede that they must eat. Though, like me, they should eat less."

References

External links
Eben Moglen's webpage at Columbia University

Law clerks of the Supreme Court of the United States Members of the Free Software Foundation board of directors American legal scholars GNU people Columbia University faculty Yale Law School alumni Swarthmore College alumni Harvard University staff University of Virginia School of Law faculty Copyright scholars Copyright activists American lawyers American bloggers Living people Columbia Law School faculty 1959 births Free software people Articles containing video clips
34253810
https://en.wikipedia.org/wiki/2012%20in%20science
2012 in science
The year 2012 involved many significant scientific events and discoveries, including the first orbital rendezvous by a commercial spacecraft, the discovery of a particle highly similar to the long-sought Higgs boson, and the near-eradication of guinea worm disease. A total of 72 successful orbital spaceflights occurred in 2012, and the year also saw numerous developments in fields such as robotics, 3D printing, stem cell research and genetics. Over 540,000 technological patent applications were made in the United States alone in 2012. 2012 was declared the International Year of Sustainable Energy for All by the United Nations. 2012 also marked Alan Turing Year, a celebration of the life and work of the English mathematician, logician, cryptanalyst and computer scientist Alan Turing. Events, discoveries and inventions January 1 January – NASA's GRAIL-B satellite successfully enters lunar orbit, joining its twin spacecraft GRAIL-A. The two satellites will study the Moon's gravitational field, generating a detailed map of its fluctuations to help scientists understand how the Moon formed. 2 January China launches its first commercial 3DTV channel, operated by China Central Television (CCTV). A new study shows that deep brain stimulation (DBS) is a safe and effective intervention for treatment-resistant depression in patients with either unipolar major depressive disorder (MDD) or bipolar II disorder (BP). 3 January – Genetically modified fast-ageing mice exhibited improved health and lived two to three times longer than expected after being injected with stem cells, according to findings published in Nature Communications. 4 January American scientists report that a parasitic species of fly which compels honey bees to abandon their hives may be responsible for a global honey bee die-off that has decimated hives around the world. Honey bees are crucial pollinators, and their rapidly diminishing population may have severe effects on human agriculture. University of Wyoming scientists develop genetically modified silkworms capable of producing large amounts of spider silk, which has a greater tensile strength than steel. If available in bulk quantities, the silk could be used to produce high-strength medical sutures and lightweight forms of body armor. Scientists at the University of Southern California develop a method for generating accurate 3D models of cellular genomes. Researchers at Oxford University report promising results in human trials of a prototype hepatitis C vaccine. Scientists at Cornell University use a specialised lens to entirely cloak an object from view for 40 trillionths of a second by altering the speed of light. Classified documents are leaked detailing a range of advanced non-lethal weapons proposed or in development by the United States Armed Forces. Among the systems described are a laser-based weapon designed to divert hostile aircraft, an underwater sonic weapon for incapacitating SCUBA divers and a heat-based weapon designed to compel crowds to disperse. 5 January Mae Jemison, the first female African-American astronaut, is selected to head the DARPA- and NASA-sponsored 100-Year Starship project, which aims to conduct research into the technological and human elements needed for manned interstellar travel. American scientists report that they have bred the first-ever monkeys grown from cells taken from different embryos. Such "chimeric" hybrids could give valuable insights into the development of human embryos.
A team of international researchers reports that low-resistivity electrical wires can be produced at the nanometer scale by chaining phosphorus atoms together and encasing them in silicon. In future, the development may permit the production of efficient nanometer-scale electronics. A team of American, French and Italian researchers demonstrate working transistors made from cotton fibers, doped with gold nanoparticles and a conductive polymer. The invention could permit the creation of a range of electronic-fabric devices, including clothing capable of measuring pollutants, T-shirts that display dynamic information, and carpets that sense how many people are crossing them. 6 January The human brain's ability to function can start to deteriorate as early as age 45, according to a study published in the British Medical Journal. Scientists refute a Greenpeace claim that genetically modified corn has caused a new insect pest. 9 January Human emissions of carbon dioxide will defer the next Ice Age, according to a new study. Researchers in California develop a cheap plastic capable of removing large amounts of carbon dioxide (CO2) from the air. The new material could enable the development of "artificial trees" that lower atmospheric concentrations of CO2 in an effort to lessen the effects of climate change. 10 January The 2012 Consumer Electronics Show opens in Las Vegas, Nevada. Among the new products and technologies showcased are large-screen OLED televisions, quad-core tablet computers and consumer-ready 3D printers. Climate change, in the form of reduced snowfall in mountains, is having a major impact on mountainous plant and bird communities, through the increased ability of elk to stay at high elevations over winter and consume plants, according to a study in Nature Climate Change. 11 January An international team of astronomers report that each star in the Milky Way Galaxy may host "on average ... at least 1.6 planets", suggesting that over 160 billion star-bound planets may exist in our galaxy alone. The team used gravitational microlensing to discover the gravitational effects of planets orbiting distant stars. American astronomers discover three rocky exoplanets smaller than Earth, the smallest such worlds yet found, orbiting a red dwarf star 130 light-years from Earth. Researchers report the discovery of a natural hormone that has a similar effect to exercise on muscle tissue – burning calories, improving insulin processing, and perhaps boosting strength. 12 January Scientists formally describe the world's smallest known vertebrate species, Paedophryne amauensis – a frog that measures just 7 millimeters in length. The species was first discovered in Papua New Guinea in 2009. A University of Connecticut researcher who studied the health benefits of resveratrol, a compound found in red wine, has been found to have falsified data on numerous occasions. 13 January IBM researchers successfully store a single bit of data in a group of just 12 supercooled iron atoms; current commercial hard disks require over 1 million atoms to store one bit of data. The breakthrough, which was achieved with the use of a scanning tunnelling microscope, may permit the production of ultra-high-density computer storage media in future. German scientists convert a gold sphere just 60 nanometres in diameter into an ultra-sensitive listening device, potentially allowing the sounds of bacteria and other single-celled organisms to be recorded. 
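The "at least 1.6 planets" per star reported in the 11 January microlensing study implies the quoted figure of over 160 billion planets only once a total star count for the Milky Way is assumed. The short Python sketch below makes that arithmetic explicit; the roughly 100 billion star count is a commonly cited order-of-magnitude assumption added here for illustration, not a number taken from the study itself.

# Rough arithmetic behind the "over 160 billion planets" figure (11 January).
# The star count is an assumed, commonly quoted order-of-magnitude value.
stars_in_milky_way = 100e9      # assumption: roughly 100 billion stars
planets_per_star = 1.6          # average reported by the microlensing survey
total_planets = stars_in_milky_way * planets_per_star
print(f"{total_planets:.2e}")   # 1.60e+11, i.e. about 160 billion planets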
14 January – Researchers at the University of Cambridge repair myelin sheath damage in ageing mice with multiple sclerosis by injecting the blood of younger mice into them, reactivating the older mice's regenerative stem cells. 15 January – Russia's Fobos-Grunt Martian sample return spacecraft, which became stranded in orbit after a post-launch malfunction in November 2011, re-enters Earth's atmosphere. 18 January Astronomers report the discovery of the most distant dwarf galaxy yet found, approximately 10 billion light-years away. A British amateur astronomer discovers a new Neptune-sized exoplanet, just days after the BBC's Stargazing Live program makes a public appeal for volunteers to assist scientists in the search for potential exoplanets. Over 100,000 volunteers are reportedly taking part in the ongoing search. Archaeologists find a novel tulip-shaped fossil, formally named Siphusauctum gregarium, in the Middle Cambrian Burgess Shale in the Canadian Rockies. The 20-centimetre-long creature reportedly possessed a unique filter feeding system. A working 9-nanometer transistor is developed by IBM engineers, demonstrating that nanotubes could serve as a viable alternative to silicon in future nanoelectronic devices. 19 January Austrian researchers develop a quantum computer capable of performing calculations without revealing any of the data involved, using encoded strings of photons designed to appear random. This method of "blind quantum cryptography" may permit sensitive data to be processed and transferred without any danger of interception or decryption, leading to ultra-secure cloud computing. NASA data shows that Arctic temperatures in 2011 rose beyond the record set in 2010. 20 January – Virologists agree to a temporary moratorium on experiments with the H5N1 influenza virus, due to fears that an airborne strain of the lethal virus could be used by bioterrorists. 22 January American researchers report that nanoparticles can be successfully engineered to mimic part of the body's immune system, improving its response to vaccines. An international team of scientists concludes that anthropogenic CO2 emissions over the last 100 to 200 years have already raised ocean acidity far beyond the range of natural variations. 23 January South Korean scientists develop touchscreens that can recognise the existence and concentration of DNA molecules placed on them. The invention could allow the development of smartphones with the ability to diagnose users' medical conditions. The Lancet reports that a human medical trial of embryonic stem cells successfully eased a degenerative form of blindness in two volunteers, and showed no signs of any adverse effects. Brain scans of people under the influence of psilocybin, the active ingredient in magic mushrooms, have given scientists the most detailed picture to date of how psychedelic drugs work. Scientists demonstrate a terahertz antenna 100 nanometers across – 30,000 times smaller than the previous smallest antenna. The invention could permit the production of lightweight, handheld devices able to accurately scan for bombs, chemicals and even subcutaneous tumors. 24 January Earth is struck by the largest solar storm since 2005, creating huge aurorae and potentially interfering with satellite communications worldwide. A nest of dinosaur eggs 100 million years older than the previous oldest site is found in South Africa.
The fossils are of the prosauropod species Massospondylus, a relative of the long-necked sauropods. 25 January University of Washington scientists report that injecting sulfate particles into the stratosphere will not fully offset climate change. A study in Japan finds that green tea can significantly reduce disability in the elderly, likely due to its antioxidant content. 26 January – American researchers successfully "cloak" a three-dimensional object, making it invisible from all angles, for the first time. However, the demonstration works only for waves in the microwave region of the electromagnetic spectrum. 27 January An international team of scientists reports that graphene, already widely known for its conductive properties, is also able to selectively filter gases and liquids. The material could thus potentially find use in industrial distillation and water purification. A study published in the journal Carcinogenesis shows that in both cell lines and mouse models, grape seed extract (GSE) kills head and neck cancer cells, while leaving healthy cells unharmed. Using an airborne LIDAR system, scientists produce the most detailed 3D image of the Amazon rainforest yet recorded, allowing the accurate measurement of the rainforest's ecosystem and rate of deforestation. 2012 BX34, a small near-Earth asteroid, passes within 60,000 kilometres of the Earth, performing one of the closest asteroid flybys yet recorded. British animators develop a new algorithmic method of creating highly realistic CGI trees, allowing films and video games to easily display realistic 3D foliage. 29 January – Using stem cells generated from patients with schizophrenia, bipolar depression and other mental illnesses, scientists at the University of Edinburgh create neurones genetically identical to those in the patients' own brains. The breakthrough could allow new treatments for mental illnesses to be accurately tested without endangering patients. 30 January A United Nations report warns that time is running out to ensure there is enough food, water and energy for a rapidly rising world population. By 2030, the world will need at least 50 percent more food, 45 percent more energy and 30 percent more water, according to estimates. The British Royal Navy begins development of a new anti-missile defence system, the Sea Ceptor, capable of intercepting and destroying supersonic missiles over a wide area. The system is likely to enter service by 2017. American researchers report that ultrasound waves can be used effectively to kill sperm, potentially offering a new male contraceptive method. Ozone from anthropogenic air pollution in North America leads to the annual loss of 1.2 million tonnes of wheat in Europe alone, according to a study published by British universities. A NASA study reports that changes in solar activity cannot be responsible for the current period of global warming. The sun's total solar irradiance has in recent years dipped to the lowest levels recorded during the satellite era. According to genetic studies, modern humans seem to have mated with "at least two groups" of ancient humans: Neanderthals and Denisovans. 31 January American scientists successfully demonstrate a method of decoding thoughts by studying activity in the human brain's superior temporal gyrus, which is involved in linguistic processing. Using this method, a device which reads and transmits the thoughts of brain-damaged patients could become a reality in the future.
Microchip designer AMD launches its Radeon HD 7950 graphics card, based on a 28 nanometer manufacturing process – a more advanced die shrink of the current 32 nanometer standard. Poyang Lake, China's largest freshwater lake, has almost completely dried up due to a combination of severe drought and the impact of the recently built Three Gorges Dam. February 1 February – Researchers report that the eruption of supervolcanoes could be predicted several decades before the event by detecting the seismic and chemical signs of a massive magma buildup. 2 February The European Commission issues a 225-million-euro (US$330 million) contract to an Anglo-German consortium for eight additional satellites to expand Europe's Galileo satellite navigation system. Astronomers report the discovery of a large exoplanet orbiting within the habitable zone of a star 22 light-years distant. This is the fourth potentially life-supporting exoplanet discovered since May 2011. Researchers reportedly create the world's thinnest pane of glass, a sheet of silicon and oxygen just three atoms thick. The glass formed in an accidental reaction when the scientists were synthesizing graphene on copper-covered quartz. 3 February The European Southern Observatory successfully activates its Very Large Telescope (VLT) by linking four existing optical telescopes to operate as a single device. The linked VLT is the largest optical telescope yet built. Physicists at Germany's Max Planck Institute unveil a microscope that can image living brain cells as they function inside a living animal. American scientists demonstrate a medical procedure that may allow patients suffering from nerve damage to recover within weeks, rather than months or years. The procedure makes use of a cellular mechanism similar to that which repairs nerve axons in invertebrates. MIT researchers develop high-temperature photonic crystals capable of efficiently converting heat to electricity, potentially allowing the creation of pocket-sized microreactors with ten times the efficiency and lifespan of current commercial batteries. As photonic crystals are already a relatively mature technology, the new invention could be commercialised in as little as two years. A Lancet study reports that global malaria deaths may be badly underestimated, giving a revised 2010 malaria death toll of 1.24 million. By contrast, the World Health Organization estimated that 655,000 people died of malaria in 2010. 4 February – Dutch doctors successfully fit an 83-year-old woman with an artificial jaw made using a 3D printer. This operation, the first of its kind, could herald a new era of accurate, patient-tailored artificial transplants. 6 February After nearly 20 years of intermittent drilling, Russian scientists reportedly break through to the surface of the subterranean Lake Vostok, buried under the Antarctic ice. The lake, which has not been uncovered for over 15 million years, may harbour a unique prehistoric ecosystem. A team of engineers and biologists develop a working WORM computer memory out of salmon DNA molecules by combining the DNA with silver nanoparticles. 7 February Scientists report that rapid declines in some British and European ladybird species are being caused by the spread of the invasive harlequin species. The entire genome of an extinct species of human – the 40,000-year-old Denisova hominin – has been decoded from a fossil.
8 February – NASA data reveals that the total land ice lost from Greenland, Antarctica and Earth's glaciers and ice caps between 2003 and 2010 totalled about 4.3 trillion tons (1,000 cubic miles), adding about 0.5 inches (12 millimeters) to global sea levels. Such a quantity of ice would be sufficient to cover the entire United States to a depth of 1.5 feet (0.5 meters). 9 February – Researchers at Case Western Reserve University discover that bexarotene, a drug normally used to treat skin cancer, can quickly reverse the symptoms of Alzheimer's disease in mice, removing over 50% of the disease's trademark amyloid plaque from the brain within 72 hours. 10 February – Scientists at the University of California, San Diego report the creation of the tiniest telecommunications laser yet built, just 200 nanometers wide. The highly efficient nanolaser could be used to develop optical computers and ultra-high-resolution imaging systems. 13 February A new UN report warns that 24 percent of global land area has declined in productivity over the past 25 years due to unsustainable land-use, and soil erosion rates are about 100 times greater than nature can replenish. The European Space Agency successfully conducts the maiden launch of its new Vega rocket, transporting several satellites into orbit, including the first Polish, Hungarian and Romanian satellites. BAE Systems engineers unveil a carbon-fiber-based structural battery capable of being integrated into a device's framework, reducing weight while maintaining structural strength and power capacity. 14 February – In a groundbreaking human trial, American scientists report that damaged heart tissue in heart attack patients can be repaired with infusions of the patient's own stem cells. The treatment halved the amount of extant scar tissue within a year. 15 February – Nevada becomes the first US state to allow the testing of autonomous vehicles on US public roadways. 16 February – The speed at which someone walks may predict their likelihood of developing dementia later in life, according to researchers in the US. 20 February – Scientists report regenerating Silene stenophylla from 32,000-year-old remains. This surpasses the previous record of 2,000 years for the oldest material used to regenerate a plant. 22 February Scientists have extended the life of male mice by 15%, using an enzyme called SIRT6. Engineers at Stanford University reveal a wirelessly powered, self-propelled medical device that can travel through the bloodstream to deliver drugs, perform diagnostics or microsurgeries. NASA reports the detection of the solid form of buckyballs (buckminsterfullerene) in deep space. Researchers show that sirtuin, a class of proteins, is directly linked to longevity in mammals. 24 February – British-Italian researchers demonstrate a giant 3D printer capable of constructing a full-sized house in a single 24-hour session. The machine, which uses sand and a chemical binder as its working material, prints structures from the ground up, including stairs, partition walls and even piping cavities. 26 February Researchers publish the first images of the charge distribution in a single molecule, precisely showing the motion of electrons. The observed distribution apparently corresponds closely with predictive models. It may be possible to one day create an "unlimited" supply of human eggs to aid fertility treatment, US doctors say. 27 February The remains of two new species of prehistoric penguin are discovered – Kairuku grebneffi and Kairuku waitaki. 
Kairuku grebneffi is the largest penguin species ever discovered. 28 February IBM announces a breakthrough in quantum computing, demonstrating a qubit microchip that can preserve its quantum states up to four times longer than previous designs. Researchers estimate that Tyrannosaurus rex's bite force could exceed 57,000 newtons, more than three times that of a great white shark. 29 February – The Raspberry Pi single-board computer is commercially launched through U.K. online retailers. March 1 March – New research concludes that the Earth's oceans may be growing more acidic at a faster rate than at any time in the past 300 million years. 2 March NASA's Cassini spacecraft detects oxygen in the atmosphere of Saturn's moon Dione. A meta-analysis of 42 previous studies concludes that some consumption of chocolate may be good for the heart. 5 March – A study finds a correlation between snoring as a toddler and behavior problems later in childhood. 7 March Physicists from Fermi National Accelerator Laboratory report data suggesting that the elusive hypothesized Higgs boson (popularly dubbed the "God particle") may have been detected. Scientists successfully decode the gorilla genome, the last of the great ape genera to be sequenced. 8 March A study suggests that donor stem cells may prevent organ rejection in imperfectly matched transplant cases. The international Daya Bay neutrino experiment announces the discovery of a new type of neutrino oscillation. 9 March – US researchers announce a breakthrough in treating AIDS, using a cancer drug to attack HIV inside certain immune-system cells, which were previously difficult to reach with treatments. 12 March Researchers at the Vienna University of Technology develop a 3D printer that can print at the nano-scale and is orders of magnitude faster than previous devices. A diet high in red meat can shorten life expectancy by increasing the risk of death from cancer and heart problems, according to a study of more than 120,000 people by researchers at Harvard Medical School. Substituting red meat with fish, chicken or nuts lowered the risks, the study found. 13 March A California-based company has developed solar panels that are half the price of today's cheapest cells, and therefore cheap enough to challenge fossil fuels. Scientists have identified a potential drug that speeds up the removal of cellular waste by the cell's recycling center, the lysosome; the buildup of such waste is one of the causes of aging and degenerative diseases. 14 March A fly species, kept in complete darkness for 57 years (1,400 generations), showed genetic alterations that occurred as a result of environmental conditions, offering clear evidence of evolution. A pill which doubles the length of time that patients with advanced skin cancer can survive has gone on sale in Britain for the first time. America's coastlines are even more vulnerable to sea level rise than previously thought, according to a pair of new studies. Up to 32% more real estate could be affected by a 1-meter rise in sea level, while the population exposed to rising water is 87% higher than previously estimated. A process to "unprint" toner ink from paper has been developed by engineers at the University of Cambridge, using short laser pulses to erase words and images. 15 March – American scientists use a particle accelerator to send a coherent neutrino message through 780 feet of rock.
This marks the first use of neutrinos for communication, and future research may permit binary neutrino messages to be sent immense distances through even the densest materials, such as the Earth's core. 16 March – Physicists found no discernible difference between the speed of a neutrino and the speed of light in the latest test of the faster-than-light neutrino anomaly. 18 March Researchers have identified why a mutation in a particular gene can lead to obesity. NEC has developed "organic radical battery" (ORB) technology with a thickness of just 0.3 mm. 19 March Even if humankind manages to limit global warming to 2 degrees C (3.6 degrees F), as the Intergovernmental Panel on Climate Change recommends, future generations will have to deal with sea levels 12 to 22 meters (40 to 70 feet) higher than at present, according to research published in the journal Geology. Researchers at the RIKEN Advanced Science Institute (Japan) have developed a way to create full-color holograms with the aid of surface plasmons. The amount of photovoltaic solar capacity installed in the US more than doubled from 2010 to 2011, according to a report by the Solar Energy Industries Association (SEIA) and GTM Research. Seagate claims it has paved the way for 3.5-inch hard drives with 60TB capacities, after breaking the density threshold of 1 terabit per square inch. 20 March Astronomers have discovered the first known rectangular-shaped galaxy: LEDA 074886. New analysis by MIT shows that there is enough room underground to safely store at least a century's worth of U.S. fossil fuel emissions. 24 March – Humans hunted Australia's giant vertebrates to extinction about 40,000 years ago, the latest research published in Science has concluded. 25 March Global temperatures could rise by 3.0 °C (5.4 °F) by 2050, a new computer simulation has suggested. Canadian film director James Cameron reaches the Challenger Deep, the deepest known point in Earth's oceans, in the Deepsea Challenger submersible. Cameron is the first person since 1960 to visit the Deep, which is located in the Pacific Mariana Trench. Physicists report that the largest molecules yet tested (molecules containing 58 or 114 atoms) also demonstrate quantum wave behavior using the classic double-slit experiment. 28 March – NASA announces the name of the Martian mountain, Mount Sharp, that the Mars Science Laboratory rover (also known as "Curiosity") will explore after its planned landing in Gale Crater on 6 August 2012. 29 March "Solar tornadoes" several times as wide as the Earth have been observed in the Sun's atmosphere by the Atmospheric Imaging Assembly telescope on board NASA's Solar Dynamics Observatory (SDO) satellite. Scientists have revealed the most detailed picture of the Milky Way galaxy ever produced, with over a billion stars visible in a mosaic combined from thousands of individual images. New scanning technology has revealed that the human brain possesses an astonishingly simple 3D grid structure, with sheets of parallel neuronal fibers crossing one another at right angles. April 2 April – The British Army announces the development of a conductive smart fabric for infantry uniforms. The fabric, which should enter widespread service by 2015, will eliminate the need for heavy, vulnerable power cables, making soldiers' electronics safer, cheaper and more durable. 4 April A new, detailed record of past climate change has shown compelling evidence that the last ice age was ended by a rise in temperature driven by an increase in atmospheric carbon dioxide.
The key result from the new study is that it shows the carbon dioxide rise during this major transition ran slightly ahead of increases in global temperature. Austrian and Japanese researchers unveil solar cells that are thinner than a thread of spider silk, and flexible enough to be wrapped around a single human hair. American researchers begin a new project, funded by the National Science Foundation, to develop printable robots that can be designed and made to order by the average person in less than 24 hours. The project, which is hoped to come to fruition by 2020, could allow any individual to cheaply build automated tools for any task in their own home. 5 April Dutch and American researchers report that they have created a working quantum computer out of diamond, using the diamond's natural impurities as superimposed qubits to perform calculations. Google unveils Project Glass, which aims to develop augmented reality glasses capable of layering information such as email, real-time traffic updates and video calls over a user's field of vision. The Large Hadron Collider re-enters operation after an energy upgrade. It now has a total collision energy of 8 trillion electronvolts, a major increase over its pre-upgrade energy of 7 TeV. 6 April – An international team of researchers reports that a new, drug-resistant strain of malaria has emerged on the Thai–Cambodian border, potentially threatening global efforts to contain the disease. 8 April – American scientists reveal that transparent graphene sheets can be used to encapsulate liquids for study by electron microscopes. The discovery will greatly ease the accurate imaging of liquids at micro- and nanoscales. 10 April – The Wellcome Trust, one of the world's largest private funders of scientific research, states that it is launching a new online journal to promote the free sharing of scientific papers. The new journal, titled eLife, is part of a widespread push for open access to scientific research, and will compel researchers to make their work freely available online. 12 April A team of researchers from France's Laboratoire Univers et Théorie releases the first ever computer model simulation of the structure of the entire observable universe, from the Big Bang to the present day. The simulation has made it possible to follow the evolution of 550 billion individual particles. A report reveals that the United States invested more in renewable energy technology in 2011 than any other nation, totalling US$48 billion. China was the second-largest investor, spending US$45.5 billion on renewables. Worldwide, the combined investment in renewables reached an all-time high, at US$236 billion. A later report published by the United Nations amends these figures, stating that China invested $52 billion in renewable energy in 2011, while the US spent $51 billion. German physicists develop the world's first universal quantum computing network, linking two laboratories using entangled rubidium atoms as network nodes. An international team of researchers has used new, massively parallel DNA sequencing technology to fast-track the discovery of a breast cancer risk gene, XRCC2. DARPA, the US military's advanced research agency, offers a US$2 million prize to any team who can independently develop a rescue robot capable of multiple tasks, including climbing ladders, clearing obstacles, using power tools and driving cars. 
After studying 40 years of medical records, Swedish scientists state that sufferers of Huntington's disease are around 50% less likely to develop cancer than those without the disease. Further study may reveal the genetic mechanism behind this resistance, allowing new cancer treatments to be developed. The United Kingdom reports that it is considering the installation of undersea power cables to allow its National Grid to draw clean energy from Iceland's volcanoes. Scientists report that complexity analysis studies of the Labeled Release experiments of the 1976 Viking mission to Mars may suggest the detection of "extant microbial life on Mars." 13 April North Korea's Unha-3 orbital rocket disintegrates in mid-flight over the Yellow Sea, destroying its payload, the Kwangmyŏngsŏng-3 satellite. Analysts fear that the failed launch may raise the likelihood of North Korea conducting another nuclear weapons test. German scientists develop a fiber-based "earthquake-proof" wallpaper capable of reinforcing masonry and delaying building collapses during violent quakes. The invention could save lives by giving people more time to flee from collapsing buildings. The Pentagon places an order for advanced dual-focus contact lenses, designed to give soldiers greater visual awareness, in tandem with a new HUD system. The technology may enter the civilian market by 2014. Dutch scientists report that they have found evidence of the existence of the Majorana fermion, a particle that is its own antiparticle. The existence of the Majorana was first theorized by the Italian scientist Ettore Majorana in the 1930s. Researchers at UCLA announce that they have genetically engineered stem cells to seek out and kill HIV in mice. 15 April – Researchers claim that new satellite imagery shows an increase in the mass of some glaciers in Asia's Karakoram mountain range. This data contrasts with the wider global trend of glacial melting. 16 April – A new treatment for prostate cancer can rid the disease from nine in ten men without debilitating side effects, a study has found. 17 April – It is revealed that the Chinese and American militaries have been conducting informal war games together to help prevent military escalation in the event of a future cyber war. 18 April – Researchers at the American National Institutes of Health demonstrate a nanotechnology-based drug treatment which can successfully alleviate some symptoms of cerebral palsy (CP). The drug, which was tested in rabbits, caused a dramatic improvement of the movement disorders and brain inflammation that are characteristic of many cases of CP. 19 April A landmark study by British and Canadian scientists reveals that breast cancer can be subdivided into ten distinct types, with its aggressiveness determined by certain genes. The new data may make breast cancer diagnoses much more precise, and allow cancer treatments to be more effectively tailored to each patient. Led by British scientists, a consortium including American, Belgian and Danish scientists successfully develop synthetic DNA compounds, dubbed "XNA", which demonstrate evolution when faced with selective pressure. British researchers identify key genes that "switch off" as the human body ages. These genes may be targeted by future anti-aging therapies. 20 April Scientists say the notoriously dry continent of Africa is sitting on a vast reservoir of groundwater. A NASA-backed group of universities begins testing a GPS-derived earthquake warning system. 
The system, which uses satellite data to track seismic activity in real-time, may allow accurate earthquake and tsunami warnings to be issued up to ten times faster than is currently possible. After three years of development, IBM reveals a new, ultra-lightweight lithium-air battery, offering greater energy density than any current lithium-ion battery. The new battery may permit the production of electric vehicles with far greater range and battery life than current models. 21 April – Scientists at Chicago's Northwestern University successfully trial a brain-computer interface capable of restoring naturalistic muscle movements in paralyzed rhesus monkeys. It is hoped the invention will eventually be approved to treat paralytic or brain-damaged humans. 22 April – Intel Corporation releases its new Ivy Bridge microprocessors – the world's first commercial 22 nanometer microchips, featuring increased processing power and energy efficiency. 24 April – Planetary Resources, a startup company backed by Google billionaires Larry Page and Eric Schmidt and film director James Cameron, announces plans to develop technology to survey and mine asteroids for minerals by 2020. The company plans to launch the first element of its project, a network of orbital surveying telescopes, by 2014. 26 April Australian scientists develop a multi-layered, silica-based hydrophobic coating with greater durability than previous such coatings. The invention may be used to make self-cleaning fabrics and antibacterial medical equipment. Researchers develop a crystalline quantum computer, composed of just 300 atoms, that theoretically is so powerful that it would take a conventional computer the size of the known universe to match it. Scientists report that lichen survived over 34 days under Martian conditions in the Mars Simulation Laboratory (MSL) maintained by the German Aerospace Center (DLR). 27 April Researchers identify 53 key neurons in the brains of homing pigeons which may explain how the birds navigate using Earth's geomagnetic field. The British company Reaction Engines begins testing the advanced engine precooler system intended for its reusable Skylon spaceplane. If the tests are successful, the hybrid-rocket Skylon – designed to vastly reduce the cost of orbital spaceflight – may begin flying cargo to Earth's orbit by 2020. May 1 May Scientists report that a new genetic test could diagnose the risk of breast cancer years before the disease actually develops, allowing much more effective early treatment. French researchers successfully create silicene, a one-atom-thick sheet of silicon that is analogous to the much-vaunted graphene. Silicene is theorized to retain silicon's excellent semiconductor properties even at extremely small scales, and could allow the simple mass production of efficient nanoscale computers. 2 May – The European Space Agency selects the Jupiter Icy Moon Explorer (JUICE) proposal for its next major space exploration program. The robotic JUICE probe, which is planned to launch in 2022, will conduct in-depth studies of the Jovian moons Callisto, Europa and Ganymede. 3 May – In the United Kingdom's first successful ocular implant trial, two men blinded by retinitis pigmentosa have their sight partially restored by prototype microchip implants. 8 May – Claire Lomas, a paralyzed British woman, becomes the first person to complete a marathon using a bionic mobility suit. The ReWalk suit allowed her to complete the London Marathon in 16 days. 
9 May – A detailed design is released for a practical artificial leaf – a potentially revolutionary milestone in the development of sustainable energy. 11 May American researchers report that preventable infections are the leading cause of child mortality worldwide. Of the 7.6 million children who died before their fifth birthday in 2010, over 60% died of infections such as pneumonia. Scientists at the University of Science and Technology of China use quantum teleportation to transmit photons over a record distance. The teleportation method, which utilises quantum entanglement to transfer quantum states between widely separated points, could allow the development of ultra-secure satellite communications. 12 May – Scientists refute the theory that sex-linked chromosomes, such as the male Y chromosome, will become extinct. A new study shows that, although such chromosomes have shrunk and lost genetic material, they remain crucially important predictors of fertility. 13 May – Researchers claim that there is a strong correlation between the loss of biodiversity and the disappearance of endangered languages and cultures. 14 May Researchers extend the lifespan of mice by 24%, using gene therapy applied when the mice were adults. The success of the technique, which involved inducing cells to produce more of the enzyme telomerase, suggests that adult life extension is feasible. Scientists grow healthy bone from human embryonic stem cells. This breakthrough could allow much quicker and easier bone grafts for human patients. Scientists at California's Stanford University invent a working bionic eye powered only by focused light. Though currently a prototype, the device could eventually restore the sight of millions of people suffering from eye diseases such as macular degeneration and retinitis pigmentosa. 15 May The United States announces a national plan to develop an effective treatment for Alzheimer's disease by 2025. American scientists develop a device which uses genetically engineered viruses to generate electricity. The invention could allow the development of ubiquitous piezoelectric micro-generators which gather energy from everyday vibrations such as closing doors. 16 May American surgeons successfully restore hand function to a partly paralyzed man using a pioneering nerve transfer technique. Following the surgery and subsequent physiotherapy, the patient – who entirely lost the use of his hands in a car accident – can now feed himself and even write with some assistance. Japanese scientists develop a wireless data transmission system which operates in the currently unregulated terahertz frequency spectrum. The system can transmit data at a rate of 3 Gbit/s, a record for wireless data transmission; it could potentially be upgraded to transmit at 100 Gbit/s. The USGS and IAU officially name areas of Mars, including Aeolis Mons, Aeolis Palus and Robert Sharp Crater, relevant to the landing of NASA's Curiosity Mars rover on 6 August 2012. Engineers at Virginia Tech build the world's first 3D-printing vending machine, which allows any member of the public to rapidly print objects on demand by submitting a blueprint to the machine. 20 May – An annular solar eclipse takes place. 22 May – American researchers demonstrate a rewritable DNA memory capable of storing digital data. 23 May – In a breakthrough for adult stem cell therapy, Israeli scientists grow healthy heart muscle cells from the skin cells of patients.
This development could offer a new treatment for heart failure patients. 25 May SpaceX's unmanned Dragon spacecraft completes a successful rendezvous with the International Space Station, becoming the first commercial spacecraft ever to do so. South Africa, Australia and New Zealand agree to co-host the Square Kilometre Array (SKA), the world's largest single radio telescope project. The SKA, which will comprise thousands of individual antennae with a combined signal-collecting area of approximately one square kilometre, is expected to begin operations by 2025. American researchers unveil a cloaking device capable of slowing light to a virtual halt within an array of 25,000 microscopic lenses. Archaeologists announce the discovery of 42,000-year-old bone flutes in a German cave – the oldest musical instruments yet discovered. 29 May A "road train" of wirelessly linked autonomous vehicles successfully completes a motorway journey, in Spain's first public test of autonomous vehicles. Iran claims to have developed antivirus software capable of defending against the powerful Flame cyberweapon, which has infected computer networks across the Middle East. 30 May Scientists successfully sequence the tomato genome, and state that tastier and more pesticide-resistant tomato varieties can be engineered for commercial use within five years. Geologists report that supervolcanoes can develop much faster than previously suspected – erupting within just a few hundred years of their formation, instead of tens of thousands of years. 31 May SpaceX's Dragon spacecraft returns to Earth following its successful test mission to the International Space Station. Scientists develop a nanotechnology-based immunoassay test which is potentially three million times more sensitive than conventional tests. The new test could revolutionise the early detection of maladies such as cancer and Alzheimer's disease. The International Union of Pure and Applied Chemistry (IUPAC) officially names synthetic elements 114 and 116 "flerovium" and "livermorium", respectively. Sharp Corporation develops a solar cell with the highest solar energy conversion efficiency yet achieved. A conversion efficiency of 43.5% was obtained by using a concentrator triple-junction compound cell, combining a focusing lens with multiple layers of light-absorbing compounds. June 1 June In a major milestone for neuroscience, researchers publicly release the first installment of data from their project to construct the first whole-brain wiring diagram of a vertebrate brain, that of a mouse. Scientists publish the results of a successful neurorehabilitation study, in which paralysed rats regained the ability to walk and even sprint after receiving targeted electrochemical therapy. The rats' damaged spinal cords were stimulated with chemicals and implanted electrodes, and a robotic assistive harness was used to "teach" the rats to walk again. Australian researchers publish a new study revealing how the zebrafish heals its spinal cord after injury. According to the study, a specialised protein prevents paralysing glial scars forming when zebrafish suffer spinal cord damage. It is hoped that this protein may be exploited for the treatment of paralysed humans. 4 June – A partial lunar eclipse takes place. 5 June American glass manufacturer Corning Inc. unveils an ultra-thin, flexible glass dubbed "Willow Glass". The invention, which is similar to Corning's widely used Gorilla Glass, could be used in the development of flexible computer displays and ultra-thin smartphones.
The solar-powered Solar Impulse aircraft lands in Morocco after a 19-hour flight from Spain, marking the first intercontinental flight of a purely solar-powered aircraft. 5–6 June – A transit of Venus, one of the rarest predictable astronomical phenomena, occurs. Another such transit will not occur until the year 2117. 6 June An international group of scientists warns that population growth, widespread destruction of natural ecosystems, and climate change may be driving the Earth toward an irreversible change in the biosphere – a planet-wide "tipping point". Scientists at Sweden's Karolinska Institutet achieve a breakthrough in creating a new vaccine, CAD106, for Alzheimer's disease. IPv6, a new version of the Internet Protocol, is officially launched, offering trillions of possible new web addresses. Wales becomes the first nation in the world to have its plants DNA-barcoded. A tiny fragment of leaf, seed or root, or even a single pollen grain, can now be used to identify species. German scientists develop zeolite thermal storage pellets that can store four times as much thermal energy as water, and can retain their energy almost indefinitely. 7 June According to NOAA scientists, the average temperature for the contiguous United States during May 2012 was 64.3 °F, 3.3 °F above the long-term average, making it the second-warmest May on record. The month's high temperatures also contributed to the warmest spring, warmest year-to-date, and warmest 12-month period the United States has experienced since recordkeeping began in 1895. Scientists at the University of Washington successfully sequence the genome of an 18-week-old human fetus in the womb by taking blood samples from the mother. In future, millions of children could be safely screened for genetic disorders in this way. The US Naval Research Laboratory has developed a form of underwater solar energy. A team of New Zealand scientists report that measuring the ratio of hydrogen and methane levels on the planet Mars may help determine the likelihood of life on Mars. According to the scientists, "...low H2/CH4 ratios (less than approximately 40) indicate that life is likely present and active." In a separate study, a team of Dutch scientists associated with MIT reported methods of detecting hydrogen and methane in extraterrestrial atmospheres. 8 June American researchers report that they have successfully developed a key insulation technology required for the ITER nuclear fusion demonstration reactor. American scientists build a tabletop-sized X-ray laser, vastly smaller and cheaper than most such devices. The invention could permit ultra-high-resolution imaging of microscopic structures such as living cells. British researchers begin trialling "smart" hand pumps equipped with transmitters that can immediately detect and report mechanical breakdowns. This will allow vital water pumps to be fixed much more quickly in rural Africa. Japanese researchers grow a tiny, functioning human liver from stem cells. 10 June Canadian scientists develop a new method of accurately visualising complex protein interactions. The development could have broad implications for the biomedical and bioengineering sciences, including the design of functional bionanomachines. 11 June The European Extremely Large Telescope is approved for construction by member states of the European Southern Observatory organization.
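The 6 June IPv6 launch noted above is easier to appreciate with a little arithmetic: IPv6 addresses are 128 bits wide, a property of the protocol itself, so the nominal address space dwarfs that of IPv4. The comparison below is an illustrative sketch added here, not part of the original entry.

# Size of the IPv6 address space compared with IPv4 (illustration only).
ipv6_addresses = 2 ** 128       # IPv6 uses 128-bit addresses
ipv4_addresses = 2 ** 32        # IPv4 uses 32-bit addresses
print(f"{ipv6_addresses:.3e}")            # about 3.4e+38 possible addresses
print(ipv6_addresses // ipv4_addresses)   # about 7.9e+28 times the IPv4 space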
12 June Scientists unveil a new porous metal-organic framework, NOTT-202, capable of capturing and storing excess carbon dioxide within its structure. An extensive study concludes that several factors aligned to cause the extinction of woolly mammoths. The IARC, a WHO research agency, concludes that diesel exhaust exposure can cause cancer. A123 Systems develops an improved version of its lithium-ion battery cells, potentially lowering the cost of electric vehicles. 13 June NASA successfully launches its NuSTAR X-ray space telescope. Scientists fully decode the bonobo genome. 14 June Swedish surgeons report having implanted a patient with a working lab-grown vein created with the patient's own stem cells. Chinese researchers report that fields of GM crops can be beneficial to nearby non-GM plants by encouraging the proliferation of beneficial predator insects, which reduce the need for pesticides. Examples of cave art in Spain are dated to around 38,000 BC, making them the oldest examples of art yet discovered in Europe. Scientists theorize that the paintings may have been made by Neanderthals, rather than by Homo sapiens. 2012 LZ1, a large near-Earth asteroid, passes by the Earth. Physical activity levels are declining worldwide, a trend that raises major health concerns, according to a new study. New research warns that pH levels along the US western seaboard will drop to 7.8 by 2050, with serious consequences for many organisms. 15 June American scientists report a possible genetic link between diabetes and an increased risk of Alzheimer's disease. NASA scientists report that Voyager 1 may be very close to entering interstellar space and becoming the first human-made object to leave the Solar System. 16 June China successfully launches the manned Shenzhou 9 spacecraft on a mission to the Tiangong-1 space station module. Shenzhou 9 carries a crew of three, including China's first female astronaut, Liu Yang. The United States Air Force's robotic Boeing X-37B spaceplane returns to Earth after a successful 469-day orbital mission. 18 June – Researchers design a robot that can outperform humans in identifying a wide range of natural materials according to their textures. The invention paves the way for advancements in prostheses, personal assistive robots and consumer product testing. 19 June – Men who are heavy tea drinkers may be more likely to develop prostate cancer, according to new research. 20 June Engineers build a working 50-gigapixel camera by synchronizing 98 tiny cameras in a single device. Renewable energy sources can fill 80 percent of American electricity demand by 2050, according to a new report. 21 June Scientists develop the world's first magnetic emulsion, based on magnetic surfactant molecules. The invention could be used to clean up oil spills or even guide medicines through human blood vessels. 2.8-million-year-old climate data is reconstructed from sediment cores recovered from Lake El’gygytgyn, Russia. The data is considerably older than the 800,000-year-old ice cores found in the Antarctic. 23 June – 100 years after the birth of English cryptanalyst and computer pioneer Alan Turing, British experts cast doubt on the long-held notion that Turing's death was a suicide. 24 June China successfully completes its first manual orbital rendezvous, as the manned Shenzhou 9 spacecraft docks with the Tiangong-1 module without the assistance of automated docking systems. Rates of sea level rise are increasing three-to-four times faster along portions of the U.S.
Atlantic Coast than globally, according to a new U.S. Geological Survey report published in Nature Climate Change. Lonesome George, the last known individual of the Pinta Island tortoise subspecies, dies in Galápagos National Park, probably aged over 100, making the subspecies presumptively extinct. 26 June – The discovery of a new mineral, panguite, is announced, with samples found in the Allende meteorite. 27 June Physicists collide gold ions together to produce a quark–gluon plasma, similar to that which existed in the first instant after the Big Bang. In doing so, they momentarily produce what Guinness World Records reports is the highest man-made temperature ever: 4 trillion degrees Celsius (7.2 trillion degrees Fahrenheit). Scientists develop a new, high-precision method for modifying organic compounds with new active molecules, easing the development of new medicines. Scientists associated with the University of the Witwatersrand, Johns Hopkins University and other international universities report that early humans, such as Australopithecus sediba, may have lived in savannas but ate fruit and other foods from the forest – behavior similar to modern-day savanna chimpanzees. 28 June – An international team of astronomers discovers evidence that our Milky Way had an encounter with a small galaxy or massive dark matter structure perhaps as recently as 100 million years ago, and as a result of that encounter it is still ringing like a bell. 29 June American researchers demonstrate "paint-on" batteries, composed of active layers just 0.5 mm thick, capable of being spray-painted onto almost any surface. The technology could allow for the creation of lighter, more flexible electronic devices with a wide range of form factors. Dutch and German scientists unveil a new brain-scanning functional magnetic resonance imaging device that allows paralyzed people to type out words using only their thoughts. Scientists discover the remains of an enormous, 3-billion-year-old impact near the Maniitsoq region of West Greenland, a billion years older than any other known collision on Earth. July 1 July – The London Symphony Orchestra performs a musical composition created without human input by the Iamus computer. 2 July American researchers use a 3D printer to build a sugar framework for growing an artificial liver. The sugar structure simulates a human vascular system, allowing artificial blood vessels to be grown to support the liver. Scientists use ultrasound to display 3D video on a modified liquid soap membrane, creating the world's thinnest transparent video display. Graphene sheets with precisely controlled pores can purify water more efficiently than existing methods, according to scientists at MIT. Scientists report that indirect evidence supporting the existence of the Higgs boson has been found. 3 July Researchers photograph the shadow of a single atom for the first time. A study led by Kansas State University discovers a new quantum state, which allows three, but not two, atoms to stick together. 4 July CERN physicists announce the discovery of a particle consistent with the Standard Model's Higgs boson at a "5 sigma" level of significance, indicating roughly a one-in-3.5-million probability that such a signal would arise by chance if no such particle existed. American scientists develop an electrically conductive gel that can easily be printed onto surfaces with a standard inkjet printer, allowing the rapid and simple production of a wide range of electronics.
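The "5 sigma" figure quoted for the 4 July CERN announcement above follows a standard statistical convention: it is the one-sided tail probability of a five-standard-deviation excess under a normal distribution, which works out to roughly one chance in 3.5 million. The minimal Python sketch below, using SciPy, reproduces that conversion; it illustrates the convention only and is not a description of CERN's actual analysis.

# Convert a 5-sigma significance into the corresponding one-sided p-value.
# Illustration of the statistical convention only.
from scipy.stats import norm

p_value = norm.sf(5)         # P(Z > 5) for a standard normal distribution
print(p_value)               # ~2.87e-07
print(round(1 / p_value))    # roughly 3.5 million, i.e. "one chance in 3.5 million"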
Researchers have identified seven genetic markers linked with a woman's breast size, according to a new study. 5 July – Scientists have produced the most detailed footage of a single neuron ever seen. In the timelapse video, individual proteins are shown moving through different pathways within the cell. 6 July UCLA engineers develop an ultra-high-speed optical microscope capable of quickly and reliably identifying cancer cells in human blood, paving the way for faster, cheaper and more reliable cancer diagnoses. Scientists construct the most biologically accurate robotic legs yet built, closely mimicking the motion of human leg muscles. 7 July – Non-human animals including all mammals and birds, and many other creatures including octopuses possess consciousness, according to the new Cambridge Declaration on Consciousness. 9 July Scientists discover a new molecule that could potentially make teeth cavity-proof. Scientists have, for the first time, directly detected part of the invisible dark matter "scaffolding" of the universe, where more than half of all matter is believed to reside. 10 July A new biofuel production process created by Michigan State University researchers produces energy more than 20 times higher than existing methods. In two new scientific articles, researchers refute NASA's claims that bacteria can successfully incorporate arsenic into their DNA. American scientists develop an electrode-based T-shirt capable of charging cellphones on the move. It is reported that staying seated for long periods of time can reduce the human lifespan, unless mitigated by regular strenuous exercise. 11 July New research from the University of Manchester indicates that graphene – already noted for its strength and conductivity – is capable of repairing its structure without human assistance by absorbing loose carbon atoms from its vicinity. NASA's Cassini space probe images a huge gaseous vortex shrouding the south pole of Saturn's moon Titan. Virgin Galactic unveils its privately developed satellite launch vehicle, LauncherOne, and confirms that its SpaceShipTwo spaceplane will soon begin powered test flights. The Hubble Space Telescope discovers a fifth moon of the dwarf planet Pluto. 13 July – A new survey shows that lemurs are far more threatened by extinction than previously thought. 15 July – It is reported that Dracunculiasis, also called guinea worm disease (GWD), is on the verge of being wiped out – becoming only the second human disease after smallpox to be eradicated. 16 July – A major milestone in HIV prevention is reached, as the FDA approves an existing drug, Truvada, for uninfected adults at high risk of acquiring the disease. 19 July An iceberg twice as large as Manhattan reportedly breaks off from Greenland's Petermann Glacier. A new nanoparticle coating with self-repairing surface functionality has been developed. The coating uses polymer stalks tipped with functional compounds to repair surface damage. Astronomers have discovered the most ancient spiral galaxy yet, dating back 10.7 billion years. 20 July A giant potable aquifer is discovered in Namibia, potentially offering enough drinkable water to sustain the country for centuries. Using mice, researchers have grown sweat glands from newly identified stem cells. 23 July American scientists create an artificial jellyfish out of silicone and lab-grown heart cells. The construct is capable of swimming in a similar manner to real jellyfish when stimulated with an electric current. 
Researchers report that 14% of British stomach cancer cases could be prevented by reducing public salt intake. Researchers create the first complete computer model of a living organism, fully simulating a bacterium. 25 July Satellite data reveals that 97% of Greenland's ice is undergoing a thaw, the greatest level of ice melt ever recorded on the landmass. A rift in the Antarctic rock as deep as the Grand Canyon is increasing ice melt from the continent, researchers say. The International Space Station's Alpha Magnetic Spectrometer instrument reports that it has recorded 18 billion cosmic ray events since its installation in 2011. 26 July The rapid decline in Arctic sea ice is at least 70% due to man-made global warming, according to a new study, and may even be up to 95% caused by humans – a far higher proportion than scientists had previously thought. Using complex algorithms, researchers have found that pop songs over the last 50 years have become increasingly louder and more bland in terms of the chords, melodies and types of sound used (a simple illustration of quantifying such a trend appears below). Using a bone marrow transplant, two men have been "cured" of HIV infection. Ageing termite workers are discovered to use a toxic crystalline structure to "self-destruct", spraying enemy insects with toxins in defence of their termite mounds. An American gunsmith reports successfully test-firing a pistol built with a 3D-printed plastic lower receiver, an early step toward printable firearms. 27 July In preparation for the beginning of the 2012 Summer Olympics in London, British telecom companies create a hugely expanded network infrastructure in the city, including over 1,000 new Wi-Fi hotspots and thirty additional mobile phone masts. Swiss scientists claim that Earth's Moon may have been formed in a glancing "hit and run" collision with a large, fast-moving protoplanet. Japanese women have fallen behind Hong Kong citizens in life expectancy for the first time in 25 years, dropping from 86.3 years in 2010 to 85.9 years in 2011. This was partly due to the earthquake and tsunami of March 2011, according to a report by Japan's health ministry. American scientists use microbes to cleanly convert electricity into methane gas, potentially offering a new form of renewable energy. 29 July – Major technology companies predict that as many as 50 billion electronic devices may be wirelessly connected worldwide by 2020, as automated machine-to-machine communication sees increasing use in retail and manufacturing. 31 July – People with even minor symptoms of mental illness have a lower life expectancy, according to a large-scale population-based study published in the British Medical Journal. August 1 August – Researchers claim to have resolved one of the biggest controversies in cancer research – discovering the specific cancer cells that seem to be responsible for the regrowth of tumours. 2 August Scientists in Antarctica announce that they have discovered what appears to be the remains of an ancient rainforest from the early Eocene period buried deep beneath the ice. A study published in Animal Behaviour finds that female spiders that cannibalize their mates produce much healthier offspring than non-cannibalizing spiders, supporting a link between sexual cannibalism in the animal kingdom and reproductive success. 3 August Deforestation in the Amazon rainforest has fallen again in the past 12 months, according to preliminary data published by Brazil's National Institute for Space Research.
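The 26 July finding above that pop songs have grown steadily louder rests on a large-scale statistical analysis of an audio corpus. The sketch below is not the researchers' method; it is only an illustrative example, using hypothetical data, of how a loudness trend over time could be quantified with a simple linear fit:

```python
# Illustrative sketch only: not the method used in the study.
# Assumes a hypothetical table of mean loudness (in dB) per release year for a song corpus.
import numpy as np

years = np.arange(1960, 2011)
rng = np.random.default_rng(0)
# Hypothetical data: a gentle upward drift in loudness plus year-to-year noise.
loudness_db = -14.0 + 0.08 * (years - 1960) + rng.normal(0.0, 0.5, years.size)

slope, intercept = np.polyfit(years, loudness_db, 1)  # least-squares linear fit
print(f"estimated trend: {slope:+.3f} dB per year")   # a positive slope means songs are getting louder
```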
American and Canadian researchers develop a medical spray which uses human skin cells and coagulant proteins to speed up the healing of open wounds such as leg ulcers. In medical trials, the "skin spray" proved over 20% more effective than other ulcer treatments. 6 August NASA's Mars Science Laboratory mission successfully lands Curiosity, the largest Mars rover yet built, in Mars' Gale Crater. Papua New Guinea's government has approved the world's first commercial deep-sea mining project, despite strong environmental concerns. 7 August – New brain research refutes the results of earlier studies that cast doubts on free will. 8 August Anthropologists in northern Kenya unearth fossils of a previously unconfirmed species of human that lived approximately 2 million years ago. Almost one-quarter of the world's population lives in regions where groundwater is being used up faster than it can be replenished, concludes a comprehensive global analysis of groundwater depletion published in Nature. 9 August – American and South Korean engineers build a flexible, worm-like robot that moves by mimicking the contraction of an earthworm. The robot's artificial muscle is based on a nickel–titanium wire that expands and contracts in response to electric currents. It can squeeze through tight spaces and absorb heavy impacts, and could be used in future for covert reconnaissance missions. 10 August – Engineers successfully test a new algorithm that allows autonomous UAVs to fly through complex structures without requiring GPS navigation. 11 August The Perseid meteor shower reaches its peak for 2012, becoming widely visible in the Northern Hemisphere. Experts declare the 2012 London Olympics to be the "greenest Olympics ever", praising its use of recycled materials and temporary venues, and noting the improvements made to London's transport infrastructure. Sea ice in the Arctic is disappearing at a far greater rate than previously expected, according to data from the first purpose-built satellite launched to study the thickness of the Earth's polar caps. 12 August – Scientists discover a single genetic switch that triggers the loss of brain connections in humans, and also causes depression in animal models. The findings could lead to more effective antidepressant therapies. 13 August South Korean researchers develop a cheap electronic ink based on tiny rectennas, capable of transmitting data over short distances. The printable invention could potentially revolutionise the field of augmented reality. A new class of polymers has been discovered that are resistant to bacterial attachment. These new materials could lead to a significant reduction in hospital infections and medical device failures. US wind energy reaches 50 gigawatts of capacity. 14 August Boeing's X-51 hypersonic scramjet prototype is destroyed during a powered test flight after a control fin failure. Scientists from Singapore shrink the 1972 Playboy centerfold image of Swedish model Lena Soderberg to the width of a human hair. It is hoped that this new miniaturization method will lead to more efficient watermarks or covert messages. 15 August – In a major breakthrough, an international team of scientists has proven that addiction to morphine and heroin can be blocked, while at the same time increasing pain relief. 16 August Researchers have finally found a compound that may offer the first effective and hormone-free birth control pill for men. 
(Science Daily) British scientists develop the world's first room-temperature maser, using a crystal of p-Terphenyl to modify a commercial medical laser to produce coherent microwave emissions without the need for expensive magnets and coolant. The maser could be used to develop more sensitive medical scanners and radio telescopes. (BBC) Harvard University scientists develop a flexible, octopus-inspired robot capable of rapidly camouflaging or advertising itself by pumping liquid dyes into channels on its surface. The relatively inexpensive robots could be used in a variety of fields, from surgery to search-and-rescue to covert operations. (BBC) 17 August Jennifer Doudna and Emmanuelle Charpentier publish a pioneering paper on CRISPR-mediated programmable genome editing. A group of South Korean scientists has reportedly developed a carbon battery for electric vehicles capable of charging up to 120 times faster than standard batteries. (Inhabitat) Researchers have demonstrated a way to potentially "hack into" a person's brain, using BCI technology. (ExtremeTech) (Usenix) Researchers at Harvard's Wyss Institute for Biologically Inspired Engineering successfully store 5.5 petabits of data – around 700 terabytes – in a single gram of DNA, breaking the previous DNA data density record by a thousand times. (ExtremeTech) 18 August – Scientists in the United States report that they have found a new family of spiders in the caves of California and Oregon. It is the first such discovery in North America for more than 140 years. (BBC) 19 August – Scientists are reportedly close to developing a baldness cure. (The Telegraph) 20 August – The first evidence of a planet's destruction by its aging star has been discovered by an international team of astronomers. (Penn State Science) 21 August MIT researchers report that a genetically modified organism could turn carbon dioxide or waste products into a gasoline-compatible transportation fuel. (MIT) A new study of nine coastal cities around the world suggests that Shanghai is most vulnerable to serious flooding later this century. (Science Daily) Life in the world's oceans is facing a potential mass extinction, largely due to human activity, say researchers. (Huffington Post) Researchers have developed an "electronic nose" prototype that can detect small quantities of harmful airborne substances. (Science Daily) Scientists have identified the crucial role of a protein called Mof in the epigenetics of stem cells. The protein helps prime stem cells to become specialized cells in mice. (Science Daily) Analysts report that robotic technology is seeing increased use in the global mining industry, as mining and drilling companies seek to reduce personnel costs by installing autonomous trains, trucks, drills and underwater vehicles. (Technology Spectator) 22 August LG Electronics unveils the world's largest commercial ultra-definition TV, boasting four times the resolution of 1080p high-definition screens. (BBC) NASA names the Curiosity rover's Martian landing site "Bradbury Landing", in honour of the American science fiction author Ray Bradbury, who died in June 2012 aged 91. Meanwhile, Curiosity conducts a successful short-range test drive, proving that its mobility system is in nominal condition. (The Guardian) (NASA) A large-scale test of smart vehicle data sharing begins in Ann Arbor, Michigan. 
Over the course of the year-long trial, around 2,800 vehicles will be fitted with vehicle-to-vehicle wireless communications, allowing them to share data about their movements and alert their drivers if they are at risk of collision. Such technology could be used in future to drastically reduce traffic accidents. (BBC) 23 August – New research links the origins of Indo-European languages with the spread of farming from Anatolia approximately 8,000-9,500 years ago. (Science Daily) 25 August NASA's Voyager 1 crosses the heliopause and enters interstellar space, the first human-made object to do so. Researchers discover a promising new drug target for the treatment and prevention of heart failure. (Science Daily) (ESC) 26 August Besse Cooper, at the time the world's oldest living human, celebrates her 116th birthday, becoming one of only eight people in recorded history to indisputably do so. (Loganville-Grayson Patch) Miniature surgical nets could be used to safely extract dangerous blood clots from the brains of stroke patients, potentially alleviating symptoms such as speech loss and paralysis, according to two new medical studies. (BBC) 27 August – Young people who smoke cannabis run the risk of a significant and irreversible reduction in their IQ, according to one of the largest cannabis studies ever carried out. (BBC) 28 August – Three decades after its last sighting, the Japanese river otter is declared extinct. (The Japan Times) 29 August Scientists report the discovery of two new exoplanets orbiting a binary star – the first such planetary system yet discovered. (Wired) Caloric restriction fails to extend primate lifespan, according to the results of a long-term study. (Extreme Longevity) Large volumes of methane – a potent greenhouse gas – could be locked beneath the Antarctic ice, according to a new study. (BBC) (The Guardian) Better management of agricultural systems could provide enough food for the expected global population of 9 billion by 2050, according to a new study. However, the study ignores factors such as climate change and geopolitics. (Science Daily) A cost analysis of the technologies needed to transport materials into the stratosphere to reduce the amount of sunlight hitting Earth and therefore reduce the effects of global climate change shows that they are both feasible and affordable. (Science Daily) In a world first, astronomers at Copenhagen University report the detection of a specific sugar molecule, glycolaldehyde, in a distant star system. The molecule was found around the protostellar binary IRAS 16293-2422, which is located 400 light years from Earth. Glycolaldehyde is needed to form ribonucleic acid, or RNA, which is similar in function to DNA. This finding suggests that complex organic molecules may form in stellar systems prior to the formation of planets, eventually arriving on young planets early in their formation. (National Geographic) Scientists at the University of Liverpool are leading a £1.65 million project to produce and test the first nanomedicines for treating HIV/AIDS. (University of Liverpool) 30 August – South African scientists claim that a breakthrough drug cures all strains of malaria. Clinical trials on humans are set to start in 2013. (PopSci) 31 August Researchers successfully perform the first implantation of an early prototype bionic eye with 24 electrodes. 
(Science Daily) Swedish roboticists begin a crowdsourcing project to collect thousands of 3D Kinect images of household objects which can be used to improve the navigation capabilities of domestic robots. (BBC) A gaze-tracking smart television that can be controlled by the eye movements of users is unveiled at a Berlin trade show. (BBC) September 1 September When they encounter a fallen bird, western scrub jays call out and gather around the body in a funeral-like display, scientists discover. (BBC) NASA scientists report that polycyclic aromatic hydrocarbons (PAHs), subjected to interstellar medium (ISM) conditions, are transformed, through hydrogenation, oxygenation and hydroxylation, into more complex organics. This process is described as "a step along the path toward amino acids and nucleotides, the raw materials of proteins and DNA, respectively". Furthermore, as a result of these transformations, the PAHs lose their spectroscopic signature which could be one of the reasons "for the lack of PAH detection in interstellar ice grains, particularly the outer regions of cold, dense clouds or the upper molecular layers of protoplanetary disks." (Space.com) 2 September – Austrian scientists develop a 3D printing method which can construct complex microscopic structures out of individual molecules. (Gizmag) 3 September – Swiss engineers build a versatile, self-righting all-terrain legged robot, similar to Boston Dynamics' BigDog military robot. (TechCrunch) (CLAWAR2012) 4 September The UK Office for National Statistics estimates that almost all British children born in 2012 could live to the age of 100, assuming recent improvements in healthcare and living standards continue. (BBC) A formal study finds little evidence of health benefits from organic foods. (Science Daily) (Ann. Intern. Med.) British scientists develop a smart, pressure-mapping carpet with an optical layer that can raise the alarm if it detects sudden falls. The carpet can also map and record walking patterns over time, allowing doctors to track movement problems in elderly patients. (New Scientist) A physically active lifestyle can lower the risk of breast cancer by up to 13%, according to the largest-ever study of its kind. (Cancer Research UK) Coastal erosion due to rising sea levels may have been "dramatically underestimated", according to a new scientific model. (Science Daily) 5 September An international research team achieves quantum teleportation through free space over a record-breaking distance of 143 kilometres. (Science Daily) Scientists publish (in Science, Nature and elsewhere) the most detailed analysis to date of the human genome, revealing that much more of our genetic code is biologically active than previously thought, and largely disproving the notion of junk DNA. (BBC) (Nature News) Increased precipitation and river discharge in the Arctic has the potential to speed climate change, according to the results of a new study. (Science Daily) A new report by Oxfam suggests that the full impact of climate change on future food prices is being greatly underestimated. (Oxfam) NASA's Dawn spacecraft departs the asteroid 4 Vesta for the dwarf planet Ceres, which it is expected to reach in 2015. (USA Today) (Space.com) (NASA) 6 September Japan sets a new world record for scientific ocean drilling, boring deeper below the seafloor off the Shimokita Peninsula than any previous expedition. (Science Daily) A regular intake of fish oils, together with moderate exercise, significantly helps to slow aging decline, according to a recent trial.
(BBC) (British Science Festival) Researchers in the US produce the shortest-ever laser pulses, with a duration of 67 attoseconds. Such "attosecond science" will make it possible to observe some of the briefest microscopic events in the universe, such as electrons moving in their orbitals, in real time. (BBC) (Opt. Lett.) DARPA's cheetah legged robot prototype is recorded running faster than Usain Bolt, the world's fastest sprinter, in a treadmill test. (The Register) 7 September Harvest Automation begins deliveries of its HV-100 agricultural robot, a commercial automaton capable of navigating around obstacles and working in teams to perform horticultural tasks such as pruning and spraying crops. (The Economist) Flooded mines could supply 40% of Glasgow's heating, say geologists. (The Guardian) (British Geological Survey) 9 September – If it is fully harnessed, wind energy could easily meet all of the world's long-term electricity demand, according to a new study. (Science Daily) 10 September A new scientific model suggests that even more extrasolar planets could harbour life than previously estimated. The model assumes that subsurface liquid water could host alien life, in addition to the surface water that scientists are searching for on nearby exoplanets. (BBC) Caribbean coral reefs are on the verge of collapse, with less than 10% of the reef area showing live coral cover. (The Guardian) (IUCN) 12 September UK researchers report a major advance in the treatment of deafness, using stem cells to successfully restore hearing in animals for the first time. (BBC) A new species of monkey is identified in the Democratic Republic of Congo. Found in remote forests, it is only the second new monkey species to be discovered in Africa in 28 years. (The Guardian) Intel Corporation reveals details of its new Haswell microarchitecture, a 22 nanometer microchip family offering unprecedented computing power and energy efficiency for consumer electronics. The first commercial Haswell-powered devices are expected to emerge in 2013. (The Register) Microsoft unveils a patent for a 3D video gaming system that would allow real-time video to be projected on the walls of any room, creating a 360-degree game environment to immerse players. (BBC) 13 September Small spherical "blueberries" found in Martian rocks may have been formed by microbes, possibly indicating that life existed on Mars in the distant past. (Life Scientist) UNICEF reports that global child mortality rates have decreased significantly in recent years. Whereas approximately 12 million children died before their fifth birthday in 1990, by 2011 this figure had dropped to 6.9 million. This improvement is reportedly due to a combination of rising living standards, foreign aid and broader immunisation. (AFP) An IBM team in Zürich has published single-molecule images so detailed that the type of atomic bonds between their atoms can be discerned. (BBC) Scientists identify five genes that determine the form of the human face, in a find that could lead to police identification sketches based solely on DNA findings. (BBC) 14 September Scientists demonstrated that a brain implant can improve cognitive function in primates for the first time ever. IOP (io9) UK weather forecasters can predict extreme winter weather in future seasons with more confidence, due to a new analytical computer model. (BBC) 17 September A warp drive to achieve faster-than-light travel, a supposedly impossible goal, may not be as unrealistic as once thought, scientists say. 
(Space.com) Scientists working on the Blue Brain Project have achieved a major breakthrough in mapping the human brain, identifying key principles that determine synapse-scale connectivity and making it possible to accurately predict the locations of synapses in the neocortex. (EPFL) NASA's twin GRAIL gravitational research satellites reveal that the Moon has a much thinner crust than previously assumed. (Nature News) 18 September The Dark Energy Survey's high-resolution camera begins operation in Chile, surveying distant galaxies for evidence of the action of dark energy. (Nature) Massachusetts-based company Rethink Robotics releases its Baxter industrial robot, the first humanoid robot designed to apply common sense and machine learning to factory operations. (PC Magazine) Doctors in Sweden have performed the world's first mother-to-daughter uterus transplants. (BBC) (University of Gothenburg) 19 September Researchers at the University of Cambridge develop a method for cheaply printing liquid crystal-based lasers using a standard inkjet printer. The invention could allow the creation of "smart wallpaper" with built-in video displays. (BBC) Arctic sea ice has reached its minimum extent for the year, setting a record for the lowest cover since satellite records began in the 1970s. The 2012 extent has fallen to 3.41 million km2 (1.32 million sq mi), 50% lower than the 1979–2000 average. (BBC) (NASA) When a huge meteor collided with Earth about 2.5 million years ago and fell into the southern Pacific Ocean, it not only could have generated a massive tsunami, but may also have plunged the world into the Ice Ages, a new study suggests. (Science Daily) A new study reveals that fast-flowing and narrow glaciers have the potential to trigger massive changes in the Antarctic ice sheet and contribute to rapid ice-sheet decay and sea-level rise. (Science Daily) 20 September MakerBot Industries, an American manufacturer of 3D printers, opens the world's first 3D printer retail outlet in New York City. (CNET) Elevated CO2 levels are found to reduce human cognitive ability, with effects starting at around 600 ppm. (Huffington Post) 22 September – NASA reveals plans for the "Gateway Spacecraft", a permanent outpost beyond the Moon, to be constructed from leftover components of the International Space Station. (Orlando Sentinel) 23 September Researchers have shown that many species of fruit fly will be unable to survive even a modest increase in temperature. Many are now close to or beyond their temperature safety margin, and very few have the genetic ability to adapt to climate change. (Sydney Morning Herald) Japanese researchers achieve a new world record for data transmission, demonstrating one-petabit-per-second transmission over a single optical fiber: equivalent to sending 5,000 two-hour HDTV videos every second (a rough check of this equivalence appears below). (NTT) The first continent-wide estimate of African great ape distribution and its changes over time has revealed a dramatic decline in ape habitats. (BBC) 24 September UK doctors report that a new "SARS-like" respiratory coronavirus has been identified. The disease has infected at least two people in the Middle East and killed one. (BBC) A major reassessment of 18 years of satellite observations provides a new, more detailed view of the changes in sea level around the world. Incorporating the data from a number of spacecraft, the study re-affirms that ocean waters globally are rising by just over 3mm per year. (BBC)
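The 23 September entry above equates one petabit per second with about 5,000 two-hour HDTV videos every second. A back-of-the-envelope check, assuming each video occupies roughly 25 gigabytes (a typical figure for a two-hour HD recording, assumed here rather than taken from the announcement):

```python
# Rough plausibility check of "1 Pbit/s is equivalent to ~5,000 two-hour HDTV videos per second".
# The ~25 GB size per video is an assumption, not a figure from the announcement.
link_rate_bits_per_s = 1e15          # one petabit per second
video_size_bits = 25e9 * 8           # ~25 GB per two-hour HD video, converted to bits

videos_per_second = link_rate_bits_per_s / video_size_bits
print(f"~{videos_per_second:,.0f} videos per second")   # ~5,000
```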
The entire field of particle physics is set to switch to open-access publishing, a milestone in the push to make research results freely available to readers. (Nature News) 25 September New data from the Chandra X-ray Observatory suggests that the Milky Way galaxy is surrounded by a gigantic halo of hot gas, with a far greater radius than the galaxy itself, and a roughly equal mass. If the halo's dimensions are confirmed, its concentration of mass may explain the apparent lack of baryonic matter in the galaxy. China's first aircraft carrier, a retrofitted ex-Soviet vessel named the Liaoning, enters naval service. (Wall Street Journal) 26 September – An international team of scientists identifies a key factor responsible for declining muscle repair during ageing, and discovers how to halt the process in mice with a common drug. (EurekAlert) 27 September Researchers have shown for the first time the trapping action of the pimpernel sundew, Drosera glanduligera, a carnivorous plant. (Science Daily) NASA scientists announce the Curiosity rover's discovery of evidence of ancient flowing liquid water on Mars. (AP News) (NASA) Researchers demonstrate a new type of biodegradable electronics technology with wide applications in medical implants, environmental monitors and consumer devices. (Science Daily) Toyota begins development of the Human Support Robot, a voice-controlled domestic robot designed to help elderly and disabled people by moving objects, reaching high shelves and opening doors and curtains. (AutoBlog) 30 September – Climate change will lead to smaller fish, according to a new study led by fisheries scientists at the University of British Columbia. Under a high emissions scenario, the maximum body weight most fish species reach could decline by up to a quarter by 2050. October 1 October – Sea cucumbers and sea urchins are able to change the elasticity of collagen within their bodies, and could hold the key to maintaining a youthful appearance, according to scientists at Queen Mary, University of London. (Queen Mary University) 2 October – Under a high-emissions climate change scenario, global sea levels could rise by several metres by the year 3000, according to new research. (Science Daily) 3 October In preparation for a land speed record attempt, the British Bloodhound SSC team conducts a successful hybrid rocket test in Newquay. The rocket will operate in tandem with a Eurofighter Typhoon jet engine to propel the Bloodhound vehicle to a target speed of 1,000 miles per hour during its record attempt in 2014. (BBC) Scientists report that the venomous black mamba produces a highly effective natural painkiller. (Los Angeles Times) A startup company demonstrates a cheap and efficient method of printing complex electronics onto flexible substrates. (TechEye) 4 October A new genetic test can fully sequence the genome of a newborn baby in just 50 hours, a major improvement over the usual month-long sequencing process. The test can screen for 3,500 genetic diseases, allowing critically ill infants to be diagnosed and treated much more effectively. (TIME) Nissan unveils the NSC-2015, a prototype electric driverless car that can park itself, understand road markings and quickly report attempted thefts. A commercial version is planned for 2015. (BBC) 5 October DARPA successfully tests technology which enables drones to conduct aerial refueling autonomously. (BBC) (DARPA) Tokyo Institute of Technology researchers make a breakthrough in teaching a computer to understand human brain function.
The scientists used fMRI datasets to train a computer to predict the semantic category of an image originally viewed by five different people. (ResearchSEA) 7 October – Expanding production of palm oil – a common ingredient in processed foods, soaps and personal care products – is driving rainforest destruction and massive carbon dioxide emissions, according to a new study led by researchers at Stanford and Yale universities. (EurekAlert) 8 October SpaceX's Dragon spacecraft launches on its first operational resupply mission to the International Space Station, following a successful demonstration mission in May 2012. (BBC) (Space.com) (Space Fellowship) A variant in a gene involved with inflammation and the immune response is linked with a decreased risk of lung cancer, according to researchers at the National Cancer Institute in Rockville, Maryland. (EurekAlert) Researchers have found what they claim is the first fossil yet discovered of an ancient spider attacking prey caught in its web. The amber fossil dates back between 97 million and 110 million years. (OSU) A new Alzheimer's drug, Solanezumab, slows the pace of memory loss in sufferers by 34%, according to the results of two trials. (Daily Telegraph) The 2012 Nobel Prize in Physiology or Medicine is awarded jointly to John B. Gurdon and Shinya Yamanaka, for the discovery that mature cells can be reprogrammed to become pluripotent stem cells. (Nobel Prize) 9 October Microsoft tests a sensor bracelet that can quickly recognise a wide variety of human hand gestures. The invention could be used as a general-purpose remote control for electronics, allowing devices to be activated and controlled with simple hand movements. (BBC) The 2012 Nobel Prize in Physics is awarded jointly to Serge Haroche and David J. Wineland "for ground-breaking experimental methods that enable measuring and manipulation of individual quantum systems". Their work may eventually help make quantum computing possible. (New York Times) (Nobel Prize) 10 October Arizona State University researchers develop a new software system capable of estimating greenhouse gas emissions across entire urban landscapes, all the way down to roads and individual buildings. Previously, scientists quantified carbon dioxide (CO2) emissions at a much broader level. (Arizona State University) SpaceX's Dragon spacecraft docks with the International Space Station, becoming the first commercially contracted re-supply vehicle to do so. (Huffington Post) The 2012 Nobel Prize in Chemistry is awarded jointly to Robert J. Lefkowitz and Brian Kobilka for their work on G-protein-coupled receptors. (Nobel Prize) The United States Navy begins funding the development of a versatile robot capable of adapting everyday materials to rescue trapped humans. (BBC) 11 October In the largest-ever genetic study of cholesterol and other blood lipids, an international consortium has identified 21 new gene variants associated with risks of heart disease and metabolic disorders. The findings expand the list of potential targets for drugs and other treatments for lipid-related cardiovascular disease, a leading global cause of death and disability. (Science Daily) New research led by Yale University scientists suggests that a rocky planet twice Earth's size orbiting a nearby star is composed largely of diamond. 
(Science Daily) (ArXiv) 12 October – Europe launches the third and fourth of its Galileo navigation satellites, making it possible for the Galileo system to be fully tested prior to the start of operations in 2015. The system is planned to become fully operational, with 27 active satellites, by 2020. (BBC) 14 October Austrian skydiver Felix Baumgartner performs the highest skydive yet attempted, jumping from a pressurized capsule above Roswell, New Mexico. Baumgartner became the first human to break the sound barrier without an aircraft during his jump. (BBC) (Daily Telegraph) Scientists claim that water molecules found in lunar soil could be produced by the solar wind reacting with the Moon's surface. (Daily Telegraph) 15 October Astronomers have confirmed the existence of a Neptune-like exoplanet that has four suns, making it the first planet known to orbit within a quadruple star system. (io9) (ArXiv) September 2012 was tied as the warmest September ever recorded globally, according to data from the National Climatic Data Center. (NOAA) New research clearly shows that there is an increasing tendency for cyclones to form when the climate is warmer, as it has been in recent years. (EurekAlert) Researchers from North Carolina State University demonstrate new techniques for stretching carbon nanotubes to create carbon composites that can be used as stronger, lighter materials in a wide variety of applications. (NC State University) NASA demonstrates its X1 powered exoskeleton, a robotic assistance suit based on its Robonaut humanoid robot. The X1 exoskeleton is designed to assist paraplegics with walking, and can also be set to provide walking exercise for able-bodied astronauts. (CNET) 17 October A new exoplanet is discovered orbiting Earth's closest stellar neighbour, Alpha Centauri. The new planet is believed to be too hot to sustain life, but there is a high probability that the system contains other planetary bodies, including potentially Earthlike ones. Medical scientists report, on the basis of a decade-long double-blind study involving nearly 15,000 older male physicians, that subjects taking a daily multivitamin developed 8 percent fewer cancers than subjects taking a placebo. (New York Times) A drug made from a plant known as "thunder god vine," or lei gong teng, that has long been used in traditional Chinese medicine, wipes out pancreatic tumors in mice, and may soon be tested in humans. (Bloomberg) 83% of Madagascar's palms are threatened with extinction, putting the livelihoods of local people at risk, according to the latest update of the Red List of Threatened Species released by the International Union for Conservation of Nature (IUCN). (IUCN) 18 October A kidney-like organ grown from scratch in a laboratory has been shown to work in animals – an achievement that could lead to the production of spare kidneys for patients from their own stem cells. (New Scientist) Extinctions during the early Triassic period left Earth a virtual wasteland due to extreme heat, a new study suggests. (National Geographic) Using a new imaging technique, based on the detection of calcium ions in neurons, neuroscientists have developed a way to monitor how brain cells coordinate with each other to control specific behaviors. (MIT) For the first time, an assembly of nanomachines has been synthesised that is capable of producing a coordinated contraction, similar to the movements of biological muscle fibres.
(ScienceDaily) 19 October – The European Space Agency announces that it will launch a new satellite in 2017 to study super-Earths and other large exoplanets orbiting nearby stars. The CHaracterising ExOPlanets Satellite (CHEOPS) will observe its targets from an orbit several hundred kilometres above the Earth. (Space.com) 22 October Engineers develop an ultra-high-density form of magnetic tape, using barium ferrite particles to store up to 100 terabytes of data in a single tape cartridge. The invention is intended to store the huge volumes of astronomical data that the Square Kilometre Array will generate upon its inception in 2024. (Gizmodo) British doctors use the remote-controlled Da Vinci Surgical System to perform the UK's first robotic open-heart surgery. (BBC) 24 October As much as 44 billion tons of nitrogen and 850 billion tons of carbon stored in Arctic permafrost could be released over the next century, according to a new study led by the U.S. Geological Survey. This is roughly the amount of carbon already stored in the atmosphere today (a rough cross-check of this comparison appears below). (USGS) Binge drinking – drinking less during the week and more on the weekends – significantly reduces the structural integrity of the adult brain, according to a new study. (Science Daily) A new gene therapy method to prevent the inheritance of certain genetic diseases has been successfully demonstrated in human cells. It is believed that this research, along with other efforts, will pave the way for future clinical trials in human subjects. (Science Daily) The world's first commercial vertical farm opens in Singapore. The farm maximizes its growing space by using 120 high-rise cultivation towers, and can produce half a ton of vegetables a day. (Channel News Asia) 25 October – Microsoft launches Windows 8, the most fundamental update to its Windows operating system in 17 years. (The Guardian) 26 October The oldest Mayan tomb yet discovered is found in Guatemala. The ancient tomb is believed to date back to between 700 BC and 400 BC. (BBC) Scientists have recovered the sounds of music and laughter from the oldest playable American recording, dating back to 1878. (The Atlantic) 27 October – Women who give up smoking by the age of 30 will almost completely evade the risks of dying young from tobacco-related diseases, according to a study of more than a million women. (BBC) 28 October The unmanned SpaceX Dragon spacecraft successfully completes its first fully operational resupply mission to the International Space Station (ISS), landing intact in the Pacific Ocean after over two weeks docked with the ISS. (BBC) British scientists invent a simple liquid-based test that can accurately diagnose diseases such as cancer or HIV by detecting small concentrations of biomarkers such as anomalous proteins. (BBC) IBM researchers demonstrate the initial steps toward commercial fabrication of carbon nanotubes as a successor to silicon-based electronics. (IBM) 30 October Britain's first 4G mobile network is launched, offering high-speed mobile data services in 11 major cities. (The Guardian) NASA scientists report that the Curiosity Mars rover has performed the first X-ray diffraction analysis of Martian soil at the "Rocknest" site. The results from the rover's CheMin analyzer revealed the presence of several minerals, including feldspar, pyroxenes and olivine, and suggested that the Martian soil in the sample was similar to the "weathered basaltic soils" of Hawaiian volcanoes. (NASA) IBM's Watson supercomputer is to help train doctors at a medical school in Cleveland, Ohio. (IBM) (BBC)
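The 24 October permafrost entry above compares 850 billion tons of potentially releasable carbon with the carbon already held in the atmosphere. A rough cross-check using the standard conversion of about 2.13 gigatonnes of carbon per ppm of atmospheric CO2; the CO2 concentration used below is an approximate 2012 value assumed for illustration:

```python
# Order-of-magnitude check: is ~850 GtC really "roughly the amount of carbon
# already stored in the atmosphere today"?
# Assumes ~2.13 GtC per ppm of CO2 and ~392 ppm (approximate 2012 concentration).
GTC_PER_PPM = 2.13
co2_ppm = 392.0

atmospheric_carbon_gt = co2_ppm * GTC_PER_PPM
print(f"carbon in the atmosphere: ~{atmospheric_carbon_gt:.0f} GtC")  # ~835 GtC, close to 850
```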
Amonix, a leading designer and manufacturer of concentrator photovoltaic (CPV) solar power systems, has achieved a milestone in the industry by successfully converting more than a third of sunlight into electricity. Its figure of 33.5% efficiency broke the previous record of 30.3%. (Amonix) Pollen counts in the US will be more than double today's level by 2040, according to a new study. (NewsWise) 31 October – Scientists in the Netherlands have demonstrated a form of self-healing concrete that uses limestone-producing bacteria. (BBC) (TU Delft) November 1 November Climate scientists are biased not toward "alarmism" (as the media often claims), but rather the reverse: toward cautious and conservative estimates, according to a new study. A gene that is associated with regeneration of injured nerve cells has been identified by scientists at Penn State University and Duke University. (Science Daily) China announces plans to construct the world's first 100-petaflop supercomputer by 2015. (TechEye) Sea levels are rising faster than expected from global warming, due to critical feedbacks missing from earlier models, according to the University of Colorado. (Science Daily) (GSA) China reveals its second prototype stealth fighter, the Shenyang J-31, a smaller aircraft than the existing Chengdu J-20. (The Guardian) 2 November – Glybera becomes the first gene therapy approved by regulatory authorities in the Western world. Commercial roll-out is expected in late 2013. (BBC) 5 November New research suggests that just one or two individual herpes virus particles attack a skin cell in the first stage of an outbreak, resulting in a bottleneck in which the infection may be vulnerable to medical treatment. (Princeton) A 15-year research project has succeeded in curbing the growth of polycystic kidney disease, one of the most common life-threatening genetic diseases, which affects 12.5 million people worldwide. Previously, only the symptoms of the disease could be treated. (University of Zurich) A report in the November 6 issue of Current Biology offers the first complete description of the spade-toothed whale (Mesoplodon traversii), a species previously known only from a few bones. The description is based on two individuals – an adult female and her male calf – who became stranded and died on a New Zealand beach in 2010. (Science Daily) 6 November University of Bonn scientists develop a soccer-playing robot called NimbRo-OP, intended to develop new capabilities for humanoid bipedal robots, such as using tools, climbing stairs, and using human facial expressions, gestures and body language for communication. (Wired) (Humanoids 2012) Targeting a single chemical inside cancerous cells could one day lead to a single test for a broad range of cancers, researchers say. The same system could then be used to deliver precision radiotherapy treatments. (BBC) (NCRI) In the largest-ever study of its kind, an international team of astronomers establishes that the rate of star formation in the universe is now only 1/30th of its peak, and that this decline is set to continue. (Science Daily) Leisure-time physical activity extends life expectancy by as much as 4.5 years, according to a study by the National Cancer Institute. Even half of the recommended weekly exercise can add 1.8 years. (Science Daily) 7 November Canadian researchers working to develop the world's first HIV vaccine have cleared a major hurdle.
Initial results from a Phase I trial have shown no adverse effects, while significantly boosting immunity. The vaccine could be commercially available in five years. (io9) Human diseases could soon be modeled in an electronic "organ-on-a-chip", with a new generation of research to replace animal testing. (Science Daily) Astronomers report that HD 40307 g, a super-Earth exoplanet 42 light-years away from Earth, is within the habitable zone of its host star HD 40307 and may be "just right to support life". (Space.com) Rising temperatures due to climate change could mean wild arabica coffee becomes extinct within 70 years, posing a risk to the genetic sustainability of one of the world's basic commodities, according to new research. (The Guardian) 8 November Due to insufficient rates of decarbonisation, the world is on track for 6 °C (11 °F) of climatic warming by the year 2100, according to a new report. (PWC) (UCAR) Scientists debate the scientific basis and claims to novelty of a proposal to use the orbital angular momentum of light and radio waves to massively boost wireless data transfer. (BBC) MIT engineers develop a hearing aid battery which uses ions within the human inner ear to provide a steady electric current. (ExtremeTech) Nao robots are used to teach autistic primary school children in a groundbreaking trial in the UK. (BBC) University of Washington scientists have succeeded in removing the extra copy of chromosome 21 in cell cultures derived from a person with Down syndrome, a condition in which the body's cells contain three copies of chromosome 21 rather than the usual pair. (Io9) American climatologists report that the record-breaking 2012 North American drought continues to worsen, with over 19% of the contiguous United States suffering from extreme drought, and groundwater levels declining nationwide. (Reuters) (NOAA) 9 November The United States Army develops a tactical 3D printing capability to allow it to rapidly manufacture critical components on the battlefield. (BBC) Microsoft demonstrates software that translates spoken English into Chinese while preserving the speaker's intonation. (BBC) 11 November – Scientists develop a highly efficient metamaterial cloaking device capable of rendering objects invisible to microwaves. (E! Science News) 12 November The American Titan machine is declared the world's most powerful supercomputer, capable of performing 17.59 quadrillion floating point operations per second. Overall, the United States has the most supercomputers listed in the global top 500, with 251; China is in second place, with 72. (The Daily Telegraph) The Large Hadron Collider detects an extremely rare particle decay event, casting doubt on the popular theory of supersymmetry. (New Scientist) 13 November A total solar eclipse occurs. (The Guardian) (BBC) A longevity gene is found which makes the Hydra vulgaris virtually immortal, and could extend human lifespans. (Uni Kiel) Physicists conduct the first quantum teleportation from one macroscopic object to another, potentially allowing the development of quantum routers and a quantum Internet. (MIT Technology Review) 14 November An international team of researchers discovers a gene that helps explain how humans evolved from chimpanzees. Scientists say the gene – called miR-941 – appears to have played a crucial role in human brain development, and may shed light on how humans learned to use tools and language. 
(Science Daily) Even moderate drinking in pregnancy can affect a child's IQ, according to a new study using data from over 4,000 mothers and their children. (Science Daily) A gene that nearly triples the risk of Alzheimer's disease has been discovered by an international team including researchers from Mayo Clinic. It is the most potent genetic risk factor for Alzheimer's identified in the past 20 years. (Science Daily) Scientists sequence the genome of the domestic pig. The similarities between the pig and human genomes mean that the new data may have wide applications in the study and treatment of human genetic diseases. (Medical Daily) (Business Standard) Astronomers discover a wandering, starless rogue planet drifting through space around 100 light-years from Earth. (BBC) Researchers at the American National Institute of Standards and Technology (NIST) prove that single-wall carbon nanotubes may help protect DNA molecules from damage by oxidation. (PhysOrg) 15 November Scientists warn that the lethal ebola virus can spread between species as an aerosol. However, they emphasize that these aerosol particles can only travel short distances. (The Scientist) An efficient, high-volume technique for testing potential drug treatments for Alzheimer's disease has uncovered an organic compound that restored motor function and longevity to fruit flies with the disease. (Science Daily) New artificial muscles made from nanotech yarns and infused with paraffin wax can lift more than 100,000 times their own weight, and generate 85 times more mechanical power than the natural muscle of the same dimensions, according to scientists. (Science Daily) If global temperatures were to rise just 1 degree Celsius, the Bhutanese glaciers would shrink by 25 percent and produce 65 percent less annual melt water, according to research published in Geophysical Research Letters. (Brigham Young University) The United States Navy announces plans to replace its trained minesweeping dolphins with robotic submarines such as the Knifefish by 2017. (PopSci) 16 November Rat heart cells are used by University of Illinois scientists to power tiny, crawling "bio-robots". (BBC) New research has identified a common gene variant which influences when a person wakes up each day, as well as the time of day they are most likely to die. (Harvard) 18 November – A biodegradable nanoparticle has been developed which can stealthily deliver an antigen, tricking the immune system into stopping its attack on myelin and preventing multiple sclerosis in mice. (Science Daily) 19 November Scientists report a huge decline in UK birdlife – from 210 million nesting birds in 1966, down to 166 million in 2012. (RSPB) (BBC) (Report) Cambridge University scientists heal paralyzed dogs by injecting them with cells grown from their nasal linings. Many of the 23 injured dogs treated with the experimental therapy regained some use of their legs even 12 months after their injury, and scientists believe that human patients could be treated in a similar fashion. (ChannelsTV) A new tumor-tracking technique may improve outcomes for lung cancer patients. (Science Daily) IBM researchers have simulated 530 billion neurons and 100 trillion synapses on a supercomputer. (KurzweilAI) (GizMag) (SC12) 20 November NASA scientists report (via an NPR interview) that the Curiosity Mars rover, apparently based on a SAM analysis, has provided, according to John Grotzinger (MSL Principal Investigator), "data that is gonna be one for the history books. It's looking really good." 
Later, a NASA spokesperson said the discovery "won't be earthshaking, but it will be interesting." Nonetheless, the scientists are presently verifying their results and expect to make an official announcement at the fall meeting of the American Geophysical Union, which will take place between 3 December and 7 December in San Francisco, according to Grotzinger in an interview with Space.com. The news is later played down by NASA. (NPR) (Universe Today) (Time) (Space.com) (New York Times) More than 1,000 coal-fired power plants are being planned worldwide, new research from the World Resources Institute has revealed, with the majority being constructed in China and India. (The Guardian) (WRI) The level of greenhouse gases in the atmosphere reached 390.9 parts per million in 2011, a new record high, according to the World Meteorological Organization. Between 1990 and 2011, there was a 30% increase in radiative forcing. (WMO) (Report) Physicists have shown that synthetic membrane channels can be constructed through "DNA nanotechnology." (Science Daily) Scientists have developed a computer chip that mimics a dog's nose. It is capable of rapidly identifying trace amounts of vapour molecules, providing continuous real-time monitoring at concentrations of just 1 part per billion (ppb). (UCSB) 21 November For the first time, encrypted quantum signals are successfully sent down a conventional broadband fiber, instead of requiring a dedicated individual cable. This development could allow quantum cryptography, which offers near-impenetrable data security, to become available to the general public. (BBC) The effects of climate change are already evident in Europe and the situation is set to get worse, the European Environment Agency has warned. (BBC) (EEA) A United Nations report – the Emissions Gap Report 2012 – says global attempts to limit CO2 emissions are falling well short of what is needed to stem dangerous climate change. (BBC) For the first time, scientists at Fred Hutchinson Cancer Research Center have defined key events that take place early in the process of cellular aging. They have shown that the acidity of the vacuole is critical to aging and the stable functioning of mitochondria. (FHCRC) The printing of 3D tissue has taken a major step forward with the creation of a novel hybrid printer that simplifies the process of creating implantable cartilage. (IOP) European Space Agency (ESA) member states agree at their ministerial council to a 10.1-billion-euro programme of activities, including a planned upgrade to the Ariane 5 rocket. (BBC) (ESA) 23 November Footprints believed to have been made by the giant flightless bird Diatryma indicate that it was a "gentle herbivore" and not a fierce carnivore, scientists say. (BBC) Having a job with poor working conditions can be just as bad for a person's mental health as being unemployed, according to new research published in Psychological Medicine. (MachinesLikeUs) 25 November A Chinese Shenyang J-15 jet fighter conducts the first landing on the country's first aircraft carrier, the Liaoning. This milestone marks a major step forward in China's efforts to increase its naval power. (BBC) Pathological changes typical of Alzheimer's disease have been significantly reduced in mice by blockade of an immune system transmitter. (Science Daily) 26 November A Norwegian liquid natural gas tanker becomes the first ship of its size to attempt a winter crossing of the Arctic. 
As Arctic ice cover shrinks due to climate change, the Arctic sea route may become increasingly viable for large ships. (BBC) Researchers, including NASA scientists and engineers from Los Alamos National Laboratory, have demonstrated a new concept for a reliable nuclear reactor that could be used on space flights. The Demonstration Using Flattop Fissions (DUFF) experiment produced 24 watts of electricity. (Los Alamos National Laboratory) 27 November Permafrost covering almost a quarter of the Northern Hemisphere contains 1.7 trillion tonnes of carbon, twice that currently in the atmosphere, and could significantly amplify global warming should thawing accelerate as expected, according to a new report released by the United Nations Environment Programme (UNEP). (UNEP) A review reveals that grapefruit interacts with even more drugs than previously thought, with the list of drugs with potentially serious interactions more than doubling in recent years. Users of heart medication are particularly vulnerable. (CBC) 28 November The Reaction Engines Skylon spaceplane project achieves a key engine design milestone. (BBC) Astronomers observe a penumbral lunar eclipse. (Los Angeles Times) Sea level rise has been underestimated in previous projections, while projections of global temperature rise appear to be accurate, according to a new study published in the journal Environmental Research Letters. (PIK) A completely new method of manufacturing the smallest structures in electronics could make their manufacture thousands of times quicker, allowing for cheaper semiconductors. The findings have been published in the latest issue of Nature. (Science Daily) Scientists have achieved a major breakthrough in deciphering bread wheat's genetic code. This could lead to new varieties that are more productive and better able to cope with disease, drought and other stresses that cause crop losses. (Science Daily) American engineers build a 3D printer capable of manufacturing tools from lunar regolith, potentially allowing future astronauts to manufacture equipment on-site using lunar or Martian rock. (CNET) (WSU) Vanderbilt University engineers develop a lightweight powered exoskeleton, which technology company Parker Hannifin plans to release commercially for paraplegia sufferers in 2014. (Co.Exist) 29 November Scientists discover the second-largest supermassive black hole ever detected, with a mass 17 billion times that of the Sun. However, the black hole resides in an anomalously small galaxy. (BBC) (Universe Today) NASA reports that its MESSENGER probe has discovered water ice and organic compounds on the surface of Mercury. (Reuters) (New York Times) Experts have combined data from multiple satellites and aircraft to produce the most comprehensive and accurate assessment to date of ice sheet losses in Greenland and Antarctica and their contributions to sea level rise. Ice sheet loss at both poles is increasing, the study finds. (NASA) A study published in Nature states that human genetic variation has accelerated rapidly in recent centuries, faster than natural selection can operate. (Wired) 30 November British scientists develop a method of safely cultivating medicinal stem cells from the blood of adult patients, potentially allowing each patient to have a personalised source of stem cells. (Science Codex) Italian scientists publish the first direct images of DNA, which were produced using a scanning electron microscope.
The images provide photographic proof of DNA's double-helix structure, and could further scientific understanding of the molecule's function. At the Euromold trade show in Germany, manufacturers display numerous advances in commercial 3D printing technology, including a device that can rapidly print an entire bicycle. (Wired) MIT researchers develop a protein-inspired modular robot capable of magnetically folding itself into a wide variety of shapes, potentially heralding future devices that can reconfigure themselves for nearly any purpose. (Science Daily) (BBC) December 1 December – The United States government announces the first major offering of Atlantic coastal sites for offshore wind farm developments. The sites are due to be sold off in 2013. (The Guardian) 2 December Researchers state that they have identified the point of origin of the genes that later enabled human thought and reasoning. This development, 500 million years ago, later granted humans the ability to learn complex skills, analyse situations and think flexibly. (Science Daily) Global carbon dioxide emissions are projected to have risen further in 2012, reaching a new record high of 35.6 billion tonnes, according to figures from the Global Carbon Project. (Science Daily) (The Guardian) 3 December – NASA reports that its Curiosity Mars rover has performed its first extensive soil analysis, revealing the presence of water molecules, sulfur and chlorine in the Martian soil. (SlashGear) (New York Times) (NASA) 4 December A British energy firm announces plans to construct Africa's largest solar energy plant in Ghana. (BBC) NASA announces that it plans to launch another robotic Mars rover in 2020, following the success of its Curiosity rover. (Space.com) (CNET) (NASA) Using a simple "drag-and-drop" computer interface and DNA self-assembly techniques, researchers have developed a new approach for drug development that could drastically reduce the time required to create and test medications. (NSF) Besse Cooper, the world's oldest living human and the last surviving person born in 1896, dies aged 116. (Loganville-Grayson Patch) (BBC) 5 December Paleontologists announce the discovery of what is likely to be the oldest known dinosaur. Nyasasaurus parringtoni is believed to have lived 10-15 million years before the previous earliest known dinosaur specimens. (BBC) Liquid Robotics' autonomous Wave Glider nautical robot completes a record-breaking voyage from San Francisco to Australia. The data-gathering robot, powered by solar panels and wave energy, survived storms and shark attacks during its year-long journey. (BBC) In the first such operation in the United States, medical researchers implant a pacemaker-like device into the brain of an Alzheimer's disease patient in the early stages of the disease. The device, which provides deep brain stimulation and has already been used by sufferers of Parkinson's disease, could boost memory and reverse cognitive decline. (Science Daily) A Belgian team develops a curved LCD contact lens display. Researchers say the prototype could be used in medicine, or lead to adaptable "in-eye" sunglasses. (Optics.org) 6 December The Golden Spike Company announces plans for commercial lunar expeditions by 2020, with flights to the Moon starting at around $750 million per person. 
(io9) Scientists identify the mechanism that allows Toxoplasma gondii – a single-celled parasite – to pass from the human gut to the brain, where it may cause suicidal thoughts and risk-taking.(The Independent) 7 December An interactive map showing the location of every German bomb dropped on London during World War II is created. (BBC) A young British girl successfully receives a pioneering bone replacement treatment to restore her damaged spine. The operation, the first of its kind ever attempted in Europe, used bone taken from the child's legs to replace her lower vertebrae, which were missing due to a rare and potentially lethal genetic condition. (BBC) NASA's Opportunity Mars rover discovers clay-bearing deposits on the surface of Mars, indicating the past presence of liquid water. (Huffington Post) A new study shows that with "near perfect sensitivity", anatomical brain images alone can accurately diagnose chronic ADHD, schizophrenia, Tourette syndrome, bipolar disorder, or familial risks for major depression. 8 December – The 1997 Kyoto Protocol on the limitation of greenhouse gas emissions is extended until 2020, having previously been set to expire by the end of 2012. (BBC) 9 December – French Polynesia establishes the world's largest shark sanctuary, protecting all shark species from fishing in an area of 4.7 million square kilometres. (RFI) 10 December The first close-up footage of the Sunda clouded leopard, one of the rarest cat species on Earth, is released. (BBC) Scientists succeed in making the skin cells of older people act like younger cells again, simply by adding more filler to the fiber-filled area around the cells. (Science Daily) Researchers create a shape-shifting metamaterial that could revolutionise the treatment of wounds. The liquid material could be infused with drugs, then shaped to fit perfectly inside a wound. (PopSci) 11 December The United States Air Force launches its robotic Boeing X-37 spaceplane on its third classified long-duration mission. (Universe Today) Numerous major Japanese companies, including Mitsubishi, demonstrate new specialist robots for cleaning up the Fukushima Daiichi nuclear disaster. (BBC) 12 December North Korea conducts its first successful orbital launch, placing the Kwangmyŏngsŏng-3 satellite into low Earth orbit. (Reuters) Astronomers report that the most distant known galaxy, UDFj-39546284, is now estimated to be even further away than previously believed. The galaxy, which is estimated to have formed around "380 million years" after the Big Bang (about 13.75 billion years ago), is approximately 13.37 billion light years from Earth. (Space.com) DARPA announces funding for a medical foam technology that can rapidly staunch severe internal bleeding on the battlefield. (BBC) 13 December Scientists identify a new species of primate, the slow loris Nycticebus kayan, which is found to have a toxic bite. (LiveScience) China's unmanned Chang'e 2 probe successfully performs a close flyby of the asteroid 4179 Toutatis, in the first such attempt by a Chinese spacecraft. (Planetary.org) Physicists report the constancy, over space and time, of a basic physical constant of nature that supports the standard model of physics. The scientists, studying methanol molecules in a distant galaxy, found the change (∆μ/μ) in the proton-to-electron mass ratio μ to be equal to "(0.0 ± 1.0) × 10−7 at redshift z = 0.89" and consistent with "a null result". 
(Space.com) 14 December – British researchers partially sequence the genome of a fast-spreading fungus that is killing off ash trees across Europe. (BBC) 16 December – American scientists use a genetically modified virus to partially convert the heart muscle of guinea pigs into cells which govern the heart's rhythm, effectively creating a biological pacemaker. If this development can be applied to humans, heart conditions could be treated without the need for expensive medical implants and their attendant maintenance surgeries. (BBC) 17 December Researchers at the University of Pittsburgh develop a robotic arm that can be precisely controlled by paralyzed patients using a set of motor cortex implants. (Gizmodo) NASA's twin GRAIL lunar satellites deorbit and are intentionally crashed into the surface of the Moon, marking the end of their year-long gravity research mission. (The Register) The Terrafugia Transition flying car begins flight certification tests, in preparation for its planned commercial release in 2013. (Sun Daily) 19 December Astronomers report that the nearby star Tau Ceti hosts five exoplanets, including one world believed to be within the star's habitable zone. (The Guardian) (ArXiv) The final orbital spaceflight of 2012 occurs, marking the 72nd successful orbital launch of the year, and the 78th overall. Chinese scientists discover fossil evidence which shows that the extinct elephant genus Palaeoloxodon survived in China until as recently as 1,000 BC. The genus was previously believed to have disappeared by 8,000 BC. (BBC) 20 December – NASA scientists release the latest WMAP results and an image of the very early universe. The nine-year WMAP data shows "13.772+/-0.059"-billion-year-old temperature fluctuations and a temperature range of ± 200 microKelvin. In addition, the study finds that 95 percent of the early universe is composed of dark matter and energy, the curvature of space is less than 0.4 percent of "flat", and the universe emerged from the cosmic Dark Ages "about 400 million years" after the Big Bang. (Space.com) (ArXiv 21 December A "Trojan horse" therapy which uses viruses concealed within white blood cells to attack tumours is successfully used to eliminate prostate cancer in mice. However, human trials have yet to be conducted. (BBC) As predicted by scientists, 21 December 2012 passes without any form of apocalyptic event, despite years of global anticipation. (NASA) (DNA India) 24 December Scientists analyse the genomes of individuals with a high familial risk of bowel cancer, and discover two flawed genes that may contribute to the disease. (University of Oxford) (BBC) American researchers report that their experiments with liquid crystals may yield future materials that can be directly controlled and re-shaped in real time. 26 December The world's longest high-speed rail line enters operation in China. The railway links Beijing with Guangzhou. Tigers, having been on the verge of extinction, are now making a comeback in India and Thailand, according to the Wildlife Conservation Society. (WCS) 28 December – Stanford University engineers publish a design for a future mission to the Martian moon Phobos, incorporating both an orbiting satellite and spherical surface rovers. 31 December – A NASA-supported study suggests that manned spaceflight may harm the brains of astronauts and accelerate the onset of Alzheimer's disease. 
IISE Top 10 New Species The Top 10 New Species 2013 was announced on 22 May 2013 by the International Institute for Species Exploration, commemorating unique species discovered during 2012. The ten selected new species were: Viola lilliputana Chondrocladia lyra Cercopithecus lomamiensis Sibon noalamina Ochroconis anomala Paedophryne amauensis Eugenia petrikensis Lucihormetica luckae Semachrysa jade Juracimbrophlebia ginkgofolia Prizes Abel Prize 2012 Abel Prize: Endre Szemerédi Fundamental Physics Prize 2012 Fundamental Physics Prize: Nima Arkani-Hamed, Alan Guth, Alexei Kitaev, Maxim Kontsevich, Andrei Linde, Juan Maldacena, Nathan Seiberg, Ashoke Sen and Edward Witten 2012 FPP special award: Stephen Hawking, Peter Jenni, Fabiola Gianotti, Michel Della Negra, Tejinder Singh Virdee, Guido Tonelli, Joe Incandela and Lyn Evans Kyoto Prize 2012 Kyoto Prize in Advanced Technology: Ivan Sutherland 2012 Kyoto Prize in Basic Sciences: Yoshinori Ohsumi Nobel Prize 2012 Nobel Prize in Physiology or Medicine: John B. Gurdon and Shinya Yamanaka 2012 Nobel Prize in Physics: Serge Haroche and David J. Wineland 2012 Nobel Prize in Chemistry: Robert J. Lefkowitz and Brian Kobilka Deaths Sources: The Guardian and The Daily Telegraph January 3 January – James F. Crow, American geneticist (b. 1916). 6 January – Roger Boisjoly, American rocket engineer (b. 1938). 7 January – Herbert Wilf, American mathematician and academic (b. 1931) 12 January – Bjørn G. Andersen, Norwegian quaternary geologist (b. 1924). 21 January – Roy John Britten, American biologist and geneticist (b. 1919) February 16 February – Donald Henry Colless, Australian entomologist (b. 1922). 19 February – Renato Dulbecco, Italian American virologist, winner of the 1975 Nobel Prize in Physiology or Medicine (b. 1914). 24 February – Oliver Wrong, British nephrologist (b. 1925). March 10 March – Frank Sherwood Rowland, American atmospheric chemist, winner of the 1995 Nobel Prize in Chemistry (b. 1927). 24 March – Sir Paul Callaghan, New Zealand physicist (b. 1947). April 8 April – Fang Lizhi, Chinese astrophysicist and political activist (b. 1936). 20 April – George Cowan, American chemist, Manhattan Project scientist and businessman (b. 1920). 29 April – Roland Moreno, Egyptian-born French inventor of the smart card, Légion d'honneur recipient (b. 1945). May 12 May Fritz Ursell, German-born British mathematician and fluid mechanics expert (b. 1923). Donald Nicholson, British biochemist (b. 1916). 20 May – Eugene Polley, American engineer, inventor of the wireless television remote control (b. 1915). 21 May – Alan Thorne, Australian anthropologist (b. 1939). 27 May – Friedrich Hirzebruch, German mathematician (b. 1927). 30 May – Sir Andrew Huxley, British physiologist and biophysicist, winner of the 1963 Nobel Prize in Physiology or Medicine (b. 1917). June 7 June Phillip V. Tobias, South African paleoanthropologist (b. 1925). F. Herbert Bormann, American plant ecologist, credited with the discovery of acid rain (b. 1922). 13 June – William Standish Knowles, American chemist, winner of the 2001 Nobel Prize in Chemistry (b. 1917). July 3 July – Sergio Pininfarina, Italian automotive engineer and Senator for life (b. 1926). 8 July – Lord Henry Chilver, British engineer, lecturer and politician (b. 1926). 21 July – Geoffrey Hattersley-Smith, British geologist and glaciologist (b. 1923). 23 July – Sally Ride, American physicist and astronaut, first American woman in space (b. 1951). 
26 July – Ralph Slatyer, Australian ecologist and first Chief Scientist of Australia (b. 1929). August 2 August – Sir Gabriel Horn, British biologist (b. 1927). 3 August – Martin Fleischmann, British chemist and cold fusion theorist (b. 1927). 6 August – Sir Bernard Lovell, British physicist and radio astronomer (b. 1913). 14 August – Sergei Kapitsa, Russian physicist and TV host (b. 1928). 21 August – William Thurston, American mathematician, winner of the 1982 Fields Medal (b. 1946). 25 August – Neil Armstrong, American astronaut, first human being to set foot upon the Moon (b. 1930). September 6 September – Jerome Horwitz, American chemist, inventor of the anti-HIV drug azidothymidine (b. 1919). 8 September – Bill Moggridge, British industrial designer who created the original laptop shape (b. 1943). 9 September – Ron Taylor, Australian shark expert, conservationist and filmmaker (b. 1934). 13 September – Dilhan Eryurt, Turkish astrophysicist (b. 1926). 25 September – Dame Louise Johnson, British biochemist (b. 1940). 29 September – Neil Smith, Scottish geographer (b. 1954). October 5 October – Keith Campbell, British biologist who was involved in creating the first cloned mammal (b. 1954) 15 October – Maria Petrou, Greek-born British artificial intelligence researcher (b. 1953). 17 October – Stanford Ovshinsky, American physicist and inventor who designed the battery now used in hybrid cars (b. 1922). 20 October E. Donnall Thomas, American physician, joint winner of the 1990 Nobel Prize in Physiology or Medicine (b. 1920). Paul Kurtz, American skeptic, philosopher and secular humanist (b. 1925). 29 October – Wallace L. W. Sargent, Anglo-American astronomer (b. 1935). November 11 November – Farish Jenkins, American paleontologist (b. 1940). 12 November – Daniel Stern, American psychologist (b. 1934). 14 November – Norman Greenwood, Australian chemist (b. 1925). 26 November – Joseph Murray, American surgeon and organ transplant pioneer, joint winner of the 1990 Nobel Prize in Physiology or Medicine (b. 1919). December 9 December Sir Patrick Moore, English astronomer (b. 1923). Alex Moulton, English mechanical engineer and inventor (b. 1920). Norman Joseph Woodland, American engineer, inventor of the barcode (b. 1921). 17 December – Colin Spedding, British agricultural scientist (b. 1925). 24 December – Guy Dodson, New Zealand biochemist (b. 1937). 27 December – Archie Roy, Scottish astronomer (b. 1924). 30 December Rita Levi-Montalcini, Italian neurologist and Senator for life, joint winner of the 1986 Nobel Prize in Physiology or Medicine (b. 1909). Carl Woese, American microbiologist (b. 1924). See also List of emerging technologies List of years in science 2012 in paleontology 2012 in spaceflight References External links "Medical sciences news highlights of 2012". BBC. 29 December 2012. "366 days: Nature's 10 – Ten people who mattered this year". Nature. 19 December 2012. Science obituaries at Legacy.com 21st century in science 2010s in science
562696
https://en.wikipedia.org/wiki/MathWorks
MathWorks
MathWorks is an American privately held corporation that specializes in mathematical computing software. Its major products include MATLAB and Simulink, which support data analysis and simulation. History The company's key product, MATLAB, was created in the 1970s by Cleve Moler, who was chairman of the computer science department at the University of New Mexico at the time. It was a free tool for academics. Jack Little, who would eventually set up the company, came across the tool while he was a graduate student in electrical engineering at Stanford University. Little and Steve Bangert rewrote the code for MATLAB in C while they were colleagues at an engineering firm. They founded MathWorks along with Moler in 1984, with Little running it out of his house in Portola Valley, California. Little would mail diskettes in baggies (food storage bags) to the first customers. The company sold its first order, 10 copies of MATLAB, for $500 to the Massachusetts Institute of Technology (MIT) in February 1985. A few years later, Little and the company moved to Massachusetts, and Little hired Jeanne O'Keefe, an experienced computer executive, to help formalize the business. By 1997, MathWorks was profitable, claiming revenue of around $50 million, and had around 380 employees. In 1999, MathWorks relocated to the Apple Hill office complex in Natick, Massachusetts, purchasing additional buildings in the complex in 2008 and 2009, ultimately occupying the entire campus. MathWorks expanded further in 2013 by buying Boston Scientific's old headquarters campus, which is near MathWorks' headquarters in Natick. By 2018, the company had around 3,000 employees in Natick and said it had revenues of around $900 million. Products The company's two lead products are MATLAB, which provides an environment for scientists, engineers and programmers to analyze and visualize data and develop algorithms, and Simulink, a graphical modeling and simulation environment for model-based design of dynamic systems. MATLAB and Simulink are used in aerospace, automotive, software and other fields. The company's other products include Polyspace, SimEvents, and Stateflow. Corporate affairs Intellectual property and competition In 1999, the US Department of Justice filed a lawsuit against MathWorks and Wind River Systems alleging that an agreement between them violated antitrust laws. Under the agreement, the two companies had agreed to stop competing in the field of dynamic control system design software: MathWorks alone would sell Wind River's MATRIXx software, and Wind River would stop all research, development and sales in that field. Both companies eventually settled with the Department of Justice and agreed to sell the MATRIXx software to a third party. MathWorks had total sales of $200 million in 2001, with dynamic control system design software accounting for half of those sales. In 2003, MathWorks' Simulink software was found to have infringed three National Instruments patents related to data flow diagrams, a decision which was confirmed by a court of appeal in 2004. In 2011, MathWorks sued AccelerEyes for copyright infringement in one court, and patent and trademark infringement in another. AccelerEyes accepted consent decrees in both cases before the trials began. 
In 2012, the European Commission opened an antitrust investigation into MathWorks after competitors alleged that MathWorks refused to license intellectual property that would allow third parties to create software interoperable with its products. The case was closed in 2014. Logo The logo represents the first vibrational mode of a thin L-shaped membrane, clamped at the edges, and governed by the wave equation, which was the subject of Moler's thesis. Community The company annually sponsors a number of student engineering competitions, including EcoCAR, an advanced vehicle technology competition created by the United States Department of Energy (DOE) and General Motors (GM). MathWorks sponsored the mathematics exhibit at London's Science Museum. In the coding community, MathWorks hosts MATLAB Central, an online exchange where users ask and answer questions and share code. MATLAB Central currently houses around 145,000 questions in its MATLAB Answers database. The company actively supports numerous academic institutions to advance STEM education, including giving funding to MIT OpenCourseWare and MITx. References Further reading External links Companies based in Natick, Massachusetts Software companies established in 1984 Software companies based in Massachusetts 1984 establishments in Massachusetts Software companies of the United States 1984 establishments in the United States Companies established in 1984
3450318
https://en.wikipedia.org/wiki/Ring%20Around%20the%20Moon%20%28Space%3A%201999%29
Ring Around the Moon (Space: 1999)
"Ring Around the Moon" is the fourth episode of the first season of Space: 1999. The screenplay was written by Edward di Lorenzo; the director was Ray Austin. The shooting script is dated 14 December 1973 with green page amendments dated 17 January 1974; the final shooting script is dated 8 February 1974. Live-action filming took place Wednesday 27 February 1974 through Thursday 14 March 1974. Story Technician Ted Clifford enters Main Mission to perform a minor maintenance task. As he unpacks his tool kit at an access panel near one of the windows, he fails to notice a sphere of orange light materialising above the lunar horizon. The sphere pulsates and Clifford stiffens as an aura of orange light surrounds his head. Zombie-like, he crosses Main Mission and begins to operate an input terminal on Main Computer with incredible speed. A man possessed, he fends off all attempts to drag him away from the keypad with super-human strength. Suddenly, Clifford backs away from the computer bank. He whimpers Help me, collapses and dies. Before anyone can react, the Main Mission staff are knocked off their feet by a tremendous jolt—a beam of orange light has reached out from the sphere and enveloped the Moon. Tracking sensors reveal the sphere to be stationary but, from their perspective, it is still getting closer to the Moon's surface; John Koenig realises that the Moon has been trapped in an orbit around the sphere. An audio signal is received and a sibilant voice announces that they are prisoners of the planet Triton. The staff reviews the damage caused by the Moon's sudden deceleration. The fact that all but four Eagles are non-operational and the main generators show no sign of damage but are only putting out minimum power indicates that the sphere's occupants have purposefully compromised Alpha's defences. David Kano reveals that Clifford had accessed and scanned classified information during his mad session with Computer. With no information on Triton available, Koenig feels a reconnaissance flight is in order. Victor Bergman comments that he thinks the aliens will not be surprised; he has a nasty feeling they are now being watched. Koenig receives a report on Ted Clifford's autopsy from Doctors Helena Russell and Bob Mathias—a ball of orange light was briefly visible at the base of his brain, with most of the surrounding brain tissue appearing to have melted. Also, his optic nerve had been reconfigured to function like a high-speed camera and the processing speed of his neuronal system having been increased a thousandfold. The conclusion reached is that Clifford's physiology was re-structured by the aliens to function as a computer. During this, Bergman's suspicions are confirmed as the viewer is taken into the sphere where unseen entities reside in a black void, surrounded by swirling coloured light and melodic sound. They observe the goings-on in the Alpha Medical Centre...and seem especially interested in Helena. Alan Carter and co-pilot Jim Donovan lift-off in Eagle Three—also under alien observation—to reconnoitre the sphere. On approach, the ship is deflected by a force-beam of orange light. Both astronauts are rendered unconscious, and the Eagle is sent tumbling out of control back towards the Moon. With controls set to manual, Paul Morrow cannot use his remote link to establish control and bring them back safely. Eagle Three crash-lands seven hundred metres from the Moonbase perimeter. With no other means available, Koenig decides to lead a rescue team on foot. 
Halfway to the downed Eagle, Koenig's party is ambushed; a ball of orange light approaches and Helena is compelled to walk into it. Koenig attempts to prevent her abduction, but is thrust back and rendered unconscious. The light, with Helena, disappears. Back at Alpha, Koenig regains consciousness and learns of Helena's abduction and the death of Carter's co-pilot. Bergman reckons the crash was nothing more than a lure to get Helena out on the surface and unprotected for unknown reasons. Koenig wants to make another attempt to penetrate the sphere; he proposes to Bergman that in order to defeat the Tritonian force-field effect, they need to radically increase the strength of their Eagles' standard anti-gravity screens. Helena materialises in the black void, clad in gossamer robes. She converses with her unseen captors, telling them the Alphans mean them no harm and that they need their help. The Tritonian voice says that it is she that will help them; they are the eyes of Triton and everything that was, is, and will be is recorded by them. A frenetic light-show ensues as Helena is unknowingly processed by the Tritonians. The modified Eagle is ready and Koenig and Carter take-off to rendezvous with the sphere. The strengthened anti-gravity screens effectively thwart the power of the projected force-beam. The aliens then reverse the field and the Eagle is suddenly dragged forward at incredible velocity. Before being rendered unconscious by the mounting g-forces, Koenig manages to switch the instruments to automatic and Morrow brings them safely back to Alpha. During this activity, a ball of light drops down to the Moon's surface and, at an Alpha airlock station, deposits a smiling Helena. Helena is taken to Medical for a comprehensive work-up. All the tests come back fine, except one—despite all evidence to the contrary, the results of her eye exam indicate she should be blind. Faced with this physiological contradiction, the only logical conclusion is that she has undergone the same processing procedure used on Ted Clifford. Moving to Bergman's quarters, the professor finds a possible reference to the Tritonians as the 'Eyes of Heaven' in the Pyramid Texts of the Old Kingdom of Ancient Egypt, revealing their long-term observation of Mankind. Helena recalls that she was not wearing her spacesuit, indicating the sphere contains a breathable atmosphere. With that, Bergman speculates the Tritonians may have a recognisable humanoid form. This brainstorming session is interrupted by the Tritonians' activation of Helena. In a trance-like state, she moves through Alpha, dematerialising at will to avoid all obstacles. She reaches Main Mission and approaches the Main Computer terminal. There, she begins the same hyperactive operation of Computer as Clifford had done. Kano reviews the accessed memory cells; he reports one of the first cells scanned and transmitted through Helena stored the complete schematics of the Moonbase life-support system. At the rate of her current activity, Bergman surmises that they have 132 hours until Helena exhausts Computer's memory store. Mathias, though, reports that she will be dead long before that. Bergman and Koenig meet to discuss the Triton entities, figuring they must have some physical limitation that prevents them from leaving their sphere. Each brings some important information to the table. Bergman has determined from his galaxy charts that the planet Triton no longer exists. 
Koenig forwards a report from Kano that the malfunction of a computer memory cell disabled the Triton force-field for the thirty-two seconds required for the system to correct the error. They realise that there must be a circuit from the sphere, to the force-field around the Moon, to Helena, to Computer, and back to the sphere. With all the other components controlled by the Triton probe, Computer is the only exploitable link in the chain. A plan is formulated to intentionally jam twenty-five key memory circuits in Computer, negating the force-field for the thirteen minutes required to fly to the sphere. During this time, Kano will 'hard-wire' the astronomical data proving Triton's destruction into Computer so that it can be the only information transmitted by Helena. Koenig hopes to persuade the Triton probe that, with the death of its home world, its function is obsolete, and to release Helena and the Moon. The plan is carried out. With the force-field down, Koenig and a squad of Security men travel to the sphere and enter it unobstructed. They disembark and, while searching the black void, Koenig is isolated and makes first-hand contact with the Tritonians. They appear as floating spheres of striated brain-tissue embedded with a single huge eye. They reveal to Koenig that their purpose is to gather information on Earthmen in preparation for a potential invasion of Triton. Furthermore, they have permitted Koenig's plan to succeed with the intent of delivering him to their domain—he, too, will be processed and become Helena's replacement after her death. By this time, Computer has cleared the affected memory cells. The force-field returns and Helena is reactivated. She transmits the data on Triton to the sphere, where Koenig forces the entities to acknowledge the demise of their home planet. Faced with the fact of a purposeless existence, the Tritonians opt to self-destruct. Koenig and company scramble back to the Eagle and take off as the sphere disintegrates around them. As Bergman and Helena watch on the big screen, their ship emerges, barely escaping the final explosion. Later, Helena is given a clean bill of health by Mathias and shows no lasting effects of her abduction. Bergman is pensive, noting that even with their immense body of knowledge, the Tritonians could not endure. He muses that "Perhaps knowledge isn't the answer." Koenig then counters: "Then what is?" Cast Starring Martin Landau — Commander John Koenig Barbara Bain — Doctor Helena Russell Also Starring Barry Morse — Professor Victor Bergman Featuring Prentis Hancock — Controller Paul Morrow Clifton Jones — David Kano Zienia Merton — Sandra Benes Anton Phillips — Doctor Bob Mathias Nick Tate — Captain Alan Carter Max Faulkner — Ted Clifford Uncredited Artists Suzanne Roquette — Tanya Michael Stevens — Man in Corridor Chai Lee — Anna Wong (removed from final cut) Prentis Hancock — Triton Probe Voice Music An original score was composed for this episode by Vic Elms and music editor Alan Willis. Against expectations, Elms (who was producer Sylvia Anderson's son-in-law) thought he could improvise a score with the musicians the day of recording, as he could neither read nor write music. To avoid a walkout, Willis stepped in, hurriedly set some of Elms's themes down on paper, and conducted the musicians himself. Barry Gray wanted the music to be in the style of Maurice Ravel; Elms and Willis's final product is more reminiscent of the rock idioms of the day of the bands Deep Purple, Emerson, Lake & Palmer and Yes. 
A track from the Thunderbirds episode The Mighty Atom can also be heard in the episode. Production Notes The original concept for this episode, involving UFOs and alien abduction, was one of ten episode outlines devised for the writers' guide prior to production of "Breakaway". Whether this original concept was conceived by script editor Edward di Lorenzo or whether he adapted the idea is unclear. The ideas presented in the production bear striking resemblances to di Lorenzo's script writing for Mel Welles' Lady Frankenstein, a 1972 attempt to update the Frankenstein myth by adding issues related to gender, ecology and power/knowledge. (Many themes from this movie were reworked into "Ring Around The Moon" and his subsequent stories for the series, "Missing Link" and "Alpha Child".) In the original script, the planet name was Uralt (German for 'ancient'), not Triton. As the theme of the episode is the foundation of science and whether the human condition can be understood from the standpoint of rationality alone, it is somewhat unclear why the name was changed from Uralt to Triton. In early drafts, Chief Engineer Smith (or Smitty, as introduced in "Black Sun") is a minor character. He does not appear in the final shooting script, although there is a reference to a Chief Engineer Anderson, which may be an internal joke referring to Gerry Anderson's obsession with technical issues. In general, the story seems to be about the relationship between power and knowledge, and, to a large extent, appears to be a visualization of some of Michel Foucault's main writings. The story has a strong visual style with vivid colour and abstract light effects. Some have compared the visual style to German Expressionist cinema; others have found parallels in the French and Eastern European Theatre of the Absurd. A preproduction painting by Keith Wilson shows a strong 2001: A Space Odyssey influence. From a visual point of view, the episode could perhaps be seen as a paraphrase of the final third of 2001, consisting of the psychedelic "journey through time" sequence and the study of the astronaut's M. C. Escher-like reflections on his own self-image. Ray Austin made his debut as a director on Space: 1999 with this episode. Probably because he had been a stuntman and stunt coordinator before taking up direction, his approach to this and all later episodes of Space: 1999 has a very clear physical presence. The episode is extremely visual with a lot of movement, contrapuntal to the philosophical and cerebral contents of the story. Austin's style of direction has sometimes been compared to that of Alfred Hitchcock, and throughout Space: 1999 there are a number of references to the master. This particular entry has from time to time been compared with Rope (1948). Novelisation The episode was adapted in the first Year One Space: 1999 novel Breakaway by E.C. Tubb, published in 1975. As with most of his work for the series, Tubb took many liberties with the details of this teleplay. The probe's home of Triton was no longer a planet two million light-years from Earth, but the moon of Neptune, which, in this narrative, had gone missing a few years before 1999. Bergman also creates the anti-gravity shield specifically for the purpose of penetrating the Triton ship's forcefield, having gained the knowledge required to perfect the technique from the atomic-waste explosion earlier in the novel. 
Having been convinced of the demise of its home, the probe chooses to self-destruct—though not before warning the Alphans of their impending encounter with a black sun. Response "Ring Around the Moon" appears to split fans. According to some, 'it is generally regarded as one of the lesser first season episodes'. Others, however, see it as the ultimate experience in science fiction, similar to Alphaville, 2001: A Space Odyssey and Solaris. In a review in the Gerry Anderson-related fanzine Andersonic, Richard Farrell interprets the plot as a philosophical discussion of the difference between knowledge and wisdom: "the Tritonians merely seek knowledge, whereas the Alphans use the facts they have learned (about Triton in this case) as a means to an end, i.e. saving Helena and escaping." . He concludes "The writer's point is alluded to in the epilogue when Victor muses on the Triton probe, 'All that knowledge and yet... perhaps knowledge isn't the answer after all'. Di Lorenzo's answer, perhaps, is being able to accumulate the wisdom to put that knowledge to some use." Although feeling "the plot itself is undeniably a little thin" he praises director Ray Austin's "stylized use of sound" and the lighting particularly inside the Tritonian sphere which "imbue the sequences with a sense of tension and atmosphere - the aliens are heard but never seen". References External links Space: 1999 - "Ring Around the Moon" - The Catacombs episode guide Space: 1999 - "Ring Around the Moon" - Moonbase Alpha's Space: 1999 page Discussion Group "Ring Around The Moon" Episode Review at Andersonic.co.uk 1976 British television episodes Space: 1999 episodes
18404394
https://en.wikipedia.org/wiki/Devotion%20%281946%20film%29
Devotion (1946 film)
Devotion is a 1946 American biographical film directed by Curtis Bernhardt and starring Ida Lupino, Paul Henreid, Olivia de Havilland, and Sydney Greenstreet. Based on a story by Theodore Reeves, the film is a highly fictionalized account of the lives of the Brontë sisters. The movie features Montagu Love's last role; he died almost three years before the film's delayed release. Plot The story takes place in the early 1800s, when the Brontë sisters Charlotte and Anne have made the decision to leave their family – their sister Emily, their brother Branwell, their aunt and their vicar father – to take positions as governesses in other families. The two sisters long to break free from their tedious life and get experiences from the outside world, to prepare for their careers as writers. They have the intention of giving part of their income from the governess positions to their talented brother Branwell, so that he can go to London and study art, to become a great temperamental painter. One night when Bran is getting drunk at a local tavern, a man named Arthur Nicholls, his father's new curate, arrives. The drunken Bran insists that Arthur accompany him to the vicarage. At first Arthur refuses, believing that it is too late in the evening. When he realizes how drunk Bran has become, he accompanies him to see that he gets home safe and sound. Emily, who answers the door, mistakes Arthur for one of Bran's drunken friends and treats him with contempt. The next day Bran leaves for London again, and Arthur reappears at the house. He is greeted by the unwelcoming Mr. Brontë, and soon Emily realizes her mistake and she and Arthur become good friends. They go on walks together, and one day Emily shows Arthur a lonely house on a hill, the one that inspired her writing her novel, Wuthering Heights. Time passes and a disillusioned Bran returns home from London. He blames all his sisters for his failure as a painter. Soon after Charlotte and Anne also return home, and at a ball at the neighboring Thornton house, Arthur is struck by Charlotte's beauty and falls in love with her. When Charlotte realizes that Emily is interested in Arthur, she becomes interested in him as well. Later, a drunken Bran disrupts the dance, and Arthur leaves the dance and takes him home. Arthur discovers that Charlotte wants to take Emily with her to Brussels to further their educations. Since he is in love with Charlotte he decides to sponsor the trip. He secretly buys a painting from Bran, and with the money from the sale the sisters are able to go to Europe. Emily hopes that Arthur will ask her to stay behind, but he has fallen in love with Charlotte and will not comply. The girls start their education at the school of Monsieur and Madame Heger, located in Brussels. Before long Charlotte admits to Emily that she has received unwelcome attentions while she was a governess and that after she returned home, Arthur kissed her. Emily is heartbroken by the news. That night, Emily dreams about the moors and a threatening black horseman. Not so long after that, Monsieur Heger takes Charlotte privately to an exhibition and kisses her. When she returns to the Hegers' house, Emily is already packing, having received a letter from Anne saying that Bran is ill. Both Charlotte and Emily immediately rush back to England, and once they are back, they both start writing their novels. Bran reads them both and then he tells Emily that they are both in love with the same man. 
Eventually the sisters learn that Arthur bought the painting that financed their trip to Europe, and Emily insists that they should repay him. One day Emily can’t find Bran so she goes out in the rain looking for him. She finds him, and shortly after that he collapses and dies. Emily’s book Wuthering Heights and Charlotte's book Jane Eyre are both published under male pseudonyms. Despite the fact that Charlotte's sells better, the famous author William Makepeace Thackeray believes that Emily's is the greater of the two. Thackeray meets Charlotte and introduces her to London society. She convinces him to take her to the poverty-stricken East End, where Arthur now works. Arthur admits to Charlotte that he loves her, but because Emily loved him, he felt he could not stay in Yorkshire. Charlotte gets a message that Emily is taken seriously ill, and she hurries home to Yorkshire. She arrives just in time to say goodbye before her sister dies from her illness. After Emily’s demise Arthur returns to woo Charlotte. Cast Ida Lupino as Emily Brontë Paul Henreid as Reverend Arthur Nicholls Olivia de Havilland as Charlotte Brontë. Despite playing the biggest part, she was only credited third due to her lawsuit against Warner Brothers. Sydney Greenstreet as William Makepeace Thackeray Nancy Coleman as Anne Brontë Arthur Kennedy as Branwell Brontë Dame May Whitty as Lady Thornton Victor Francen as Monsieur Heger Montagu Love as Reverend Bronte Ethel Griffies as Aunt Branwell Edmund Breon as Sir John Thornton Odette Myrtil as Madame Heger Doris Lloyd as Mrs. Ingraham Marie De Becker as Tabby Eily Malyon as Lady Thornton's friend at the ball Reginald Sheffield as Charles Dickens Harry Cording as Coachman (uncredited) Leo White as Waiter (uncredited) Production Devotion was filmed between November 11, 1942 and mid-February 1943, but its screening was delayed until April 5, 1946 at the Strand Theater in Manhattan, due to a lawsuit by Olivia de Havilland against Warner Brothers. De Havilland successfully sued her studio to terminate her contract without providing the studio an extra six months to make up for her time on suspension. It proved a landmark case for the industry. Reception Bosley Crowther wrote in The New York Times: “The Warners have simplified matters to an almost irreducible extreme and have found an explanation for the Brontës in Louisa May Alcott terms. They have visioned sombrous Emily, the author of Wuthering Heights, and Charlotte, the writer of Jane Eyre, as a couple of 'little women' with a gift." Despite an excellent score by Erich Wolfgang Korngold, and production values and an ending that hearkened back to the earlier film version of Wuthering Heights (1939) from another production company, the press generally put Devotion down as "a mawkish costume romance, even with identities removed. Presented as the story of the Brontës—and with the secondary characters poorly played—it is a ridiculous tax upon reason and an insult to plain intelligence." On February 17, 1947, Lux Radio Theatre broadcast a 60-minute radio adaptation of the movie starring Jane Wyman, Vincent Price and Virginia Bruce. References External links 1946 films 1940s biographical films American biographical films American films American black-and-white films Biographical films about writers Films directed by Curtis Bernhardt Films scored by Erich Wolfgang Korngold Films about siblings Films about sisters Films set in the 1830s Films set in the 1840s Warner Bros. films
41601527
https://en.wikipedia.org/wiki/Ness%20Wadia%20College%20of%20Commerce
Ness Wadia College of Commerce
Ness Wadia College of Commerce is a college affiliated with the Savitribai Phule Pune University, run by the Modern Education Society. The college is named after the Indian businessman Sir Ness Wadia. Early history The college has been well represented on University Boards of Studies, where its faculty members have provided valuable inputs to the Board members. Within the institution, teachers work on drafting syllabi for its own autonomous courses, and the heads of departments meet to ensure that the syllabus is fully covered. Background The background of the Ness Wadia College of Commerce can be traced back to June 1959, when the first pre-degree commerce class was started under the auspices of the Modern Education Society. The class was held in the Nowrosjee Wadia College of Arts and Science. In the years that followed, a need to establish a full-fledged commerce college was identified. In 1969, the commerce wing came to be established as an independent college of commerce, named after the industrialist and philanthropist Sir Ness Wadia (1873-1952). His brother Sir Cusrow Wadia also contributed to the establishment of the college. The college moved into its present premises with a building of its own in 1971. The college was inaugurated on 15 July 1969 by Dr. H. V. Pataskar, the then vice-chancellor of the University of Pune. It came to be headed by Dr. B. S. Bhanage, one of the leading scholars of his time, who later became the vice-chancellor of Shivaji University. He was followed in 1972 by Professor V. K. Nulkar, who retired in 1991. Professor Dr. H. M. Shaikh then took over charge of the college from Principal Nulkar and remained principal until February 2000, when he retired. Dr. Ms. V. S. Devdhar took over as principal in February 2000 on Principal Shaikh's retirement. After her retirement in June 2009, Dr. H. V. Deosthali served as acting principal until his own retirement in January 2010, when Dr. M. M. Andar took over. Academics The majors offered are as follows. Senior College Undergraduate Courses: B.Com, BBA, BCA, BBM-IB; Postgraduate Courses: M.Com, MCA; Postgraduate Diploma Courses: DTL, DBF, DIB; Ph.D. Courses: the college is a recognized Research Centre for Ph.D. by the University of Pune under its Faculty of Commerce. Autonomous Courses Certificate Course in Tally.ERP 9, Business English, ICICI Bank University E-Learning Course, ICICI Equity Investment Course, Certificate Course in Spoken English, Certificate Course in Business English, Certificate Course in Foreign Languages (French, German, Spanish, Chinese & Japanese), ACCA (UK) and CISI (UK). Campus & Infrastructure The college has 2 seminar halls, 2 three-storeyed buildings, a library, gymnasium, a sports ground, bank, post office, stationery shop, BSNL Internet & telephone care centre, canteen, students' hostel and common rooms for boys and girls. Library The college has a spacious library spread over 526 sq. metres with a seating capacity of approximately 200. It is open from 7.30 am to 5.30 pm on working days and is also open on holidays. It has a collection of over 50,000 books and subscribes to over 100 journals/periodicals. It subscribes to 4 e-journals along with N-List and participates in resource sharing with INFLIBNET. It has 18 computers for public access, with a 2 Mbps internet connection. The library uses the SOUL 2.0 library automation software. Gymkhana & Sports Indoor and outdoor games and sports facilities are available on the college campus. 
An orientation programme is conducted every year to make the students aware of the sports facilities available to them. The college appoints expert coaches for various sports events. In the field of sports, the students of the college have participated in sports activities organized at the inter-collegiate, inter-zonal, inter-university, state and national levels. Students have won various prizes, medals, shields and the General Championships many times. Many students have had the honour of being selected for tournaments held in different parts of the country at various levels. Since its establishment in 1969, the college has maintained an outstanding sports record. The college ground has recently been renovated with modern equipment. A range of facilities is provided for the students, including a state-of-the-art gymnasium and well-equipped grounds for many sports. Computer Laboratory The institution has an up-to-date computer facility for teaching, administration and the library. There are two computer laboratories: Computer Lab I has 20 branded Compaq PCs, and Computer Lab II has 30 branded PCs (Lenovo ThinkCentre desktops) and an IBM System x3400 M3 server. Each laboratory has four hours of battery backup, and both laboratories have LAN connectivity. Only licensed software is used in the computer laboratories. WiFi & Internet Facility The campus Wi-Fi network is managed by 2 Ruckus zone directors; 75 nodes are connected to the Wi-Fi network and 45 nodes are connected to the BSNL internet connection. This internet facility is available free to all faculty on campus, and to students on campus for a nominal payment. The computers in the administration office have LAN connectivity with the offices of the principal and vice-principal. The computers in the computer library and college library are LAN connected. All computers on the college premises run licensed software. Hostel The college has three hostel blocks for boys and girls. The hostels are equipped with modern amenities, conveniences and comforts. The rooms are well ventilated and suitably furnished. A new hostel has been constructed with an independent facility block containing separate mess and reading halls for boys and girls. Students staying in the hostel are provided with a subsidized mess facility. The hostel also provides a guest room for parents and relatives in case of emergency. Round-the-clock security is provided at the hostels, which are under CCTV surveillance. Open air theater The open air theater is a platform provided by the college for students who wish to explore their talents in extracurricular activities such as dance, drama and poetry. It helps them prepare for various competitions at the intercollegiate level. Placement and career guidance cell The placement & career guidance cell of the college takes requests from prominent firms in Pune and Mumbai to hold seminars, workshops and presentations for recruitment into their varied cadres of employment. It organizes a career fest at the end of the year, through which a substantial number of final-year Ness Wadians are placed in assorted industries. The college promises 100% placement assistance to every student, including the students of the B.Com., BBA, BBM (IB) and BCA courses and postgraduate and diploma students who have opted for a career in industry. The college placement cell covers job opportunities in the fields of banking, marketing, finance, IT, insurance, etc. Companies such as Infosys, Deloitte, eClerx, Metro and ADP 
are invited to select candidates. The placement cell also arranges training in group discussions, interview techniques, resume writing and public speaking. Awards and rankings The peer team of the National Assessment and Accreditation Council (NAAC), Bangalore, visited the college in February 2004 and accredited it with an A grade. This makes the college one of only two exclusively commerce colleges in Pune with an A grade. India Today has ranked it as one of the top five colleges in the city. Best college award & best principal award: The college was selected two years consecutively for two University of Pune awards: the 'Best College Award (2005–06)' & the 'Best Principal Award (2006–07)'. Both awards were presented by the Vice Chancellor on the University Foundation Day (10 February 2005). Alumni The college networks and collaborates with its past students through the Ex-Ness Wadians' Association, its alumni association. Former faculty members are invited to the annual Foundation Day and other prominent occasions. References External links Official website Sister Institutions Ness Wadia College Universities and colleges in Pune Commerce colleges in India 1969 establishments in Maharashtra
1245832
https://en.wikipedia.org/wiki/Zero%20Minus%20Ten
Zero Minus Ten
Zero Minus Ten, published in 1997, is the first novel by Raymond Benson featuring Ian Fleming's James Bond following John Gardner's departure in 1996. Published in the United Kingdom by Hodder & Stoughton and in America by Putnam, the book is set in Hong Kong, China, Jamaica, England and some parts of Western Australia. Benson's working title for the novel was No Tears for Hong Kong; this was eventually used as the title for the last chapter in the novel. Continuity According to Raymond Benson, as far as character continuity was concerned, he had been given free rein by Ian Fleming Publications (then Glidrose Publications) to follow or ignore other continuation authors as he saw fit. Benson took a middle-of-the-road approach to this. While Benson treats Ian Fleming's novels as strictly canon, Gardner's novels are not, though there are some aspects that he adopts. For instance, in Gardner's Win, Lose or Die Bond is promoted to Captain, but Benson's novels have Bond holding the rank of Commander again with no explanation. Some of Gardner's original recurring characters are also not present, including Ann Reilly (aka Q'ute), who, by the end of Gardner's era, had taken over Q Branch from Major Boothroyd; Benson features Major Boothroyd again, with no explanation. Some of Gardner's changes do remain—for instance, Benson's Bond continues to smoke cigarettes from H. Simmons of Burlington Arcade, which dates from Gardner's For Special Services (1982). Additionally, the Bond girls Fredericka von Grüsse (Never Send Flowers / SeaFire), Harriet Horner (Scorpius), and Easy St. John (Death is Forever) are all mentioned. Benson's other novels also retain aspects of Gardner's series, though there is just as much that he ignores. Some elements of the films also carry over into Benson's novels. M, for instance, is not Sir Miles Messervy, but the female M that was first introduced in the film GoldenEye (1995), although Gardner also introduced this character in his novelisation of that film and retained the character through his final novel COLD (1996). Bond also reverts to using his trusty Walther PPK, claiming he had switched to other guns (notably the ASP in Gardner's later novels), but felt that it was time he used it again. The follow-up to Zero Minus Ten, the novelization of Tomorrow Never Dies, has Bond switching to the Walther P99, which remains Bond's main weapon throughout Benson's novels. Later novels by Benson also attempt to insert some of the characters from the films into his story. In The Facts of Death, for instance, Admiral Hargreaves is present at a party. Plot summary As the transfer of the sovereignty of Hong Kong from the British to the People's Republic of China nears, Bond is given ten days to investigate a series of terrorist attacks that could disrupt the fragile handover and lead to the outbreak of a large-scale war. Simultaneously a nuclear bomb is test-detonated in the Australian outback. In Hong Kong, Bond suspects a British shipping magnate, Guy Thackeray, whom he catches cheating at mahjong at a casino in Macau. Later, after cheating the cheater and winning a large sum of Thackeray's money, Bond attends a press conference where Thackeray announces that he is selling his company, EurAsia Enterprises, to the Chinese; not disclosed to the public is that this is due to a long-forgotten legal document that grants the descendants of Li Wei Tam ownership of the company if the British were to ever lose control of Hong Kong. 
Because the descendants were believed to have abandoned China, General Wong claims the document on behalf of the Chinese government and forces Thackeray out. Immediately following the announcement, Thackeray is killed by a car bomb planted by an unknown assassin, the latest of a series of assassinations that claimed the lives of the entire EurAsia board of directors, as well as several employees. Through his Hong Kong contact, T.Y. Woo, Bond also investigates Li Xu Nan, the Triad head of the Dragon Wing society and the rightful descendant of Li Wei Tam. Li's identity as the Triad head is supposed to be a secret, though after Bond involves a hostess, Sunni Pei, 007 is forced to protect her from the Triads for breaking an oath of secrecy. When she is finally captured, Bond makes a deal, off the record, to go to Guangzhou and retrieve the long-forgotten document from General Wong that will give Li Xu Nan ownership of EurAsia Enterprises upon the handover at midnight on 1 July 1997. Through Li's contacts, Bond successfully travels and meets General Wong in Guangzhou under the guise of a solicitor from England. Bond's cover is later blown and T.Y. Woo, who followed Bond, is executed. Bond avenges his friend's death by killing General Wong, stealing the document, which he hand-delivers to Li Xu Nan, and rescuing Sunni Pei. With Li Xu Nan in Bond's debt, Bond uses Li's contacts to go to Australia to investigate EurAsia Enterprises and find a link between it and the nuclear blast. As it turns out Thackeray is very much alive and has been mining unreported uranium in Australia to make his own nuclear bomb, which he plans to detonate in Hong Kong at the moment the handover takes place in retaliation for the loss of his family's legacy. Returning to Hong Kong, Bond, Li Xu Nan, and a Royal Navy captain track down Thackeray's nuclear bomb and defuse it. The battle claims the lives of Li Xu Nan as well as Thackeray, who is drowned by Bond in the harbour. Major characters James Bond – British Secret Service agent sent to investigate numerous terrorist attacks in Hong Kong as the transfer of sovereignty of Hong Kong from the British to the People's Republic of China nears. M – The successor to Sir Miles Messervy and the head of the British Secret Service, she sends Bond to investigate a number of terrorist attacks in Hong Kong that could potentially disrupt the fragile handover and cause the breakout of a large-scale war. Guy Thackeray – A British shipping magnate, his company EurAsia Enterprises is being stripped from him when the handover takes place on 1 July 1997. In retaliation Thackeray uses his company to build, test, and attempt to detonate a nuclear bomb in Hong Kong, making it uninhabitable. General Wong – A general from the People's Republic of China. Although a member of the Communist Party, he is a corrupt and greedy leader who attempts to claim EurAsia Enterprises not only for China, but for himself. Li Xu Nan – The head of the Dragon Wing Triad. He is the rightful descendant of Li Wei Tam and heir to EurAsia Enterprises when the handover takes place on 1 July 1997. Sunni Pei – A "Blue Lantern" (associated non-member) of the Dragon Wing Triad, she seemingly betrays Li Xu Nan by giving up his identity at a club to Bond. Subsequently, Bond feels obliged to protect her once Li Xu Nan issues a death warrant for her. T.Y. Woo – Working for the British Secret Service station in Hong Kong, he meets Bond upon his arrival. 
He later sets up Bond in a mahjong game at a casino in Macau so that Bond can get to know Guy Thackeray. Trivia As the novel begins, Bond is in Jamaica at his newly purchased estate that he dubs "Shamelady". The estate was previously owned by a "well-known British journalist and author." The author is in fact Ian Fleming, and the estate is Goldeneye, where Fleming wrote every James Bond novel until his death in 1964. The name Shamelady was suggested to Fleming for Goldeneye in 1952 by his wife, Ann Rothermere. "Shame Lady" is another name for the plant Mimosa pudica. Publication History UK first hardback edition, Hodder & Stoughton, 3 April 1997 US first hardback edition, Putnam, May 5, 1997 UK paperback edition, Coronet Books, 5 March 1998 US paperback edition, Jove Books, August 1998 See also Outline of James Bond References External links No Tears for Hong Kong: the people and places of Zero Minus Ten, by Raymond Benson James Bond books 1997 British novels Novels by Raymond Benson Hodder & Stoughton books Novels set in Hong Kong Novels set in China Novels set in Western Australia
12749709
https://en.wikipedia.org/wiki/Really%20Simple%20Systems
Really Simple Systems
Really Simple Systems CRM is a cloud CRM provider, offering CRM systems to small and medium-sized companies. History Really Simple Systems CRM was founded in 2004 by John Paterson, former CEO of Zeus Technology and COO of Systems Union, with the public launch in January 2006. Status Really Simple Systems CRM has over 18,000 users of its hosted customer relationship management systems. Customers include the Royal Academy, the Red Cross, the NHS and IBM, as well as thousands of small and medium-sized companies. The company is headquartered in the United Kingdom with an office in Australia. The company has won the Software Satisfaction Awards for the best small business CRM system twice, in 2008 and 2010. In 2011, the company won the EuroCloud Award and the Database Marketing Award. In 2014, it won Best Cloud Application at the annual EuroCloud Awards and was a finalist at CRM Idol. In November 2017, PC Mag listed Really Simple Systems in its Best Lead Management Software article. In 2018, the company was a finalist in the Small Business Awards for Business Innovation and the DevOps Excellence Awards 2019. More recently, in October 2019, Really Simple Systems won the Computing magazine Cloud Management Solution of the Year 2019 Award. In 2020, the company was nominated as a finalist in the Cloud Computing Awards 2020 for the Best CRM Solution of the Year, the European IT & Software Excellence Awards 2020 for the SME (Cloud or SaaS) Solution of the Year, and the SME National Business Awards for Best Customer Service. In October 2006, the company's Marketing module was launched, initially offering Campaign Management. Email marketing was added in August 2012. In October 2010, the company launched its Free Edition, a freemium two-user hosted CRM system. The existing product was renamed Enterprise Edition. In May 2013, the company was accredited for entry in the UK Government's GCloud CloudStore catalogue. John Paterson, CEO of Really Simple Systems, was awarded the CRM Idol Citizen of the Year in 2014. In July 2013, Really Simple Systems launched a cross-platform version of its CRM, running seamlessly on desktops, laptops, tablets and smartphones. In January 2015, the company upgraded its email marketing to offer a built-in alternative to third-party products such as MailChimp and Constant Contact. Version 5, launched in March 2017, offered a new user interface, custom dashboards, help drawers and grid customisation. In September 2017, the company announced its compliance with the European Union's General Data Protection Regulation (GDPR) and its phased roll-out of GDPR compliance features in its email marketing module. In February 2018, it launched a set of GDPR compliance tools allowing users of its email marketing module to collect consent to digital marketing communications. In October 2020, the company launched a new version of its email marketing tool for SMEs, followed by the Advanced Marketing tool in February 2021. Location The company is headquartered in Petersfield, Hampshire, in the United Kingdom. In January 2008, the company opened its first overseas office in Sydney, Australia. 
Price plans There are four Really Simple Systems CRM pricing plans: Free (account, contact and opportunity management for two users, 100 accounts) Starter (unlimited users, 1,000 accounts, 1GB document storage, MailSync) Professional (unlimited users, 5,000 accounts, 5GB document storage, MailSync) Enterprise (unlimited users, unlimited accounts, "advanced" features, "unlimited" document storage, MailSync) Modules The product consists of three modules: Sales Marketing Customer Service & Support Technical information Really Simple Systems CRM is delivered as a software as a service application. As from Version 5 the product is written using the LAMP stack. Version 4 and previous versions used Microsoft technology. Some JavaScript is used on the client. References CRM software companies Customer relationship management software Software companies of the United Kingdom Web applications Cloud applications Email marketing software
33815646
https://en.wikipedia.org/wiki/English%20Software
English Software
The English Software Company, later shortened to English Software, was a Manchester, UK-based video game developer and publisher that operated from 1982 until 1987. Starting with its first release, the horizontally scrolling shooter Airstrike, English Software focused on the Atari 8-bit family of home computers, then later expanded onto other platforms. The company used the slogan "The power of excitement". History The company was set up in 1982 by Philip Morris, owner of Gemini Electronics computer store in Manchester, to release video games for the Atari 8-bit family. By the end of 1983, English Software was the largest producer of Atari 8-bit software in the UK and Morris closed Gemini Electronics to concentrate on English. The company continued to concentrate on the Atari 8-bit market but also released games for other home computers including Commodore 64, BBC Micro, Acorn Electron, Amstrad CPC, ZX Spectrum, Atari ST, and Amiga. Popular games include platformers Jet-Boot Jack (1983) and Henry's House (1984), racer Elektra Glide (1985), the multi-event Knight Games (1986) and shoot 'em up Leviathan (1987). English had licensing deals that saw some of their games released internationally e.g. through Dynamics Marketing in Germany and Datamost in the US. A number of English's games were sold at budget price by Mastertronic in the US, which included exclusive ports such as the Atari version of Henry's House and the IBM PC compatible version of Knight Games. Philip Morris said in 2013 he did not license the Atari version to Mastertronic UK, even though English Software had the option to do this. Games 1982 Airstrike (Atari 8-bit) Time Warp (Atari 8-bit) 1983 Airstrike II (Atari 8-bit) Batty Builders (Atari 8-bit) Bombastic! (Atari 8-bit) aka Bomb Blast It! Captain Sticky's Gold (Atari 8-bit) Jet-Boot Jack (Atari 8-bit, Commodore 64, BBC Micro, Acorn Electron, Amstrad CPC) Krazy Kopter (Atari 8-bit) Neptune's Daughter (Commodore 64, Atari 8-bit) Steeple Jack (Atari 8-bit) Xenon Raid (Atari 8-bit) 1984 The Adventures of Robin Hood (Atari 8-bit) Colossus Chess 3.0 (Atari 8-bit) Henry's House (Commodore 64), ported to the Atari 8-bit by Mastertronic Kissin' Kousins (BBC Micro, Acorn Electron, Atari 8-bit) Legend of the Knucker-Hole (Commodore 64) Spaceman Sid (BBC Micro, Acorn Electron) Stranded (Atari 8-bit, Commodore 64), ports of the BBC/Electron game first published by Superior Software 1985 Hijack! (Atari 8-bit) Topper the Copper (Commodore 64) Chop Suey (Atari 8-bit) Elektra Glide (Atari 8-bit, Commodore 64, Amstrad CPC) Mediator (Atari 8-bit, Commodore 64) 1986 Knight Games (Commodore 64, Amstrad CPC), ported to PC by Mastertronic Q-Ball (Atari ST, Amiga) 1987 Leviathan (Commodore 64, Amstrad CPC, ZX Spectrum, Atari ST, Amiga) Octapolis (Commodore 64) Knight Games 2: Space Trilogy (Commodore 64) References Atari 8-bit family Video game companies established in 1982 Video game companies disestablished in 1987 Defunct video game companies of the United Kingdom 1982 establishments in England 1987 disestablishments in England Defunct companies based in Manchester
4663461
https://en.wikipedia.org/wiki/Lifehacker
Lifehacker
Lifehacker is a weblog about life hacks and software that launched on January 31, 2005. The site was originally launched by Gawker Media and is currently owned by G/O Media. The blog posts cover a wide range of topics including Microsoft Windows, Mac, Linux programs, iOS and Android, as well as general life tips and tricks. The website has been known for its fast-paced release schedule since its inception, with content published every half hour all day long. The Lifehacker motto is "Tips, tricks, and downloads for getting things done." In addition, Lifehacker has three international editions: Lifehacker Australia, Lifehacker Japan and Lifehacker UK, which feature most posts from the U.S. edition along with extra content specific to local readers. History Gina Trapani founded Lifehacker and was the site's sole blogger until September 2005, when two associate editors, Erica Sadun and D. Keith Robinson, joined her. Other former associate editors include Wendy Boswell, Rick Broida, Jason Fitzpatrick, Kevin Purdy, and Jackson West. Former contributing editors include The How-To Geek and Tamar Weinberg. Lifehacker launched in January 2005 with an exclusive sponsorship by Sony. The highly publicized ad campaign was rumored to have cost $75,000 for three months. Since then, a variety of tech-oriented advertisers have appeared on the site. Lifehacker's frequent guest posts have included articles by Joe Anderson, Eszter Hargittai, Matt Haughey, Meg Hourihan and Jeff Jarvis. On January 16, 2009, Trapani resigned as Lifehacker's lead editor and Adam Pash assumed the position. On February 7, 2011, Lifehacker revealed a redesigned site with a cleaner layout. Then, on April 15, 2013, Lifehacker redesigned its site again to match the other newly redesigned Gawker sites, like Kotaku. On January 7, 2013, Adam Pash moved on from Lifehacker to a new start-up, and Whitson Gordon became the new editor-in-chief. On January 1, 2016, Whitson Gordon left Lifehacker to become editor-in-chief of another popular technology website, How-To Geek, replacing Lowell Heddings. In his announcement, Gordon confirmed that Alan Henry would take over as interim editor pending the interview process. Alan Henry became the new editor-in-chief on February 1, 2016. On February 3, 2017, Alan Henry left his position at Lifehacker. He has since moved on to write for the New York Times. On February 28, 2017, Melissa Kirsch became the editor-in-chief. Alice Bradley was named editor-in-chief in June 2020, but left in March 2021. Former deputy editor Jordan Calhoun succeeded her as editor-in-chief. Lifehacker was one of six websites that were purchased by Univision Communications in its acquisition of Gawker Media in August 2016. Podcast Lifehacker staff ran the Ask Lifehacker podcast, which was discontinued in April 2014. From May 2014, former Lifehacker writer Adam Dachis hosted Supercharged, a podcast with the same theme and set-up, on which Lifehacker writers Alan Henry, Whitson Gordon, Eric Ravenscraft, Thorin Klosowski and Patrick Allen frequently co-hosted. As of January 2017, Lifehacker has a weekly podcast called The Upgrade. It is hosted by Jordan Calhoun and features experts "helping you improve your life, one week at a time". Staff According to a letter from the editor on Lifehacker, Alan Henry stepped down as editor-in-chief and has since joined The New York Times. Gizmodo Media announced Melissa Kirsch as his replacement in February 2017. Alice Bradley was named editor-in-chief in June 2020. 
Jordan Calhoun became editor-in-chief in March 2021. Accolades TIME named Lifehacker one of the "50 Coolest Web Sites" in 2005, one of the "25 Sites We Can't Live Without" in 2006, and one of the "25 Best Blogs" in 2009. CNET named Lifehacker in its "Blog 100" in October 2005. Wired presented Gina Trapani with a Rave Award in 2006 for Best Blog. In the 2007 Weblog Awards, Lifehacker was awarded Best Group Weblog. PC Magazine named Lifehacker in "Our Favorite 100 Blogs" in October 2007. US Mensa named Lifehacker as one of its top 50 sites in 2010. References Further reading External links FAQ International Australia Japan Gawker Media American blogs Internet properties established in 2005 Former Univision Communications subsidiaries
15322092
https://en.wikipedia.org/wiki/Opticks%20%28software%29
Opticks (software)
Opticks is a remote sensing application that supports imagery, video (motion imagery), synthetic aperture radar (SAR), multi-spectral, hyper-spectral, and other types of remote sensing data. Opticks supports processing remote sensing video in the same manner as it supports imagery, which differentiates it from other remote sensing applications. Opticks was initially developed by Ball Aerospace & Technologies Corp. and other organizations for the United States Intelligence Community. Ball Aerospace open-sourced Opticks hoping to increase the demand for remote sensing data and broaden the features available in existing remote sensing software. The Opticks software and its extensions are developed by over twenty different organizations, and over two hundred users are registered at http://opticks.org. Planned enhancements include adding the ability to ingest and visualize lidar data, as well as a three-dimensional (3-D) visualization capability. Opticks can also be used as a remote sensing software development framework. Developers can extend Opticks functionality using its plug-in architecture and public application programming interface (API). Opticks is open source, licensed under the GNU Lesser General Public License (LGPL) 2.1. Opticks was brought into the open source community in December 2007 and has a large developer community; a history of the project is maintained on the Opticks website. Desktop Application Opticks can be used as a standard desktop application. The vanilla software can be used to read and write imagery in several formats and for some basic data analysis as described in the Opticks Feature Tour. The Opticks community provides installation packages for Microsoft Windows, Solaris 10 SPARC, and some distributions of Linux. Software Framework Opticks can also be used as a software development framework. The Opticks community provides and supports a public SDK which includes a documented API as well as several extension tutorials; an illustrative sketch of the plug-in pattern appears below. The Opticks website hosts a variety of extensions, some of which are developed and maintained by the same development team as Opticks. Community Opticks has active mailing lists and an IRC channel, and its issue tracker and source code are publicly available. Opticks has applied for incubation with the OSGeo Foundation. Opticks has participated in both the Google Summer of Code and ESA Summer of Code in Space programs. Google Summer of Code GSoC 2010 Opticks participated in GSoC 2010 with two students. The titles of the accepted projects were "Adding Image Stack Support and New Algorithm Plugin for Opticks" and "Speckle removal and edge detection tool for SAR image". Extensions for both projects are available from the Opticks website. GSoC 2011 Opticks participated in GSoC 2011 under the OSGeo organization with three students. The titles of the accepted projects were "Photography processing tools for Opticks", "Development of a ship detection and classification toolkit for SAR imagery in Opticks", and "Astronomical processing tools for Opticks". Extensions for all three projects are available from the Opticks website. GSoC 2012 An ideas page for GSoC 2012 is maintained on the Opticks website. European Space Agency Summer of Code in Space ESA SOCIS 2011 Opticks participated in ESA SOCIS in 2011. The project page can be found on the Opticks website. ESA SOCIS 2012 An ideas page for ESA SOCIS 2012 is maintained on the Opticks website. 
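The plug-in mechanism described above can be illustrated with a minimal C++ sketch. This is not the actual Opticks SDK: the class names (PlugIn, BandAveragePlugIn) and the name()/execute() interface are illustrative assumptions, intended only to show the general shape of a host application discovering and dispatching extensions through an abstract interface.

// Illustrative sketch only; not the real Opticks SDK API.
#include <iostream>
#include <memory>
#include <string>
#include <vector>

// Abstract interface a plug-in host could use to discover and run extensions.
class PlugIn {
public:
    virtual ~PlugIn() = default;
    virtual std::string name() const = 0; // human-readable plug-in name
    virtual bool execute() = 0;           // a real host would pass raster data here
};

// Example extension a developer might register with the host application.
class BandAveragePlugIn : public PlugIn {
public:
    std::string name() const override { return "Band Average (illustrative)"; }
    bool execute() override {
        std::cout << "Averaging spectral bands (placeholder work)\n";
        return true;
    }
};

// Stand-in for the host's plug-in manager: register each plug-in, then run it.
int main() {
    std::vector<std::unique_ptr<PlugIn>> registry;
    registry.push_back(std::make_unique<BandAveragePlugIn>());
    for (const auto& plugin : registry) {
        std::cout << "Running plug-in: " << plugin->name() << '\n';
        plugin->execute();
    }
    return 0;
}

In a real framework such as Opticks, the host would hand each plug-in access to loaded raster data and views rather than having it print to the console; only the registration-and-dispatch structure is sketched here.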
See also Remote sensing Synthetic aperture radar Hyperspectral Multispectral Imagery analysis Lidar Radar Open-source software References External links Remote sensing software Science software for Windows Solaris software Free software programmed in C++ 2001 software Synthetic aperture radar
17885119
https://en.wikipedia.org/wiki/List%20of%20Internet%20pioneers
List of Internet pioneers
Instead of a single "inventor", the Internet was developed by many people over many years. The following are some Internet pioneers who contributed to its early and ongoing development. These include early theoretical foundations, specifying original protocols, and expansion beyond a research tool to wide deployment. The pioneers Claude Shannon Claude Shannon (1916–2001), called the "father of modern information theory", published "A Mathematical Theory of Communication" in 1948. His paper gave a formal way of studying communication channels. It established fundamental limits on the efficiency of communication over noisy channels, and presented the challenge of finding families of codes to achieve capacity. Vannevar Bush Vannevar Bush (1890–1974) helped to establish a partnership between the U.S. military, university research, and independent think tanks. He was appointed Chairman of the National Defense Research Committee in 1940 by President Franklin D. Roosevelt, appointed Director of the Office of Scientific Research and Development in 1941, and from 1946 to 1947 he served as chairman of the Joint Research and Development Board. Out of this would come DARPA, which in turn would lead to the ARPANET Project. His July 1945 Atlantic Monthly article "As We May Think" proposed Memex, a theoretical proto-hypertext computer system in which an individual compresses and stores all of their books, records, and communications, which is then mechanized so that it may be consulted with exceeding speed and flexibility. J. C. R. Licklider Joseph Carl Robnett Licklider (1915–1990) was a faculty member of the Massachusetts Institute of Technology and a researcher at Bolt, Beranek and Newman. He developed the idea of a universal network at the Information Processing Techniques Office (IPTO) of the United States Department of Defense Advanced Research Projects Agency (ARPA). He headed IPTO from 1962 to 1963, and again from 1974 to 1975. His 1960 paper "Man-Computer Symbiosis" envisions that mutually interdependent, "living together", tightly coupled human brains and computing machines would prove to complement each other's strengths. Paul Baran Paul Baran (1926–2011) developed the field of redundant distributed networks while conducting research at the RAND Corporation, starting in 1959, when he began investigating the development of survivable communication networks. This led to a series of papers titled "On Distributed Communications" that in 1964 described a detailed architecture for a distributed, survivable, packet-switched communications network. In 2012, Baran was inducted into the Internet Hall of Fame by the Internet Society. Donald Davies Donald Davies (1924–2000) independently invented and named the concept of packet switching in 1965 at the United Kingdom's National Physical Laboratory (NPL). In the same year, he proposed a national data network based on packet switching in the UK. After the proposal was not taken up nationally, during 1966 he headed a team which produced a design for a local area network to serve the needs of NPL and prove the feasibility of packet switching. He and his team were the first to describe the use of an "Interface computer" to act as a router in 1966; one of the first to use the term 'protocol' in a data-communication context in 1967; and also carried out simulation work on packet networks, including datagram networks. 
In 1967, a written version of the proposal entitled NPL Data Network was presented by a member of his team (Roger Scantlebury) at the inaugural Symposium on Operating Systems Principles. Scantlebury suggested packet switching for use in the ARPANET; Larry Roberts incorporated it into the design and sought input from Paul Baran. Davies gave the first public presentation on packet switching in 1968 and built the local area NPL network in England, influencing other research in the UK and Europe. The NPL network and the ARPANET were the first two networks in the world to use packet switching and NPL was the first to use high-speed links. In 2012, Davies was inducted into the Internet Hall of Fame by the Internet Society. Charles M. Herzfeld Charles M. Herzfeld (1925–2017) was an American scientist and scientific manager, best known for his time as Director of DARPA, during which, among other things, he personally took the decision to authorize the creation of the ARPANET, the predecessor of the Internet. In 2012, Herzfeld was inducted into the Internet Hall of Fame by the Internet Society. Bob Taylor Robert W. Taylor (10 February 1932 – 13 April 2017) was director of ARPA's Information Processing Techniques Office from 1965 through 1969, where he convinced ARPA to fund a computer network. From 1970 to 1983, he managed the Computer Science Laboratory of the Xerox Palo Alto Research Center (PARC), where technologies such as Ethernet and the Xerox Alto were developed. He was the founder and manager of Digital Equipment Corporation's Systems Research Center until 1996. The 1968 paper, "The Computer as a Communication Device", that he wrote together with J.C.R. Licklider starts out: "In a few years, men will be able to communicate more effectively through a machine than face to face." And while their vision would take more than "a few years", the paper lays out the future of what the Internet would eventually become. Larry Roberts Lawrence G. "Larry" Roberts (1937–2018) was an American computer scientist. After earning his PhD in electrical engineering from MIT in 1963, Roberts continued to work at MIT's Lincoln Laboratory where in 1965 he connected Lincoln Lab's TX-2 computer to the SDC Q-32 computer in Santa Monica. In 1967, he became a program manager in the ARPA Information Processing Techniques Office (IPTO), where he led the development of the ARPANET, the first wide area packet switching network. Roberts applied Donald Davies' concepts of packet switching for the ARPANET, and also sought input from Paul Baran. He asked Leonard Kleinrock to measure and model the network's performance. After Robert Taylor left ARPA in 1969, Roberts became director of the IPTO. In 1973, he left ARPA to commercialize the nascent technology in the form of Telenet, the first data network utility, and served as its CEO from 1973 to 1980. In 2012, Roberts was inducted into the Internet Hall of Fame by the Internet Society. Leonard Kleinrock Leonard Kleinrock (born 1934) published his first paper on queueing theory, "Information Flow in Large Communication Nets", in 1961. After completing his Ph.D. thesis in 1962, in which he applied queuing theory to message switching, he moved to UCLA. In 1969, under his supervision, a team at UCLA connected a computer to an Interface Message Processor, becoming the first node on ARPANET. Building on his earlier work on queueing theory, Kleinrock carried out theoretical work to model the performance of packet-switched networks, which underpinned the development of the ARPANET. 
His theoretical work on hierarchical routing in the late 1970s with student Farouk Kamoun remains critical to the operation of the Internet today. In 2012, Kleinrock was inducted into the Internet Hall of Fame by the Internet Society. Bob Kahn Robert E. "Bob" Kahn (born 1938) is an American engineer and computer scientist, who in 1974, along with Vint Cerf, invented the TCP/IP protocols. After earning a Ph.D. degree from Princeton University in 1964, he worked for AT&T Bell Laboratories, as an assistant professor at MIT, and at Bolt, Beranek and Newman (BBN), where he helped develop the ARPANET IMP. In 1972, he began work at the Information Processing Techniques Office (IPTO) within ARPA. In 1986 he left ARPA to found the Corporation for National Research Initiatives (CNRI), a nonprofit organization providing leadership and funding for research and development of the National Information Infrastructure. Douglas Engelbart Douglas Engelbart (1925–2013) was an early researcher at the Stanford Research Institute. His Augmentation Research Center laboratory became the second node on the ARPANET in October 1969, and SRI became the early Network Information Center, which evolved into the domain name registry. Engelbart was a committed, vocal proponent of the development and use of computers and computer networks to help cope with the world's increasingly urgent and complex problems. He is best known for his work on the challenges of human–computer interaction, resulting in the invention of the computer mouse, and the development of hypertext, networked computers, and precursors to graphical user interfaces. Elizabeth Feinler Elizabeth J. "Jake" Feinler (born 1931) was a staff member of Doug Engelbart's Augmentation Research Center at SRI and PI for the Network Information Center (NIC) for the ARPANET and the Defense Data Network (DDN) from 1972 until 1989. In 2012, Feinler was inducted into the Internet Hall of Fame by the Internet Society. Louis Pouzin Louis Pouzin (born 1931) is a French computer scientist. He built the first implementation of a datagram packet communications network, CYCLADES, that demonstrated the feasibility of internetworking, which he called a "catenet". Concepts from his work were used by Robert Kahn, Vinton Cerf, and others in the development of TCP/IP. In 1997, Pouzin received the ACM SIGCOMM Award for "pioneering work on connectionless packet communication". Louis Pouzin was named a Chevalier of the Legion of Honor by the French government on 19 March 2003. In 2012, Pouzin was inducted into the Internet Hall of Fame by the Internet Society. John Klensin John Klensin's involvement with Internet began in 1969, when he worked on the File Transfer Protocol. Klensin was involved in the early procedural and definitional work for DNS administration and top-level domain definitions and was part of the committee that worked out the transition of DNS-related responsibilities between USC-ISI and what became ICANN. His career includes 30 years as a principal research scientist at MIT, a stint as INFOODS Project Coordinator for the United Nations University, Distinguished Engineering Fellow at MCI WorldCom, and Internet Architecture Vice President at AT&T; he is now an independent consultant. In 1992 Randy Bush and John Klensin created the Network Startup Resource Center, helping dozens of countries to establish connections with FidoNet, UseNet, and when possible the Internet. In 2003, he received an International Committee for Information Technology Standards Merit Award. 
In 2007, he was inducted as a Fellow of the Association for Computing Machinery for contributions to networking standards and Internet applications. In 2012, Klensin was inducted into the Internet Hall of Fame by the Internet Society. Vint Cerf Vinton G. "Vint" Cerf (born 1943) is an American computer scientist. He is recognized as one of "the fathers of the Internet", sharing this title with Bob Kahn. He earned his Ph.D. from UCLA in 1972. At UCLA he worked in Professor Leonard Kleinrock's networking group that connected the first two nodes of the ARPANET and contributed to the ARPANET host-to-host protocol. Cerf was an assistant professor at Stanford University from 1972–1976, where he conducted research on packet network interconnection protocols and co-designed the DoD TCP/IP protocol suite with Bob Kahn. He was a program manager for the Advanced Research Projects Agency (ARPA) from 1976 to 1982. Cerf was instrumental in the formation of both the Internet Society and Internet Corporation for Assigned Names and Numbers (ICANN), serving as founding president of the Internet Society from 1992–1995 and in 1999 as Chairman of the Board and as ICANN Chairman from 2000 to 2007. His many awards include the National Medal of Technology, the Turing Award, the Presidential Medal of Freedom, and membership in the National Academy of Engineering and the Internet Society's Internet Hall of Fame. Yogen Dalal Yogen K. Dalal, also known as Yogin Dalal, is an Indian electrical engineer and computer scientist. He was an ARPANET pioneer, and a key contributor to the development of internetworking protocols. He co-authored the first TCP specification, with Vint Cerf and Carl Sunshine between 1973 and 1974. It was published as (Specification of Internet Transmission Control Program) in December 1974. It first used the term internet as a shorthand for internetworking, and later RFCs repeated this use. Dalal later proposed splitting TCP into the TCP and IP protocols between 1976 and 1977, leading to the development of TCP/IP. He also worked at Xerox PARC, where he contributed to the development of the Ethernet, the Xerox Network Systems (XNS), and the Xerox Star. After receiving a B.Tech in Electrical Engineering at the Indian Institute of Technology Bombay, he went to the United States to study for a master's degree at Stanford University in 1972 and then a PhD in 1973. His interest in data communication as a graduate student led him to working with new professor Vint Cerf as a teaching assistant in 1972, and then as a research assistant while studying for his PhD. In Summer 1973, while Cerf and Bob Kahn were attempting to formulate an internetworking protocol, Dalal joined their research team to assist them on developing what eventually became TCP. After co-authoring the first TCP protocol with Cerf and Sunshine in 1974, Dalal received his PhD in Electrical Engineering and Computer Science, and remained active in the development of TCP/IP at Stanford for several years. Between 1976 and 1977, Dalal proposed separating TCP's routing and transmission control functions into two discrete layers, which led to the splitting of TCP into the TCP and IP protocols. Due to his experience in communication protocols such as TCP, several key researchers were greatly interested in recruiting him, including Bob Kahn's ARPANET team at DARPA, Ray Tomlinson at BBN, Bob Taylor's team at Xerox PARC, and Steve Crocker at the Information Sciences Institute (ISI). 
In early 1977, Dalal joined Robert Metcalfe's team at Xerox PARC, where he worked on the development of the Xerox Network Systems. He also worked on the 10Mbps Ethernet Specification at Xerox PARC, along with DEC and Intel, leading to the IEEE 802.3 LAN standard. He later left Xerox, and became a founding member of the startup tech companies Claris and Metaphor Computer Systems in the early 1980s. He later became a managing partner of Mayfield, and joined the Board of Directors at several tech companies including Narus and Nuance. In 2005, he was recognized by Stanford as one of the pioneers of the Internet. Peter Kirstein Peter T. Kirstein (1933–2020) was a British computer scientist and a leader in the international development of the Internet. In 1973, he established one of the first two international nodes of the ARPANET. In 1978 he co-authored "Issues in packet-network interconnection" with Vint Cerf, one of the early technical papers on the internet concept. His research group at University College London adopted TCP/IP in 1982, a year ahead of ARPANET, and played a significant role in the very earliest experimental Internet work. Starting in 1983 he chaired the International Collaboration Board, which involved six NATO countries, served on the Networking Panel of the NATO Science Committee (serving as chair in 2001), and on Advisory Committees for the Australian Research Council, the Canadian Department of Communications, the German GMD, and the Indian Education and Research Network (ERNET) Project. He leads the Silk Project, which provides satellite-based Internet access to the Newly Independent States in the Southern Caucasus and Central Asia. In 2012, Kirstein was inducted into the Internet Hall of Fame by the Internet Society. Steve Crocker Steve Crocker (born 1944 in Pasadena, California) has worked in the ARPANET and Internet communities since their inception. As a UCLA graduate student in the 1960s, he helped create the ARPANET protocols which were the foundation for today's Internet. He created the Request for Comments (RFC) series, authoring the very first RFC and many more. He was instrumental in creating the ARPA "Network Working Group", the forerunner of the modern Internet Engineering Task Force. Crocker has been a program manager at the Advanced Research Projects Agency (ARPA), a senior researcher at USC's Information Sciences Institute, founder and director of the Computer Science Laboratory at The Aerospace Corporation and a vice president at Trusted Information Systems. In 1994, Crocker was one of the founders and chief technology officer of CyberCash, Inc. He has also been an IETF security area director, a member of the Internet Architecture Board, chair of the Internet Corporation for Assigned Names and Numbers (ICANN) Security and Stability Advisory Committee, a board member of the Internet Society and numerous other Internet-related volunteer positions. Crocker is chair of the board of ICANN. For this work, Crocker was awarded the 2002 IEEE Internet Award "for leadership in creation of key elements in open evolution of Internet protocols". In 2012, Crocker was inducted into the Internet Hall of Fame by the Internet Society. Jon Postel Jon Postel (1943–1998) was a researcher at the Information Sciences Institute. He was editor of all early Internet standards specifications, such as the RFC series. His beard and sandals made him "the most recognizable archetype of an Internet pioneer". 
The Internet Society's Postel Award is named in his honor, as is the Postel Center at Information Sciences Institute. His obituary was written by Vint Cerf and published as RFC 2468 in remembrance of Postel and his work. In 2012, Postel was inducted into the Internet Hall of Fame by the Internet Society. Joyce K. Reynolds Joyce K. Reynolds (died 2015) was an American computer scientist and served as part of the editorial team of the RFC series from 1987 to 2006. She performed the IANA function with Jon Postel until this was transferred to ICANN, then worked with ICANN in this role until 2001, while remaining an employee of ISI. As Area Director of the User Services area, she was a member of the Internet Engineering Steering Group of the IETF from 1990 to March 1998. Together with Bob Braden, she received the 2006 Postel Award in recognition of her services to the Internet. She is mentioned, along with a brief biography, in RFC 1336, Who's Who in the Internet (1992). Danny Cohen Danny Cohen led several projects on real-time interactive applications over the ARPANet and the Internet starting in 1973. After serving on the computer science faculty at Harvard University (1969–1973) and Caltech (1976), he joined the Information Sciences Institute (ISI) at University of Southern California (USC). At ISI (1973–1993) he started many network related projects including, one to allow interactive, real-time speech over the ARPANet, packet-voice, packet-video, and Internet Concepts. In 1981 he adapted his visual flight simulator to run over the ARPANet, the first application of packet switching networks to real-time applications. In 1993, he worked on Distributed Interactive Simulation through several projects funded by United States Department of Defense. He is probably best known for his 1980 paper "On Holy Wars and a Plea for Peace" which adopted the terminology of endianness for computing. Cohen was elected to the National Academy of Engineering in 2006 for contributions to the advanced design, graphics, and real-time network protocols of computer systems and as an IEEE Fellow in 2010 for contributions to protocols for packet switching in real-time applications. In 1993 he received a United States Air Force Meritorious Civilian Service Award. And in 2012, Cohen was inducted into the Internet Hall of Fame by the Internet Society. David J. Farber Starting in the 1980s Dave Farber (born 1934) helped conceive and organize the major American research networks CSNET, NSFNET, and the National Research and Education Network (NREN). He helped create the NSF/DARPA-funded Gigabit Network Test bed Initiative and served as the Chairman of the Gigabit Test bed Coordinating Committee. He also served as Chief Technologist at the US Federal Communications Commission (2000–2001) and is a founding editor of ICANNWatch. Farber is an IEEE Fellow, ACM Fellow, recipient of the 1995 SIGCOMM Award for vision and breadth of contributions to and inspiration of others in computer networks, distributed computing, and network infrastructure development, and the 1996 John Scott Award for seminal contributions to the field of computer networks and distributed computer systems. He served on the board of directors of the Electronic Frontier Foundation, the Electronic Privacy Information Center advisory board, the Board of Trustees of the Internet Society, and as a member of the Presidential Advisory Committee on High Performance Computing and Communications, Information Technology and Next Generation Internet. 
On 3 August 2013, Farber was inducted into the Pioneers Circle of the Internet Hall of Fame for his key role in many systems that converged into today's Internet. Paul Mockapetris Paul V. Mockapetris (born 1948), while working with Jon Postel at the Information Sciences Institute (ISI) in 1983, proposed the Domain Name System (DNS) architecture. He was IETF chair from 1994 to 1996. Mockapetris received the 1997 John C. Dvorak Telecommunications Excellence Award "Personal Achievement - Network Engineering" for DNS design and implementation, the 2003 IEEE Internet Award for his contributions to DNS, and the Distinguished Alumnus award from the University of California, Irvine. In May 2005, he received the ACM Sigcomm lifetime award. In 2012, Mockapetris was inducted into the Internet Hall of Fame by the Internet Society. David Clark David D. Clark (born 1944) is an American computer scientist. During the period of tremendous growth and expansion of the Internet from 1981 to 1989, he acted as chief protocol architect in the development of the Internet, and chaired the Internet Activities Board, which later became the Internet Architecture Board. He is currently a senior research scientist at the MIT Computer Science and Artificial Intelligence Laboratory. In 1990 Clark was awarded the ACM SIGCOMM Award "in recognition of his major contributions to Internet protocol and architecture." In 1998 he received the IEEE Richard W. Hamming Medal "for leadership and major contributions to the architecture of the Internet as a universal information medium". In 2001 he was inducted as a Fellow of the Association for Computing Machinery for "his preeminent role in the development of computer communication and the Internet, including architecture, protocols, security, and telecommunications policy". In 2001, he was awarded the Telluride Tech Festival Award of Technology in Telluride, Colorado, and in 2011 the Lifetime Achievement Award from the Oxford Internet Institute, University of Oxford "in recognition of his intellectual and institutional contributions to the advance of the Internet." Susan Estrada Susan Estrada founded CERFnet, one of the original regional IP networks, in 1988. Through her leadership and collaboration with PSINet and UUnet, Estrada helped form the interconnection enabling the first commercial Internet traffic via the Commercial Internet Exchange. She wrote Connecting to the Internet in 1993 and she was inducted to the Internet Hall of Fame in 2014. She is on the Board of Trustees of the Internet Society. Dave Mills David L. Mills (born 1938) is an American computer engineer. Mills earned his PhD in Computer and Communication Sciences from the University of Michigan in 1971. While at Michigan he worked on the ARPA sponsored Conversational Use of Computers (CONCOMP) project and developed DEC PDP-8 based hardware and software to allow terminals to be connected over phone lines to an IBM System/360 mainframe computer. Mills was the chairman of the Gateway Algorithms and Data Structures Task Force (GADS) and the first chairman of the Internet Architecture Task Force. He invented the Network Time Protocol (1981), the DEC LSI-11 based fuzzball router that was used for the 56 kbit/s NSFNET (1985), the Exterior Gateway Protocol (1984), and inspired the author of ping (1983). He is an emeritus professor at the University of Delaware. 
In 1999 he was inducted as a Fellow of the Association for Computing Machinery, and in 2002, as a Fellow of the Institute of Electrical and Electronics Engineers (IEEE). In 2008, Mills was elected to the National Academy of Engineering (NAE). In 2013 he received the IEEE Internet Award "For significant leadership and sustained contributions in the research, development, standardization, and deployment of quality time synchronization capabilities for the Internet." Radia Perlman Radia Joy Perlman (born 1951) is the software designer and network engineer who developed the spanning-tree protocol which is fundamental to the operation of network bridges. She also played an important role in the development of link-state routing protocols such as IS-IS (which had a significant influence on OSPF). In 2010 she received the ACM SIGCOMM Award "for her fundamental contributions to the Internet routing and bridging protocols that we all use and take for granted every day." Dennis M. Jennings Dennis M. Jennings is an Irish physicist, academic, Internet pioneer, and venture capitalist. In 1984, the National Science Foundation (NSF) began construction of several regional supercomputing centers to provide very high-speed computing resources for the US research community. In 1985 NSF hired Jennings to lead the establishment of the National Science Foundation Network (NSFNET) to link five of the super-computing centers to enable sharing of resources and information. Jennings made three critical decisions that shaped the subsequent development of NSFNET: that it would be a general-purpose research network, not limited to connection of the supercomputers; it would act as the backbone for connection of regional networks at each supercomputing site; and it would use the ARPANET's TCP/IP protocols. Jennings was also actively involved in the start-up of research networks in Europe (European Academic Research Network, EARN - President; EBONE - Board member) and Ireland (HEAnet - initial proposal and later Board member). He chaired the Board and General Assembly of the Council of European National Top Level Domain Registries (CENTR) from 1999 to early 2001 and was actively involved in the start-up of the Internet Corporation for Assigned Names and Numbers (ICANN). He was a member of the ICANN Board from 2007 to 2010, serving as Vice-Chair in 2009-2010. In April 2014 Jennings was inducted into the Internet Hall of Fame. Steve Wolff Stephen "Steve" Wolff participated in the development of ARPANET while working for the U.S. Army. In 1986 he became Division Director for Networking and Communications Research and Infrastructure at the National Science Foundation (NSF) where he managed the development of NSFNET. He also conceived the Gigabit Testbed, a joint NSF-DARPA project to prove the feasibility of IP networking at gigabit speeds. His work at NSF transformed the fledgling internet from a narrowly focused U.S. government project into the modern Internet with scholarly and commercial interest for the entire world. In 1994 he left NSF to join Cisco as a technical manager in Corporate Consulting Engineering. In 2011 he became the CTO at Internet2. In 2002 the Internet Society recognized Wolff with its Postel Award. When presenting the award, Internet Society (ISOC) President and CEO Lynn St. Amour said "…Steve helped transform the Internet from an activity that served the specific goals of the research community to a worldwide enterprise which has energized scholarship and commerce throughout the world." 
The Internet Society also recognized Wolff in 1994 for his courage and leadership in advancing the Internet. Sally Floyd Sally Floyd (1950–2019) was an American engineer recognized for her extensive contributions to Internet architecture and her work in identifying practical ways to control and stabilize Internet congestion. She invented the random early detection active queue management scheme, which has been implemented in nearly all commercially available routers, and devised the now-common method of adding delay jitter to message timers to avoid synchronization collisions. Floyd, with Vern Paxson, in 1997 identified the lack of knowledge of network topology as the major obstacle in understanding how the Internet works. This paper, "Why We Don't Know How to Simulate the Internet", was re-published as "Difficulties in Simulating the Internet" in 2001 and won the IEEE Communication Society's William R. Bennett Prize Paper Award. Floyd was also a co-author on the standard for TCP Selective acknowledgement (SACK), Explicit Congestion Notification (ECN), the Datagram Congestion Control Protocol (DCCP) and TCP Friendly Rate Control (TFRC). She received the IEEE Internet Award in 2005 and the ACM SIGCOMM Award in 2007 for her contributions to congestion control. She has been involved in the Internet Advisory Board, and, as of 2007, was one of the top-ten most cited researchers in computer science. Van Jacobson Van Jacobson is an American computer scientist, best known for his work on TCP/IP network performance and scaling. His work redesigning TCP/IP's flow control algorithms (Jacobson's algorithm) to better handle congestion is said to have saved the Internet from collapsing in the late 1980s and early 1990s. He is also known for the TCP/IP Header Compression protocol described in RFC 1144: Compressing TCP/IP Headers for Low-Speed Serial Links, popularly known as Van Jacobson TCP/IP Header Compression. He is co-author of several widely used network diagnostic tools, including traceroute, tcpdump, and pathchar. He was a leader in the development of the multicast backbone (MBone) and the multimedia tools vic, vat, and wb. For his work, Jacobson received the 2001 ACM SIGCOMM Award for Lifetime Achievement, the 2003 IEEE Koji Kobayashi Computers and Communications Award, and was elected to the National Academy of Engineering in 2006. In 2012, Jacobson was inducted into the Internet Hall of Fame by the Internet Society. Ted Nelson Theodor Holm "Ted" Nelson (born 1937) is an American sociologist and philosopher. In 1960 he founded Project Xanadu with the goal of creating a computer network with a simple user interface. Project Xanadu was to be a worldwide electronic publishing system using hypertext linking that would have created a universal library. In 1963 he coined the terms "hypertext" and "hypermedia". In 1974 he wrote and published two books in one, Computer Lib/Dream Machines, that has been hailed as "the most important book in the history of new media." His grand ideas from the 1960s and 1970s never became completed projects. Tim Berners-Lee Timothy John "Tim" Berners-Lee (born 1955) is a British physicist and computer scientist. In 1980, while working at CERN, he proposed a project using hypertext to facilitate sharing and updating information among researchers. While there, he built a prototype system named ENQUIRE. Back at CERN in 1989 he conceived of and, in 1990, together with Robert Cailliau, created the first client and server implementations for what became the World Wide Web. 
Berners-Lee is the director of the World Wide Web Consortium (W3C), a standards organization which oversees and encourages the Web's continued development, co-Director of the Web Science Trust, and founder of the World Wide Web Foundation. In 1994, Berners-Lee became one of only six members of the World Wide Web Hall of Fame. In 2004, Berners-Lee was knighted by Queen Elizabeth II for his pioneering work. In April 2009, he was elected a foreign associate of the United States National Academy of Sciences, based in Washington, D.C. In 2012, Berners-Lee was inducted into the Internet Hall of Fame by the Internet Society. Robert Cailliau Robert Cailliau (born 1947) is a Belgian informatics engineer and computer scientist who, working with Tim Berners-Lee and Nicola Pellow at CERN, developed the World Wide Web. In 2012 he was inducted into the Internet Hall of Fame by the Internet Society. Nicola Pellow Nicola Pellow, one of the nineteen members of the WWW Project at CERN working with Tim Berners-Lee, is recognized for developing the first cross-platform internet browser, Line Mode Browser, which displayed web pages on dumb terminals and was released in May 1991. She joined the project in November 1990, while an undergraduate math student enrolled in a sandwich course at Leicester Polytechnic (now De Montfort University). She left CERN at the end of August 1991, but returned after graduating in 1992, and worked with Robert Cailliau on MacWWW, the first web browser for the classic Mac OS. Mark P. McCahill Mark P. McCahill (born 1956) is an American programmer and systems architect. While working at the University of Minnesota he led the development of the Gopher protocol (1991), the effective predecessor of the World Wide Web, and contributed to the development and popularization of a number of other Internet technologies from the 1980s. Marc Andreessen Marc L. Andreessen (born 1971) is an American software engineer, entrepreneur, and investor. Working with Eric Bina while at NCSA, he co-authored Mosaic, the first widely used web browser. He is also co-founder of Netscape Communications Corporation. Eric Bina Eric J. Bina (born 1964) is an American computer programmer. In 1993, together with Marc Andreessen, he authored the first version of Mosaic while working at NCSA at the University of Illinois at Urbana–Champaign. Mosaic is famed as the first killer application that popularized the Internet. He is also a co-founder of Netscape Communications Corporation. Birth of the Internet plaque A plaque commemorating the "Birth of the Internet" was dedicated at a conference on the history and future of the internet on 28 July 2005 and is displayed at the Gates Computer Science Building, Stanford University. The text printed and embossed in black into the brushed bronze surface of the plaque reads: BIRTH OF THE INTERNET THE ARCHITECTURE OF THE INTERNET AND THE DESIGN OF THE CORE NETWORKING PROTOCOL TCP (WHICH LATER BECAME TCP/IP) WERE CONCEIVED BY VINTON G. CERF AND ROBERT E. KAHN DURING 1973 WHILE CERF WAS AT STANFORD'S DIGITAL SYSTEMS LABORATORY AND KAHN WAS AT ARPA (LATER DARPA). IN THE SUMMER OF 1976, CERF LEFT STANFORD TO MANAGE THE PROGRAM WITH KAHN AT ARPA. THEIR WORK BECAME KNOWN IN SEPTEMBER 1973 AT A NETWORKING CONFERENCE IN ENGLAND. CERF AND KAHN'S SEMINAL PAPER WAS PUBLISHED IN MAY 1974. CERF, YOGEN K. DALAL, AND CARL SUNSHINE WROTE THE FIRST FULL TCP SPECIFICATION IN DECEMBER 1974. 
WITH THE SUPPORT OF DARPA, EARLY IMPLEMENTATIONS OF TCP (AND IP LATER) WERE TESTED BY BOLT BERANEK AND NEWMAN (BBN), STANFORD, AND UNIVERSITY COLLEGE LONDON DURING 1975. BBN BUILT THE FIRST INTERNET GATEWAY, NOW KNOWN AS A ROUTER, TO LINK NETWORKS TOGETHER. IN SUBSEQUENT YEARS, RESEARCHERS AT MIT AND USC-ISI, AMONG MANY OTHERS, PLAYED KEY ROLES IN THE DEVELOPMENT OF THE SET OF INTERNET PROTOCOLS. KEY STANFORD RESEARCH ASSOCIATES AND FOREIGN VISITORS VINTON CERF DAG BELSNES JAMES MATHIS RONALD CRANE JUNIOR BOB METCALFE YOGEN DALAL DARRYL RUBIN JUDITH ESTRIN JOHN SHOCH RICHARD KARP CARL SUNSHINE GERARD LE LANN KUNINOBU TANNO DARPA ROBERT KAHN COLLABORATING GROUPS BOLT BERANEK AND NEWMAN WILLIAM PLUMMER • GINNY STRAZISAR • RAY TOMLINSON MIT NOEL CHIAPPA • DAVID CLARK • STEPHEN KENT • DAVID P. REED NDRE YNGVAR LUNDH • PAAL SPILLING UNIVERSITY COLLEGE LONDON FRANK DEIGNAN • MARTINE GALLAND • PETER HIGGINSON ANDREW HINCHLEY • PETER KIRSTEIN • ADRIAN STOKES USC-ISI ROBERT BRADEN • DANNY COHEN • DANIEL LYNCH • JON POSTEL ULTIMATELY, THOUSANDS IF NOT TENS TO HUNDREDS OF THOUSANDS HAVE CONTRIBUTED THEIR EXPERTISE TO THE EVOLUTION OF THE INTERNET. DEDICATED 28 July 2005 See also History of the Internet History of hypertext History of the World Wide Web IEEE Internet Award Internet Hall of Fame References External links Internet Hall of Fame, established by the Internet Society in April 2012. RFC 1336: Who's Who in the Internet: Biographies of Internet Activities Board (IAB), Internet Engineering Steering Group (IESG), and the Internet Research Steering Group (IRSG) of the Internet Research Task Force (IRTF) Members, G. Malkin, IETF, May 1992. "Past IESG Members and IETF Chairs", IETF web site "A Brief History of the Internet Advisory / Activities / Architecture Board" from the IAB web site includes historical lists of IAB Members, IAB Chairs, IAB Ex-Officio and Liaison Members (IETF Chairs), IRTF Chairs, RFC Editors, and much more historical information. "Internet Pioneers", web pages at ibiblio.org, a collaboration of the School of Information and Library Science and the School of Journalism and Mass Communication at the University of North Carolina at Chapel Hill. "Pioneers Gallery", from the Who Is Who in the Internet World (WiWiW) web site. "The Greatest Internet Pioneers You Never Heard Of: The Story of Erwise and Four Finns Who Showed the Way to the Web Browser", Juha-Pekka Tikka, 3 March 2009, Xconomy web page. Oral histories Focuses on Kahn's role in the development of computer networking from 1967 through the early 1980s. Beginning with his work at Bolt Beranek and Newman (BBN), Kahn discusses his involvement as the ARPANET proposal was being written and then implemented, and his role in the public demonstration of the ARPANET. The interview continues into Kahn's involvement with networking when he moves to IPTO in 1972, where he was responsible for the administrative and technical evolution of the ARPANET, including programs in packet radio, the development of a new network protocol (TCP/IP), and the switch to TCP/IP to connect multiple networks. Cerf describes his involvement with the ARPA network, and his relationships with Bolt Beranek and Newman, Robert Kahn, Lawrence Roberts, and the Network Working Group. Baran describes his work at RAND, and discusses his interaction with the group at ARPA who were responsible for the later development of the ARPANET. Kleinrock discusses his work on the ARPANET. 
The interview focuses on Robert's work at the Information Processing Techniques Office (IPTO) at ARPA including discussion of ARPA and IPTO support of research in computer science, computer networks, and artificial intelligence, the ARPANET, the involvement of universities with ARPA and IPTO, J. C. R. Licklider, Ivan Sutherland, Steve Lukasik, Wesley Clark, as well as the development of computing at the Massachusetts Institute of Technology and Lincoln Laboratory. Focuses on McCahill's work at the University of Minnesota where he led the team that created Gopher, the popular client/server software for organizing and sharing information on the Internet as well as his work on development of Pop Mail, Gopher VR, Forms Nirvana, the Electronic Grants Management System, and the University of Minnesota Portal. History of the Internet Lists of computer scientists People in information technology
60658607
https://en.wikipedia.org/wiki/Gympie%20State%20High%20School
Gympie State High School
Gympie State High School is a coeducational public secondary school located in Gympie in the Wide Bay–Burnett region in Queensland, Australia. The school has a total enrolment of more than 900 students per year, with an official count of 922 students in August 2020. Gympie State High School retains its original motto, Ecollegio Metallisque Aurum, meaning "Gold from the school as well as the mines", and its official colours of maroon and gold. History Gympie State High School opened in January 1912 as the first secondary school in Queensland. The school was originally located on Lawrence Street, where Gympie Central State School is now located. The first building on Cootharaba Road was constructed in 1917 and became one of Gympie's most iconic buildings. On 18 May 1955, the original building was destroyed by fire as a result of a science experiment. Later that year, Blocks B, C and D were built on the same site. Blocks E, J and M were later built in the 1960s, whereas Blocks A, H and Hamilton Hall were constructed during the 1970s, followed by the construction of Blocks N, O and the school pool during the 1980s. In the 1990s, K Block was reconstructed from a 1930s building and the present administration building was constructed. The last major development at Gympie State High School occurred in 2014 in preparation for the introduction of Year 7 to the school. On 21 September 2018, some of the buildings at Gympie State High School were entered into the Queensland Heritage Register. Curriculum Excellence programs Excellence programs available to students at Gympie State High School include: Gympie Music School of Excellence On-line College of Coding Rural Industries School of Excellence (RISE) program Scholarship Program Specialised School of Excellence - Mathematics & Science Sport Academy (Rugby League, Futsal and Volleyball) Learning Enhancement Program The Learning Enhancement Program aims to engage Year 8 students in a variety of innovative high-interest programs. Students may choose from one of Gympie State High School's Excellence Programs or rotate through the four 10-week courses Dance/Drama, Essential Business, Hospitality and Visual Art. English English is a compulsory core subject across the Year 7–10 curriculum. In Year 10, each student undertakes one of the subjects of General English, Essential English and Literature. English subjects available to students in Years 11 and 12 include the General subjects of English and Literature, and the Applied subject of Essential English. Mathematics Mathematics is a compulsory core subject across the Year 7–10 curriculum. In Year 10, each student undertakes one of the subjects of General Mathematics, Essential Mathematics and Mathematical Methods. Mathematics subjects available to students in Years 11 and 12 include the General subjects of General Mathematics, Mathematical Methods and Specialist Mathematics, and the Applied subject of Essential Mathematics. Humanities The Humanities subjects of History and Geography are studied as compulsory core subjects across the Year 7–9 curriculum. In Year 9, the semester-long elective subject of Business Essentials becomes available. Year 10 Humanities subjects include Business Essentials, Citizenship Education – Civics & Law, Geography, History and Japanese. Humanities subjects available to students in Years 11 and 12 include the General subjects of Accounting, Ancient History, Business, Geography, Legal Studies and Modern History, and the short course of Career Education. 
Science Science is a compulsory core subject across the Year 7–9 curriculum. Year 10 science subjects include Agricultural Practices, Aquatic Practices and Rural Operations, and the Preparatory Science subjects of Agriculture, Biology, Chemistry and Physics. Science subjects available to students in Years 11 and 12 include the General subjects of Agricultural Science, Biology, Chemistry and Physics, and the Applied subjects of Agricultural Practices, Aquatic Practices and Science in Practice. Health & Physical Education Health & Physical Education is a compulsory core subject across the Year 7–9 curriculum. Year 10 Health & Physical Education subjects include Physical Education and Sport & Recreation. Health & Physical Education subjects available to students in Years 11 and 12 include the General subject of Physical Education and the Applied subject of Sport & Recreation. Languages Japanese is studied as a compulsory core subject in Years 7 and 8, and from Year 9, it is studied as an elective subject. The Arts The Arts subject of Visual art is studied as a rotational subject for one term in Year 7 and Music is studied as a rotational subject for one term in Year 8. Arts subjects available to students in Years 9 and 10 include Drama, Music and Visual Art, whereas Visual Arts in Practice is available to students in Year 10 only. Arts subjects available to students in Years 11 and 12 include the General subjects of Drama, Music, Music Extension (Musicology), Music Extension (Performance), Visual Art and Film, Television & New Media, and the Applied subjects of Drama in Practice, Music in Practice and Visual Arts in Practice. Technologies Home Economics Home Economics is studied as a rotational subject for one term in Year 8. Year 9 Home Economics subjects include the semester-long subjects of Food Technology and Textile Technology, whereas Year 10 Home Economics subjects include Food – Cooking & Hospitality, Food Studies & Nutrition and Textile Studies & Early Childhood. Home Economics subjects available to students in Years 11 and 12 include the General subject of Food & Nutrition and the Applied subjects of Early Childhood Studies and Hospitality Practices. Industrial Technology & Design Industrial Technology & Design is studied as a rotational subject for one term in Year 8. Years 9 and 10 Industrial Technology & Design subjects include Industrial Technology A, Industrial Technology B and Graphics. Industrial Technology & Design subjects available to students in Years 11 and 12 include the General subject of Design and the Applied subjects of Industrial Graphics Skills and Industrial Technology Skills. Information Technology The Information Technology subject of Digital Technology is studied as a rotational subject for one term in Years 7 and 8, whereas the Information Technology subject of Science, Technology, Engineering & Mathematics (STEM) is studied as a rotational subject for one term in Year 7 only. In Year 9, Digital Technology is studied as a semester-long elective subject, whereas both the subjects of Digital Technology and STEM are studied as year-long electives in Year 10. The Applied subject of Information & Communication Technology is available to students in Years 11 and 12. Support services Learning Partnerships Program The Learning Partnerships Program is an educational program that aims to support students with a disability to develop socially and emotionally in order to become independent thinkers, communicators and problem solvers. 
Years 10–12 Learning Partnerships Program subjects include Practical English, Practical Mathematics and Workplace Practices (Year 12 only), and the Alternate Learning Program subjects of the ASDAN program, Café and Horticulture. Ka'bi Place The Ka'bi Place Indigenous Support Team aims to establish and maintain good working relationships with students, parents, carers and the community. Programs operated by the Ka'bi Place Indigenous Support Team include: Crossing Cultures Programs Cultural Camp Deadly Young Persons Program Ka'bi Homework Club Murri Futures Youth Employment Program Vocational Education & Training Vocational Education & Training (VET) courses available to Year 10 students include: Certificate I in Business (BSB10115) Certificate I in Construction (CPC10111) Certificate I in Information, Digital Media & Technology (ICT10115) Certificate II in Creative Industries (CUA20215) VET courses available to students in Years 11 and 12 include: Certificate I in Furnishing (MSF10113) Certificate I in Hospitality (SIT10216) Certificate I in Information, Digital Media & Technology (ICT10115) Certificate I in Manufacturing (Pathways) (MSM10216) Certificate II in Business (BSB20115) Certificate II in Engineering Pathways (MEM20413) Certificate II in Furnishing Pathways (MSF20313) Certificate II in Hospitality (SIT20316) Certificate II in Information, Digital Media & Technology (ICT20115) Certificate II in Rural Operations (AHC21216) Certificate III in Fitness (SIS30315) Certificate IV in Digital & Interactive Games (ICT40915) References External links Public high schools in Queensland Gympie Schools in Wide Bay–Burnett Educational institutions established in 1912 1912 establishments in Australia Queensland Heritage Register
6731084
https://en.wikipedia.org/wiki/Enterprise%20information%20security%20architecture
Enterprise information security architecture
Enterprise information security architecture (EISA) is a part of enterprise architecture focusing on information security throughout the enterprise. The name implies a difference that may not exist between small/medium-sized businesses and larger organizations.
Overview
Enterprise information security architecture (EISA) is the practice of applying a comprehensive and rigorous method for describing a current and/or future structure and behavior for an organization's security processes, information security systems, personnel, and organizational sub-units so that they align with the organization's core goals and strategic direction. Although often associated strictly with information security technology, it relates more broadly to the security practice of business optimization in that it addresses business security architecture, performance management, and security process architecture as well. Enterprise information security architecture is becoming common practice within financial institutions around the globe. The primary purpose of creating an enterprise information security architecture is to ensure that business strategy and IT security are aligned. As such, enterprise information security architecture allows traceability from the business strategy down to the underlying technology.
Enterprise information security architecture topics
Positioning
Enterprise information security architecture was first formally positioned by Gartner in their whitepaper called "Incorporating Security into the Enterprise Architecture Process", published on 24 January 2006. Since this publication, security architecture has moved from being a silo-based architecture to an enterprise-focused solution that incorporates business, information and technology. The picture below represents a one-dimensional view of enterprise architecture as a service-oriented architecture. It also reflects the new addition to the enterprise architecture family called "Security". Business architecture, information architecture and technology architecture used to be called BIT for short. Now, with security as part of the architecture family, it has become BITS. Security architectural change imperatives now include things like:
Business roadmaps
Legislative and legal requirements
Technology roadmaps
Industry trends
Risk trends
Visionaries
Goals
Provide structure, coherence and cohesiveness.
Must enable business-to-security alignment.
Defined top-down beginning with business strategy.
Ensure that all models and implementations can be traced back to the business strategy, specific business requirements and key principles.
Provide abstraction so that complicating factors, such as geography and technology religion, can be removed and reinstated at different levels of detail only when required.
Establish a common "language" for information security within the organization.
Methodology
The practice of enterprise information security architecture involves developing an architecture security framework to describe a series of "current", "intermediate" and "target" reference architectures and applying them to align programs of change. These frameworks detail the organizations, roles, entities and relationships that exist or should exist to perform a set of business processes. This framework will provide a rigorous taxonomy and ontology that clearly identifies what processes a business performs and detailed information about how those processes are executed and secured.
The end product is a set of artifacts that describe in varying degrees of detail exactly what and how a business operates and what security controls are required. These artifacts are often graphical. Given these descriptions, whose levels of detail will vary according to affordability and other practical considerations, decision makers are provided the means to make informed decisions about where to invest resources, where to realign organizational goals and processes, and what policies and procedures will support core missions or business functions. A strong enterprise information security architecture process helps to answer basic questions like: What is the information security risk posture of the organization? Is the current architecture supporting and adding value to the security of the organization? How might a security architecture be modified so that it adds more value to the organization? Based on what we know about what the organization wants to accomplish in the future, will the current security architecture support or hinder that? Implementing enterprise information security architecture generally starts with documenting the organization's strategy and other necessary details such as where and how it operates. The process then cascades down to documenting discrete core competencies, business processes, and how the organization interacts with itself and with external parties such as customers, suppliers, and government entities. Having documented the organization's strategy and structure, the architecture process then flows down into the discrete information technology components such as: Organization charts, activities, and process flows of how the IT Organization operates Organization cycles, periods and timing Suppliers of technology hardware, software, and services Applications and software inventories and diagrams Interfaces between applications - that is: events, messages and data flows Intranet, Extranet, Internet, eCommerce, EDI links with parties within and outside of the organization Data classifications, Databases and supporting data models Hardware, platforms, hosting: servers, network components and security devices and where they are kept Local and wide area networks, Internet connectivity diagrams Wherever possible, all of the above should be related explicitly to the organization's strategy, goals, and operations. The enterprise information security architecture will document the current state of the technical security components listed above, as well as an ideal-world desired future state (Reference Architecture) and finally a "Target" future state which is the result of engineering tradeoffs and compromises vs. the ideal. Essentially the result is a nested and interrelated set of models, usually managed and maintained with specialised software available on the market. Such exhaustive mapping of IT dependencies has notable overlaps with both metadata in the general IT sense, and with the ITIL concept of the configuration management database. Maintaining the accuracy of such data can be a significant challenge. Along with the models and diagrams goes a set of best practices aimed at securing adaptability, scalability, manageability etc. These systems engineering best practices are not unique to enterprise information security architecture but are essential to its success nonetheless. They involve such things as componentization, asynchronous communication between major components, standardization of key identifiers and so on. 
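To make the gap-analysis idea concrete, the following toy Python sketch models a minimal inventory of components, each with a documented current state and target state, and lists the gaps between them. The component names, states and owners are invented for illustration; real EISA tooling captures far richer metadata and relationships.

```python
from dataclasses import dataclass

@dataclass
class Component:
    """One architecture element tracked in the inventory."""
    name: str
    current_state: str   # what is deployed today
    target_state: str    # what the target architecture requires
    owner: str

def gap_report(components):
    """Return the components whose current state diverges from the target."""
    return [c for c in components if c.current_state != c.target_state]

inventory = [
    Component("External web tier", "TLS 1.0", "TLS 1.3", "Platform team"),
    Component("Admin access", "password only", "MFA enforced", "IT security"),
    Component("Branch WAN links", "IPsec", "IPsec", "Network team"),
]

for c in gap_report(inventory):
    print(f"GAP: {c.name}: {c.current_state} -> {c.target_state} ({c.owner})")
```

Even such a small model illustrates why the inventory must be kept current: the gap report is only as trustworthy as the recorded current state.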
Successful application of enterprise information security architecture requires appropriate positioning in the organization. The analogy of city-planning is often invoked in this connection, and is instructive. An intermediate outcome of an architecture process is a comprehensive inventory of business security strategy, business security processes, organizational charts, technical security inventories, system and interface diagrams, and network topologies, and the explicit relationships between them. The inventories and diagrams are merely tools that support decision making. But this is not sufficient. It must be a living process. The organization must design and implement a process that ensures continual movement from the current state to the future state. The future state will generally be a combination of one or more Closing gaps that are present between the current organization strategy and the ability of the IT security dimensions to support it Closing gaps that are present between the desired future organization strategy and the ability of the security dimensions to support it Necessary upgrades and replacements that must be made to the IT security architecture based on supplier viability, age and performance of hardware and software, capacity issues, known or anticipated regulatory requirements, and other issues not driven explicitly by the organization's functional management. On a regular basis, the current state and future state are redefined to account for evolution of the architecture, changes in organizational strategy, and purely external factors such as changes in technology and customer/vendor/government requirements, and changes to both internal and external threat landscapes over time. High-level security architecture framework Enterprise information security architecture frameworks is only a subset of enterprise architecture frameworks. If we had to simplify the conceptual abstraction of enterprise information security architecture within a generic framework, the picture on the right would be acceptable as a high-level conceptual security architecture framework. Other open enterprise architecture frameworks are: SABSA framework and methodology The U.S. Department of Defense (DoD) Architecture Framework (DoDAF) Extended Enterprise Architecture Framework (E2AF) from the Institute For Enterprise Architecture Developments. Federal Enterprise Architecture of the United States Government (FEA) Capgemini's Integrated Architecture Framework The UK Ministry of Defence (MOD) Architecture Framework (MODAF) NIH Enterprise Architecture Framework Open Security Architecture Information Assurance Enterprise Architectural Framework (IAEAF) Service-Oriented Modeling Framework (SOMF) The Open Group Architecture Framework (TOGAF) Zachman Framework Enterprise Cybersecurity (Book) Relationship to other IT disciplines Enterprise information security architecture is a key component of the information security technology governance process at any organization of significant size. More and more companies are implementing a formal enterprise security architecture process to support the governance and management of IT. However, as noted in the opening paragraph of this article it ideally relates more broadly to the practice of business optimization in that it addresses business security architecture, performance management and process security architecture as well. 
Enterprise Information Security Architecture is also related to IT security portfolio management and metadata in the enterprise IT sense. See also Enterprise architecture Enterprise architecture planning Information security Information assurance References Further reading Carbone, J. A. (2004). IT architecture toolkit. Enterprise computing series. Upper Saddle River, NJ, Prentice Hall PTR. Cook, M. A. (1996). Building enterprise information architectures : reengineering information systems. Hewlett-Packard professional books. Upper Saddle River, NJ, Prentice Hall. Fowler, M. (2003). Patterns of enterprise application architecture. The Addison-Wesley signature series. Boston, Addison-Wesley. SABSA integration with TOGAF. Groot, R., M. Smits and H. Kuipers (2005). "A Method to Redesign the IS Portfolios in Large Organisations", Proceedings of the 38th Annual Hawaii International Conference on System Sciences (HICSS'05). Track 8, p. 223a. IEEE. Steven Spewak and S. C. Hill (1993). Enterprise architecture planning : developing a blueprint for data, applications, and technology. Boston, QED Pub. Group. Woody, Aaron (2013). Enterprise Security: A Data-Centric Approach to Securing the Enterprise. Birmingham, UK. Packt Publishing Ltd. Enterprise architecture Computer security
31786743
https://en.wikipedia.org/wiki/Moose%20%28analysis%29
Moose (analysis)
Moose is a free and open source platform for software and data analysis built in Pharo. Moose offers multiple services ranging from importing and parsing data, to modeling, to measuring, querying, mining, and to building interactive and visual analysis tools. Moose was born in a research context, and it is currently supported by several research groups throughout the world. It is increasingly being adopted in industry. Key Features The philosophy of Moose is to enable the analyst to produce new dedicated analysis tools, and to customize the flow of analysis. While Moose is mainly used in software analysis, it is built to work for any data. To achieve this it offers multiple mechanisms and frameworks: Importing and meta-meta-modeling is achieved through a generic meta-described engine. Any meta-model is described in terms of a self-described meta-meta-model, and based on this description, the import/export is provided through the MSE file format. Through this file format, Moose can exchange data with external tools. For parsing, Moose provides a novel framework that makes use of several parsing technologies (like parsing expression grammar) and that provides a fluent interface for easy construction. Software analysis is specifically supported through the FAMIX family of meta-models. The core of FAMIX is a language independent meta-model that is similar to UML but it is focused on analysis. Furthermore, it provides rich interface for querying models. Visualization is supported through two different engines: one for expressing graph visualizations, and one for expressing charts. They both provide a fluent interface for easy construction. Browsing is an important principle in Moose, and it is supported in multiple ways as well. A generic interface enables the analyst to browse any model. To be able to specify specific browsers, Moose offers a generic engine that eases the specification through a specific fluent interface. History 1996-1999: First infrastructure, meta-model Moose was born at the University of Bern in the context of FAMOOS, a European project that took place between Sept. 1996-Sept. 1999. FAMOOS focussed on methods and tools to analyse and detect design problems in object-oriented legacy systems, and to migrate these systems towards more flexible architectures. The main results of FAMOOS are summarized in the FAMOOS Handbook and in the Object-Oriented Reengineering Patterns book. In the beginning of the FAMOOS project Moose was merely the implementation of a language independent meta-model known as FAMIX. The parsing of C/C++ code was done through Sniff+, and the produced models were imported via the CDIF standard. Initially, Moose provided for a hard-coded importer and served as basis for simple visualization and program fact extractor (1997). Then it started to be used to compute metrics. Later on, as the meta-model evolved, it became apparent that the import/export service should be orthogonal to the meta-model and most important that the environment should support meta-model extension. As a consequence, a first, extremely simple meta-meta-model was implemented, which, at the time, could represent entities and relationships (1998). 1999-2003: Interchange formats, visualizations With the introduction of the XMI standard, a first Meta-Object Facility meta-model was implemented and CDIF meta-models were transformed into MOF meta-models for the XMI model generation. However, MOF was not used as the underlying Moose meta-meta-model. 
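To give a feel for the model-and-query workflow described above, here is a deliberately simplified sketch in Python: a handful of class and method entities, loosely in the spirit of a language-independent FAMIX-style model, are loaded into a model and queried for a size metric. This is only an illustration of the idea; Moose itself runs in Pharo, and its actual FAMIX meta-model and query interface differ substantially.

```python
from dataclasses import dataclass, field

@dataclass
class Method:
    name: str
    lines_of_code: int

@dataclass
class Class:
    name: str
    methods: list = field(default_factory=list)

    def total_loc(self) -> int:
        """Sum of the lines of code of all methods in this class."""
        return sum(m.lines_of_code for m in self.methods)

# A tiny "imported" model, standing in for what a parser would produce.
model = [
    Class("Parser", [Method("parse", 120), Method("tokenize", 45)]),
    Class("Logger", [Method("log", 8)]),
]

# A metric query: classes whose methods together exceed 100 lines.
suspects = [c.name for c in model if c.total_loc() > 100]
print(suspects)   # ['Parser']
```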
In parallel, the visualization development led to the extension of the set of metrics computed. At the time, CodeCrawler was the flagship application of Moose, and for a significant period CodeCrawler influenced the architecture of Moose (1999). For example, the metrics had to be computed for all entities before the views could be generated. The interest in researching the evolution of systems led to the implementation of the meta-model repository. As such, the first application was the Evolution Matrix (2001). Later on, more research was invested in understanding the evolution of systems, resulting in the development of Van (2002). Because the evolution analysis requires large amounts of data to be manipulated, it was not feasible anymore to manipulate all the model information all the time. Also, the computation of the metrics beforehand for all entities in the model was another bottleneck. As a consequence, several services were implemented: partial loading of the models, lazy computation of the properties, and caching of results. It became apparent that the meta-descriptions are a powerful way of separating the data representation (i.e., the meta-model) from the different techniques to manipulate this data. Consequently, the team started to implement a MOF-like meta-meta-model (2002) and replaced the original one. It offers an architecture similar to that of the Eclipse Modeling Framework (EMF). 2003-2007: Generic UI, custom interchange format, scriptable visualizations As an application of the meta-description, the development of a generic GUI was started to provide basic services such as navigation, querying, and introspection (2003). An important role in the caching mechanism and in the querying is played by the notion of a group as a first-class entity: every query or selection in Moose yields a group, and any group can be manipulated in the Browser (2003). To ease tool development, a plug-in mechanism was needed. Thus, based on meta-description, each tool can register itself to the menu attached to each entity in the meta-model. This simple mechanism allows these tools to complement each other without imposing a hard-coded dependency between them. The combination of menus and groups meant that complex analyses could be broken down into multiple steps, each of which may make use of a different tool. Combining and composing tools thereby becomes natural and transparent. In 2006, Meta was created as a self-described implementation of EMOF (Essential Meta Object Facility) and it replaced the meta-meta-model of Moose. Together with Meta, the new MSE file format was created. Because Meta is self described, Moose is now able to load both externals models and meta-models using the same mechanism. In the same time, XMI and CDIF support was dropped. To provide support for fast prototyping of interactive visual tools, Mondrian was built. Mondrian uses Smalltalk as an underlying scripting language and adds support for graph based visualizations. Mondrian received 2nd prize at the ESUG 2006 Innovation Awards. In 2007, a new engine, called EyeSee, grew up around Moose to allow for scripting Excel-like charts. EyeSee received 2nd prize at the ESUG 2007 Innovation Awards. 2008-2011: FAMIX 3.0, scriptable browsers and the move to Pharo In 2008, Meta was replaced by Fame that implements a new meta-meta-model (FM3) that is simpler and more flexible than EMOF. The effort for building Fame is correlated with the development of FAMIX 3.0, a family of meta-models for software analysis. 
Starting with the end of 2008, a large effort was started to move Moose from VisualWorks to Pharo, an open source Smalltalk. The first alpha version under Pharo was released in August 2009. During this time Glamour was developed, an engine for scripting interactive browsers. Glamour received the 3rd prize at the ESUG 2009 Innovation Awards. PetitParser was added to the Moose Suite. PetitParser is a novel engine for creating dedicated parsers. References External links Moose homepage. Previous Moose homepage. The Moose Book is an open book describing the Moose platform. Humane assessment is a novel approach to software and data assessment enabled by Moose. Glamorous Toolkit is a moldable development environment that evolved over some ideas of Moose and share some of its originators. Smalltalk programming language family Data analysis software Infographics
41362743
https://en.wikipedia.org/wiki/Mustafa%20Jabbar
Mustafa Jabbar
Mustafa Jabbar (born 12 August 1949) is a Bangladeshi businessman, technology entrepreneur and the current Minister of Post and Telecommunication in the Government of Bangladesh. He has also served as the president of the Bangladesh Association of Software and Information Services (BASIS). He is best known for the creation of the Bijoy Bengali keyboard, developed in 1988, which was the most widely used Bengali input method until the release of the Unicode-based Avro Keyboard. He served as the president of Bangladesh Computer Samity, the national ICT organisation of Bangladesh, for four consecutive terms. He is a champion of the Bangla Bhasha Procholon Ain, 1987 (বাংলা ভাষা প্রচলন আইন, ১৯৮৭; Bengali Language Implementation Act, 1987), and has been praised for promoting the Bengali language in digital media.
Early life and education
Jabbar's ancestral home is in Krishnapur village, Khaliajuri Upazila, in Netrokona District. He was born in Ashuganj Upazila in Brahmanbaria District to Abdul Jabbar Talukdar and Rabeya Khatun. Jabbar passed his HSC examination from Dhaka College. In 1968, he enrolled in the Department of Bangla at the University of Dhaka, where he completed his BA in 1972 and his MA in journalism in 1974.
Career
Jabbar started his career in 1972 as a journalist for the Daily Ganakantha, working there until it was shut down in 1975. In 1973, he was elected publicity secretary of the Dhaka Union of Journalists. He later became involved in the travel agency, printing and publication businesses and served as the general secretary of the Association of Travel Agents of Bangladesh (ATAB). Jabbar is a founding member of the Bangladesh Computer Samity (BCS) and its four-time president. He has also anchored television shows on IT. Jabbar founded Ananda Computers, best known for developing the Bangla keyboard Bijoy. He heads the Bangladesh Association of Software and Information Services, the trade body of IT entrepreneurs in Bangladesh. Jabbar started a venture involving computers and IT in 1987 and launched the Bijoy Bangla keyboard and software on 16 December 1988. His first novel, Nokshotrer Onger, was published in 2005. His second novel, Suborne Shekorh, was published in the fortnightly Tarokalok's Eid issue of 2006. He has developed Bijoy Library, library management software used by libraries in Bangladesh including the British Council, and Bijoy Shishu Shiksha, software for pre-school children. He also developed Prathomik Computer Shiksha, based on textbooks published by the National Curriculum and Textbook Board (NCTB). He has established schools in Bangladesh, including the computer-based Ananda Multimedia School and Bijoy Digital School, and is involved in writing textbooks on computing in Bangla and English. Jabbar has sat on several government committees on ICT affairs, including the prime minister–formed Digital Bangladesh Taskforce, and is a member of the Bangladesh Copyright Board. He was appointed Minister of Posts, Telecommunications and Information Technology of the Government of Bangladesh on 3 January 2018.
Activism
Jabbar was a member of the Mujib Bahini (Bangladesh Liberation Force) in 1971 and participated in the Liberation War of Bangladesh. He was involved in the press freedom movement and was actively associated with the Dhaka Union of Journalists, serving as its elected organising secretary.
Reception
Mustafa Jabbar has been praised for popularizing the use of the Bengali language in computers and other digital media.
A champion of Bangla Bhasha Procholon Ain, 1987, Jabbar opines that until and unless Bengali is well-established as a language of verdict in the Supreme Court of Bangladesh and as the language of research in Bangladeshi universities, Bengali language movement cannot be called finished. Mustafa Jabbar is also known for going after popular opensource Bengali keyboard software Avro. Later, they settled after Avro removed the alleged layout from their software. The whole affair was widely discussed in Bengali blog forums with most people supporting Avro. Jabbar's recent decisions as minister of Post and Telecommunication, such as censorship, social site monitoring and blocking of PUBG, Reddit along with popular web services like Bitly URL shortener, issuu.com, medium.com, cloud file sharing website mediafire.com, Change.org, Russian social media website VK, Opera (web browser) and Internet Archive created a large group of critics and angered the young generation of Bangladesh. His ministry also blocked a webpage containing complaints against the government's student wing- BCL without any reason. References 1949 births Living people People from Khaliajuri Upazila Dhaka College alumni University of Dhaka alumni Bangladeshi businesspeople Posts, Telecommunications and Information Technology ministers People from Ashuganj Upazila
39188650
https://en.wikipedia.org/wiki/United%20States%20v.%20Andrus
United States v. Andrus
United States v. Andrus, 483 F.3d 711 (10th. Cir. 2007), decided on April 25, 2007, was a case heard in the Tenth Circuit of the United States Court of Appeals. The court held that defendant's father had the apparent authority to consent to search of defendant's computer. Facts of the Case Ray Andrus was a fifty-one-year-old man living with his parents in Leawood, Kansas who was the target of a child pornography investigation. Police visited the Andrus home, where the door was answered by Dr. Andrus, Ray's ninety-one-year-old father. Ray was not home at the time of the investigation. Dr. Andrus signed a written consent form, authorizing the officers to search the house and any computers in it. Ray Andrus has his own bedroom in the house, which included a computer in plain sight. Immediately, the officers brought in forensic equipment that by-passed any computer safeguards and allowed them to search the contents of the machine. The officers did not verify whether the computer was password protected or whether Dr. Andrus had access to the system. Apparent Authority Typically, the Fourth Amendment prohibits searches of property without a warrant. However, several exceptions exist that circumvent the need for a warrant. One such exception is voluntary consent. Voluntary consent can be given either by the individual under investigation's Actual Authority or by a third party's Apparent authority. Apparent Authority is means to actual consent for a search if an officer reasonably believes, even if erroneously, that the third party has the authority to consent. To determine whether a third party is in the position to give Apparent Consent, the court considers whether that third party has either: Mutual use of the property in question via joint access, or Control for most purposes. Here, the court found apparent authority in this case where: The officers knew that Dr. Andrus owned and lived in the house in which the computer was located. The officers knew that the house has Internet services, paid for by Dr. Andrus. The officers knew that the screen name connected to the Internet bill had been utilized to access child pornography. The officers knew that Ray Andrus lived in the room with the computer, but they also knew that Dr. Andrus had unfettered access to that room. The computer was in plain view in the room and appeared accessible to anyone. Dr. Andrus did not indicate that he did not own the computer. Holding The court held that the search was proper because the officers had received consent from a third party with Apparent Authority. They reasoned: "[v]iewed under the requisite totality-of-the-circumstances analysis, the facts known to the officers at the time the computer search commenced created an objectively reasonable perception that Dr. Andrus was, at least, one user of the computer. That objectively reasonable belief would have been enough to give Dr. Andrus apparent authority to consent to a search." Dissent The dissent found the officers' belief of apparent authority unreasonable. The search was unreasonable because: The police attached its searching software to the computer before checking whether the computer was password protected In the more than 10 minutes that it took to set up the searching software, the police did not bother to ask Dr. Andrus whether the computer being searched belonged to him. 
The dissent would have required the officers to at least check for a password before proceeding with the search, and, if a password was discovered, to determine the ownership of the device before taking further action. References External links 2007 in United States case law United States Court of Appeals for the Tenth Circuit cases United States Fourth Amendment case law Child pornography law Johnson County, Kansas
25651423
https://en.wikipedia.org/wiki/Allan%20G.%20Farman
Allan G. Farman
Allan George Farman (born 1949 in Birmingham, United Kingdom) is a Professor of Radiology and Imaging Science, Department of Surgical and Hospital Dentistry, The University of Louisville School of Dentistry, and also serves both as Adjunct Professor of Anatomical Sciences and Neurobiology and as Clinical Professor of Diagnostic Radiology of the School of Medicine in the same institution. Biography Farman attended George Dixon Grammar School after which time he entered Birmingham University. In 2006 he was awarded the University of Louisville President's Medal for Distinguished Service. He is Honored Guest Professor to Peking University, Beijing, China. Farman graduated from dental school at the Birmingham University, United Kingdom in 1971. He holds doctorates in oral and maxillofacial pathology (PhD) and in oral and maxillofacial radiology (DSc), both from the University of Stellenbosch, South Africa. He has graduate degrees in education (EdS, higher education administration) and in business administration (MBA with distinction) from the University of Louisville, Kentucky. He is a diplomate of the American Board of Oral and Maxillofacial Radiology (since 1982) and was in 1996 awarded diplomate status of the Japanese Board of Oral and Maxillofacial Radiology. He has specialist practicing licenses from the Kentucky Board of Dentistry in oral and maxillofacial radiology, from the General Dental Council, UK, in dental and maxillofacial radiology, and from the Health Professions Council of South Africa in oral and maxillofacial pathology. Farman has been a member of the American Academy of Oral and Maxilofacial Radiology since 1980 and served as newsletter editor (1984–1989), and as Councilor for Communications and Councilor for Educational Affairs. In 1988, he was elected scientific editor for a term that concluded in 1995 with the addition of "Oral Radiology" to the title of this journal and the publication of a special issue to commemorate the 100th birthday of the discovery of the x-ray. He was reappointed AAOMR scientific editor in 2005-09. Farman became AAOMR president elect in 2007 and served as President 2009-11. Farman is founder and chair of the International Congress on Computed Maxillofacial Imaging (CMI) that has its 17th Annual Congress in conjunction with the Computer Assisted Radiology and Surgery (CARS) consortium of organizations in Berlin, Germany, June 2011. He is a member of the CARS Organizing Committee. He was 11th president of the International Association of DentoMaxilloFacial Radiology (IADMFR) (1994–1997) and has served as IADMFR Educational Trust Fund Chair since completing his term as president. He was the First Honorary President for the Latin American Congress on DentoMaxilloFacial Radiology, held in Brazil in 1996. From 2000 through 2010, he was appointed by each successive American Dental Association (ADA) president as ADA Voting Representative to the international Digital Imaging and Communications Committee (DICOM), and was founding chair of DICOM WG 22 (Dentistry) in 2003. He is a member of the DICOM International Congress Organizing Committee. He has served as a consultant to the ADA Council on Dental Practice and to the US Technical Advisory Group for the International Organization for Standardization (ISO) TC 106. He served as AAOMR representative to the ADA Council on Dental Benefits Codes Advisory Committee from 2012. Publications Lemke HU, Vannier MW, Inamura K, Farman AG, Doi K (editors). 
Computer Assisted Radiology and Surgery (Proceedings of the 23rd International Symposium: Berlin, Germany) 2009; Heidelberg: Springer Verlag. [Soft cover] Int J Comput Assist Radiol Surg (Volume 4, Supplement 1). Farman AG, Scarfe WC, Haskell BS (editors). Cone Beam Computed Tomography. Semin Orthod 2009;15:1-84. Lemke HU, Vannier MW, Inamura K, Farman AG, Doi K (editors). Computer Assisted Radiology and Surgery (Proceedings of the 22nd International Symposium: Barcelona, Spain) 2008; Heidelberg: Springer Verlag. [Soft cover] Int J Comput Assist Radiol Surg (Volume 3, Supplement 1). Lemke HU, Vannier MW, Inamura K, Farman AG, Doi K (editors). Computer Assisted Radiology and Surgery (Proceedings of the 21st International Symposium: Berlin, Germany) 2007; Heidelberg: Springer Verlag. [Soft cover; 559 pages.] Int J Comput Assist Radiol Surg (Volume 2, Supplement 1). Farman AG. Panoramic Radiology: Seminars on Maxillofacial Imaging and Interpretation. May 2007. Heidelberg: Springer Verlag. [Hardcover; 231 pages; 178 illustrations, 130 in color] . Library of Congress Number: 2007921305. Lemke HU, Vannier MW, Inamura K, Farman AG, Doi K (editors). Computer Assisted Radiology and Surgery (Proceedings of the 20th International Symposium: Osaka, Japan) 2006; Heidelberg: Springer Verlag. [Soft cover; 542 pages.] Int J Comput Assist Radiol Surg (Volume 1, Supplement 1). Lemke HU, Inamura K, Doi K, Vannier MW, Farman AG (editors). Computer Assisted Radiology and Surgery (Proceedings of the 19th International Symposium: Chicago, Illinois) 2005; Amsterdam: Elsevier. ASDA SCDI WG 12.1 (Farman AG, Goyette J, Co-Chairs), Implementation Requirement for DICOM in Dentistry. American Dental Association Technical Report No.1023. Chicago: ADA, 2005. Lemke HU, Vannier MW, Inamura K, Farman AG, Doi K, Reiber JHC (editors). Computer Assisted Radiology and Surgery (Proceedings of the 18th International Symposium: Chicago, Illinois) 2004; Amsterdam: Elsevier. . Kapila S, Farman AG. Conference on Orthodontic Advances in Science and Technology (COAST) Foundation. Proceedings of the Inaugural Conference: Craniofacial Imaging in the 21st Century. Oxford, UK: Blackwell/Munksgaard. [Soft cover; 182 pages.] Orthod Craniofac Res 2003;6 (Supplement 1.) . Lemke HU, Vannier MW, Inamura K, Farman AG, Doi K, Reiber JHC (editors). Computer Assisted Radiology and Surgery (Proceedings of the 17th International Symposium: London, England) 2003; Amsterdam: Elsevier. . Lemke HU, Vannier MW, Inamura K, Farman AG, Doi K, Reiber JHC (editors). Computer Assisted Radiology and Surgery (Proceedings of the 16th International Symposium: Paris, France) 2002; Berlin: Springer-Verlag. . Lemke HU, Vannier MW, Inamura K, Farman AG, Doi K (editors). Computer Assisted Radiology and Surgery (Proceedings of the 15th International Symposium: Berlin, Germany) 2001; Amsterdam: Elsevier/Excerpta Medica. Lemke HU, Vannier MW, Inamura K, Farman AG, Doi K (editors). Computer Assisted Radiology and Surgery (Proceedings of the 14th International Symposium: San Francisco) 2000; Amsterdam: Elsevier/Excerpta Medica. . Lemke HU, Vannier MW, Inamura K, Farman AG (editors). Computer Assisted Radiology and Surgery (Proceedings of the 13th International Symposium: Paris, France) 1999; Amsterdam: Elsevier/Excerpta Medica. . Lemke HU, Vannier MW, Inamura K, Farman AG (editors). Computer Assisted Radiology and Surgery (Proceedings of the 12th International Symposium: Tokyo, Japan) 1998; Amsterdam: Elsevier/Excerpta Medica. . 
Farman AG, Ruprecht A, Gibbs SJ, Scarfe WC (editors). Advances in Maxillofacial Imaging (Proceedings of the 11th International Congress of Dentomaxillofacial Radiology: Louisville, Kentucky) 1997; Amsterdam: Elsevier/Excerpta Medica. . Lemke HU, Inamura K, Vannier MW, Farman AG (editors). Computer Assisted Radiology (Proceedings of the 10th International Symposium: Paris, France) 1996; Amsterdam: Elsevier/Excerpta Medica. . See also Dental radiography Vijay P. Parashar – Oral and Maxillofacial Radiologist and Associate Professor Oral and maxillofacial radiology References 1949 births Living people University of Louisville faculty Alumni of the University of Birmingham Stellenbosch University alumni
631063
https://en.wikipedia.org/wiki/Key%20management
Key management
Key management refers to management of cryptographic keys in a cryptosystem. This includes dealing with the generation, exchange, storage, use, crypto-shredding (destruction) and replacement of keys. It includes cryptographic protocol design, key servers, user procedures, and other relevant protocols. Key management concerns keys at the user level, either between users or systems. This is in contrast to key scheduling, which typically refers to the internal handling of keys within the operation of a cipher. Successful key management is critical to the security of a cryptosystem. It is the more challenging side of cryptography in a sense that it involves aspects of social engineering such as system policy, user training, organizational and departmental interactions, and coordination between all of these elements, in contrast to pure mathematical practices that can be automated. Types of keys Cryptographic systems may use different types of keys, with some systems using more than one. These may include symmetric keys or asymmetric keys. In a symmetric key algorithm the keys involved are identical for both encrypting and decrypting a message. Keys must be chosen carefully, and distributed and stored securely. Asymmetric keys, also known as public keys, in contrast are two distinct keys that are mathematically linked. They are typically used together to communicate. Public key infrastructure (PKI), the implementation of public key cryptography, requires an organization to establish an infrastructure to create and manage public and private key pairs along with digital certificates. Inventory The starting point in any certificate and private key management strategy is to create a comprehensive inventory of all certificates, their locations and responsible parties. This is not a trivial matter because certificates from a variety of sources are deployed in a variety of locations by different individuals and teams - it's simply not possible to rely on a list from a single certificate authority. Certificates that are not renewed and replaced before they expire can cause serious downtime and outages. Some other considerations: Regulations and requirements, like PCI-DSS, demand stringent security and management of cryptographic keys and auditors are increasingly reviewing the management controls and processes in use. Private keys used with certificates must be kept secure or unauthorised individuals can intercept confidential communications or gain unauthorised access to critical systems. Failure to ensure proper segregation of duties means that admins who generate the encryption keys can use them to access sensitive, regulated data. If a certificate authority is compromised or an encryption algorithm is broken, organizations must be prepared to replace all of their certificates and keys in a matter of hours. Management steps Once keys are inventoried, key management typically consists of three steps: exchange, storage and use. Key exchange Prior to any secured communication, users must set up the details of the cryptography. In some instances this may require exchanging identical keys (in the case of a symmetric key system). In others it may require possessing the other party's public key. While public keys can be openly exchanged (their corresponding private key is kept secret), symmetric keys must be exchanged over a secure communication channel. Formerly, exchange of such a key was extremely troublesome, and was greatly eased by access to secure channels such as a diplomatic bag. 
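As a minimal illustration of the two key types, the sketch below generates a 256-bit symmetric key and a 2048-bit RSA key pair. It assumes the third-party Python cryptography package; the key sizes shown are common choices, not requirements.

```python
import os
from cryptography.hazmat.primitives.asymmetric import rsa

# Symmetric key: the same 256-bit secret encrypts and decrypts,
# so it must be distributed and stored securely.
symmetric_key = os.urandom(32)

# Asymmetric key pair: the public key may be shared openly,
# while the private key is kept secret by its owner.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()
```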
Clear text exchange of symmetric keys would enable any interceptor to immediately learn the key, and any encrypted data. The advance of public key cryptography in the 1970s has made the exchange of keys less troublesome. Since the Diffie-Hellman key exchange protocol was published in 1975, it has become possible to exchange a key over an insecure communications channel, which has substantially reduced the risk of key disclosure during distribution. It is possible, using something akin to a book code, to include key indicators as clear text attached to an encrypted message. The encryption technique used by Richard Sorge's code clerk was of this type, referring to a page in a statistical manual, though it was in fact a code. The German Army Enigma symmetric encryption key was a mixed type early in its use; the key was a combination of secretly distributed key schedules and a user chosen session key component for each message. In more modern systems, such as OpenPGP compatible systems, a session key for a symmetric key algorithm is distributed encrypted by an asymmetric key algorithm. This approach avoids even the necessity for using a key exchange protocol like Diffie-Hellman key exchange. Another method of key exchange involves encapsulating one key within another. Typically a master key is generated and exchanged using some secure method. This method is usually cumbersome or expensive (breaking a master key into multiple parts and sending each with a trusted courier for example) and not suitable for use on a larger scale. Once the master key has been securely exchanged, it can then be used to securely exchange subsequent keys with ease. This technique is usually termed key wrap. A common technique uses block ciphers and cryptographic hash functions. A related method is to exchange a master key (sometimes termed a root key) and derive subsidiary keys as needed from that key and some other data (often referred to as diversification data). The most common use for this method is probably in smartcard-based cryptosystems, such as those found in banking cards. The bank or credit network embeds their secret key into the card's secure key storage during card production at a secured production facility. Then at the point of sale the card and card reader are both able to derive a common set of session keys based on the shared secret key and card-specific data (such as the card serial number). This method can also be used when keys must be related to each other (i.e., departmental keys are tied to divisional keys, and individual keys tied to departmental keys). However, tying keys to each other in this way increases the damage which may result from a security breach as attackers will learn something about more than one key. This reduces entropy, with regard to an attacker, for each key involved. Key storage However distributed, keys must be stored securely to maintain communications security. Security is a big concern and hence there are various techniques in use to do so. Likely the most common is that an encryption application manages keys for the user and depends on an access password to control use of the key. Likewise, in the case of smartphone keyless access platforms, they keep all identifying door information off mobile phones and servers and encrypt all data, where just like low-tech keys, users give codes only to those they trust. In terms of regulation, there are few that address key storage in depth. 
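The sketch below, again assuming the third-party Python cryptography package, shows these ideas in miniature: wrapping a session key under a master key (key wrap), deriving a per-card key from the master key and card-specific data (key diversification), and computing a short key check value of the kind used later in this article to verify a stored key without exposing it. The master key, card serial and derivation label are illustrative values only.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.keywrap import aes_key_wrap, aes_key_unwrap
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

master_key = os.urandom(32)          # exchanged once over a secure channel

# 1. Key wrap: protect a freshly generated session key under the master key
#    so it can travel over an untrusted channel.
session_key = os.urandom(32)
wrapped = aes_key_wrap(master_key, session_key)
assert aes_key_unwrap(master_key, wrapped) == session_key

# 2. Key diversification: derive a per-card key from the master key and
#    card-specific data, so every card holds a different secret.
card_serial = b"0123456789"          # illustrative serial number
card_key = HKDF(
    algorithm=hashes.SHA256(), length=32, salt=None,
    info=b"card-key-v1|" + card_serial,
).derive(master_key)

# 3. Key check value: encrypt an all-zero block and keep the first bytes,
#    allowing two parties to confirm they hold the same key.
def key_check_value(key: bytes) -> bytes:
    enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    return (enc.update(b"\x00" * 16) + enc.finalize())[:3]

print(key_check_value(card_key).hex())
```

Real schemes fix the wrapping and derivation algorithms, key lengths and labels in a standard; the choices above are placeholders for illustration.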
"Some contain minimal guidance like 'don’t store keys with encrypted data' or suggest that 'keys should be kept securely.'" The notable exceptions to that are PCI DSS 3.2.1, NIST 800-53 and NIST 800–57. For optimal security, keys may be stored in a Hardware Security Module (HSM) or protected using technologies such as Trusted Execution Environment (TEE, e.g. Intel SGX) or Multi-Party Computation (MPC). Additional alternatives include utilizing Trusted Platform Modules (TPM), virtual HSMs, aka "Poor Man's Hardware Security Modules" (pmHSM), or non-volatile Field-Programmable-Gate-Arrays (FPGA) with supporting System-on-Chip configurations. In order to verify the integrity of a key stored without compromising its actual value a KCV algorithm can be used. Key use The major issue is length of time a key is to be used, and therefore frequency of replacement. Because it increases any attacker's required effort, keys should be frequently changed. This also limits loss of information, as the number of stored encrypted messages which will become readable when a key is found will decrease as the frequency of key change increases. Historically, symmetric keys have been used for long periods in situations in which key exchange was very difficult or only possible intermittently. Ideally, the symmetric key should change with each message or interaction, so that only that message will become readable if the key is learned (e.g., stolen, cryptanalyzed, or social engineered). Challenges Several challenges IT organizations face when trying to control and manage their encryption keys are: Scalability: Managing a large number of encryption keys. Security: Vulnerability of keys from outside hackers, malicious insiders. Availability: Ensuring data accessibility for authorized users. Heterogeneity: Supporting multiple databases, applications and standards. Governance: Defining policy-driven access control and protection for data. Governance includes compliance with data protection requirements. Compliance Key management compliance refers to the oversight, assurance, and capability of being able to demonstrate that keys are securely managed. This includes the following individual compliance domains: Physical security – the most visible form of compliance, which may include locked doors to secure system equipment and surveillance cameras. These safeguards can prevent unauthorized access to printed copies of key material and computer systems that run key management software. Logical security – protects the organization against the theft or unauthorized access of information. This is where the use of cryptographic keys comes in by encrypting data, which is then rendered useless to those who do not have the key to decrypt it. Personnel security – this involves assigning specific roles or privileges to personnel to access information on a strict need-to-know basis. Background checks should be performed on new employees along with periodic role changes to ensure security. Compliance can be achieved with respect to national and international data protection standards and regulations, such as Payment Card Industry Data Security Standard, Health Insurance Portability and Accountability Act, Sarbanes–Oxley Act, or General Data Protection Regulation. 
Management and compliance systems Key management system A key management system (KMS), also known as a cryptographic key management system (CKMS) or enterprise key management system (EKMS), is an integrated approach for generating, distributing and managing cryptographic keys for devices and applications. They may cover all aspects of security - from the secure generation of keys over the secure exchange of keys up to secure key handling and storage on the client. Thus, a KMS includes the backend functionality for key generation, distribution, and replacement as well as the client functionality for injecting keys, storing and managing keys on devices. Standards-based key management Many specific applications have developed their own key management systems with home grown protocols. However, as systems become more interconnected keys need to be shared between those different systems. To facilitate this, key management standards have evolved to define the protocols used to manage and exchange cryptographic keys and related information. Key Management Interoperability Protocol (KMIP) KMIP is an extensible key management protocol that has been developed by many organizations working within the OASIS standards body. The first version was released in 2010, and it has been further developed by an active technical committee. The protocol allows for the creation of keys and their distribution among disparate software systems that need to utilize them. It covers the full key life cycle of both symmetric and asymmetric keys in a variety of formats, the wrapping of keys, provisioning schemes, and cryptographic operations as well as meta data associated with the keys. The protocol is backed by an extensive series of test cases, and interoperability testing is performed between compliant systems each year. A list of some 80 products that conform to the KMIP standard can be found on the OASIS website. Closed source Non-KMIP-compliant key management Open source Barbican, the OpenStack security API. KeyBox - web-based SSH access and key management. EPKS - Echo Public Key Share, system to share encryption keys online in a p2p community. Kmc-Subset137 - key management system implementing UNISIG Subset-137 for ERTMS/ETCS railway application. privacyIDEA - two factor management with support for managing SSH keys. StrongKey - open source, last updated on SourceForge in 2016. There is no more maintenance on this project according to its home page. Vault - secret server from HashiCorp. Keeto - is it alive? NuCypher SecretHub - end-to-end encrypted SaaS key management Closed source Amazon Web Service (AWS) Key Management Service (KMS) Bell ID Key Manager Bloombase KeyCastle Cryptomathic CKMS Encryptionizer Key Manager (Windows only) Google Cloud Key Management IBM Cloud Key Protect Microsoft Azure Key Vault Porticor Virtual Private Data SSH Communications Security Universal SSH Key Manager Akeyless Vault KMS security policy The security policy of a key management system provides the rules that are to be used to protect keys and metadata that the key management system supports. As defined by the National Institute of Standards and Technology NIST, the policy shall establish and specify rules for this information that will protect its: Confidentiality Integrity Availability Authentication of source This protection covers the complete key life-cycle from the time the key becomes operational to its elimination. 
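To make the backend/client split concrete, here is a toy Python sketch of the "generate data key" pattern that many key management systems expose: the master key never leaves the KMS, the client receives a fresh data key together with a wrapped copy, and only the wrapped copy is stored alongside the encrypted data. It assumes the third-party cryptography package and is not the API of KMIP or of any real KMS product.

```python
from cryptography.fernet import Fernet

class ToyKMS:
    """Backend: holds the master key and hands out wrapped data keys."""
    def __init__(self):
        self._master = Fernet(Fernet.generate_key())   # never leaves the KMS

    def generate_data_key(self):
        plaintext_key = Fernet.generate_key()
        return plaintext_key, self._master.encrypt(plaintext_key)

    def unwrap(self, wrapped_key):
        return self._master.decrypt(wrapped_key)

# Client side: encrypt data locally, keep only the wrapped key with it.
kms = ToyKMS()
data_key, wrapped = kms.generate_data_key()
ciphertext = Fernet(data_key).encrypt(b"customer record")

# Later: recover the data key from the KMS and decrypt.
plaintext = Fernet(kms.unwrap(wrapped)).decrypt(ciphertext)
assert plaintext == b"customer record"
```

A production system would add authentication, authorization, audit logging, key rotation and hardware-backed storage of the master key, all of which the sketch omits.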
Bring your own encryption / key Bring your own encryption (BYOE)—also called bring your own key (BYOK)—refers to a cloud-computing security model to allow public-cloud customers to use their own encryption software and manage their own encryption keys. This security model is usually considered a marketing stunt, as critical keys are being handed over to third parties (cloud providers) and key owners are still left with the operational burden of generating, rotating and sharing their keys. IBM offers a variant of this capability called Keep Your Own Key where customers have exclusive control of their keys. Public-key infrastructure (PKI) A public-key infrastructure is a type of key management system that uses hierarchical digital certificates to provide authentication, and public keys to provide encryption. PKIs are used in World Wide Web traffic, commonly in the form of SSL and TLS. Multicast group key management Group key management means managing the keys in a group communication. Most of the group communications use multicast communication so that if the message is sent once by the sender, it will be received by all the users. The main problem in multicast group communication is its security. In order to improve the security, various keys are given to the users. Using the keys, the users can encrypt their messages and send them secretly. IETF.org released RFC 4046, entitled Multicast Security (MSEC) Group Key Management Architecture, which discusses the challenges of group key management. See also References 45.NeoKeyManager - Hancom Intelligence Inc. External links Recommendation for Key Management — Part 1: general, NIST Special Publication 800-57 NIST Cryptographic Toolkit Q* The IEEE Security in Storage Working Group (SISWG) that is creating the P1619.3 standard for Key Management American National Standards Institute - ANSI X9.24, Retail Financial Services Symmetric Key Management The OASIS Key Management Interoperability Protocol (KMIP) Technical Committee The OASIS Enterprise Key Management Infrastructure (EKMI)Technical Committee "Key Management with a Powerful Keystore" "Intelligent Key Management System - KeyGuard | Senergy Intellution" IBM Security Key Lifecycle Manager, SKLM NeoKeyManager - Hancom Intelligence Inc. Data security
59961754
https://en.wikipedia.org/wiki/Indistinguishability%20obfuscation
Indistinguishability obfuscation
In cryptography, indistinguishability obfuscation (abbreviated IO or iO) is a type of software obfuscation with the defining property that obfuscating any two programs that compute the same mathematical function results in programs that cannot be distinguished from each other. Informally, such obfuscation hides the implementation of a program while still allowing users to run it. Formally, iO satisfies the property that obfuscations of two circuits of the same size which implement the same function are computationally indistinguishable.
Indistinguishability obfuscation has several interesting theoretical properties. Firstly, iO is the "best-possible" obfuscation (in the sense that any secret about a program that can be hidden by any obfuscator at all can also be hidden by iO). Secondly, iO can be used to construct nearly the entire gamut of cryptographic primitives, including both mundane ones such as public-key cryptography and more exotic ones such as deniable encryption and functional encryption (which are types of cryptography that no one previously knew how to construct), but with the notable exception of collision-resistant hash function families. For this reason, it has been referred to as "crypto-complete". Lastly, unlike many other kinds of cryptography, indistinguishability obfuscation continues to exist even if P = NP (though it would have to be constructed differently in this case), though this does not necessarily imply that iO exists unconditionally.
Though the idea of cryptographic software obfuscation has been around since 1996, indistinguishability obfuscation was first proposed by Barak et al. (2001), who proved that iO exists if P = NP is the case. For the P ≠ NP case (which is harder, but also more plausible), progress was slower: Garg et al. (2013) proposed a construction of iO based on a computational hardness assumption relating to multilinear maps, but this assumption was later disproven. A construction based on "well-founded assumptions" (hardness assumptions that have been well studied by cryptographers, and thus widely assumed secure) had to wait until Jain, Lin, and Sahai (2020). (Even so, one of the assumptions used in the 2020 proposal is not secure against quantum computers.)
Currently known indistinguishability obfuscation candidates are very far from being practical. As measured by a 2017 paper, even obfuscating the toy function which outputs the logical conjunction of its thirty-two Boolean inputs produces a program nearly a dozen gigabytes large.
Formal definition
Let iO be some uniform probabilistic polynomial-time algorithm. Then iO is called an indistinguishability obfuscator if and only if it satisfies both of the following two statements:
Completeness or functionality: for any Boolean circuit C of input length n and any input x in {0,1}^n, we have Pr[iO(C)(x) = C(x)] = 1.
Indistinguishability: for every pair of circuits C1, C2 of the same size k that implement the same function, the distributions iO(C1) and iO(C2) are computationally indistinguishable. In other words, for any probabilistic polynomial-time adversary A, there is a negligible function ε (i.e., a function that eventually grows more slowly than 1/p(k) for every polynomial p) such that, for every pair of circuits C1, C2 of the same size k that implement the same function, we have
|Pr[A(iO(C1)) = 1] − Pr[A(iO(C2)) = 1]| ≤ ε(k).
History
The origin of this idea came from Amit Sahai in 1996, from the notion of a zero-knowledge proof.
In 2001, Barak et al., showing that black-box obfuscation is impossible, also proposed the idea of an indistinguishability obfuscator, and constructed an inefficient one. Although this notion seemed relatively weak, Goldwasser and Rothblum (2007) showed that an efficient indistinguishability obfuscator would be a best-possible obfuscator, and any best-possible obfuscator would be an indistinguishability obfuscator. (However, for inefficient obfuscators, no best-possible obfuscator exists unless the polynomial hierarchy collapses to the second level.)
Candidate constructions
Barak et al. (2001) proved that an inefficient indistinguishability obfuscator exists for circuits; namely, the one that outputs the lexicographically first circuit that computes the same function. If P = NP holds, then an indistinguishability obfuscator exists, even though no other kind of cryptography would also exist.
A candidate construction of iO with provable security under concrete hardness assumptions relating to multilinear maps was published by Garg et al. (2013), but this assumption was later invalidated. (Previously, Garg, Gentry, and Halevi (2012) had constructed a candidate version of a multilinear map based on heuristic assumptions.)
Starting from 2016, Lin began to explore constructions of iO based on less strict versions of multilinear maps, constructing a candidate based on maps of degree up to 30, and eventually a candidate based on maps of degree up to 3. Finally, in 2020, Jain, Lin, and Sahai proposed a construction of iO based on the symmetric external Diffie–Hellman, learning with errors, and learning parity with noise assumptions, as well as the existence of a super-linear stretch pseudorandom generator in the function class NC0. (The existence of pseudorandom generators in NC0, even with sub-linear stretch, was a long-standing open problem until 2006.) It is possible that this construction could be broken with quantum computing, but there is an alternative construction that may be secure even against that (although the latter relies on less established security assumptions).
Practicality
There have been attempts to implement and benchmark iO candidates. For example, as of 2017, an obfuscation of the 32-input conjunction function mentioned above at a security level of 80 bits took 23.5 minutes to produce and measured 11.6 GB, with an evaluation time of 77 ms. Additionally, an obfuscation of a circuit for the Advanced Encryption Standard (AES) at a security level of 128 bits would measure 18 PB and have an evaluation time of about 272 years. An open-source software implementation of an iO candidate was created in 2015.
Mathematical details
Existence
It is useful to divide the question of the existence of iO by using Russell Impagliazzo's "five worlds", which are five different hypothetical situations about average-case complexity:
Algorithmica: In this case P = NP, but iO exists.
Heuristica: In this case NP problems are easy on average; iO does not exist.
Pessiland: In this case, NP problems are hard on average, but one-way functions do not exist; as a result, iO does not exist.
Minicrypt: In this case, one-way functions exist, but secure public-key cryptography does not; iO does not exist (because explicit constructions of public-key cryptography from iO and one-way functions are known).
Cryptomania: In this case, secure public-key cryptography exists; iO is believed to exist in this case. (The subcase of Cryptomania in which iO does exist is known as Obfustopia.)
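To make the formal definition concrete, the toy sketch below mimics the spirit of the inefficient construction mentioned under Candidate constructions: it canonicalizes a Boolean function by its full truth table (rather than by the lexicographically first equivalent circuit), so any two programs computing the same function yield identical, and hence trivially indistinguishable, "obfuscations". It is exponential in the input length, offers no secrecy, and is purely illustrative; the function names are hypothetical and this is not a practical or secure construction.

# Toy "canonicalizing obfuscator": equivalent programs map to the *identical*
# canonical object, so their obfuscations are trivially indistinguishable.
from itertools import product

def canonical_obfuscation(program, n_inputs):
    """Return an evaluator backed by the full truth table of `program`."""
    table = tuple(program(*bits) for bits in product((False, True), repeat=n_inputs))
    def obfuscated(*bits):
        # Evaluate by looking up the row of the truth table for this input.
        index = int("".join("1" if b else "0" for b in bits), 2)
        return table[index]
    obfuscated.table = table
    return obfuscated

# Two syntactically different circuits that compute the same function (De Morgan).
c1 = lambda x, y: not (x or y)
c2 = lambda x, y: (not x) and (not y)

o1 = canonical_obfuscation(c1, 2)
o2 = canonical_obfuscation(c2, 2)
assert o1.table == o2.table                  # equivalent programs -> identical obfuscations
assert o1(True, False) == c1(True, False)    # functionality is preserved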
Potential applications
Indistinguishability obfuscators, if they exist, could be used for an enormous range of cryptographic applications, so much so that it has been referred to as a "central hub" for cryptography, the "crown jewel of cryptography", or "crypto-complete". Concretely, an indistinguishability obfuscator (with the additional assumption of the existence of one-way functions) could be used to construct the following kinds of cryptography:
Indistinguishability obfuscation for programs in the RAM model and for Turing machines
IND-CCA-secure public-key cryptography
Short digital signatures
IND-CCA-secure key encapsulation schemes
Perfectly zero-knowledge non-interactive zero-knowledge proofs and succinct non-interactive arguments
Constant-round concurrent zero-knowledge protocols
Multilinear maps with bounded polynomial degrees
Injective trapdoor functions
Fully homomorphic encryption
Witness encryption
Functional encryption
Secret sharing for any monotone NP language
Semi-honest oblivious transfer
Deniable encryption (both sender-deniable and fully-deniable)
Multiparty, non-interactive key exchange
Adaptively secure succinct garbled RAM
Correlation intractable functions
Attribute-based encryption
Oblivious transfer
Traitor tracing
Graded encoding schemes
Additionally, if iO and one-way functions exist, then problems in the PPAD complexity class are provably hard. However, indistinguishability obfuscation cannot be used to construct every possible cryptographic protocol: for example, no black-box construction can convert an indistinguishability obfuscator to a collision-resistant hash function family, even with a trapdoor permutation, unless with an exponential loss of security.
See also
Black-box obfuscation, a stronger form of obfuscation proven to be impossible
References
Cryptographic primitives
Software obfuscation
33139
https://en.wikipedia.org/wiki/World%20Wide%20Web
World Wide Web
The World Wide Web (WWW), commonly known as the Web, is an information system where documents and other web resources are identified by Uniform Resource Locators (URLs, such as ), which may be interlinked by hyperlinks, and are accessible over the Internet. The resources of the Web are transferred via the Hypertext Transfer Protocol (HTTP), may be accessed by users by a software application called a web browser, and are published by a software application called a web server. The World Wide Web is built on top of the Internet, which pre-dated the Web by over two decades. English scientist Tim Berners-Lee co-invented the World Wide Web in 1989 along with Robert Cailliau. He wrote the first web browser in 1990 while employed at CERN near Geneva, Switzerland. The browser was released outside CERN to other research institutions starting in January 1991, and then to the general public in August 1991. The Web began to enter everyday use in 1993–1994, when websites for general use started to become available. The World Wide Web has been central to the development of the Information Age and is the primary tool billions of people use to interact on the Internet. Web resources may be any type of downloaded media, but web pages are hypertext documents formatted in Hypertext Markup Language (HTML). Special HTML syntax displays embedded hyperlinks with URLs, which permits users to navigate to other web resources. In addition to text, web pages may contain references to images, video, audio, and software components, which are either displayed or internally executed in the user's web browser to render pages or streams of multimedia content. Multiple web resources with a common theme and usually a common domain name make up a website. Websites are stored in computers that are running a web server, which is a program that responds to requests made over the Internet from web browsers running on a user's computer. Website content can be provided by a publisher or interactively from user-generated content. Websites are provided for a myriad of informative, entertainment, commercial, and governmental reasons. History The underlying concept of hypertext originated in previous projects from the 1960s, such as the Hypertext Editing System (HES) at Brown University, Ted Nelson's Project Xanadu, and Douglas Engelbart's oN-Line System (NLS). Both Nelson and Engelbart were in turn inspired by Vannevar Bush's microfilm-based memex, which was described in the 1945 essay "As We May Think". Tim Berners-Lee's vision of a global hyperlinked information system became a possibility by the second half of the 1980s. By 1985, the global Internet began to proliferate in Europe and the Domain Name System (upon which the Uniform Resource Locator is built) came into being. In 1988 the first direct IP connection between Europe and North America was made and Berners-Lee began to openly discuss the possibility of a web-like system at CERN. While working at CERN, Berners-Lee became frustrated with the inefficiencies and difficulties posed by finding information stored on different computers. 
On 12 March 1989, he submitted a memorandum, titled "Information Management: A Proposal", to the management at CERN for a system called "Mesh" that referenced ENQUIRE, a database and software project he had built in 1980, which used the term "web" and described a more elaborate information management system based on links embedded as text: "Imagine, then, the references in this document all being associated with the network address of the thing to which they referred, so that while reading this document, you could skip to them with a click of the mouse." Such a system, he explained, could be referred to using one of the existing meanings of the word hypertext, a term that he says was coined in the 1950s. There is no reason, the proposal continues, why such hypertext links could not encompass multimedia documents including graphics, speech and video, so that Berners-Lee goes on to use the term hypermedia. With help from his colleague and fellow hypertext enthusiast Robert Cailliau he published a more formal proposal on 12 November 1990 to build a "Hypertext project" called "WorldWideWeb" (one word, abbreviated "W3") as a "web" of "hypertext documents" to be viewed by "browsers" using a client–server architecture. At this point HTML and HTTP had already been in development for about two months and the first Web server was about a month from completing its first successful test. This proposal estimated that a read-only web would be developed within three months and that it would take six months to achieve "the creation of new links and new material by readers, [so that] authorship becomes universal" as well as "the automatic notification of a reader when new material of interest to him/her has become available". While the read-only goal was met, accessible authorship of web content took longer to mature, with the wiki concept, WebDAV, blogs, Web 2.0 and RSS/Atom. The proposal was modelled after the SGML reader Dynatext by Electronic Book Technology, a spin-off from the Institute for Research in Information and Scholarship at Brown University. The Dynatext system, licensed by CERN, was a key player in the extension of SGML ISO 8879:1986 to Hypermedia within HyTime, but it was considered too expensive and had an inappropriate licensing policy for use in the general high energy physics community, namely a fee for each document and each document alteration. A NeXT Computer was used by Berners-Lee as the world's first web server and also to write the first web browser in 1990. By Christmas 1990, Berners-Lee had built all the tools necessary for a working Web: the first web browser (WorldWideWeb, which was a web editor as well) and the first web server. The first website, which described the project itself, was published on 20 December 1990. The first web page may be lost, but Paul Jones of UNC-Chapel Hill in North Carolina announced in May 2013 that Berners-Lee gave him what he says is the oldest known web page during a visit to UNC in 1991. Jones stored it on a magneto-optical drive and his NeXT computer. On 6 August 1991, Berners-Lee published a short summary of the World Wide Web project on the newsgroup alt.hypertext. This date is sometimes confused with the public availability of the first web servers, which had occurred months earlier. 
As another example of such confusion, some news media reported that the first photo on the Web was published by Berners-Lee in 1992, an image of the CERN house band Les Horribles Cernettes taken by Silvano de Gennaro; Gennaro has disclaimed this story, writing that media were "totally distorting our words for the sake of cheap sensationalism". The first server outside Europe was installed in December 1991 at the Stanford Linear Accelerator Center (SLAC) in Palo Alto, California, to host the SPIRES-HEP database. Berners-Lee's breakthrough was to marry hypertext to the Internet. In his book Weaving The Web, he explains that he had repeatedly suggested to members of both technical communities that a marriage between the two technologies was possible. But, when no one took up his invitation, he finally assumed the project himself. In the process, he developed three essential technologies: a system of globally unique identifiers for resources on the Web and elsewhere, the universal document identifier (UDI), later known as uniform resource locator (URL) and uniform resource identifier (URI); the publishing language Hypertext Markup Language (HTML); the Hypertext Transfer Protocol (HTTP). The World Wide Web had several differences from other hypertext systems available at the time. The Web required only unidirectional links rather than bidirectional ones, making it possible for someone to link to another resource without action by the owner of that resource. It also significantly reduced the difficulty of implementing web servers and browsers (in comparison to earlier systems), but in turn, presented the chronic problem of link rot. Unlike predecessors such as HyperCard, the World Wide Web was non-proprietary, making it possible to develop servers and clients independently and to add extensions without licensing restrictions. On 30 April 1993, CERN announced that the World Wide Web would be free to anyone, with no fees due. Coming two months after the announcement that the server implementation of the Gopher protocol was no longer free to use, this produced a rapid shift away from Gopher and toward the Web. An early popular web browser was ViolaWWW for Unix and the X Window System. The Web began to enter general use in 1993–1994, when websites for everyday use started to become available. Historians generally agree that a turning point for the Web began with the 1993 introduction of Mosaic, a graphical web browser developed at the National Center for Supercomputing Applications at the University of Illinois at Urbana–Champaign (NCSA-UIUC). The development was led by Marc Andreessen, while funding came from the US High-Performance Computing and Communications Initiative and the High Performance Computing Act of 1991, one of several computing developments initiated by US Senator Al Gore. Before the release of Mosaic, graphics were not commonly mixed with text in web pages, and the Web was less popular than older protocols such as Gopher and Wide Area Information Servers (WAIS). Mosaic's graphical user interface allowed the Web to become by far the most popular protocol on the Internet. The World Wide Web Consortium (W3C) was founded by Tim Berners-Lee after he left the European Organization for Nuclear Research (CERN) in October 1994. 
It was founded at the Massachusetts Institute of Technology Laboratory for Computer Science (MIT/LCS) with support from the Defense Advanced Research Projects Agency (DARPA), which had pioneered the Internet; a year later, a second site was founded at INRIA (a French national computer research lab) with support from the European Commission DG InfSo; and in 1996, a third continental site was created in Japan at Keio University. By the end of 1994, the total number of websites was still relatively small, but many notable websites were already active that foreshadowed or inspired today's most popular services. Connected by the Internet, other websites were created around the world. This motivated international standards development for protocols and formatting. Berners-Lee continued to stay involved in guiding the development of web standards, such as the markup languages to compose web pages and he advocated his vision of a Semantic Web. The World Wide Web enabled the spread of information over the Internet through an easy-to-use and flexible format. It thus played an important role in popularising use of the Internet. Although the two terms are sometimes conflated in popular use, World Wide Web is not synonymous with Internet. The Web is an information space containing hyperlinked documents and other resources, identified by their URIs. It is implemented as both client and server software using Internet protocols such as TCP/IP and HTTP. Berners-Lee was knighted in 2004 by Queen Elizabeth II for "services to the global development of the Internet". He never patented his invention. Function The terms Internet and World Wide Web are often used without much distinction. However, the two terms do not mean the same thing. The Internet is a global system of computer networks interconnected through telecommunications and optical networking. In contrast, the World Wide Web is a global collection of documents and other resources, linked by hyperlinks and URIs. Web resources are accessed using HTTP or HTTPS, which are application-level Internet protocols that use the Internet's transport protocols. Viewing a web page on the World Wide Web normally begins either by typing the URL of the page into a web browser or by following a hyperlink to that page or resource. The web browser then initiates a series of background communication messages to fetch and display the requested page. In the 1990s, using a browser to view web pages—and to move from one web page to another through hyperlinks—came to be known as 'browsing,' 'web surfing' (after channel surfing), or 'navigating the Web'. Early studies of this new behavior investigated user patterns in using web browsers. One study, for example, found five user patterns: exploratory surfing, window surfing, evolved surfing, bounded navigation and targeted navigation. The following example demonstrates the functioning of a web browser when accessing a page at the URL . The browser resolves the server name of the URL () into an Internet Protocol address using the globally distributed Domain Name System (DNS). This lookup returns an IP address such as 203.0.113.4 or 2001:db8:2e::7334. The browser then requests the resource by sending an HTTP request across the Internet to the computer at that address. It requests service from a specific TCP port number that is well known for the HTTP service so that the receiving host can distinguish an HTTP request from other network protocols it may be servicing. 
HTTP normally uses port number 80 and for HTTPS it normally uses port number 443. The content of the HTTP request can be as simple as two lines of text:
GET /home.html HTTP/1.1
Host: example.org
The computer receiving the HTTP request delivers it to web server software listening for requests on port 80. If the web server can fulfill the request it sends an HTTP response back to the browser indicating success:
HTTP/1.1 200 OK
Content-Type: text/html; charset=UTF-8
followed by the content of the requested page. Hypertext Markup Language (HTML) for a basic web page might look like this:
<html>
  <head>
    <title>Example.org – The World Wide Web</title>
  </head>
  <body>
    <p>The World Wide Web, abbreviated as WWW and commonly known ...</p>
  </body>
</html>
The web browser parses the HTML and interprets the markup (<title>, <p> for paragraph, and such) that surrounds the words to format the text on the screen. Many web pages use HTML to reference the URLs of other resources such as images, other embedded media, scripts that affect page behaviour, and Cascading Style Sheets that affect page layout. The browser makes additional HTTP requests to the web server for these other Internet media types. As it receives their content from the web server, the browser progressively renders the page onto the screen as specified by its HTML and these additional resources.
HTML
Hypertext Markup Language (HTML) is the standard markup language for creating web pages and web applications. With Cascading Style Sheets (CSS) and JavaScript, it forms a triad of cornerstone technologies for the World Wide Web.
Web browsers receive HTML documents from a web server or from local storage and render the documents into multimedia web pages. HTML describes the structure of a web page semantically and originally included cues for the appearance of the document.
HTML elements are the building blocks of HTML pages. With HTML constructs, images and other objects such as interactive forms may be embedded into the rendered page. HTML provides a means to create structured documents by denoting structural semantics for text such as headings, paragraphs, lists, links, quotes and other items. HTML elements are delineated by tags, written using angle brackets. Tags such as <img> and <input> directly introduce content into the page. Other tags such as <p> surround and provide information about document text and may include other tags as sub-elements. Browsers do not display the HTML tags, but use them to interpret the content of the page.
HTML can embed programs written in a scripting language such as JavaScript, which affects the behavior and content of web pages. Inclusion of CSS defines the look and layout of content. The World Wide Web Consortium (W3C), maintainer of both the HTML and the CSS standards, has encouraged the use of CSS over explicit presentational HTML.
Linking
Most web pages contain hyperlinks to other related pages and perhaps to downloadable files, source documents, definitions and other web resources. In the underlying HTML, a hyperlink looks like this:
<a href="http://example.org/home.html">Example.org Homepage</a>
Such a collection of useful, related resources, interconnected via hypertext links is dubbed a web of information. Publication on the Internet created what Tim Berners-Lee first called the WorldWideWeb (in its original CamelCase, which was subsequently discarded) in November 1990.
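The name-resolution and request/response exchange described earlier in this section can be sketched in a few lines of Python using only the standard library. This is an illustration of the protocol flow, not part of the article itself; the host name example.org and path /home.html are the placeholder values used in the example above, and a real server may answer with a different status code.

# Minimal sketch of the browser's fetch flow: resolve the host name via DNS,
# open a TCP connection, and issue an HTTP GET request.
import socket
import http.client

host = "example.org"

# Step 1: DNS lookup (the browser's first step).
addresses = socket.getaddrinfo(host, 80, proto=socket.IPPROTO_TCP)
print("Resolved addresses:", [entry[4][0] for entry in addresses])

# Step 2: send the HTTP request and read the response.
conn = http.client.HTTPConnection(host, 80, timeout=10)
conn.request("GET", "/home.html", headers={"Host": host})
response = conn.getresponse()
print(response.status, response.reason)        # e.g. "200 OK" or "404 Not Found"
print(response.getheader("Content-Type"))      # e.g. "text/html; charset=UTF-8"
body = response.read()                         # the HTML payload, as bytes
conn.close()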
The hyperlink structure of the web is described by the webgraph: the nodes of the web graph correspond to the web pages (or URLs) the directed edges between them to the hyperlinks. Over time, many web resources pointed to by hyperlinks disappear, relocate, or are replaced with different content. This makes hyperlinks obsolete, a phenomenon referred to in some circles as link rot, and the hyperlinks affected by it are often called dead links. The ephemeral nature of the Web has prompted many efforts to archive websites. The Internet Archive, active since 1996, is the best known of such efforts. WWW prefix Many hostnames used for the World Wide Web begin with www because of the long-standing practice of naming Internet hosts according to the services they provide. The hostname of a web server is often www, in the same way that it may be ftp for an FTP server, and news or nntp for a Usenet news server. These hostnames appear as Domain Name System (DNS) or subdomain names, as in www.example.com. The use of www is not required by any technical or policy standard and many web sites do not use it; the first web server was nxoc01.cern.ch. According to Paolo Palazzi, who worked at CERN along with Tim Berners-Lee, the popular use of www as subdomain was accidental; the World Wide Web project page was intended to be published at www.cern.ch while info.cern.ch was intended to be the CERN home page; however the DNS records were never switched, and the practice of prepending www to an institution's website domain name was subsequently copied. Many established websites still use the prefix, or they employ other subdomain names such as www2, secure or en for special purposes. Many such web servers are set up so that both the main domain name (e.g., example.com) and the www subdomain (e.g., www.example.com) refer to the same site; others require one form or the other, or they may map to different web sites. The use of a subdomain name is useful for load balancing incoming web traffic by creating a CNAME record that points to a cluster of web servers. Since, currently, only a subdomain can be used in a CNAME, the same result cannot be achieved by using the bare domain root. When a user submits an incomplete domain name to a web browser in its address bar input field, some web browsers automatically try adding the prefix "www" to the beginning of it and possibly ".com", ".org" and ".net" at the end, depending on what might be missing. For example, entering "" may be transformed to http://www.microsoft.com/ and "openoffice" to http://www.openoffice.org. This feature started appearing in early versions of Firefox, when it still had the working title 'Firebird' in early 2003, from an earlier practice in browsers such as Lynx. It is reported that Microsoft was granted a US patent for the same idea in 2008, but only for mobile devices. In English, www is usually read as double-u double-u double-u. Some users pronounce it dub-dub-dub, particularly in New Zealand. Stephen Fry, in his "Podgrams" series of podcasts, pronounces it wuh wuh wuh. The English writer Douglas Adams once quipped in The Independent on Sunday (1999): "The World Wide Web is the only thing I know of whose shortened form takes three times longer to say than what it's short for". In Mandarin Chinese, World Wide Web is commonly translated via a phono-semantic matching to wàn wéi wǎng (), which satisfies www and literally means "myriad-dimensional net", a translation that reflects the design concept and proliferation of the World Wide Web. 
Tim Berners-Lee's web-space states that World Wide Web is officially spelled as three separate words, each capitalised, with no intervening hyphens. Use of the www prefix has been declining, especially when Web 2.0 web applications sought to brand their domain names and make them easily pronounceable. As the mobile Web grew in popularity, services like Gmail.com, Outlook.com, Myspace.com, Facebook.com and Twitter.com are most often mentioned without adding "www." (or, indeed, ".com") to the domain. Scheme specifiers The scheme specifiers http:// and https:// at the start of a web URI refer to Hypertext Transfer Protocol or HTTP Secure, respectively. They specify the communication protocol to use for the request and response. The HTTP protocol is fundamental to the operation of the World Wide Web, and the added encryption layer in HTTPS is essential when browsers send or retrieve confidential data, such as passwords or banking information. Web browsers usually automatically prepend http:// to user-entered URIs, if omitted. Pages A web page (also written as webpage) is a document that is suitable for the World Wide Web and web browsers. A web browser displays a web page on a monitor or mobile device. The term web page usually refers to what is visible, but may also refer to the contents of the computer file itself, which is usually a text file containing hypertext written in HTML or a comparable markup language. Typical web pages provide hypertext for browsing to other web pages via hyperlinks, often referred to as links. Web browsers will frequently have to access multiple web resource elements, such as reading style sheets, scripts, and images, while presenting each web page. On a network, a web browser can retrieve a web page from a remote web server. The web server may restrict access to a private network such as a corporate intranet. The web browser uses the Hypertext Transfer Protocol (HTTP) to make such requests to the web server. A static web page is delivered exactly as stored, as web content in the web server's file system. In contrast, a dynamic web page is generated by a web application, usually driven by server-side software. Dynamic web pages are used when each user may require completely different information, for example, bank websites, web email etc. Static page A static web page (sometimes called a flat page/stationary page) is a web page that is delivered to the user exactly as stored, in contrast to dynamic web pages which are generated by a web application. Consequently, a static web page displays the same information for all users, from all contexts, subject to modern capabilities of a web server to negotiate content-type or language of the document where such versions are available and the server is configured to do so. Dynamic pages A server-side dynamic web page is a web page whose construction is controlled by an application server processing server-side scripts. In server-side scripting, parameters determine how the assembly of every new web page proceeds, including the setting up of more client-side processing. A client-side dynamic web page processes the web page using JavaScript running in the browser. JavaScript programs can interact with the document via Document Object Model, or DOM, to query page state and alter it. The same client-side techniques can then dynamically update or change the DOM in the same way. A dynamic web page is then reloaded by the user or by a computer program to change some variable content. 
The updating information could come from the server, or from changes made to that page's DOM. This may or may not truncate the browsing history or create a saved version to go back to, but a dynamic web page update using Ajax technologies will neither create a page to go back to nor truncate the web browsing history forward of the displayed page. Using Ajax technologies the end user gets one dynamic page managed as a single page in the web browser while the actual web content rendered on that page can vary. The Ajax engine sits only on the browser requesting parts of its DOM, the DOM, for its client, from an application server. Dynamic HTML, or DHTML, is the umbrella term for technologies and methods used to create web pages that are not static web pages, though it has fallen out of common use since the popularization of AJAX, a term which is now itself rarely used. Client-side-scripting, server-side scripting, or a combination of these make for the dynamic web experience in a browser. JavaScript is a scripting language that was initially developed in 1995 by Brendan Eich, then of Netscape, for use within web pages. The standardised version is ECMAScript. To make web pages more interactive, some web applications also use JavaScript techniques such as Ajax (asynchronous JavaScript and XML). Client-side script is delivered with the page that can make additional HTTP requests to the server, either in response to user actions such as mouse movements or clicks, or based on elapsed time. The server's responses are used to modify the current page rather than creating a new page with each response, so the server needs only to provide limited, incremental information. Multiple Ajax requests can be handled at the same time, and users can interact with the page while data is retrieved. Web pages may also regularly poll the server to check whether new information is available. Website A website is a collection of related web resources including web pages, multimedia content, typically identified with a common domain name, and published on at least one web server. Notable examples are wikipedia.org, google.com, and amazon.com. A website may be accessible via a public Internet Protocol (IP) network, such as the Internet, or a private local area network (LAN), by referencing a uniform resource locator (URL) that identifies the site. Websites can have many functions and can be used in various fashions; a website can be a personal website, a corporate website for a company, a government website, an organization website, etc. Websites are typically dedicated to a particular topic or purpose, ranging from entertainment and social networking to providing news and education. All publicly accessible websites collectively constitute the World Wide Web, while private websites, such as a company's website for its employees, are typically a part of an intranet. Web pages, which are the building blocks of websites, are documents, typically composed in plain text interspersed with formatting instructions of Hypertext Markup Language (HTML, XHTML). They may incorporate elements from other websites with suitable markup anchors. Web pages are accessed and transported with the Hypertext Transfer Protocol (HTTP), which may optionally employ encryption (HTTP Secure, HTTPS) to provide security and privacy for the user. The user's application, often a web browser, renders the page content according to its HTML markup instructions onto a display terminal. 
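A minimal sketch of the server-side variant described above, using only Python's standard library, shows how a dynamic page is assembled anew for every request, in contrast to a static page delivered unchanged from disk. The port number and page content are illustrative assumptions, not anything specified in this article.

# Minimal server-side dynamic page: each request triggers server-side code
# that assembles fresh HTML before sending the response.
from http.server import BaseHTTPRequestHandler, HTTPServer
from datetime import datetime

class DynamicPageHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The HTML is generated on the fly for every request.
        body = (
            "<html><body>"
            f"<p>This page was generated at {datetime.now().isoformat()}.</p>"
            "</body></html>"
        ).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=UTF-8")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8000), DynamicPageHandler).serve_forever()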
Hyperlinking between web pages conveys to the reader the site structure and guides the navigation of the site, which often starts with a home page containing a directory of the site web content. Some websites require user registration or subscription to access content. Examples of subscription websites include many business sites, news websites, academic journal websites, gaming websites, file-sharing websites, message boards, web-based email, social networking websites, websites providing real-time stock market data, as well as sites providing various other services. End users can access websites on a range of devices, including desktop and laptop computers, tablet computers, smartphones and smart TVs. Browser A web browser (commonly referred to as a browser) is a software user agent for accessing information on the World Wide Web. To connect to a website's server and display its pages, a user needs to have a web browser program. This is the program that the user runs to download, format, and display a web page on the user's computer. In addition to allowing users to find, display, and move between web pages, a web browser will usually have features like keeping bookmarks, recording history, managing cookies (see below), and home pages and may have facilities for recording passwords for logging into web sites. The most popular browsers are Chrome, Firefox, Safari, Internet Explorer, and Edge. Server A Web server is server software, or hardware dedicated to running said software, that can satisfy World Wide Web client requests. A web server can, in general, contain one or more websites. A web server processes incoming network requests over HTTP and several other related protocols. The primary function of a web server is to store, process and deliver web pages to clients. The communication between client and server takes place using the Hypertext Transfer Protocol (HTTP). Pages delivered are most frequently HTML documents, which may include images, style sheets and scripts in addition to the text content. A user agent, commonly a web browser or web crawler, initiates communication by making a request for a specific resource using HTTP and the server responds with the content of that resource or an error message if unable to do so. The resource is typically a real file on the server's secondary storage, but this is not necessarily the case and depends on how the webserver is implemented. While the primary function is to serve content, full implementation of HTTP also includes ways of receiving content from clients. This feature is used for submitting web forms, including uploading of files. Many generic web servers also support server-side scripting using Active Server Pages (ASP), PHP (Hypertext Preprocessor), or other scripting languages. This means that the behavior of the webserver can be scripted in separate files, while the actual server software remains unchanged. Usually, this function is used to generate HTML documents dynamically ("on-the-fly") as opposed to returning static documents. The former is primarily used for retrieving or modifying information from databases. The latter is typically much faster and more easily cached but cannot deliver dynamic content. Web servers can also frequently be found embedded in devices such as printers, routers, webcams and serving only a local network. The web server may then be used as a part of a system for monitoring or administering the device in question. 
This usually means that no additional software has to be installed on the client computer since only a web browser is required (which now is included with most operating systems). Cookie An HTTP cookie (also called web cookie, Internet cookie, browser cookie, or simply cookie) is a small piece of data sent from a website and stored on the user's computer by the user's web browser while the user is browsing. Cookies were designed to be a reliable mechanism for websites to remember stateful information (such as items added in the shopping cart in an online store) or to record the user's browsing activity (including clicking particular buttons, logging in, or recording which pages were visited in the past). They can also be used to remember arbitrary pieces of information that the user previously entered into form fields such as names, addresses, passwords, and credit card numbers. Cookies perform essential functions in the modern web. Perhaps most importantly, authentication cookies are the most common method used by web servers to know whether the user is logged in or not, and which account they are logged in with. Without such a mechanism, the site would not know whether to send a page containing sensitive information or require the user to authenticate themselves by logging in. The security of an authentication cookie generally depends on the security of the issuing website and the user's web browser, and on whether the cookie data is encrypted. Security vulnerabilities may allow a cookie's data to be read by a hacker, used to gain access to user data, or used to gain access (with the user's credentials) to the website to which the cookie belongs (see cross-site scripting and cross-site request forgery for examples). Tracking cookies, and especially third-party tracking cookies, are commonly used as ways to compile long-term records of individuals' browsing histories a potential privacy concern that prompted European and U.S. lawmakers to take action in 2011. European law requires that all websites targeting European Union member states gain "informed consent" from users before storing non-essential cookies on their device. Google Project Zero researcher Jann Horn describes ways cookies can be read by intermediaries, like Wi-Fi hotspot providers. He recommends using the browser in incognito mode in such circumstances. Search engine A web search engine or Internet search engine is a software system that is designed to carry out web search (Internet search), which means to search the World Wide Web in a systematic way for particular information specified in a web search query. The search results are generally presented in a line of results, often referred to as search engine results pages (SERPs). The information may be a mix of web pages, images, videos, infographics, articles, research papers, and other types of files. Some search engines also mine data available in databases or open directories. Unlike web directories, which are maintained only by human editors, search engines also maintain real-time information by running an algorithm on a web crawler. Internet content that is not capable of being searched by a web search engine is generally described as the deep web. Deep web The deep web, invisible web, or hidden web are parts of the World Wide Web whose contents are not indexed by standard web search engines. The opposite term to the deep web is the surface web, which is accessible to anyone using the Internet. Computer scientist Michael K. 
Bergman is credited with coining the term deep web in 2001 as a search indexing term. The content of the deep web is hidden behind HTTP forms, and includes many very common uses such as web mail, online banking, and services that users must pay for, and which is protected by a paywall, such as video on demand, some online magazines and newspapers, among others. The content of the deep web can be located and accessed by a direct URL or IP address, and may require a password or other security access past the public website page. Caching A web cache is a server computer located either on the public Internet or within an enterprise that stores recently accessed web pages to improve response time for users when the same content is requested within a certain time after the original request. Most web browsers also implement a browser cache by writing recently obtained data to a local data storage device. HTTP requests by a browser may ask only for data that has changed since the last access. Web pages and resources may contain expiration information to control caching to secure sensitive data, such as in online banking, or to facilitate frequently updated sites, such as news media. Even sites with highly dynamic content may permit basic resources to be refreshed only occasionally. Web site designers find it worthwhile to collate resources such as CSS data and JavaScript into a few site-wide files so that they can be cached efficiently. Enterprise firewalls often cache Web resources requested by one user for the benefit of many users. Some search engines store cached content of frequently accessed websites. Security For criminals, the Web has become a venue to spread malware and engage in a range of cybercrimes, including (but not limited to) identity theft, fraud, espionage and intelligence gathering. Web-based vulnerabilities now outnumber traditional computer security concerns, and as measured by Google, about one in ten web pages may contain malicious code. Most web-based attacks take place on legitimate websites, and most, as measured by Sophos, are hosted in the United States, China and Russia. The most common of all malware threats is SQL injection attacks against websites. Through HTML and URIs, the Web was vulnerable to attacks like cross-site scripting (XSS) that came with the introduction of JavaScript and were exacerbated to some degree by Web 2.0 and Ajax web design that favours the use of scripts. Today by one estimate, 70% of all websites are open to XSS attacks on their users. Phishing is another common threat to the Web. In February 2013, RSA (the security division of EMC) estimated the global losses from phishing at $1.5 billion in 2012. Two of the well-known phishing methods are Covert Redirect and Open Redirect. Proposed solutions vary. Large security companies like McAfee already design governance and compliance suites to meet post-9/11 regulations, and some, like Finjan have recommended active real-time inspection of programming code and all content regardless of its source. Some have argued that for enterprises to see Web security as a business opportunity rather than a cost centre, while others call for "ubiquitous, always-on digital rights management" enforced in the infrastructure to replace the hundreds of companies that secure data and networks. Jonathan Zittrain has said users sharing responsibility for computing safety is far preferable to locking down the Internet. Privacy Every time a client requests a web page, the server can identify the request's IP address. 
Web servers usually log IP addresses in a log file. Also, unless set not to do so, most web browsers record requested web pages in a viewable history feature, and usually cache much of the content locally. Unless the server-browser communication uses HTTPS encryption, web requests and responses travel in plain text across the Internet and can be viewed, recorded, and cached by intermediate systems. Another way to hide personally identifiable information is by using a virtual private network. A VPN encrypts online traffic and masks the original IP address lowering the chance of user identification. When a web page asks for, and the user supplies, personally identifiable information—such as their real name, address, e-mail address, etc. web-based entities can associate current web traffic with that individual. If the website uses HTTP cookies, username, and password authentication, or other tracking techniques, it can relate other web visits, before and after, to the identifiable information provided. In this way, a web-based organization can develop and build a profile of the individual people who use its site or sites. It may be able to build a record for an individual that includes information about their leisure activities, their shopping interests, their profession, and other aspects of their demographic profile. These profiles are of potential interest to marketers, advertisers, and others. Depending on the website's terms and conditions and the local laws that apply information from these profiles may be sold, shared, or passed to other organizations without the user being informed. For many ordinary people, this means little more than some unexpected e-mails in their in-box or some uncannily relevant advertising on a future web page. For others, it can mean that time spent indulging an unusual interest can result in a deluge of further targeted marketing that may be unwelcome. Law enforcement, counter-terrorism, and espionage agencies can also identify, target, and track individuals based on their interests or proclivities on the Web. Social networking sites usually try to get users to use their real names, interests, and locations, rather than pseudonyms, as their executives believe that this makes the social networking experience more engaging for users. On the other hand, uploaded photographs or unguarded statements can be identified to an individual, who may regret this exposure. Employers, schools, parents, and other relatives may be influenced by aspects of social networking profiles, such as text posts or digital photos, that the posting individual did not intend for these audiences. Online bullies may make use of personal information to harass or stalk users. Modern social networking websites allow fine-grained control of the privacy settings for each posting, but these can be complex and not easy to find or use, especially for beginners. Photographs and videos posted onto websites have caused particular problems, as they can add a person's face to an online profile. With modern and potential facial recognition technology, it may then be possible to relate that face with other, previously anonymous, images, events, and scenarios that have been imaged elsewhere. Due to image caching, mirroring, and copying, it is difficult to remove an image from the World Wide Web. Standards Web standards include many interdependent standards and specifications, some of which govern aspects of the Internet, not just the World Wide Web. 
Even when not web-focused, such standards directly or indirectly affect the development and administration of websites and web services. Considerations include the interoperability, accessibility and usability of web pages and web sites. Web standards, in the broader sense, consist of the following: Recommendations published by the World Wide Web Consortium (W3C) "Living Standard" made by the Web Hypertext Application Technology Working Group (WHATWG) Request for Comments (RFC) documents published by the Internet Engineering Task Force (IETF) Standards published by the International Organization for Standardization (ISO) Standards published by Ecma International (formerly ECMA) The Unicode Standard and various Unicode Technical Reports (UTRs) published by the Unicode Consortium Name and number registries maintained by the Internet Assigned Numbers Authority (IANA) Web standards are not fixed sets of rules but are constantly evolving sets of finalized technical specifications of web technologies. Web standards are developed by standards organizations—groups of interested and often competing parties chartered with the task of standardization—not technologies developed and declared to be a standard by a single individual or company. It is crucial to distinguish those specifications that are under development from the ones that already reached the final development status (in the case of W3C specifications, the highest maturity level). Accessibility There are methods for accessing the Web in alternative mediums and formats to facilitate use by individuals with disabilities. These disabilities may be visual, auditory, physical, speech-related, cognitive, neurological, or some combination. Accessibility features also help people with temporary disabilities, like a broken arm, or ageing users as their abilities change. The Web receives information as well as providing information and interacting with society. The World Wide Web Consortium claims that it is essential that the Web be accessible, so it can provide equal access and equal opportunity to people with disabilities. Tim Berners-Lee once noted, "The power of the Web is in its universality. Access by everyone regardless of disability is an essential aspect." Many countries regulate web accessibility as a requirement for websites. International co-operation in the W3C Web Accessibility Initiative led to simple guidelines that web content authors as well as software developers can use to make the Web accessible to persons who may or may not be using assistive technology. Internationalisation The W3C Internationalisation Activity assures that web technology works in all languages, scripts, and cultures. Beginning in 2004 or 2005, Unicode gained ground and eventually in December 2007 surpassed both ASCII and Western European as the Web's most frequently used character encoding. Originally allowed resources to be identified by URI in a subset of US-ASCII. allows more characters—any character in the Universal Character Set—and now a resource can be identified by IRI in any language. See also Electronic publishing Internet metaphors Internet security Lists of websites Prestel Streaming media Web development tools Web literacy References Further reading Brügger, Niels, ed, Web25: Histories from the first 25 years of the World Wide Web (Peter Lang, 2017). Niels Brügger, ed. Web History (2010) 362 pages; Historical perspective on the World Wide Web, including issues of culture, content, and preservation. Skau, H.O. (March 1990). 
"The World Wide Web and Health Information". New Devices. External links The first website Early archive of the first Web site Internet Statistics: Growth and Usage of the Web and the Internet Living Internet A comprehensive history of the Internet, including the World Wide Web World Wide Web Consortium (W3C) W3C Recommendations Reduce "World Wide Wait" World Wide Web Size Daily estimated size of the World Wide Web Antonio A. Casilli, Some Elements for a Sociology of Online Interactions The Erdős Webgraph Server offers weekly updated graph representation of a constantly increasing fraction of the WWW The 25th Anniversary of the World Wide Web is an animated video produced by USAID and TechChange which explores the role of the WWW in addressing extreme poverty Computer-related introductions in 1989 English inventions British inventions Human–computer interaction Information Age CERN Tim Berners-Lee Web technology 20th-century inventions
3951220
https://en.wikipedia.org/wiki/Computational%20theory%20of%20mind
Computational theory of mind
In philosophy of mind, the computational theory of mind (CTM), also known as computationalism, is a family of views that hold that the human mind is an information processing system and that cognition and consciousness together are a form of computation. Warren McCulloch and Walter Pitts (1943) were the first to suggest that neural activity is computational. They argued that neural computations explain cognition. The theory was proposed in its modern form by Hilary Putnam in 1967, and developed by his PhD student, philosopher, and cognitive scientist Jerry Fodor in the 1960s, 1970s, and 1980s. Despite being vigorously disputed in analytic philosophy in the 1990s due to work by Putnam himself, John Searle, and others, the view is common in modern cognitive psychology and is presumed by many theorists of evolutionary psychology. In the 2000s and 2010s the view has resurfaced in analytic philosophy (Scheutz 2003, Edelman 2008). The computational theory of mind holds that the mind is a computational system that is realized (i.e. physically implemented) by neural activity in the brain. The theory can be elaborated in many ways and varies largely based on how the term computation is understood. Computation is commonly understood in terms of Turing machines which manipulate symbols according to a rule, in combination with the internal state of the machine. The critical aspect of such a computational model is that we can abstract away from particular physical details of the machine that is implementing the computation. For example, the appropriate computation could be implemented either by silicon chips or biological neural networks, so long as there is a series of outputs based on manipulations of inputs and internal states, performed according to a rule. CTM, therefore holds that the mind is not simply analogous to a computer program, but that it is literally a computational system. Computational theories of mind are often said to require mental representation because 'input' into a computation comes in the form of symbols or representations of other objects. A computer cannot compute an actual object but must interpret and represent the object in some form and then compute the representation. The computational theory of mind is related to the representational theory of mind in that they both require that mental states are representations. However, the representational theory of mind shifts the focus to the symbols being manipulated. This approach better accounts for systematicity and productivity. In Fodor's original views, the computational theory of mind is also related to the language of thought. The language of thought theory allows the mind to process more complex representations with the help of semantics. (See below in semantics of mental states). Recent work has suggested that we make a distinction between the mind and cognition. Building from the tradition of McCulloch and Pitts, the computational theory of cognition (CTC) states that neural computations explain cognition. The computational theory of mind asserts that not only cognition, but also phenomenal consciousness or qualia, are computational. That is to say, CTM entails CTC. While phenomenal consciousness could fulfill some other functional role, computational theory of cognition leaves open the possibility that some aspects of the mind could be non-computational. 
CTC, therefore, provides an important explanatory framework for understanding neural networks, while avoiding counter-arguments that center around phenomenal consciousness. "Computer metaphor" Computational theory of mind is not the same as the computer metaphor, comparing the mind to a modern-day digital computer. Computational theory just uses some of the same principles as those found in digital computing. While the computer metaphor draws an analogy between the mind as software and the brain as hardware, CTM is the claim that the mind is a computational system. More specifically, it states that a computational simulation of a mind is sufficient for the actual presence of a mind, and that a mind truly can be simulated computationally. 'Computational system' is not meant to mean a modern-day electronic computer. Rather, a computational system is a symbol manipulator that follows step-by-step functions to compute input and form output. Alan Turing describes this type of computer in his concept of a Turing machine. Early proponents One of the earliest proponents of the computational theory of mind was Thomas Hobbes, who said, "by reasoning, I understand computation. And to compute is to collect the sum of many things added together at the same time, or to know the remainder when one thing has been taken from another. To reason, therefore, is the same as to add or to subtract." Since Hobbes lived before the contemporary identification of computing with instantiating effective procedures, he cannot be interpreted as explicitly endorsing the computational theory of mind, in the contemporary sense. Causal picture of thoughts At the heart of the computational theory of mind is the idea that thoughts are a form of computation, and a computation is by definition a systematic set of rules for the relations among representations. This means that a mental state represents something if and only if there is some causal correlation between the mental state and that particular thing. An example would be seeing dark clouds and thinking "clouds mean rain", where there is a correlation between the thought of the clouds and rain, as the clouds causing rain. This is sometimes known as natural meaning. Conversely, there is another side to the causality of thoughts and that is the non-natural representation of thoughts. An example would be seeing a red traffic light and thinking "red means stop", there is nothing about the color red that indicates it represents stopping, and thus is just a convention that has been invented, similar to languages and their abilities to form representations. Semantics of mental states The computational theory of mind states that the mind functions as a symbolic operator, and that mental representations are symbolic representations; just as the semantics of language are the features of words and sentences that relate to their meaning, the semantics of mental states are those meanings of representations, the definitions of the 'words' of the language of thought. If these basic mental states can have a particular meaning just as words in a language do, then this means that more complex mental states (thoughts) can be created, even if they have never been encountered before. Just as new sentences that are read can be understood even if they have never been encountered before, as long as the basic components are understood, and it is syntactically correct. For example: "I have eaten plum pudding every day of this fortnight." 
While it's doubtful many have seen this particular configuration of words, nonetheless, most readers should be able to glean an understanding of this sentence because it is syntactically correct and the constituent parts are understood. Criticism A range of arguments have been proposed against physicalist conceptions used in computational theories of mind. An early, though indirect, criticism of the computational theory of mind comes from philosopher John Searle. In his thought experiment known as the Chinese room, Searle attempts to refute the claims that artificially intelligent agents can be said to have intentionality and understanding and that these systems, because they can be said to be minds themselves, are sufficient for the study of the human mind. Searle asks us to imagine that there is a man in a room with no way of communicating with anyone or anything outside of the room except for a piece of paper with symbols written on it that is passed under the door. With the paper, the man is to use a series of provided rule books to return paper containing different symbols. Unknown to the man in the room, these symbols are of a Chinese language, and this process generates a conversation that a Chinese speaker outside of the room can actually understand. Searle contends that the man in the room does not understand the Chinese conversation. This is essentially what the computational theory of mind presents us—a model in which the mind simply decodes symbols and outputs more symbols. Searle argues that this is not real understanding or intentionality. This was originally written as a repudiation of the idea that computers work like minds. Searle has further raised questions about what exactly constitutes a computation: the wall behind my back is right now implementing the WordStar program, because there is some pattern of molecule movements that is isomorphic with the formal structure of WordStar. But if the wall is implementing WordStar, if it is a big enough wall it is implementing any program, including any program implemented in the brain. Objections like Searle's might be called insufficiency objections. They claim that computational theories of mind fail because computation is insufficient to account for some capacity of the mind. Arguments from qualia, such as Frank Jackson's knowledge argument, can be understood as objections to computational theories of mind in this way—though they take aim at physicalist conceptions of the mind in general, and not computational theories specifically. There are also objections which are directly tailored for computational theories of mind. Putnam himself (see in particular Representation and Reality and the first part of Renewing Philosophy) became a prominent critic of computationalism for a variety of reasons, including ones related to Searle's Chinese room arguments, questions of world-word reference relations, and thoughts about the mind-body relationship. Regarding functionalism in particular, Putnam has claimed along lines similar to, but more general than Searle's arguments, that the question of whether the human mind can implement computational states is not relevant to the question of the nature of mind, because "every ordinary open system realizes every abstract finite automaton." Computationalists have responded by aiming to develop criteria describing what exactly counts as an implementation. 
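The notion of computation at issue in these objections (rule-governed symbol manipulation over an internal state, abstracted from any particular physical substrate) can be illustrated with a minimal sketch. The rule table and the unary-increment task below are illustrative assumptions rather than anything drawn from the works discussed here; the point is only that the same table could equally be realized in silicon, in neurons, or, per the trivialization worry, in any suitably described physical system.

# A minimal sketch of computation as rule-governed symbol manipulation: a rule
# table rewrites tape symbols according to the machine's internal state. The
# particular machine (unary increment) is an illustrative assumption.

RULES = {
    # (state, symbol read) -> (symbol to write, head move, next state)
    ("scan", "1"): ("1", +1, "scan"),   # skip over the existing 1s
    ("scan", "_"): ("1", 0, "halt"),    # write one more 1 on the blank, halt
}

def run(tape, state="scan", head=0):
    """Apply the rule table until the machine reaches the halt state."""
    tape = list(tape)
    while state != "halt":
        if head >= len(tape):
            tape.append("_")                 # extend the tape with a blank
        symbol = tape[head]
        write, move, state = RULES[(state, symbol)]
        tape[head] = write
        head += move
    return "".join(tape)

print(run("111_"))  # '1111': the unary numeral for 3 rewritten as 4

Nothing in the sketch depends on what the tape or the states are made of, which is exactly the abstraction that both CTM and its critics take as their starting point.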
Roger Penrose has proposed the idea that the human mind does not use a knowably sound calculation procedure to understand and discover mathematical intricacies. This would mean that a normal Turing complete computer would not be able to ascertain certain mathematical truths that human minds can.

Pancomputationalism

Supporters of CTM are faced with a simple yet important question whose answer has proved elusive and controversial: what does it take for a physical system (such as a mind, or an artificial computer) to perform computations? A very straightforward account is based on a simple mapping between abstract mathematical computations and physical systems: a system performs computation C if and only if there is a mapping between a sequence of states individuated by C and a sequence of states individuated by a physical description of the system. Putnam (1988) and Searle (1992) argue that this simple mapping account (SMA) trivializes the empirical import of computational descriptions. As Putnam put it, “everything is a Probabilistic Automaton under some Description”. Even rocks, walls, and buckets of water—contrary to appearances—are computing systems. Gualtiero Piccinini identifies different versions of pancomputationalism. In response to the trivialization criticism, and to restrict SMA, philosophers of mind have offered different accounts of computational systems. These typically include causal, semantic, syntactic, and mechanistic accounts. Instead of a semantic restriction, the syntactic account imposes a syntactic restriction. The mechanistic account was first introduced by Gualtiero Piccinini in 2007.

Prominent scholars

Daniel Dennett proposed the multiple drafts model, in which consciousness seems linear but is actually blurry and gappy, distributed over space and time in the brain. Consciousness is the computation; there is no extra step or "Cartesian theater" in which one becomes conscious of the computation.
David Marr proposed that cognitive processes have three levels of description: the computational level (which describes the computational problem, i.e. the input/output mapping, computed by the cognitive process); the algorithmic level (which presents the algorithm used for computing the problem postulated at the computational level); and the implementational level (which describes the physical implementation of the algorithm postulated at the algorithmic level in biological matter, e.g. the brain). (Marr 1981)
Georges Rey, professor at the University of Maryland, builds on Jerry Fodor's representational theory of mind to produce his own version of a Computational/Representational Theory of Thought.
Hilary Putnam proposed functionalism to describe consciousness, asserting that it is the computation that equates to consciousness, regardless of whether the computation is operating in a brain, in a computer, or in a "brain in a vat."
Jerry Fodor argues that mental states, such as beliefs and desires, are relations between individuals and mental representations. He maintains that these representations can only be correctly explained in terms of a language of thought (LOT) in the mind. Further, this language of thought itself is codified in the brain, not just a useful explanatory tool. Fodor adheres to a species of functionalism, maintaining that thinking and other mental processes consist primarily of computations operating on the syntax of the representations that make up the language of thought.
In later work (Concepts and The Elm and the Expert), Fodor has refined and even questioned some of his original computationalist views, and adopted a highly modified version of LOT (see LOT2). Steven Pinker described a "language instinct," an evolved, built-in capacity to learn language (if not writing). Ulric Neisser coined the term 'cognitive psychology' in his book published in 1967 (Cognitive Psychology), wherein Neisser characterizes people as dynamic information-processing systems whose mental operations might be described in computational terms. Alternative theories Classical associationism Connectionism Enactivism Memory-prediction framework Perceptual Control Theory Situated cognition See also Artificial consciousness Cognitivism (psychology) Constructivist epistemology Determinism Enchanted loom Mind–body problem Simulated reality Stimulus-Response model Notes References C. Randy Gallistel Learning and Representation. In R. Menzel (Ed) Learning Theory and Behavior. Vol 1 of Learning and Memory - A Comprehensive Reference. 4 vols (J. Byrne, Ed). Oxford: Elsevier. pp. 227–242. David Marr (1981) Vision: A Computational Investigation into the Human Representation and Processing of Visual Information. Cambridge, Massachusetts: The MIT Press. Gualtiero Piccinini (2015). Physical Computation: A Mechanistic Account. NY, Oxford University Press. Gualtiero Piccinini (2017) "Computation in Physical Systems", The Stanford Encyclopedia of Philosophy (Summer 2017 Edition), Edward N. Zalta (ed.), URL = <https://plato.stanford.edu/archives/sum2017/entries/computation-physicalsystems/>. Hilary Putnam (1979) Mathematics, Matter, and Method: Philosophical Papers, Vol. 1. Cambridge, Massachusetts: The MIT Press. Hilary Putnam (1991) Representation and Reality. Cambridge, Massachusetts: The MIT Press. Hilary Putnam (1995) Renewing Philosophy. Cambridge, Massachusetts: Harvard University Press. Jerry Fodor (1975) The Language of Thought. Cambridge, Massachusetts: The MIT Press. Jerry Fodor (1995) The Elm and the Expert: Mentalese and Its Semantics. Cambridge, Massachusetts: The MIT Press. Jerry Fodor (1998) Concepts: Where Cognitive Science Went Wrong. Oxford and New York: Oxford University Press. Jerry Fodor (2010) LOT2: The Language of Thought Revisited. Oxford and New York: Oxford University Press. John Searle (1992) The Rediscovery of the Mind. Cambridge, Massachusetts: The MIT Press. Matthias Scheutz, ed. (2003) Computationalism: New Directions. Cambridge, Massachusetts: The MIT Press. Ned Block, ed. (1983). Readings in Philosophy of Psychology, Volume 1. Cambridge, Massachusetts: Harvard University Press. Shimon Edelman (2008) Computing the Mind: How the Mind Really Works. Steven Pinker (1997) How the Mind Works. Tim Crane (2003). The Mechanical Mind: A Philosophical Introduction to Minds, Machines, and Mental Representation. New York, NY: Routledge. Zenon Pylyshyn (1984) Computation and Cognition. Cambridge, Massachusetts: The MIT Press. External links A Computational Foundation for the Study of Cognition by David Chalmers Bruno Marchal argues that physical supervenience is not compatible with computational theory (French) Collection of links to online papers Computationalism: The Very Idea, an overview of computationalism by David Davenport. Fodor, The Mind Doesn't Work that Way The Cognitive Process Consciousness model of the Mind Theory of mind Cognitive science Philosophy of artificial intelligence Cognitive psychology Information
9073648
https://en.wikipedia.org/wiki/Reference%20software
Reference software
Reference software is software which emulates and expands upon print reference forms including the dictionary, translation dictionary, encyclopaedia, thesaurus, and atlas. Like print references, reference software can either be general or specific to a domain, and often includes maps and illustrations, as well as bibliography and statistics. Reference software may include multimedia content including animations, audio, and video, which further illustrate a concept. Well-designed reference software improves upon the navigability of print references, through the use of search functionality and hyperlinks.

Origins and development

Many dictionaries and encyclopedias rushed into CD-ROM editions soon after the widespread introduction of the CD-ROM to home computers. A second major development occurred as the internet also became widely available in homes, with reference works becoming available online as well. The conversion of previously print-only reference materials to electronic format marked a major change to the marketing and accessibility of such works. A striking case study is that of the venerable Encyclopædia Britannica, which was previously only available at prices of USD 1500 and higher, restricting it to the better libraries and the wealthy. Today, the Encyclopædia Britannica and World Book Encyclopedia retail in electronic format for around 50 USD, with cheaper OEM versions sometimes bundled with new computers. Such dramatic changes brought conventionally restricted knowledge repositories to the fingertips of an almost universal audience in a period of less than 11 years.

The opportunities brought by new media enticed new competitors into the reference software market. One of the earliest and most well-known was Microsoft Encarta, first introduced on CD-ROM and then also moving online along with other major reference works. In the dictionaries market, one of the more prolific brands was Merriam-Webster, which released CD-ROM and then online versions of English dictionaries, thesauri and foreign language dictionaries.

Since 2010, reference materials have begun to appear as apps on smartphones. In the field of English as a foreign or second language, this is seen as an interesting pedagogical development and is the subject of much discussion, as language learners increasingly abandon print dictionaries for online editions and apps like the one produced by Macmillan Education. A list of online dictionaries is maintained under "dictionaries".

Wikipedia and its offshoots (such as Wiktionary) marked a new departure in educational reference software. Previous encyclopedias and dictionaries had compiled their contents on the basis of invited and closed teams of specialists. The wiki concept allowed anyone and everyone to join in creating and editing an online set of reference works.

See also
Encarta and other CD-ROM encyclopedias
Comparison of reference management software
Educational software
Lists of encyclopedias
List of online dictionaries
List of online encyclopedias

Educational software
215392
https://en.wikipedia.org/wiki/Michael%20A.%20Jackson%20%28computer%20scientist%29
Michael A. Jackson (computer scientist)
Michael Anthony Jackson (born 16 February 1936) is a British computer scientist, and independent computing consultant in London, England. He is also a visiting research professor at the Open University in the UK. Biography Born in Birmingham to Montagu M. Jackson and Bertha (Green) Jackson, Jackson was educated at Harrow School in Harrow, London, England. There he was taught by Christopher Strachey and wrote his first program under Strachey's guidance. From 1954 to 1958, he studied classics (known as "Greats") at Merton College, Oxford; a fellow student, two years ahead of him, was C. A. R. Hoare. They shared an interest in logic, which was studied as part of Greats at Oxford. After his graduation in 1961, Jackson started as computer science designer and consultant for Maxwell Stamp Associates in London. Here he designed, coded and tested his first programs for IBM and Honeywell computers, working in assembler. There Jackson found his calling, as he recollected in 2000: "Although I was a careful designer — drawing meticulous flowcharts before coding — and a conscientious tester, I realised that program design was hard and the results likely to be erroneous..." Information system design was in need of a structured approach. In 1964, Jackson joined the new consultancy firm John Hoskyns and Company in London, before founding his own company Michael Jackson Systems Limited in 1971. In the 1960s, he had started his search for a "more reliable and systematic way of programming." He contributed to the emerging modular programming movement, meeting Larry Constantine, George H. Mealy and several others on a 1968 symposium. In the 1970s, Jackson developed Jackson Structured Programming (JSP). In the 1980s, with John Cameron, he developed Jackson System Development (JSD). Then, in the 1990s, he developed the Problem Frames Approach. As a part-time researcher at AT&T Labs Research, in collaboration with Pamela Zave, Jackson created "Distributed Feature Composition", a virtual architecture for specification and implementation of telecommunication services. Jackson received the Stevens Award for Software Development Methods in 1997. and British Computer Society Lovelace Medal in 1998. In 1961, Jackson married Judith Wendy Blackburn; they have four sons, one of whom, Daniel, is also a computer scientist based at MIT. Work Jackson has developed a series of methods. Each of these methods covers a wider scope than the previous one, and builds on ideas that appeared, but were not fully developed, in the previous one. Reading his books in sequence allows you to follow the evolution of his thinking. Jackson Structured Programming Jackson Structured Programming (JSP) was the first software development method that Jackson developed. It is a program design method, and was described in his book Principles of Program Design. JSP covers the design of individual programs, but not systems. Jackson System Development The Jackson System Development (JSD) was the second software development method that Jackson developed. JSD is a system development method not just for individual programs, but for entire systems. JSD is most readily applicable to information systems, but it can easily be extended to the development of real-time embedded systems. JSD was described in his book System Development. Problem Frames Approach Problem Analysis or the Problem Frames Approach was the third software development method that Jackson developed. 
It concerns itself with aspects of developing all kinds of software, not just information systems. It was first sketched in his book Software Requirements and Specifications, and described much more fully in his book Problem Frames. The First International Workshop on Applications and Advances in Problem Frames was held as part of ICSE’04 held in Edinburgh, Scotland. Publications Michael Jackson's books include: 1975. Principles of Program Design . 1983. System Development . 1995. Software Requirements & Specifications . 1997. Business Process Implementation 2001. Problem Frames: Analysing and Structuring Software Development Problems . Many of his essays have been collected, along with research papers relating to his work, in the book: 2010. Software Requirements and Design: The Work of Michael Jackson, Bashar Nuseibeh and Pamela Zave, editors. References External links Michael Jackson home page The Jackson Software Development Methods The World and the Machine software engineering blog by Michael Jackson 1936 births Living people People from Birmingham, West Midlands People educated at Harrow School Alumni of Merton College, Oxford Academics of the Open University British computer programmers British computer scientists British software engineers Computer science writers Formal methods people Software engineering researchers
28323115
https://en.wikipedia.org/wiki/System%201
System 1
The Macintosh "System 1" is the first version of Apple Macintosh operating system and the beginning of the classic Mac OS series. It was developed for the Motorola 68000 microprocessor. System 1 was released on January 24, 1984, along with the Macintosh 128K, the first in the Macintosh family of personal computers. It received one update, "System 1.1" on December 29, 1984, before being succeeded by System 2. Features This operating system introduced many features that would appear for years to come, some that still exist in the current macOS, and a few that exist in other graphical operating systems such as Microsoft Windows. The features of the operating system included the Finder and menu bar. In addition to this, it popularized the graphical user interface and desktop metaphor, which was used under license from Xerox PARC. Due to the limited amount of random-access memory and the lack of an internal hard disk in the original Macintosh, there was no multitasking with multiple applications, although there were desktop accessories that could run while another application was loaded. Also, items in the Trash were permanently deleted when the computer was shut down or an application was loaded (quitting the Finder). System 1's total size is about 216 KB and contained six files: System (which includes the desk accessories), Finder, Clipboard, an Imagewriter printer driver, Scrapbook, and Note Pad. A separate diskette included "A Guided Tour of Macintosh", which contains tutorial demonstrations of the Macintosh system, running on a modified pre-release version of Finder 1.0, as well as training programs for learning to use the mouse, and the Finder. Also included was a 33-minute audio cassette designed to run alongside the demonstrations, emphasising the disk's purpose as a guided tour. Menu bar The menu bar was a new and revolutionary part of the OS. Similar to the one found on Lisa OS, the System 1 Finder had five menus: the Apple menu, File, Edit, View, and Special. When in an application, the menus would change to ones defined by the application, but most software retained at least the File and Edit menus. While within the Finder, the Apple menu contained the "About the Finder" information, along with the desktop accessories. "File" menu items included Open, Eject, and Close. "Edit" had entries for cutting, copying, and pasting. "Special" was used for managing the hardware and other system functions, and was always the rightmost entry on the menu bar in the Finder. In System 1, the menu had items related to emptying the Trash, cleaning up the desktop, and disk options. By System 1.1, the menu allowed the user to choose an alternate startup program to be run instead of the Finder at boot time; the feature was replaced in System 7 by the "Startup Items" folder in the System Folder. Desk accessories System 1 came with multiple desk accessories (DA). These included an Alarm Clock, Calculator, Control Panel, Key Caps, Note Pad, Puzzle, and Scrapbook. A difference between desktop accessories and applications is that multiple desktop accessories could be run at once but only one application could run at a time. Desk accessories could also run on top of an application. Alarm Clock: This DA could be used just like an alarm clock, as the computer would beep, and the menu bar would flash when the alarm's set time was reached. It could also be used as an easier way to change/set the time and date on the computer. When opened, it would show the time and date set on the computer. 
Calculator: A basic calculator capable of addition, subtraction, multiplication, and division. It featured the basic 18 buttons for input. Control Panel: The control panel was used to adjust some of the settings on the computer. What made the original control panel unique from subsequent Mac OS control panels was the intended absence of any text. This was chosen to demonstrate the graphical user interface. Representation was achieved by using symbols. It could be used to adjust settings such as volume, double click speed, mouse sensitivity, and desktop background. On the Macintosh 128K, Macintosh 512K, and the Macintosh Plus, the screen brightness was controlled by a mechanical adjustment wheel beneath the screen. Key Caps: A DA used to show the layout of the original Macintosh keyboard. It showed what happened when normal keys were pressed along with special characters (Command, Shift, Option). Note Pad: A note taking DA that would save text entered into it on the floppy disk. Multiple note pages could be written when using the folded corner symbol in the bottom left corner of the note page. Puzzle: A basic 1–15 slide puzzle, similar to the picture puzzle found in later versions of the Mac OS. Scrapbook: This DA was similar to a cut, copy, and paste library. In it, one could store text selections and photos which could then be transferred to other applications. See also Apple v. Digital Research GEM/1 Windows 1.0x References External links Macintosh System 1 in your browser—A web-based simulator System 1.0 Headquarters—a walk-through of System 1 with screenshots (from 1998, via archive.org) 1984 software Classic Mac OS
6333391
https://en.wikipedia.org/wiki/HomeBank
HomeBank
HomeBank is a personal accounting software package that runs on OpenBSD, Linux, FreeBSD, Microsoft Windows, macOS (via MacPorts or Homebrew) and AmigaOS. Released under the GPL-2.0-or-later license, HomeBank is free software. HomeBank can be found in the software repositories of Linux distributions such as Debian, Fedora, Mandriva, openSUSE, Gentoo Linux, Arch Linux and Ubuntu. See also List of personal finance software References External links Free accounting software Free software programmed in C Office software that uses GTK Accounting software for Linux
4012802
https://en.wikipedia.org/wiki/Menashe%20Business%20Mercantile%20Ltd%20v%20William%20Hill%20Organization%20Ltd
Menashe Business Mercantile Ltd v William Hill Organization Ltd
Menashe Business Mercantile Ltd. & Anor v William Hill Organization Ltd. [2002] EWCA Civ 1702 was a patent case regarding Internet usage. The case addressed a European patent covering the United Kingdom for an invention referred to as "Interactive, computerized gaming system with remote control". Menashe sued William Hill, claiming that William Hill was infringing the patent by operating an online gaming system. William Hill's defence argued that it did not infringe the patent because the server on which it operated the system was located outside of the UK, in Antigua or Curaçao. Although accepting that their supply of software was in the UK and that this was an essential part of the invention, they further argued that the patent was for the parts of the system, and as one essential part of the system was not located in the UK, there could be no infringement. This aspect of William Hill's case was tried at a preliminary issue before Mr. Justice Jacob in the High Court in 2002. Mr. Justice Jacob found against William Hill holding that the patent related to the entire system, being the sum of all its elements. Simply locating one part of the system abroad did not prevent infringement when the result was still providing UK punters with the system's benefits. The Court's ruling took a broad interpretation, concentrating on the spirit and intention of patent protection and not confining itself to the linguistic construction of the law which developed before the advent of the Internet. Lord Justice Aldous heard the appeal and while he maintained the result of the judgment of the Patents Court, the reasoning was very different and was based upon where the invention was being "used". The claimed invention required there to be a host or server computer. According to the judgment, it did not matter where the host computer was situated. It could be in the United Kingdom, on a satellite, or even on the border between two countries. Its location was not important to the user of the invention nor to the claimed gaming system. In that respect, there was a real difference between the claimed gaming system and an ordinary machine. The judge therefore believed that it would be wrong to apply the old ideas of location to inventions of the type under consideration. A person who is situated in the United Kingdom who obtains in the United Kingdom a CD and then uses his terminal to address a host computer is not bothered where the host computer is located. It is of no relevance to him, the user, nor the patentee as to whether or not it is situated in the United Kingdom. If the host computer is situated in Antigua and the terminal computer is in the United Kingdom, it is pertinent to ask who uses the claimed gaming system. The answer must be the punter. Where does he use it? There can be no doubt that he uses his terminal in the United Kingdom and it is not a misuse of language to say that he uses the host computer in the United Kingdom. It is the input to and output of the host computer that is important to the punter and in a real sense the punter uses the host computer in the United Kingdom even though it is situated in Antigua and operates in Antigua. In those circumstances it is not straining the word "use" to conclude that the United Kingdom punter will use the claimed gaming system in the United Kingdom, even if the host computer is situated in, say, Antigua. Thus the supply of the CD in the United Kingdom to the United Kingdom punter will be intended to put the invention into effect in the United Kingdom. 
See also Software patents under United Kingdom patent law External links Full Text of Judgements on BAILII: First Instance: Appeal: United Kingdom patent case law Court of Appeal (England and Wales) cases 2002 in case law 2002 in British law William Hill (bookmaker)
2702375
https://en.wikipedia.org/wiki/Friedrich%20L.%20Bauer
Friedrich L. Bauer
Friedrich Ludwig "Fritz" Bauer (10 June 1924 – 26 March 2015) was a German pioneer of computer science and professor at the Technical University of Munich. Life Bauer earned his Abitur in 1942 and served in the Wehrmacht during World War II, from 1943 to 1945. From 1946 to 1950, he studied mathematics and theoretical physics at Ludwig-Maximilians-Universität in Munich. Bauer received his Doctor of Philosophy (Ph.D.) under the supervision of Fritz Bopp for his thesis Gruppentheoretische Untersuchungen zur Theorie der Spinwellengleichungen ("Group-theoretic investigations of the theory of spin wave equations") in 1952. He completed his habilitation thesis Über quadratisch konvergente Iterationsverfahren zur Lösung von algebraischen Gleichungen und Eigenwertproblemen ("On quadratically convergent iteration methods for solving algebraic equations and eigenvalue problems") in 1954 at the Technical University of Munich. After teaching as a privatdozent at the Ludwig Maximilian University of Munich from 1954 to 1958, he became extraordinary professor for applied mathematics at the University of Mainz. Since 1963, he worked as a professor of mathematics and (since 1972) computer science at the Technical University of Munich. He retired in 1989. Work Bauer's early work involved constructing computing machinery (e.g. the logical relay computer STANISLAUS from 1951–1955). In this context, he was the first to propose the widely used stack method of expression evaluation. Bauer was a member of the committees that developed the imperative computer programming languages ALGOL 58, and its successor ALGOL 60, important predecessors to all modern imperative programming languages. For ALGOL 58, Bauer was with the German Gesellschaft für Angewandte Mathematik und Mechanik (GAMM, Society of Applied Mathematics and Mechanics) which worked with the American Association for Computing Machinery (ACM). For ALGOL 60, Bauer was with the International Federation for Information Processing (IFIP) IFIP Working Group 2.1 on Algorithmic Languages and Calculi, which specified, maintains, and supports the languages ALGOL 60 and ALGOL 68. Bauer was an influential figure in establishing computer science as an independent subject in German universities, which until then was usually considered part of mathematics. In 1967, he held the first lecture in computer science at a German university at the Technical University of Munich, titled Information Processing. By 1972, computer science had become an independent academic discipline at the TUM. In 1992, it was separated from the Department of Mathematics to form an independent Department of Informatics, though Bauer had retired from his chair in 1989. In 1968, he coined the term software engineering which has been in widespread use since, and has become a discipline in computer science. His scientific contributions spread from numerical analysis (Bauer–Fike theorem) and fundamentals of interpretation and translation of programming languages, to his later works on systematics of program development, especially program transformation methods and systems (CIP-S) and the associated wide-spectrum language system CIP-L. He also wrote a well-respected book on cryptology, Decrypted secrets, now in its fourth edition. He was the doctoral advisor of 39 students, including Rudolf Berghammer, Manfred Broy, David Gries, Manfred Paul, Gerhard Seegmüller, Josef Stoer, Peter Wynn, and Christoph Zenger. Friedrich Bauer was one of the 19 founding members of the German Informatics Society. 
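Bauer's stack method of expression evaluation, mentioned above, remains in wide use. A minimal sketch of the idea, evaluating an expression written in postfix (reverse Polish) form with a single operand stack, is given below; the operator set and the sample expression are illustrative assumptions, not taken from Bauer's own papers.

# A minimal sketch of the stack method of expression evaluation credited to
# Bauer above. Operands are pushed onto a stack; each operator pops its
# arguments and pushes the result back, so no recursion or parse tree is
# needed once the expression is in postfix form.
import operator

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul, "/": operator.truediv}

def eval_postfix(tokens):
    stack = []
    for tok in tokens:
        if tok in OPS:
            b = stack.pop()           # right operand is on top of the stack
            a = stack.pop()
            stack.append(OPS[tok](a, b))
        else:
            stack.append(float(tok))  # operand: push it
    return stack.pop()

# (2 + 3) * 4 written in postfix form:
print(eval_postfix("2 3 + 4 *".split()))  # 20.0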
He was editor of the Informatik Spektrum from its founding in 1978, and held that position until his death. Friedrich Bauer was married to Hildegard Bauer-Vogg. He was the father of three sons and two daughters.

Definition of software engineering

Bauer was a colleague of the German representative on the NATO Science Committee. In 1967, NATO had been discussing 'The Software Crisis' and Bauer had suggested the term 'Software Engineering' as a way to conceive of both the problem and the solution. In 1972, Bauer published the following definition of software engineering: "Establishment and use of sound engineering principles to economically obtain software that is reliable and works on real machines efficiently."

Legacy

Since 1992, the Technical University of Munich has awarded the Friedrich L. Bauer Prize in computer science. In 2014, the Technical University of Munich renamed their largest lecture hall in the Department of Informatics building after him.

Awards

1944: Iron Cross 2nd Class
1968: Member of the Bavarian Academy of Sciences in the mathematics and science class
1971: Bavarian Order of Merit
1978: Wilhelm Exner Medal (Austria)
1982: Federal Merit Cross 1st Class
1984: Member of the German Academy of Sciences Leopoldina
1986: Bavarian Maximilian Order for Science and Art
1987: Honorary Member of the German Informatics Society
1988: Golden Ring of Honour of the Deutsches Museum
1988: IEEE Computer Pioneer Award
1997: Heinz Maier-Leibnitz Medal from the Technical University of Munich
1998: Corresponding member of the Austrian Academy of Sciences
2002: Honorary Member of the Deutsches Museum
2004: Silver Medal of Merit of the Bavarian Academy of Sciences

Honorary doctorates

1974: Honorary Doctor of the University of Grenoble
1989: Honorary Doctor of the University of Passau
1998: Honorary doctorate from the Bundeswehr University Munich (Neubiberg)

Publications

His publications include a very influential paper on compilers.

References

External links

Oral history interview with Friedrich L. Bauer, Charles Babbage Institute, University of Minnesota. Bauer discusses his education and early research, including the European side of the development of ALGOL, as well as his later work in numerical analysis and programming languages.
Photograph of F. L. Bauer (provided by Brian Randell)
Bauer about Rutishauser at a symposium at the ETH Zürich in 2002
Author profile in the database zbMATH

1924 births
2015 deaths
German computer scientists
20th-century German mathematicians
Historians of mathematics
People from Regensburg
Programming language designers
Software engineering researchers
Computer science educators
Technical University of Munich faculty
Ludwig Maximilian University of Munich alumni
Officers Crosses of the Order of Merit of the Federal Republic of Germany
21st-century German mathematicians
Members of the Austrian Academy of Sciences
German military personnel of World War II
22192523
https://en.wikipedia.org/wiki/Master%20of%20Science%20in%20Information%20Technology
Master of Science in Information Technology
A Master of Science in Information Technology (abbreviated M.Sc.IT, MScIT or MSIT) is a master's degree in the field of information technology awarded by universities in many countries, or a person holding such a degree.

The MSIT degree is designed for those managing information technology, especially the information systems development process. The MSIT degree is functionally equivalent to a Master of Information Systems Management, which is one of several specialized master's degree programs recognized by the Association to Advance Collegiate Schools of Business (AACSB). Graduates commonly go on to roles such as software engineering and data science.

A joint committee of Association for Information Systems (AIS) and Association for Computing Machinery (ACM) members develops a model curriculum for the Master of Science in Information Systems (MSIS). The most recent version of the MSIS Model Curriculum was published in 2016. The course of study is concentrated around the Information Systems discipline. The core courses typically include systems analysis, systems design, data communications, database design, project management, and security.

The degree typically includes coursework in both computer science and business skills, but the core curriculum might depend on the school and result in other degrees and specializations, including:
Master of Science (Information Technology) M.Sc.(I.T)
Master of Computer Applications (MCA)
Master in Information Science (MIS)
Master of Science in Information and Communication Technologies (MS-ICT)
Master of Science in Information Systems Management (MISM)
Master of Science in Information Technology (MSIT or MS in IT)
Master of Computer Science (MCS)
Master of Science in Information Systems (MSIS)
Master of Science in Management of Information Technology (M.S. in MIT)
Master of Information Technology (M.I.T.)
Master of IT (M. IT or MIT) in Denmark
Candidatus/candidata informationis technologiæ (Cand. it.) in Denmark
Master of Information Science and Technology (M.I.S.T.) from the University of Tokyo, Japan

See also
ABET - Accreditation Board for Engineering and Technology (United States)
List of master's degrees
Bachelor of Computer Information Systems
Bachelor of Science in Information Technology
Master of Science in Information Systems

References

Science in Information Technology
Computer science education
Information technology education
Information technology qualifications
Science education
8484
https://en.wikipedia.org/wiki/Deus%20Ex%20%28video%20game%29
Deus Ex (video game)
Deus Ex is a 2000 action role-playing game developed by Ion Storm and published by Eidos Interactive. Set in a cyberpunk-themed dystopian world in the year 2052, the game follows JC Denton, an agent of the fictional agency United Nations Anti-Terrorist Coalition (UNATCO), who is given superhuman abilities by nanotechnology, as he sets out to combat hostile forces in a world ravaged by inequality and a deadly plague. His missions entangle him in a conspiracy that brings him into conflict with the Triads, Majestic 12, and the Illuminati.

Deus Ex's gameplay combines elements of the first-person shooter with stealth, adventure, and role-playing genres, allowing for its tasks and missions to be completed in a variety of ways, which in turn lead to differing outcomes. Presented from the first-person perspective, the player can customize Denton's various abilities such as weapon skills or lockpicking, increasing his effectiveness in these areas; this opens up different avenues of exploration and methods of interacting with or manipulating other characters. The player can complete side missions away from the primary storyline by moving freely around the available areas, which can reward the player with experience points to upgrade abilities and alternative ways to tackle main missions.

Powered by the Unreal Engine, the game was released for Microsoft Windows in June 2000, with a Mac OS port following the next month. A modified version of the game was released for the PlayStation 2 in 2002 as Deus Ex: The Conspiracy. In the years following its release, Deus Ex has received additional improvements and content from its fan community.

The game received critical acclaim, including being named "Best PC Game of All Time" in PC Gamer's "Top 100 PC Games" in 2011 and in a poll carried out by the UK gaming magazine PC Zone. It received several Game of the Year awards, drawing praise for its pioneering designs in player choice and multiple narrative paths. Deus Ex has been regarded as one of the best video games of all time. As of April 23, 2009, it had sold more than 1 million copies. The game led to a series, which includes the sequel Deus Ex: Invisible War (2003), and three prequels: Deus Ex: Human Revolution (2011), Deus Ex: The Fall (2013), and Deus Ex: Mankind Divided (2016).

Gameplay

Deus Ex incorporates elements from four video game genres: role-playing, first-person shooter, adventure, and "immersive simulation", the last of which is described as a game in which "nothing reminds you that you're just playing a game." For example, the game uses a first-person camera during gameplay and includes exploration and character interaction as primary features. The player assumes the role of JC Denton, a nanotech-augmented operative of the United Nations Anti-Terrorist Coalition (UNATCO). This nanotechnology is a central gameplay mechanism and allows players to perform superhuman feats.

Role-playing elements

As the player accomplishes objectives, the player character is rewarded with "skill points". Skill points are used to enhance a character's abilities in eleven different areas, and were designed to provide players with a way to customize their characters; a player might create a combat-focused character by increasing proficiency with pistols or rifles, while a more furtive character can be created by focusing on lock picking and computer hacking abilities. There are four different levels of proficiency in each skill, with the skill point cost increasing for each successive level.
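The skill system described above, with four proficiency levels per skill and a skill-point cost that rises at each successive level, can be sketched roughly as follows; the level names, skills, and point costs are illustrative assumptions, not the game's actual values.

# A rough sketch of the kind of skill system described above: each skill has
# four proficiency levels, and each successive level costs more skill points.
# All names and numbers here are illustrative assumptions.

LEVELS = ["Untrained", "Trained", "Advanced", "Master"]
UPGRADE_COST = {1: 900, 2: 1800, 3: 2700}   # cost to reach level index 1, 2, 3

class Character:
    def __init__(self, skill_points):
        self.skill_points = skill_points
        self.skills = {"Pistols": 0, "Lockpicking": 0, "Computer": 0}

    def upgrade(self, skill):
        """Spend points to raise a skill by one level, if affordable."""
        level = self.skills[skill]
        if level >= len(LEVELS) - 1:
            return False                      # already at the top level
        cost = UPGRADE_COST[level + 1]
        if self.skill_points < cost:
            return False                      # not enough points yet
        self.skill_points -= cost
        self.skills[skill] += 1
        return True

jc = Character(skill_points=5000)
jc.upgrade("Lockpicking")                         # Untrained -> Trained
jc.upgrade("Lockpicking")                         # Trained -> Advanced
print(jc.skills["Lockpicking"], jc.skill_points)  # 2 2300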
Weapons may be customized through "weapon modifications", which can be found or purchased throughout the game. The player might add scopes, silencers, or laser sights; increase the weapon's range, accuracy, or magazine size; or decrease its recoil and reload time; as appropriate to the weapon type. Players are further encouraged to customize their characters through nano-augmentations—cybernetic devices that grant characters superhuman powers. While the game contains eighteen different nano-augmentations, the player can install a maximum of nine, as each must be used on a certain part of the body: one in the arms, legs, eyes, and head; two underneath the skin; and three in the torso. This forces the player to choose carefully between the benefits offered by each augmentation. For example, the arm augmentation requires the player to decide between boosting their character's skill in hand-to-hand combat or his ability to lift heavy objects. Interaction with non-player characters (NPCs) was a significant design focus. When the player interacts with a non-player character, the game will enter a cutscene-like conversation mode where the player advances the conversation by selecting from a list of dialogue options. The player's choices often have a substantial effect on both gameplay and plot, as non-player characters will react in different ways depending on the selected answer (e.g., rudeness makes them less likely to assist the player). Combat elements Deus Ex features combat similar to first-person shooters, with real-time action, a first-person perspective, and reflex-based gameplay. As the player will often encounter enemies in groups, combat often tends toward a tactical approach, including the use of cover, strafing, and "hit-and-run". A USA Today reviewer found, "At the easiest difficulty setting, your character is puréed again and again by an onslaught of human and robotic terrorists until you learn the value of stealth." However, through the game's role-playing systems, it is possible to develop a character's skills and augmentations to create a tank-like combat specialist with the ability to deal and absorb large amounts of damage. Non-player characters will praise or criticize the main character depending on the use of force, incorporating a moral element into the gameplay. Deus Ex features a head-up display crosshair, whose size dynamically shows where shots will fall based on movement, aim, and the weapon in use; the reticle expands while the player is moving or shifting their aim, and slowly shrinks to its original size while no actions are taken. How quickly the reticle shrinks depends on the character's proficiency with the equipped weapon, the number of accuracy modifications added to the weapon, and the level of the "Targeting" nano-augmentation. Deus Ex features twenty-four weapons, ranging from crowbars, electroshock weapons, and riot baton, to laser-guided anti-tank rockets and assault rifles; both lethal and non-lethal weapons are available. The player can also make use of several weapons of opportunity, such as fire extinguishers. Player choice Gameplay in Deus Ex emphasizes player choice. Objectives can be completed in numerous ways, including stealth, sniping, heavy frontal assault, dialogue, or engineering and computer hacking. This level of freedom requires that levels, characters, and puzzles be designed with significant redundancy, as a single play-through of the game will miss large sections of dialogue, areas, and other content. 
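The dynamic crosshair described in the combat section above, which expands while the player moves or shifts aim and settles back toward a baseline otherwise (faster for more proficient characters), might be modelled along the following lines; every constant and the update rule itself are illustrative assumptions, not taken from the game's code.

# A rough sketch of the dynamic crosshair described above: the reticle grows
# while the player is moving or shifting aim and shrinks back toward its
# baseline otherwise, with the shrink rate improved by weapon proficiency (and,
# in the game, by accuracy modifications and the Targeting augmentation).
# All numeric constants are illustrative assumptions.

BASE_RADIUS = 10.0      # reticle radius when fully settled (arbitrary units)
MAX_RADIUS = 40.0
GROW_PER_SECOND = 60.0

def update_reticle(radius, moving, proficiency, dt):
    """Advance the reticle radius by one frame of dt seconds."""
    if moving:
        radius += GROW_PER_SECOND * dt
    else:
        shrink_rate = 15.0 * (1 + proficiency)   # higher skill settles faster
        radius -= shrink_rate * dt
    return max(BASE_RADIUS, min(MAX_RADIUS, radius))

# Example: a skilled character stops moving and the reticle settles over time.
radius = MAX_RADIUS
for frame in range(30):                           # half a second at 60 fps
    radius = update_reticle(radius, moving=False, proficiency=3, dt=1 / 60)
print(round(radius, 1))                           # 10.0, back to the baseline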
In some missions, the player is encouraged to avoid using deadly force, and specific aspects of the story may change depending on how violent or non-violent the player chooses to be. The game is also unusual in that two of its boss villains can be killed off early in the game, or left alive to be defeated later, and this too affects how other characters interact with the player. Because of its design focus on player choice, Deus Ex has been compared with System Shock, a game that inspired its design. Together, these factors give the game a high degree of replayability, as the player will have vastly different experiences, depending on which methods they use to accomplish objectives. Multiplayer Deus Ex was designed as a single-player game, and the initial releases of the Windows and Macintosh versions of the game did not include multiplayer functionality. Support for multiplayer modes was later incorporated through patches. The component consists of three game modes: deathmatch, basic team deathmatch, and advanced team deathmatch. Five maps, based on levels from the single-player portion of the game, were included with the original multiplayer patch, but many user-created maps exist, while also many features of the single-player game missing in multiplayer have been re-introduced by various user RPG modifications. The PlayStation 2 release of Deus Ex does not offer a multiplayer mode. In April 2014 it was announced that GameSpy would cease their masterserver services, also affecting Deus Ex. A community-made patch for the multiplayer mode has been created as a response to this. Synopsis Setting and characters Deus Ex takes place in 2052, in an alternate history where real-world conspiracy theories are true. These include speculations regarding black helicopters, vaccinations, and FEMA, as well as Area 51, the ECHELON network, Men in Black, chupacabras (in the form of "greasels"), and grey aliens. Mysterious groups such as Majestic 12, the Illuminati, the Knights Templar, the Bilderberg Group, and the Trilateral Commission also either play a central part in the plot or are alluded to during the course of the game. The plot of Deus Ex depicts a society on a slow spiral into chaos. There is a massive division between the rich and the poor, not only socially, but in some cities physically. A lethal pandemic, known as the "Gray Death", ravages the world's population, especially within the United States, and has no cure. A synthetic vaccine, "Ambrosia", manufactured by the company VersaLife, nullifies the effects of the virus but is in critically short supply. Because of its scarcity, Ambrosia is available only to those deemed "vital to the social order", and finds its way primarily to government officials, military personnel, the rich and influential, scientists, and the intellectual elite. With no hope for the common people of the world, riots occur worldwide, and some terrorist organizations have formed with the professed intent of assisting the downtrodden, among them the National Secessionist Forces (NSF) of the U.S. and a French group known as Silhouette. To combat these threats to the world order, the United Nations has expanded its influence around the globe to form the United Nations Anti-Terrorist Coalition (UNATCO). It is headquartered near New York City in a bunker beneath Liberty Island, placed there after a terrorist strike on the Statue of Liberty. 
The main character of Deus Ex is UNATCO agent JC Denton (voiced by Jay Franke), one of the first in a new line of agents physically altered with advanced nanotechnology to gain superhuman abilities, alongside his brother Paul (also voiced by Jay Franke), who joined UNATCO to avenge his parents' deaths at the hands of Majestic 12. His UNATCO colleagues include the mechanically-augmented and ruthlessly efficient field agents Gunther Hermann and Anna Navarre; Quartermaster General Sam Carter, and the bureaucratic UNATCO chief Joseph Manderley. UNATCO communications tech Alex Jacobson's character model and name are based on Warren Spector's nephew, Alec Jacobson. JC's missions bring him into contact with various characters, including NSF leader Juan Lebedev, hacker and scientist Tracer Tong, nano-tech expert Gary Savage, Nicolette DuClare (daughter of an Illuminati member), former Illuminati leader Morgan Everett, the Artificial Intelligences (AI) Daedalus and Icarus, and Bob Page, owner of VersaLife and leader of Majestic 12, a clandestine organization that has usurped the infrastructure of the Illuminati, allowing him to control the world for his own ends. Plot After completing his training, UNATCO agent JC Denton takes several missions given by Director Joseph Manderley to track down members of the National Secessionist Forces (NSF) and their stolen shipments of the Ambrosia vaccine, the treatment for the Gray Death virus. Through these missions, JC is reunited with his brother, Paul, who is also nano-augmented. JC tracks the Ambrosia shipment to a private terminal at LaGuardia Airport. Paul meets JC outside the plane and explains that he has defected from UNATCO and is working with the NSF after learning that the Gray Death is a human-made virus, with UNATCO using its power to make sure only the elite receive the vaccine. JC returns to UNATCO headquarters and is told by Manderley that both he and Paul have been outfitted with a 24-hour kill switch and that Paul's has been activated due to his betrayal. Manderley orders JC to fly to Hong Kong to eliminate Tracer Tong, a hacker whom Paul has contact with, and who can disable the kill switches. Instead, JC returns to Paul's apartment to find Paul hiding inside. Paul further explains his defection and encourages JC to also defect by sending out a distress call to alert the NSF's allies. Upon doing so, JC becomes a wanted man by UNATCO, and his kill switch is activated by Federal Emergency Management Agency (FEMA) Director Walton Simons. JC is unable to escape UNATCO forces, and both he and Paul (provided he survived the raid on the apartment) are taken to a secret prison below UNATCO headquarters. An entity named "Daedalus" contacts JC and informs him that the prison is part of Majestic 12, and arranges for him and Paul to escape. The two flee to Hong Kong to meet with Tong, who deactivates their kill switches. Tong requests that JC infiltrate the VersaLife building. Doing so, JC discovers that the corporation is the source for the Gray Death, and he can steal the plans for the virus and destroy the universal constructor (UC) that produces it. Analysis of the virus shows that its structure was designed by the Illuminati, prompting Tong to send JC to Paris to obtain their help fighting Majestic 12. JC meets with Illuminati leader Morgan Everett and learns that the technology behind the Gray Death was intended to be used for augmentation, but Majestic 12, led by trillionaire businessman and former Illuminatus Bob Page, stole and repurposed it. 
Everett recognizes that without VersaLife's UC, Majestic 12 can no longer create the virus, and will likely target Vandenberg Air Force Base, where X-51, a group of former Area 51 scientists, have built another one. After aiding the base personnel in repelling a Majestic 12 attack, JC meets X-51 leader Gary Savage, who reveals that Daedalus is an artificial intelligence (AI) borne out of the ECHELON program. Everett attempts to gain control over Majestic 12's communications network by releasing Daedalus onto the U.S. military networks, but Page counters by releasing his own AI, Icarus. Icarus merges with Daedalus to form a new AI, Helios, which can control all global communications. Savage enlists JC's help in procuring schematics for reconstructing components for the UC that were damaged during Majestic 12's raid of Vandenberg. JC finds the schematics and transmits them to Savage. Page intercepts the transmission and launches a nuclear missile at Vandenberg to ensure that Area 51, now Majestic 12's headquarters, will be the only location in the world with an operational UC. However, JC can reprogram the missile to strike Area 51. JC travels to Area 51 to confront Page. Page reveals that he seeks to merge with Helios and gain full control over nanotechnology. JC is contacted by Tong, Everett, and the Helios AI simultaneously. All three factions ask for his help in defeating Page while furthering their own objectives. Tong seeks to plunge the world into a Dark Age by destroying the global communications hub and preventing anyone from taking control of the world. Everett offers Denton the chance to return the Illuminati to power by killing Page and using the Area 51 technology to rule the world with an invisible hand. Helios wishes to merge with Denton and rule the world as a benevolent dictator with infinite knowledge and reason. The player's decision determines the future and brings the game to a close. Development After Looking Glass Technologies and Origin Systems released Ultima Underworld II: Labyrinth of Worlds in January 1993, producer Warren Spector began to plan Troubleshooter, the game that would become Deus Ex. In his 1994 proposal, he described the concept as "Underworld-style first-person action" in a real-world setting with "big-budget, nonstop action". After Spector and his team were laid off from Looking Glass, John Romero of Ion Storm offered him the chance to make his "dream game" without any restrictions. Preproduction for Deus Ex began around August 1997 and lasted roughly six months. The project's budget was $5 million to $7 million. The game's working title was Shooter: Majestic Revelations, and it was scheduled for release on Christmas 1998. The team developed the setting before the game mechanics. Noticing his wife's fascination with The X-Files, Spector connected the "real world, millennial weirdness, [and] conspiracy" topics on his mind and decided to make a game about them that would appeal to a broad audience. The Shooter design document cast the player as an augmented agent working against an elite cabal in the "dangerous and chaotic" 2050s. It cited Half-Life, Fallout, Thief: The Dark Project, and GoldenEye 007 as game design influences, and used the stories and settings of Colossus: The Forbin Project, The Manchurian Candidate, Robocop, The X-Files, and Men in Black as reference points. 
The team designed a skill system that featured "special powers" derived from nanotechnological augmentation and avoided the inclusion of die rolling and skills that required micromanagement. Spector also cited Konami's 1995 role-playing video game Suikoden as an inspiration, stating that the limited choices in Suikoden inspired him to expand on the idea with more meaningful choices in Deus Ex. In early 1998, the Deus Ex team grew to 20 people, and the game entered a 28-month production phase. The development team consisted of three programmers, six designers, seven artists, a writer, an associate producer, a "tech", and Spector. Two writers and four testers were hired as contractors. Chris Norden was the lead programmer and assistant director, Harvey Smith the lead designer, Jay Lee the lead artist, and Sheldon Pacotti the lead writer. Close friends of the team who understood the intentions behind the game were invited to playtest and give feedback. The wide range of input led to debates in the office and changes to the game. Spector later concluded that the team was "blinded by promises of complete creative freedom", and by their belief that the game would have no budget, marketing, or time restraints. By mid-1998, the game's title had become Deus Ex, derived from the Latin literary device deus ex machina ("god from the machine"), in which a plot is resolved by an unpredictable intervention. Spector felt that the best aspects of Deus Ex's development were the "high-level vision" and length of preproduction, flexibility within the project, testable "proto-missions", and the Unreal Engine license. The team's pitfalls included the management structure, unrealistic goals, underestimating risks with artificial intelligence, their handling of proto-missions, and weakened morale from bad press. Deus Ex was released on June 23, 2000, and published by Eidos Interactive for Microsoft Windows. The team planned third-party ports for Mac OS 9 and Linux. Design The original 1997 design document for Deus Ex privileges character development over all other features. The game was designed to be "genre-busting": in parts simulation, role-playing, first-person shooter, and adventure. The team wanted players to consider "who they wanted to be" in the game, and for that to alter how they behaved in the game. In this way, the game world was "deeply simulated", or realistic and believable enough that the player would solve problems in creative, emergent ways without noticing distinct puzzles. However, the simulation ultimately failed to maintain the desired level of openness, and the team had to brute-force "skill", "action", and "character interaction" paths through each level. Playtesting also revealed that their idea of a role-playing game based on the real world was more interesting in theory than in reality, as certain aspects of the real world, such as hotels and office buildings, were not compelling in a game. The game's story changed considerably during production, but the idea of an augmented counterterrorist protagonist named JC Denton remained throughout. Though Spector originally pictured Deus Ex as akin to The X-Files, lead writer Sheldon Pacotti felt that it ended up more like James Bond. Spector wrote that the team overextended itself by planning highly elaborate scenes. Designer Harvey Smith removed a mostly complete White House level due to its complexity and production needs. Finished digital assets were repurposed or abandoned by the team. 
Pete Davison of USgamer referred to the White House and presidential bunker as "the truly deleted scenes of Deus Ex lost levels". One of the things that Spector wanted to achieve in Deus Ex was to make JC Denton a cipher for the player, to create a better immersion and gameplay experience. He did not want the character to force any emotion, so that whatever feelings the player may be experiencing come from themselves rather than from JC Denton. To do this, Spector instructed voice actor Jay Anthony Franke to record his dialogue without any emotion but in a monotone voice, which is unusual for a voice acting role. Once coded, the team's game systems did not work as intended. The early tests of the conversation system and user interface were flawed. The team also found augmentations and skills to be less interesting than they had seemed in the design document. In response, Harvey Smith substantially revised the augmentations and skills. Production milestones served as wake-up calls for the game's direction. A May 1998 milestone that called for a functional demo revealed that the size of the game's maps caused frame rate issues, which was one of the first signs that maps needed to be cut. A year later, the team reached a milestone for finished game systems, which led to better estimates for their future mission work and the reduction of the 500-page design document to 270 pages. Spector recalled Smith's mantra on this point: "less is more". One of the team's biggest blind spots was the AI programming for NPCs. Spector wrote that they considered it in preproduction, but that they did not figure out how to handle it until "relatively late in development". This led to wasted time when the team had to discard their old AI code. The team built atop their game engine's shooter-based AI instead of writing new code that would allow characters to exhibit convincing emotions. As a result, NPC behavior was variable until the very end of development. Spector felt that the team's "sin" was their inconsistent display of a trustable "human AI". Technology The game was developed on systems including dual-processor Pentium Pro 200s and Athlon 800s with eight and nine gigabyte hard drives, some using SCSI. The team used "more than 100 video cards" throughout development. Deus Ex was built using Visual Studio, Lightwave, and Lotus Notes. They also made a custom dialogue editor, ConEdit. The team used UnrealEd atop the Unreal game engine for map design, which Spector wrote was "superior to anything else available". Their trust in UnrealScript led them to code "special-cases" for their immediate mission needs instead of more generalized multi-case code. Even as concerned team members expressed misgivings, the team only addressed this later in the project. To Spector, this was a lesson to always prefer "general solutions" over "special casing", such that the toolset works predictably. They waited to license a game engine until after preproduction, expecting the benefits of licensing to be more time for the content and gameplay, which Spector reported to be the case. They chose the Unreal engine, as it did 80% of what they needed from an engine and was more economical than building from scratch. Their small programming team allowed for a larger design group. The programmers also found the engine accommodating, though it took about nine months to acclimate to the software. 
Spector felt that they would have understood the code better had they built it themselves, instead of "treating the engine as a black box" and coding conservatively. He acknowledged that this precipitated the Direct3D issues in their final release, which slipped through their quality assurance testing. Spector also noted that the artificial intelligence, pathfinding, and sound propagation were designed for shooters and should have been rewritten from scratch instead of relying on the engine. He thought the licensed engine worked well enough that he expected to use the same engine for the game's sequel Deus Ex: Invisible War and Thief 3. He added that developers should not attempt to force their technology to perform in ways for which it was not intended, and should find a balance between perfection and pragmatism. Music The soundtrack of Deus Ex, composed by Alexander Brandon (primary contributor, including main theme), Dan Gardopée ("Naval Base" and "Vandenberg"), Michiel van den Bos ("UNATCO", "Lebedev's Airfield", "Airfield Action", "DuClare Chateau", plus minor contribution to some of Brandon's tracks), and Reeves Gabrels ("NYC Bar"), was praised by critics for complementing the gritty atmosphere predominant throughout the game with melodious and ambient music incorporated from a number of genres, including techno, jazz, and classical. The music sports a basic dynamic element, similar to the iMUSE system used in early 1990s LucasArts games; during play, the music changes to a different iteration of the currently playing song based on the player's actions, such as when the player starts a conversation, engages in combat, or transitions to the next level. All the music in the game is tracker-based; Gabrels' contribution, "NYC Bar", was converted to a module by Alexander Brandon. Release Deus Ex has been re-released in several iterations since its original publication and has also been the basis of several mods developed by its fan community. The Deus Ex: Game of the Year Edition, which was released on May 8, 2001, contains the latest game updates, a software development kit, a separate soundtrack CD, and a page from a fictional newspaper featured prominently in Deus Ex titled The Midnight Sun, which recounts recent events in the game's world. However, later releases of that version do not include the soundtrack CD and instead contain a PDF version of the newspaper on the game's disc. The Mac OS version of the game, released a month after the Windows version, shipped with the same capabilities and can also be patched to enable multiplayer support. However, publisher Aspyr Media did not release any subsequent editions of the game or any additional patches. As such, the game is only supported in Mac OS 9 and the "Classic" environment in Mac OS X, neither of which is compatible with Intel-based Macs. The Windows version will run on Intel-based Macs using Crossover, Boot Camp, or other software that enables a compatible version of Windows to run on a Mac. A PlayStation 2 port of the game, retitled Deus Ex: The Conspiracy outside of Europe, was released on March 26, 2002. Along with motion-captured character animations and pre-rendered introductory and ending cinematics that replaced the original versions, it features a simplified interface with optional auto-aim. There are many minor changes in level design, some to balance gameplay, but most to accommodate loading transition areas, due to the memory limitations of the PlayStation 2. 
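The dynamic music system described above is only outlined in broad strokes, and Ion Storm's actual implementation is not documented here. Purely as an illustration of the general idea (swapping between pre-authored iterations of the currently playing track in response to game state), the following minimal Python sketch uses invented module and event names:

```python
# Illustrative sketch of state-based music switching, in the spirit of the
# dynamic system described above. All module and event names are invented;
# this is not Ion Storm's implementation.

TRACK_SECTIONS = {
    "explore": "unatco_main.it",        # default iteration of the level's song
    "combat": "unatco_combat.it",       # more intense iteration of the same song
    "conversation": "unatco_convo.it",  # stripped-down iteration for dialogue
}

current_state = "explore"

def on_game_event(event: str) -> str:
    """Map a game event to the music section that should play next."""
    global current_state
    if event == "enemy_alerted":
        current_state = "combat"
    elif event == "dialogue_started":
        current_state = "conversation"
    elif event in ("dialogue_ended", "combat_ended"):
        current_state = "explore"
    return TRACK_SECTIONS[current_state]

# Example: the player is spotted, so playback switches to the combat iteration.
print(on_game_event("enemy_alerted"))  # unatco_combat.it
```

A real tracker-based system would additionally align such switches to pattern or beat boundaries so that transitions between iterations remain musically seamless.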
The PlayStation 2 version was re-released in Europe on the PlayStation 3 as a PlayStation 2 Classic on May 16, 2012. Loki Games worked on a Linux version of the game, but the company went out of business before releasing it. The OpenGL layer they wrote for the port, however, was sent out to Windows gamers through an online patch. Though their quality assurance did not see major Direct3D issues, players noted "dramatic slowdowns" immediately following the launch, and the team did not understand the "black box" of the Unreal engine well enough to make it do exactly what they needed. Spector characterized Deus Ex reviews as falling into two categories: those that begin by claiming that "Warren Spector makes games all by himself" and those that assert that "Deus Ex couldn't possibly have been made by Ion Storm". He has said that the game won over 30 "best of" awards in 2001, and concluded that their final game was not perfect, but that they were much closer for having tried to "do things right or not at all". Mods Deus Ex was built on the Unreal Engine, which already had an active community of modders. In September 2000, Eidos Interactive and Ion Storm announced in a press release that they would be releasing the software development kit (SDK), which included all the tools used to create the original game. Several team members, as well as project director Warren Spector, stated that they were "really looking forward to seeing what [the community] does with our tools". The kit was released on September 22, 2000, and soon gathered community interest, followed by the release of tutorials and small mods, and later by announcements of large mods and conversions. While Ion Storm did not hugely alter the engine's rendering and core functionality, they introduced role-playing elements. In 2009, a fan-made mod called The Nameless Mod (TNM) was released by Off Topic Productions. The game's protagonist is a user of an Internet forum, with digital places represented as physical locations. The mod offers roughly the same amount of gameplay as Deus Ex and adds several new features to the game, with a more open world structure than Deus Ex and new weapons such as the player character's fists. The mod was developed over seven years and has thousands of lines of recorded dialogue and two different parallel story arcs. Upon its release, TNM earned a 9/10 overall from PC PowerPlay magazine. In Mod DB's 2009 Mod of the Year awards, The Nameless Mod won the Editor's Choice award for Best Singleplayer Mod. In 2015, during the 15th anniversary of the game's release, Square Enix (which had acquired Eidos earlier) endorsed a free fan-created mod, Deus Ex: Revision, which was released through Steam. The mod, created by Caustic Creative, is a graphical overhaul of the original game, adding support for newer versions of DirectX, upgraded textures adapted from previous mods, a remixed soundtrack, and more world-building aesthetics. It also alters aspects of gameplay, including new level design paths and in-game architecture. Another overhaul mod, GMDX, released its final version in mid-2017 with enhanced artificial intelligence, improved physics, and upgraded visual textures. The Lay D Denton Project, a mod adding the ability to play as a female JC – a feature that had been planned for Deus Ex but ultimately not implemented – was released in 2021. 
This included the re-recording of all of JC's voice lines by voice actress Karen Rohan, the addition of 3D models for the character, and the editing of all gendered references to JC, including in other characters' voice clips. The audio editing was the most difficult aspect, as any abnormalities would have been noticed easily; a few characters were too difficult to edit, and had to be recast for the mod. Reception Sales According to Computer Gaming World's Stefan Janicki, Deus Ex had "sold well in North America" by early 2001. In the United States, it debuted at #6 on PC Data's sales chart for the week ending June 24, at an average retail price of $40. It fell to eighth place in its second week but rose again to sixth place in its third. It proceeded to place in the top 10 rankings for August 6–12 and the week ending September 2, and to secure 10th place overall for the months of July and August. Deus Ex achieved sales of 138,840 copies and revenues of $5 million in the United States by the end of 2000, according to PC Data. The firm tracked another 91,013 copies sold in the country during 2001. The game was a larger hit in Europe; Janicki called it a "blockbuster" for the region, which broke a trend of weak sales for 3D games. He wrote, "[I]n Europe—particularly in England—the action/RPG dominated the charts all summer, despite competition from heavyweights like Diablo II and The Sims." In the German-speaking market, PC Player reported sales over 70,000 units for Deus Ex by early 2001. It debuted at #3 in the region for July 2000 and held the position in August, before dropping to #10, #12 and #27 over the following three months. In the United Kingdom, Deus Ex reached #1 on the sales charts during August and spent three months in the top 10. It received a "Silver" award from the Entertainment and Leisure Software Publishers Association (ELSPA) in February 2002, indicating lifetime sales of at least 100,000 units in the United Kingdom. The ELSPA later raised it to "Gold" status, for 200,000 sales. In April 2009, Square Enix revealed that Deus Ex had surpassed 1 million sales globally, but was outsold by Deus Ex: Invisible War. Critical response Deus Ex received critical acclaim, attaining a score of 90 out of 100 from 28 critics on Metacritic. Thierry Nguyen from Computer Gaming World said that the game "delivers moments of brilliance, idiocy, ingenuity, and frustration". Computer Games Magazine praised the title for its deep gameplay and its use of multiple solutions to situations in the game. Similarly, Edge highlighted the game's freedom of choice, saying that Deus Ex "never tells you what to do. Goals are set, but alter according to your decisions." Eurogamer's Rob Fahey lauded the game, writing, "Moody and atmospheric, compelling and addictive, this is first person gaming in grown-up form, and it truly is magnificent." Jeff Lundrigan reviewed the PC version of the game for Next Generation, rating it five stars out of five, and stated that "This is hands-down one of the best PC games ever made. Stop reading and go get yours now." Former GameSpot reviewer Greg Kasavin, though awarding the game a score of 8.2 out of 10, was disappointed by the security and lockpicking mechanics. "Such instances are essentially noninteractive", he wrote. "You simply stand there and spend a particular quantity of electronic picks or modules until the door opens or the security goes down." 
Kasavin made similar complaints about the hacking interface, noting that "Even with basic hacking skills, you'll still be able to bypass the encryption and password protection ... by pressing the 'hack' button and waiting a few seconds". The game's graphics and voice acting were also met with muted enthusiasm. Kasavin complained of Deus Ex's relatively sub-par graphics, blaming them on the game's "incessantly dark industrial environments". GamePro reviewer Chris Patterson took the time to note that despite being "solid acoustically", Deus Ex had moments of weakness. He poked fun at JC's "Joe Friday, 'just the facts', deadpan", and the "truly cheesy accents" of minor characters in Hong Kong and New York City. IGN called the graphics "blocky", adding that "the animation is stiff, and the dithering is just plain awful in some spots", referring to the limited capabilities of the Unreal Engine used to design the game. The website later stated that "overall Deus Ex certainly looks better than your average game". Reviewers and players also complained about the size of Deus Ex's save files. An Adrenaline Vault reviewer noted that "Playing through the entire adventure, [he] accumulated over 250 MB of save game data, with the average file coming in at over 15 MB." The game developed a strong cult following, leading to a core modding and playing community that remained active over 15 years after its release. In an interview with IGN in June 2015, game director Warren Spector said he never expected Deus Ex to sell many copies, but he did expect it to become a cult classic among a smaller, active community, and he continues to receive fan mail from players regarding their experiences and thoughts about Deus Ex. Awards and accolades Deus Ex received over 30 "best of" awards in 2001, from outlets such as IGN, GameSpy, PC Gamer, Computer Gaming World, and The Adrenaline Vault. It won "Excellence in Game Design" and "Game Innovation Spotlight" at the 2001 Game Developers Choice Awards, and it was nominated for "Game of the Year". At the Interactive Achievement Awards, it won in the "Computer Innovation" and "Computer Action/Adventure" categories and received nominations for "Sound Design", "PC Role-Playing", and "Game of the Year" in both the PC and overall categories. The British Academy of Film and Television Arts named it "PC Game of the Year". The game also collected several "Best Story" accolades, including first prize in Gamasutra's 2006 "Quantum Leap" awards for storytelling in a video game. Deus Ex has appeared in several lists of the greatest games. It was included in IGN's "100 Greatest Games of All Time" (#40, #21 and #34 in 2003, 2005 and 2007, respectively), "Top 25 Modern PC Games" (4th place in 2010) and "Top 25 PC Games of All Time" (#20 and #21 in 2007 and 2009, respectively) lists. GameSpy featured the game in its "Top 50 Games of All Time" (18th place in 2001) and "25 Most Memorable Games of the Past 5 Years" (15th place in 2004) lists, and in the site's "Hall of Fame". PC Gamer placed Deus Ex on its "Top 100 PC Games of All Time" (#2, #2, #1 by staff and #4 by readers in 2007, 2008, 2010 and 2010, respectively) and "50 Best Games of All Time" (#10 and #27 in 2001 and 2005) lists, and it was awarded 1st place in PC Zone's "101 Best PC Games Ever" feature. It was also included in Yahoo! UK Video Games' "100 Greatest Computer Games of All Time" (28th place) list, and in Edge's "The 100 Best Videogames" (29th place in 2007) and "100 Best Games to Play Today" (57th place in 2009) lists. 
Deus Ex was named the second-best game of the 2000s by Gamasutra. In 2012, Time named it one of the 100 greatest video games of all time, and G4tv named it the 53rd best game of all time for its "complex and well-crafted story that was really the start of players making choices that genuinely affect the outcome". 1UP.com listed it as one of the most important games of all time, calling its influence "too massive to properly gauge". In 2019, the Guardian named it the 29th best game of the 21st century, describing it as a "cultural event". Legacy Sequels A sequel, Deus Ex: Invisible War, was released in the United States on December 2, 2003, and in Europe in early 2004 for Windows and Xbox. A second sequel, titled Deus Ex: Clan Wars, was initially conceived as a multiplayer-focused third game for the series. After the commercial performance and public reception of Deus Ex: Invisible War failed to meet expectations, the decision was made to set the game in a separate universe, and Deus Ex: Clan Wars was eventually published under the title Project: Snowblind. On March 29, 2007, Valve announced that Deus Ex and its sequel would be available for purchase from their Steam service. Among the games announced were several other Eidos franchise titles, including Thief: Deadly Shadows and Tomb Raider. Eidos Montréal produced a prequel to Deus Ex called Deus Ex: Human Revolution. This was confirmed on November 26, 2007, when Eidos Montréal posted a teaser trailer for the title on their website. The game was released on August 23, 2011, for the PC, PlayStation 3, and Xbox 360 platforms and received critical acclaim. On April 7, 2015, Eidos announced a sequel to Deus Ex: Human Revolution and a second prequel to Deus Ex, titled Deus Ex: Mankind Divided. It was released on August 23, 2016. Adaptation A film adaptation based on the game was initially announced in May 2002 by Columbia Pictures. The film was to be produced by Laura Ziskin, with Greg Pruss attached to write the screenplay. Peter Schlessel, president of production for Columbia Pictures, and Paul Baldwin, president of marketing for Eidos Interactive, stated that they were confident that the adaptation would be a successful development for both the studios and the franchise. In a March 2003 interview with IGN, Pruss said that the character of JC Denton would be "a little bit filthier than he was in the game". He further stated that the script was shaping up to be darker in tone than the original game. Although a release date was scheduled for 2006, the film did not get past the scripting stage. In 2012, CBS Films revived the project, buying the rights and commissioning a film inspired by the Deus Ex series; its direct inspiration was the 2011 game Human Revolution. C. Robert Cargill and Scott Derrickson were to write the screenplay, and Derrickson was to direct the film. 
References Notes Footnotes Sources External links Official page on Eidos site 2000 video games Fiction set in 2052 Action role-playing video games Cancelled Linux games Cyberpunk video games Cyberpunk Postcyberpunk Nanopunk Deus Ex Dystopian video games Eidos Interactive games Existentialist works First-person shooters Interactive Achievement Award winners Ion Storm games Classic Mac OS games Multiplayer and single-player video games Multiplayer online games PlayStation 2 games Stealth video games Works about globalism Unreal Engine games Video games scored by Alexander Brandon Video games scored by Dan Gardopée Video games scored by Michiel van den Bos Video games developed in the United States Video games about viral outbreaks Video games set in California Video games set in Hong Kong Video games set in Nevada Video games set in New York City Video games set in Paris Video games set in the 2050s Video games with alternate endings Windows games Works about conspiracy theories Motion capture in video games Immersive sims
113014
https://en.wikipedia.org/wiki/Autodesk%20Maya
Autodesk Maya
Autodesk Maya, commonly shortened to just Maya, is a 3D computer graphics application that runs on Windows, macOS and Linux, originally developed by Alias and currently owned and developed by Autodesk. It is used to create assets for interactive 3D applications (including video games), animated films, TV series, and visual effects. History Maya was originally an animation product based on code from The Advanced Visualizer by Wavefront Technologies, Thomson Digital Image (TDI) Explore, PowerAnimator by Alias, and Alias Sketch!. The IRIX-based projects were combined and animation features were added; the project codename was Maya. Walt Disney Feature Animation collaborated closely with Maya's development during its production of Dinosaur. Disney requested that the user interface of the application be customizable so that a personalized workflow could be created. This was a particular influence on the open architecture of Maya, and partly responsible for it becoming popular in the animation industry. After Silicon Graphics Inc. acquired both Alias and Wavefront Technologies, Inc., Wavefront's technology (then under development) was merged into Maya. SGI's acquisition was a response to Microsoft Corporation acquiring Softimage 3D. The new wholly owned subsidiary was named "AliasWavefront". In the early days of development, Maya started with Tcl as its scripting language, in order to leverage its similarity to a Unix shell language, but after the merger with Wavefront it was replaced with Maya Embedded Language (MEL). Sophia, the scripting language in Wavefront's Dynamation, was chosen as the basis of MEL. Maya 1.0 was released in February 1998. Following a series of acquisitions, Maya was bought by Autodesk in 2005. Under the name of the new parent company, Maya was renamed Autodesk Maya. However, the name "Maya" continues to be the dominant name used for the product. Overview Maya is an application used to generate 3D assets for use in film, television, games, and commercials. The software was initially released for the IRIX operating system. However, this support was discontinued in August 2006 after the release of version 6.5. Maya was available in both "Complete" and "Unlimited" editions until August 2008, when it was turned into a single suite. Users define a virtual workspace (scene) to implement and edit media for a particular project. Scenes can be saved in a variety of formats, the default being .mb (Maya Binary). Maya exposes a node graph architecture. Scene elements are node-based, each node having its own attributes and customization. As a result, the visual representation of a scene is based entirely on a network of interconnecting nodes that depend on each other's information. For the convenience of viewing these networks, there is a dependency graph and a directed acyclic graph. Nowadays, 3D models can be imported into game engines such as Unreal Engine and Unity. Industry usage The widespread use of Maya in the film industry is usually associated with its development on the film Dinosaur, released by Disney and The Secret Lab on May 19, 2000. In 2003, when the company received an Academy Award for Technical Achievement, it was noted to be used in films such as The Lord of the Rings: The Two Towers, Spider-Man (2002), Ice Age, and Star Wars: Episode II – Attack of the Clones. By 2015, VentureBeat Magazine stated that all ten films in consideration for the Best Visual Effects Academy Award had used Autodesk Maya and that it had been "used on every winning film since 1997." 
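The node graph described in the overview can be manipulated directly from Maya's scripting interfaces. As a minimal sketch, assuming an interactive Maya session and using Maya's bundled maya.cmds Python module (the object and attribute names below are only examples, not part of any particular production workflow), creating two objects and wiring an attribute of one into the other produces exactly the kind of dependency network a scene is built from:

```python
# Minimal sketch of Maya's node graph via the maya.cmds Python module.
# Intended to be run inside a Maya session, not as a standalone script.
import maya.cmds as cmds

# Each scene element is a node carrying its own attributes.
cube = cmds.polyCube(name="box")[0]        # returns the new transform name plus a history node
sphere = cmds.polySphere(name="ball")[0]

# Attributes can be read and written per node.
cmds.setAttr(cube + ".translateY", 2.0)
print(cmds.getAttr(cube + ".translateY"))  # 2.0

# Connecting attributes wires the nodes into a dependency network:
# the sphere's height now follows the cube's height automatically.
cmds.connectAttr(cube + ".translateY", sphere + ".translateY")

# Inspect the connections that make up the node network.
print(cmds.listConnections(cube, plugs=True))
```

The same commands exist in MEL, which maya.cmds mirrors; once connected, downstream nodes update automatically whenever their inputs change, which is the behavior the node-graph architecture is built around.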
Awards On March 1, 2003, Alias was given an Academy Award for Technical Achievement by the Academy of Motion Picture Arts and Sciences for scientific and technical achievement for their development of Maya software. In 2005, while working for Alias|Wavefront, Jos Stam shared an Academy Award for Technical Achievement with Edwin Catmull and Tony DeRose for their invention and application of subdivision surfaces. On February 8, 2008, Duncan Brinsmead, Jos Stam, Julia Pakalns and Martin Werner received an Academy Award for Technical Achievement for the design and implementation of the Maya Fluid Effects system. See also References External links Autodesk products 3D graphics software Computer-aided design software 3D animation software 3D graphics software that uses Qt Software that uses Qt IRIX software 3D computer graphics software for Linux Proprietary software that uses Qt Proprietary commercial software for Linux
1789628
https://en.wikipedia.org/wiki/NetSurf
NetSurf
NetSurf is an open-source web browser which uses its own layout engine. Its design goal is to be lightweight and portable. NetSurf provides features including tabbed browsing, bookmarks and page thumbnailing. The NetSurf project was started in April 2002 in response to a discussion of the deficiencies of the RISC OS platform's existing web browsers. Shortly after the project's inception, development versions for RISC OS users were made available for download by the project's automated build system. NetSurf was voted "Best non-commercial software" four times in Drobe Launchpad's annual RISC OS awards between 2004 and 2008. NetSurf supports both mainstream systems (e.g. macOS and Unix-like) and older or uncommon platforms (e.g. AmigaOS, Haiku, Atari TOS, RISC OS, and Redox). In 2011, the browser was ranked in an article highlighting 10 browsers for Linux published in TechRepublic and ZDNet. In 2010, it was described as a CLI browser superior to w3m. Features NetSurf's multi-platform core is written in ANSI C, and implements most of the HTML 4 and CSS 2.1 specifications using its own bespoke layout engine. As of version 2.0, NetSurf uses Hubbub, an HTML parser that follows the HTML5 specification. As well as rendering GIF, JPEG, PNG and BMP images, the browser also supports formats native to RISC OS, including Sprite, Draw and ArtWorks files. Developer John-Mark Bell suggested in 2007 that support for JavaScript could be added. The feature did not make it into NetSurf v2 in 2008, nor into NetSurf v3 in 2013, but preview builds containing early-stage JavaScript support (later much improved) became available in December 2012. On April 20, 2013, NetSurf 3.0 was released. History NetSurf began in April 2002 as a web browser for the RISC OS platform. Work on a GTK port began in June 2004 to aid development and debugging. It has since gained many of the user interface features present in the RISC OS version. The browser is packaged with several distributions including Ubuntu, NetBSD, and OpenBSD. After five years of development, the first stable version of the browser was released on 19 May 2007 to coincide with the Wakefield RISC OS show. Version 1.0 was made available for download from the project's web site and the software was sold on CD at the show. After the release of NetSurf 1.0 there were two point-releases, which largely comprised bug fixes. NetSurf 1.1 was released in August 2007 and in March 2008 the NetSurf 1.2 release was made available. NetSurf participated in Google Summer of Code in 2008 as a mentoring organisation, running four projects. These included improving the GTK front end, adding paginated PDF export support and developing the project's HTML5-compliant parsing library, Hubbub. All NetSurf development builds since 11 August 2008 have used Hubbub to parse HTML, and it is available for use in other projects under the MIT license. NetSurf was again accepted as a mentoring organisation into Google Summer of Code 2009. The projects they ran included development of LibDOM, the project's Document Object Model library, and improvement of NetSurf's user interface. The interface work included moving previously RISC OS-only functionality to the multi-platform core, including bookmarks, global history, cookie management and page search features. A port to the Windows operating system was also started. In 2010 the NetSurf project did not apply to participate in Google Summer of Code due to the developers having other commitments. 
NetSurf 2.0 was released in April 2009 for RISC OS, Linux and other Unix-like platforms, BeOS, Haiku, and AmigaOS 4. This was the first version to use the project's HTML5 parsing library, Hubbub. In May 2009 a maintenance release, NetSurf 2.1, was issued to users. It incorporated bug fixes and some improvements to page layout. NetSurf 2.5 was released in April 2010. This was the first release to use the project's library for CSS parsing and selection, LibCSS and a new internal cache for fetched content. September 2010 saw the release of NetSurf 2.6, which included a number of fixes and improvements. NetSurf 2.7 was released in April 2011, and added treeview support for features including bookmarking (called the Hotlist manager in NetSurf), history management, and cookie management. It was also the first version to be released for Mac OS X. In September 2011 NetSurf 2.8 was released. It added support for frames and iframes in the browser's core rendering engine, making them available to all front ends. The release also included support for MIME type sniffing and improved the performance of loading the images used by a web page. In April 2012 NetSurf 2.9 was released. The most significant changes were new multi-tasking behaviour, optimised URL handling, fetcher optimisations, cache optimisations, and faster CSS selection. In April 2013 NetSurf 3.0 was released. The biggest difference was the use of the new Document Object Model library, LibDOM. This new library is a foundation that paves the way for NetSurf developers to implement a fully dynamic layout engine in the future. Other improvements in NetSurf 3.0 include completely new textarea support, ability to fetch and parse CSS in parallel with HTML documents, extensive behind-the-scenes refactoring, and a host of smaller changes and fixes. In April 2014 NetSurf 3.1 was released, containing many improvements over the previous release. The highlights include much faster CSS selection performance, faster start up time, new look and feel to the treeviews (hotlist/bookmarks, global history and cookie manager), improved options handling, undo/redo support in textareas, and general improvement of forms. Also included are many other additions, optimisations and bug fixes. In July 2019 NetSurf 3.9 was released, with support for CSS Media Queries (level 4) and improvements to JavaScript handling. Ports A native BeOS/Haiku port has been developed. Since the GTK version was built for AmigaOS, using Cygnix which provides an X11 environment, a native AmigaOS port has also been developed. In January 2009, NetSurf was made available on MorphOS, an operating system that is API-compatible with AmigaOS. A Windows port is also available for download. A framebuffer port was created in September 2008. Unlike the other ports, it does not use any GUI toolkit, but instead renders its own mouse pointer, scrollbars and other widgets. The framebuffer frontend has been used to create a web kiosk on embedded systems. The Plan 9 port is also based on it. In January 2010, the NetSurf Developers announced the release of what they expected at the time to be the last release for RISC OS. Lead developer John-Mark Bell said at the time "Realistically, the people qualified to maintain the RISC OS port are up to their necks in other stuff." Subsequently, Steve Fryatt volunteered himself as maintainer. January 2011 saw the announcement of a Mac OS X port. A port to Atari 16-bit and 32-bit computers was also started in January 2011. 
Forks visurf visurf is a fork of NetSurf led by Drew DeVault. It has vi-inspired key bindings and Wayland-only UI. See also Dillo Browser timeline Comparison of web browsers Comparison of lightweight web browsers List of web browsers External links References Web browsers Free web browsers Web browsers for AmigaOS RISC OS software AmigaOS 4 software BeOS software MorphOS software POSIX web browsers 2002 software Web browsers that use GTK Free software programmed in C Cross-platform free software
3492587
https://en.wikipedia.org/wiki/Automattic
Automattic
Automattic Inc. is an American global distributed company which was founded in August 2005 and is most notable for WordPress.com (a freemium blogging service), as well as its contributions to WordPress (an open source blogging software). The company's name is a play on founder Matt Mullenweg's first name. Automattic raised US$846 million in six funding rounds. The last round of US$288 million was closed in February 2021. A subsequent private stock buyback valued the company at US$7.5 billion. The company had 1,792 employees as of October 2021. Its remote working culture was the topic of a participative journalism project by Scott Berkun, resulting in the 2013 book The Year Without Pants: WordPress.com and the Future of Work. CEO Matt Mullenweg allows company employees to work from wherever they want, whenever they want. History On January 11, 2006, it was announced that Toni Schneider would be leaving Yahoo! to become CEO of Automattic. He was previously CEO of Oddpost before it was acquired by Yahoo!, where he had continued as a senior executive. In April 2006 Automattic's Regulation D filing showed it had raised approximately $1.1 million in funding, which Mullenweg addressed in his blog. Investors were Polaris Ventures, True Ventures, Radar Partners. On 18 October 2007, Automattic acquired Gravatar. On September 23, 2008, Automattic announced acquiring IntenseDebate. Two months later, on November 15, 2008, Automattic acquired PollDaddy. On September 9, 2010, Automattic gave the WordPress trademark and control over bbPress and BuddyPress to the WordPress Foundation. On April 4, 2014, Automattic acquired Longreads. On May 19, 2015, Automattic announced the acquisition of WooThemes, including their flagship product WooCommerce. On November 21, 2016, Automattic, via a subsidiary company (Knock Knock, WHOIS There) managed the launch and later development of the .blog gTLD, becoming domain registrars. In 2017, Automattic announced that it would close its San Francisco office, which had served as an optional co-working space for its employees alongside similar spaces near Portland, Maine and in Cape Town, South Africa. On June 21, 2018, Automattic acquired Atavist and its magazine. On May 21, 2019, Automattic acquired Prospress, which provided a number of popular WooCommerce extensions and tools. On August 12, 2019, Automattic acquired Tumblr from Verizon Media. On August 16, 2019, Automattic acquired Zero BS CRM and rebranded it a year later to Jetpack CRM. On September 19, 2019, Automattic announced a Series D funding round of $300 million from Salesforce, bringing the post-money valuation of the company to $3 billion. On February 8, 2021, Automattic acquired content analytics platform parse.ly for WPVIP, Founder Matt Mullenweg announced on his blog. On June 14, 2021, Automattic acquired journaling app Day One. On July 16, 2021, Automattic acquired the podcasts app Pocket Casts. 
Projects Other projects include: After the Deadline – online proofreading tool now built into WordPress.com and Jetpack Atavist – multimedia publishing platform Akismet – anti-comment spam system capable of integration with many blogging platforms and forums bbPress – forum software blo.gs – RSS feed aggregator BuddyPress – social networking plugin suite Cloudup – file sharing application Crowdsignal (formerly Polldaddy) – polls and survey tools GlotPress – collaborative translation tool Gravatar – globally recognized avatars HappyTools - resource planning software IntenseDebate – blog comment hosting service that was launched as a private beta in January 2007 by Co-Founders Jon Fox, Isaac Keyet, and Josh Morgan, and launched as an open beta on October 30, 2007. On September 23, 2008, Automattic announced its acquisition of IntenseDebate's properties, and returned to private beta until November of that year. In 2007, IntenseDebate was selected to be part of the first class of Techstars, a Boulder, Colorado-based startup accelerator Jetpack - WordPress plugin providing a range of basic services (backup, speed, stats, etc.) Longreads – original reporting and journalism aggregator Mongoose ODM – mongodb object modeling for node.js Pocket Casts – app for listening to podcasts on IOS and Android Poster – blogging app for IOS Ping-O-Matic – pinging service Simplenote – note-taking and sync service acquired by Automattic in 2013 and later open-sourced Scroll Kit – code-free web design tool Tumblr - Microblogging platform VaultPress – backup and security service for WordPress sites VideoPress – hosted HD video for WordPress sites WooCommerce – eCommerce plugin for WordPress with a marketplace for extensions WPVIP – Enterprise WordPress hosting, support, and consulting References External links 2005 establishments in California American companies established in 2005 Companies based in San Francisco Free software companies Privately held companies based in California Remote companies Software companies based in the San Francisco Bay Area Software companies established in 2005 Software companies of the United States WordPress
32915226
https://en.wikipedia.org/wiki/List%20of%20Israeli%20Ashkenazi%20Jews
List of Israeli Ashkenazi Jews
This is a list of notable Israeli Ashkenazi Jews, including both original immigrants who obtained Israeli citizenship and their Israeli descendants. Although traditionally the term "Ashkenazi Jews" was used as an all-encompassing term referring to the Jews descended from the Jewish communities of Europe, due to the melting pot effect of Israeli society the term has gradually become more vague, as many Israeli descendants of Ashkenazi Jewish immigrants have adopted the characteristics of Israeli culture and as more descendants intermarry with descendants of other Jewish communities. The list is ordered by category of human endeavor. Persons with significant contributions in two of these categories are listed in both, to facilitate easy lookup. Politicians Shulamit Aloni – former minister Ehud Barak – prime minister (1999–2001) Yossi Beilin – leader of the Meretz-Yachad party and peace negotiator David Ben-Gurion – first Prime Minister of Israel (1948–54, 1955–63) Yitzhak Ben-Zvi – first elected/second President of Israel (1952–63) Gilad Erdan Levi Eshkol – prime minister (1963–69) Miriam Feirberg Yael German Teddy Kollek – former mayor of Jerusalem Yosef Lapid – former leader of the Shinui party Golda Meir – prime minister (1969–74) Benjamin Netanyahu – prime minister (1996–99, 2009–2021); former minister of finance; Likud party chairman Ehud Olmert – prime minister (2006–09); former mayor of Jerusalem Shimon Peres – President of Israel (2007–); prime minister (1984–86, 1995–96); Nobel Peace Prize (1994) Yitzhak Rabin – prime minister (1974–77, 1992–95); Nobel Peace Prize (1994) (assassinated November 1995) Yitzhak Shamir – prime minister (1983–84, 1986–92) Moshe Sharett – prime minister (1954–55) Ariel Sharon – prime minister (2001–06) Chaim Weizmann – first President of Israel (1949–52) Rehavam Zeevi – founder of the Moledet party (assassinated October 2001) Shelly Yachimovich – former leader of the opposition. 
Military Yigal Allon – politician, a commander of the Palmach, and a general in the IDF Haim Bar-Lev – former Chief of General Staff of the Israel Defense Forces Moshe Dayan – military leader Giora Epstein – combat pilot, modern-day "ace of aces" Uziel Gal – designer of the Uzi submachine gun Benny Gantz – former Chief of General Staff of the Israel Defense Forces Wolfgang Lotz – spy Tzvi Malkhin – Mossad agent, captured Adolf Eichmann Yonatan Netanyahu – Sayeret Matkal commando, leader of Operation Entebbe Yitzhak Rabin – military leader and fifth Prime Minister of Israel Ilan Ramon – astronaut on Columbia flight STS-107 Gilad Shalit – kidnapped soldier held in Gaza Yael Rom – first female to graduate from a full military flight course in the Western world; first woman to graduate from the Israeli Air Force Religious figures Religious-rabbis David Hartman Avraham Yitzchak Kook (1865–1935) – pre-state Ashkenazic Chief Rabbi of the Land of Israel, Israel Meir Lau – Ashkenazic Chief Rabbi of Israel (1993–2003), Chief Rabbi of Netanya (1978–88), (1937–) Aharon Lichtenstein Yona Metzger – Ashkenazic Chief Rabbi of Israel Shlomo Riskin – Ashkenazic Chief Rabbi of Efrat Haredi rabbis Yaakov Aryeh Alter – Gerrer Rebbe Shlomo Zalman Auerbach Yaakov Blau Yisroel Moshe Dushinsky – second Dushinsky rebbe and Chief Rabbi of Jerusalem (Edah HaChareidis) Yosef Tzvi Dushinsky – first Dushinsky rebbe and Chief Rabbi of Jerusalem (Edah HaChareidis) Yosef Tzvi Dushinsky – third Dushinsky rebbe Yosef Sholom Eliashiv Issamar Ginzberg – Nadvorna-Kechnia Rebbe Chaim Kanievsky Avraham Yeshayeh Karelitz (1878–1953) – Chazon Ish Nissim Karelitz – Head Justice of Rabbinical Court of Bnei Brak Meir Kessler – Chief Rabbi of Modi'in Illit Zundel Kroizer (1924–2014) – author of Ohr Hachamah Dov Landau – rosh yeshiva of the Slabodka yeshiva of Bnei Brak Yissachar Dov Rokeach – Belzer rebbe Yitzchok Scheiner (born 1922) – rosh yeshiva of the Kamenitz yeshiva of Jerusalem Elazar Menachem Shach (1899–2001) – Rav Shach Moshe Shmuel Shapira – rosh yeshiva of Beer Yaakov Dovid Shmidel – Chairman of Asra Kadisha Yosef Chaim Sonnenfeld – Chief Rabbi of Jerusalem (Edah HaChareidis) Yitzchok Tuvia Weiss – Chief Rabbi of Jerusalem (Edah HaChareidis) Amram Zaks (1926–2012) – rosh yeshiva of the Slabodka yeshiva of Bnei Brak Uri Zohar – former film director, actor, and comedian who left the entertainment world to become a rabbi Activists Uri Avnery – peace activist, Gush Shalom Yael Dayan – writer, politician, activist Michael Dorfman – Russian-Israeli essayist and human rights activist Uzi Even – gay rights activist Yehuda Glick – Israeli activist and rabbi who campaigned for expanding Jewish access to the Temple Mount Daphni Leef – Israeli activist; in 2011 sparked one of the largest waves of mass protest in Israel's history Rudy Rochman – Jewish, Zionist activist Uri Savir – peace negotiator, Peres Center for Peace Israel Shahak – political activist Natan Sharansky – Soviet-era human rights activist Cultural and entertainment figures Film, TV, and stage Popular musicians Classical musicians Writers Artists Yaacov Agam – kinetic artist Yitzhak Danziger – sculptor Uri Fink – comic book artist and writer Dudu Geva – artist and comic-strip illustrator Nachum Gutman – painter Israel Hershberg – realist painter Shimshon Holzman – painter Models Yael Bar Zohar – model Michaela Bercu – model Nina Brosh – model Anat Elimelech – model and actress; murdered in 1997 by her partner Gal Gadot – model and actress Esti Ginzborg – model Heli 
Goldenberg – former model and actress Yael Goldman – model Galit Gutmann – model Adi Himmelbleu – model Mor Katzir Rina Mor – model Hilla Nachshon – model Bar Refaeli – model Shiraz Tal – model Pnina Rosenblum – former model Academic figures Physics and chemistry Yakir Aharonov – physicist, Aharonov–Bohm effect and winner of the 1998 Wolf Prize in Physics Jacob Bekenstein – black hole thermodynamics David Deutsch – quantum computing pioneer; 1998 Paul Dirac Prize winner Richard Feynman – path integral formation, quantum theory, superfluidity; winner of 1965 Nobel Prize in Physics Josef Imry – physicist Joshua Jortner – molecular energy; 1988 winner of the Wolf Prize in Chemistry Aaron Katzir – physical chemistry Ephraim Katzir – immobilized enzymes; Japan Prize (1985) and the fourth President of Israel Rafi Levine – molecular energy; 1988 winner of the Wolf Prize in Chemistry Zvi Lipkin – physicist Mordehai Milgrom – modified Newtonian dynamics (MOND) Yuval Ne'eman – the "Eightfold way" Asher Peres – quantum theory Alexander Pines – nuclear magnetic resonance; Wolf Prize in Chemistry Laureate (1991) Giulio Racah – spectroscopy Nathan Rosen – EPR paradox Nathan Seiberg – string theory Dan Shechtman – chemist; winner of the 1999 Wolf Prize in Physics and Winner of the 2011 Nobel Prize in Chemistry for "the discovery of quasicrystals" Igal Talmi – particle physics Reshef Tenne – discovered inorganic fullerenes and inorganic nanotubes Arieh Warshel – chemist, winner of the 2013 Nobel Prize in Chemistry and contributed to the development of multiscale models for complex chemical systems" Chaim Weizmann – acetone production Biology and medicine Ruth Arnon – developed Copaxone; Wolf Prize in Medicine (1998) Aaron Ciechanover – ubiquitin system; Lasker Award (2000), Nobel Prize in Chemistry (2004) Moshe Feldenkrais – invented Feldenkrais method used in movement therapy Lior Gepstein – received American College of Cardiology's Zipes Award for his development of heart cells and pacemakers from stem cells Eyal Gur – selected by Newsweek as one of the world's top microsurgeons Avram Hershko – ubiquitin system; Lasker Award (2000), Nobel Prize in Chemistry (2004) Gavriel Iddan – inventor of capsule endoscopy Benjamin Kahn – marine biologist, defender of the Red Sea reef Yona Kosashvili – orthopedic surgeon and Chess Grandmaster Andy Lehrer – entomologist Shulamit Levenberg – inventor of a muscle tissue which isn't rejected by the body after transplant; selected by Scientific American as one of the 50 leading scientists in the world Alexander Levitzki – cancer research; Wolf Prize in Medicine (2005) Gideon Mer – malaria control Saul Merin – ophthalmologist, author of Inherited Eye Diseases Leo Sachs – blood cell research; Wolf Prize in Medicine (1980) Michael Sela – developed Copaxone; Wolf Prize in Medicine (1998) Joel Sussman – 3D structure of acetylcholinesterase, Elkeles Prize for Research in Medicine (2005) Meir Wilchek – affinity chromatography; Wolf Prize in Medicine (1987) Ada Yonath – structure of ribosome; 2009 winner of the Nobel Prize in Chemistry Amotz Zahavi – proposed the Handicap Principle Social sciences Yehuda Bauer – historian Daniel Elazar – political scientist Haim Ginott – psychologist, child psychology Eliyahu Goldratt – business consultant, Theory of Constraints Louis Guttman – sociologist Elhanan Helpman – economist, international trade Daniel Kahneman – behavioural scientist, prospect theory; Nobel Prize in Economics (2002) Smadar Lavie – anthropologist Amihai Mazar – 
archaeologist Benjamin Mazar – archaeologist Eilat Mazar – archaeologist Benny Morris – historians, New Historians Erich Neumann – analytical psychologist, development, consciousness Nurit Peled-Elhanan – educator Renee Rabinowitz – psychologist and lawyer Anat Rafaeli – organisational behaviour researcher. Ariel Rubinstein – economist Amos Tversky – behavioral scientist, prospect theory with Daniel Kahneman Yigael Yadin – archaeologist Computing and mathematics Ron Aharoni – mathematician, working in finite and infinite combinatorics Noga Alon – mathematician, computer scientist, winner of the Gödel Prize (2005) Shimshon Amitsur – mathematician, ring theory abstract algebra Robert Aumann – mathematical game theory; Nobel Prize in Economics (2005) Amir Ban – computer programmer; one of the main programmers of the Junior chess program Moshe Bar – computer programmer and creator and main developer of openMosix Yehoshua Bar-Hillel – philosopher, mathematician, and linguist, best known for his pioneering work in machine translation and formal linguistics Joseph Bernstein – mathematician; works in algebraic geometry, representation theory, and number theory Eli Biham – cryptographer and cryptanalyst, specializing in differential cryptanalysis Shay Bushinsky – computer programmer; one of the main programmers of the Junior chess program Aryeh Dvoretzky – mathematician, eighth president of the Weizmann Institute of Science Uriel Feige – computer scientist, winner of the Gödel Prize (2001) Abraham Fraenkel – mathematician, known for his contributions to axiomatic set theory and the ZF set theory Hillel Furstenberg – mathematician; Wolf Prize in Mathematics (2006/7) Shafi Goldwasser – computer scientist, winner of the Gödel Prize (1993 and 2001) David Harel – computer scientist; Israel Prize (2004) Abraham Lempel – LZW compression; IEEE Richard W. Hamming Medal (2007 and 1995) Elon Lindenstrauss – mathematician; known in the area of dynamics, particularly in the area of ergodic theory and its applications in number theory; Fields Medal recipient (2010) Joram Lindenstrauss – mathematician, known for the Johnson–Lindenstrauss lemma Michel Loève – probabilist Joel Moses – MIT provost and writer of Macsyma Yoram Moses – computer scientist, winner of the 2000 Gödel Prize in theoretical computer science and the 2009 Dijkstra Prize in distributed computing Judea Pearl – computer scientist and philosopher; known for championing the probabilistic approach to artificial intelligence and the development of Bayesian networks (see the article on belief propagation); Turing Award winner (2011) Ilya Piatetski-Shapiro – mathematician; representation theory; Wolf Prize in Mathematics winner (1990) Amir Pnueli – temporal logic; Turing Award (1996) Michael O. Rabin – nondeterminism, primality testing; Turing Award (1976) Sheizaf Rafaeli – computer scientist, scholar of computer-mediated communication Shmuel Safra – computer scientist, winner of the (2001) Gödel Prize] Adi Shamir – computer scientist; RSA encryption, differential cryptanalysis; Turing Award winner (2002) Nir Shavit – computer scientist, winner of the (2001) Gödel Prize] Saharon Shelah – mathematician, well known for logic; Wolf Prize in Mathematics winner (2001) Ehud Shapiro – computer scientist; Concurrent Prolog, DNA computing pioneer Moshe Y. 
Vardi – computer scientist; Godel Prize winner (2000) Avi Wigderson – mathematician, known for randomized algorithms; Nevanlinna Prize winner (1994) Doron Zeilberger – mathematician, known for his contributions to combinatorics Jacob Ziv – LZW compression; IEEE Richard W. Hamming Medal (2007 and 1995) Engineering David Faiman – solar engineer and director of the National Solar Energy Center Liviu Librescu – Professor of Engineering Science and Mechanics at Virginia Tech; killed in the Virginia Tech massacre Moshe Zakai – electrical engineering Jacob Ziv – electrical engineering Philosophy Martin Buber – philosopher Yeshayahu Leibowitz – philosopher Avishai Margalit – philosopher Joseph Raz – philosopher Gershom Scholem (1897–1982) – philosopher, historian Humanities Shmuel Ben-Artzi – Bible scholar; father of psychologist Sara Netanyahu and father-in-law of Israeli Prime Minister Benjamin Netanyahu Noam Chomsky – linguist Aharon Dolgopolsky – linguist, Nostratic Moshe Goshen-Gottstein – Bible scholar Michael Oren – historian, educator, writer, and Israeli ambassador to the US> Hans Jakob Polotsky – linguist Chaim Rabin – Bible scholar Alice Shalvi – English literature, educator. Gershon Shaked – Hebrew literature Shemaryahu Talmon – Bible scholar Emanuel Tov – Bible scholar Architecture Richard Kauffmann – architect Neri Oxman – architect Entrepreneurs and businesspeople Technology Amnon Amir – co-founder of Mirabilis (developer of ICQ) Moshe Bar – founder of XenSource, Qumranet Naftali Bennett – founder of Cyota, current Member of the Knesset and leader of The Jewish Home political party Safra Catz – president of Oracle Yair Goldfinger – co-founder of Mirabilis (developer of ICQ) Yossi Gross – recipient of almost 600 patents; founder of 27 medical technology companies in Israel; Chief Technology Office officer of Rainbow Medical Andi Gutmans – co-founder of Zend Technologies (developer of PHP) Daniel M. Lewin – founder of Akamai Technologies Bob Rosenschein – founder of Kivun Computers, Accent Software, GuruNet, Answers.com, Curiyo (Israeli-based) Gil Schwed – founder of Check Point Zeev Suraski – co-founder of Zend Technologies (developer of PHP) Ariki and Yossi Vardi – co-founder of Mirabilis (developer of ICQ) Sefi Vigiser – co-founder of Mirabilis (developer of ICQ) Other industries Ted, Micky and Shari Arison – founder and owners of Carnival Corporation Amir Gal-Or Jamie Geller – celebrity chef and founder of the Kosher Media Network Eival Gilady Eli Hurvitz – head of Teva Pharmaceuticals Mordecai Meirowitz – inventor of the Mastermind board game Arnon Milchan – Hollywood film producer & founder of Regency Enterprises. 
Sammy Ofer – shipping magnate Yuli Ofer – real estate mogul Guy Oseary – talent agent, businessman, investor, and music manager; founder of Maverick Records; personal music manager of American entertainer Madonna Stef Wertheimer – manufacturing industrialist Josh Reinstein – director of the Knesset Christian Allies Caucus Eyal Ofer - real estate and shipping magnate Moris Kahn - billionaire, entrepreneur Sports Association football Eyal Berkovic – midfielder (national team), Maccabi Haifa, Southampton, West Ham United, Celtic, Manchester City, Portsmouth Ronnie Rosenthal – left winger/striker (national team), Maccabi Haifa, Liverpool, Tottenham, Watford Giora Spiegel – midfielder (national team), Maccabi Tel Aviv Mordechai Spiegler – Soviet Union/Israel – striker (Israel national team), manager Nahum Stelmach – striker (national team) Yochanan Vollach – defender (national team), Maccabi Haifa, Hapoel Haifa, HKFC; current president of Maccabi Haifa Basketball Miki Berkovich – Maccabi Tel Aviv David Blu (formerly "Bluthenthal") – US and Israel, Euroleague 6' 7" forward (Maccabi Tel Aviv) Tal Brody – US and Israel, Euroleague 6' 2" shooting guard, Maccabi Tel Aviv Tal Burstein – Maccabi Tel Aviv Tanhum Cohen-Mintz – Latvian-born Israeli, 6' 8" center; two-time Euroleague All-Star Shay Doron – Israel and US, WNBA 5' 9" guard, University of Maryland (New York Liberty) Tamir Goodman – US and Israel, 6' 3" shooting guard Yotam Halperin – 6' 5" guard, drafted in 2006 NBA draft by Seattle SuperSonics (Olympiacos) Gal Mekel – former point guard in NBA team Dallas Mavericks Amit Tamir – 6' 10" center/forward, University of California, PAOK Thessaloniki (Hapoel Jerusalem) Boxing Hagar Finer – WIBF bantamweight champion Yuri Foreman – Belarusian-born Israeli US middleweight and World Boxing Association super welterweight champion Roman Greenberg – International Boxing Organization's Intercontinental heavyweight champion; "The Lion from Zion" Fencing Boaz Ellis – foil, five-time Israeli champion Lydia Hatoel-Zuckerman – foil, six-time Israeli champion Andre Spitzer – killed by terrorists Figure skating Alexei Beletski – Ukrainian-born Israeli ice dancer, Olympian Galit Chait – ice dancer; World Championship bronze 2002 Natalia Gudina – Ukrainian-born Israeli figure skater, Olympian Tamar Katz – US-born Israeli figure skater Lionel Rumi – ice dancer Sergei Sakhnovsky – ice dancer, World Championship bronze 2002 Michael Shmerkin – Soviet-born Israeli figure skater Alexandra Zaretski – Belarusian-born Israeli; ice dancer, Olympian Roman Zaretski – Belarusian-born Israeli ice dancer, Olympian Sailing Zefania Carmel – yachtsman, world champion (420 class) Gal Fridman – windsurfer (Olympic gold: 2004 (Israel's first gold medalist), bronze: 1996 (Mistral class); world champion: 2002) Lydia Lazarov – yachting world champion (420 class) Swimming Vadim Alexeev – Kazakhstan-born Israeli swimmer, breaststroke Guy Barnea – swimmer who participated in the 2008 Summer Olympics Adi Bichman – 400-m and 800-m freestyle, 400-m medley Yoav Bruck – 50-m freestyle and 100-m freestyle Eran Groumi – 100 and 200 m backstroke, 100-m butterfly Judith Haspel (born "Judith Deutsch") – Austrian-born Israeli; held every Austrian women's middle and long distance freestyle record in 1935; refused to represent Austria at the 1936 Summer Olympics, protesting Hitler, stating, "I refuse to enter a contest in a land which so shamefully persecutes my people" Dan Kutler – US-born Israeli; 100-m butterfly, 4×100-m medley relay Keren Leibovitch 
– Paralympic swimmer, four-time gold medal-winner, 100-m backstroke, 50- and 100-m freestyle, 200-m individual medley Tal Stricker – 100- and 200-m breaststroke, 4×100-m medley relay Eithan Urbach – backstroke swimmer, European championship silver and bronze; 100-m backstroke Tennis Noam Behr Ilana Berger Gilad Bloom Jonathan Erlich – 6 doubles titles, 6 doubles finals; won 2008 Australian Open Men's Doubles (w/Andy Ram), highest world doubles ranking No. 5 Shlomo Glickstein – highest world singles ranking No. 22, highest world doubles ranking No. 28 Julia Glushko Amos Mansdorf – highest world singles ranking No. 18 Shahar Pe'er – three WTA career titles; highest world singles ranking No. 11, highest world doubles ranking No. 21 Other Alex Averbukh – pole vaulter (European champion: 2002, 2006) Boris Gelfand – chess Grandmaster; ~2700 peak ELO rating Michael Kolganov – Soviet-born Israeli, sprint canoer/kayak paddler, world champion, Olympic bronze 2000 (K-1 500-meter) Marina Kravchenko – Ukrainian-born Israeli table tennis player, Soviet and Israeli national teams Sofia Polgar – Hungarian-born Israeli chess Grandmaster; sister of chess grandmasters Susan Polgar and Judith Polgar Ilya Smirin – chess Grandmaster; ~2700 peak ELO rating Emil Sutovsky – chess Grandmaster; ~2700 peak ELO rating Criminals Hanan Goldblatt – actor, comedian and singer; was convicted in 2008 of perpetrating acts of rape and other sex offenses against women in his acting class Baruch Goldstein – massacred 29 Arabs in the Cave of the Patriarchs in 1994 Avraham Hirschson – politician who was among other things the former Israeli Minister of Finance; convicted of stealing close to 2 million NIS from the National Workers Labor Federation while he was its chairman Zeev Rosenstein – mob boss and drug trafficker Gonen Segev – former Israeli member of Knesset and government minister; convicted for an attempt of drug smuggling, for forgery and electronic commerce fraud Ehud Tenenbaum – computer hacker also known as "The Analyzer" who became famous in 1998 when he was caught by the FBI after hacking into the computers of NASA, the Pentagon, the Knesset and the US Army, and after installing trojan horse software on some of those computers Dudu Topaz – TV personality, comedian, actor, screenwriter, playwright, author and radio and television host; committed suicide in August 2009 after being charged with inciting violence against national media figures See also Israelis List of notable Israelis List of Ashkenazi Jews in central and eastern Europe List of Ashkenazi Jews in northern Europe List of Sephardic Jews in southern and western Europe List of Sephardic Jews in the Balkans References Israeli Ashkenazi Jews Ashkenazi Israeli people of European descent
28504786
https://en.wikipedia.org/wiki/Stock%20market%20data%20systems
Stock market data systems
Stock market data systems communicate market data—information about securities and stock trades—from stock exchanges to stockbrokers and stock traders.

History
The earliest stock exchanges were in France in the 12th century and in Bruges and Italy in the 13th. Presumably data about trades in those times was written down by scribes and traveled by courier. In the mid-19th century Reuters sent data by carrier pigeon between Germany and Belgium. In London, early exchanges were located near coffee houses, which may have played a part in trading.

Chalk boards
In the late 1860s, in New York, young men called "runners" carried prices between the exchange and brokers' offices, and often these prices were posted by hand on large chalk boards in the offices. Updating a chalk board was an entry point for many traders getting into financial markets, and as mentioned in the book Reminiscences of a Stock Operator, those updating the boards would wear fur sleeves so they wouldn't accidentally erase prices. The New York Stock Exchange is known as the "Big Board", perhaps because of these large chalk boards. Until recently, in some countries such chalkboards continued in use. Morse code was used in Chicago until 1967 for traders to send data to clerks called "board markers".

Newspapers
From 1797 to 1811, the New York Price Current was published in the United States. It was apparently the first newspaper to publish stock prices, and also showed prices of various commodities. In 1884 the Dow Jones company published the first stock market averages, and in 1889 the first issue of the Wall Street Journal appeared. As time passed, other newspapers added market pages. The New York Times was first published in 1851, and added stock market tables at a later date.

Electronic systems

Ticker tape
In 1863 Edward A. Calahan of the American Telegraph Company invented a stock telegraph printing instrument which allowed data on stocks, bonds, and commodities to be sent directly from exchanges to broker offices around the country. It printed the data on wide paper tape wound on large reels. The sound it made while printing earned it the name "stock ticker". Other inventors improved on this device, and ultimately Thomas Edison patented a "universal stock ticker", selling over 5,000 in the late 19th century. In the early 20th century Western Union acquired rights to an improved ticker which could deal with the increasing volume of stocks sold per day. At the time of the stock market crash in October 1929, trading volumes were so high that the tickers fell behind, contributing to the panic. In the 1930s the New York Quotation Stock Ticker became widely used. A further improvement was in place in 1960. In 1923 Trans Lux Corporation delivered a rear projection system which projected the moving ticker onto a screen where all in a brokerage office could see it. It was a great success, and by 1949 there were more than 1400 stock-ticker projectors in the U.S. and another 200 in Canada. In 1959 they started shipping a Trans-Video system called CCTV which gave a customer a small video desk monitor where he could monitor the tickers. In August 1963 Ultronics introduced Lectrascan, the first wall-mounted all-electronic ticker display system. By 1964 there were over 1100 units in operation in stock broker offices in the U.S. and Canada. Competition, including Ultronics' Lectrascan electronic wall system, led Trans-Lux to introduce the Trans-Lux Jet. Jets of air controlled lighted disks which moved on a belt on the broker's wall.
Brokers ordered over 1000 units in the first six months, and by the middle of 1969 more than 3000 were in use in the U.S. and Canada. Automatic quotation boards A quotation board is a large vertical electronic display located in a brokerage office, which automatically gives current data on stocks chosen by the local broker. In 1929 the Teleregister Corporation installed the first such display, and by 1964 over 650 brokerage offices had them. The information included the previous day’s closing price, opening price, high for the day, low for the day, and current price. Teleregister offered data from the New York, American, Midwest, Chicago Mercantile, Commodity, New York Cocoa, New York coffee and sugar, New York Mercantile, New York Produce, New York Cotton, and New Orleans Cotton exchanges, along with the Chicago Board of Trade. Some firms had a battery of telephone operators seated in front of a Teleregister board to supply commission houses with price and volume data. In 1962 two such batteries handled over 39,000 calls per day. In 1955 Scantlin Electronics, Inc. introduced a competitive display system very similar in appearance but with digits twice the size of Teleregister’s, fitting into the same board area. It was less expensive and soon was installed in many broker offices. Stock market quotation systems In the late 1950s brokers had become accustomed to several problems doing business with their customers. To make a trade, an investor had to know the current price for the stock. The investor got this from a broker who could find it on his board. If the last trade (or the stock itself) hadn't made it to the board (or there was no board) the broker telegraphed a request for the price to that firm's "wire room" in New York. There, such requests would be forwarded to the floor of the appropriate exchange, where messengers could copy down prices at the locations where those stocks were traded, and telephone answers back to the wire room. Typical elapsed times were between 15 and 30 minutes just to inform the broker. Quotron Jack Scantlin of Scantlin Electronics, Inc. (SEI) developed the Quotron I system, consisting of a magnetic tape storage unit that could be sited at a brokerage and Desk Units with a keyboard and printer. The storage unit recorded the data from the ticker line. Brokers could enter the stock symbol on a desk unit. This triggered a backward search on the magnetic tape (which continued recording incoming ticker data). When a transaction was located, the price was sent to the desk unit, which printed it on a tape. The first Quotron units were installed in 1960, and were an immediate success. By the end of 1961 brokers were leasing Quotrons in some 800 offices, serving some 2,500 desk units across the United States. Ultronics vs. Quotron Quotron’s success attracted the attention of Robert S. Sinn, who observed its disadvantages: it could only give a last price. The opening price, high and low for the day, and share volume were not available. His system received the ticker transmissions from the various stock and commodity exchanges. These were then automatically interpreted by a hard wired digital computer updating a drum memory with the last sale prices and at the same time computing and updating highs and lows and total volume for each stock. As these items were updated a data packet would be generated for transmission by AT&T Dataphone at 1000 bits/second to identical magnetic drum storage devices in each major city in the United States. 
These slave drum memory units located in the metropolitan centers in the US could then be accessed by desk units in local stock brokerage offices, again using Dataphone transmission. The desk units would set up the ticker symbol code for the desired stock by mechanical means actuating micro switches. The local control box would continuously interrogate each desk unit in sequence and send a request data packet by Dataphone to the local drum memory, which would then send return data packets back to the local brokerage office. Each packet, both for request and answer, would contain the requested stock's alphabetic symbols plus a desk unit identifier. Because the desk units set up the requested stock code statically, they would automatically update the stock price, volume, and highs and lows without any operator intervention; the control unit could complete the interrogation of all desk units in the office roughly every one or two seconds. This was arguably the first use of data packet transmission with the sender's identification embedded in the data packet in order to avoid switching, a forerunner of the Internet. There was no switching in the entire system; all was done with data packets containing sender identifiers.

Sinn formed Ultronic Systems Corp. in December 1960, and was president and CEO from then until he left the company in 1970. By the fall of 1961 Ultronics had installed its first desk units (Stockmaster) in New York and Philadelphia, followed by San Francisco and Los Angeles. The Stockmaster desk units offered the user quick access and continuous monitoring of last sale, bid, ask, high, low, total volume, open, close, earnings, and dividends for each stock on the NYSE and the AMEX, plus commodities from the various U.S. commodity exchanges. Ultimately Ultronics and General Telephone (which bought Ultronics in 1967) installed some 10,000 units worldwide. In June 1964 Ultronics and the British news company Reuters signed a joint venture agreement to market Stockmaster worldwide outside of North America. This venture lasted for 10 years and was very successful, capturing the worldwide market for U.S. stock and commodity price information. Ultronics invented time division multiplex equipment to utilize Reuters' voice grade lines to Europe and the Far East to transmit U.S. stock and commodity information plus Reuters' teletype news channels.

In the early 1960s, when these first desk top quotation units were developed, the only real-time information available from the various stock and commodity exchanges was the last sale ticker and the bid-and-ask ticker lines. The last sale ticker contained every trade with both price and volume for each trade. The bid-ask ticker contained only the two prices and no size. The volume of data on the last sale ticker was therefore much greater than on the bid-ask ticker. Because of this, on high volume days the last sale ticker would run as much as fifteen minutes behind the bid-ask ticker. This time difference made having the bid-ask on their desk top unit extremely important to stockbrokers, even though there was an extra charge to the exchange for this information.

Scantlin Electronics reacted immediately to the Ultronic threat. In early 1962 they began work on their own computer-based system and put it into service in December 1962. It used four Control Data CDC 160A computers in New York which recorded trading data in magnetic core memory.
Major cities hosted Central Office equipment connected to newly designed Quotron II desk units in brokerage offices, on which a broker could request, for any stock, the price and net change from the opening, or a summary which included highs, lows, and volumes (later SEI added other features like dividends and earnings). The requests went to a Central Office, which condensed and forwarded them to the New York computer. Replies followed the sequence in reverse. The data was transmitted on AT&T's Dataphone high-speed telephone service. In 1963 the new system was accepted by many brokers, and was installed in hundreds of their offices. At the end of each day, this same system transmitted stock market pages to United Press International, which in turn sold them to its newspaper customers all over the world.

When Ultronics introduced their Stockmaster desk units in 1961 they priced the service at approximately the same price as the Quotron desk units. They did not want a price competition, only a performance competition. All of these stock quote devices were sold on leases with monthly rental charges. The cost of the system, desk units and installation was therefore borne solely by the vendor, not the customer (broker). The pricing at that time made the units quite profitable and allowed the companies to finance the cost and use rapid accounting depreciation of the equipment. In 1964 Teleregister introduced their Telequote desk units at prices significantly less than Stockmaster or Quotron. This forced Ultronics and Scantlin to reduce the prices of their Stockmaster and Quotron systems. The Telequote desk units never did gain a significant share of the desk top quotation business, but their price cutting did seriously reduce the overall profitability of this business in the U.S. Ultronics was fortunate to have made the joint venture arrangement with Reuters for the stock quotation business outside of North America, where this price cutting was not a factor.

The 1962 Cuban Missile Crisis blocked Ultronics from winning a historic first. In July 1962 AT&T launched the first commercial communications satellite (Telstar) to transmit television and telephone voice channels between the U.S. and England and France. This was a non-synchronous satellite which circled the earth in an elliptical orbit of about 2.5 hours, such that it gave only about 20 minutes of communication between the U.S. and Europe on each pass. Ultronics arranged with AT&T to use one of its voice channels to transmit U.S. stock prices to Paris in October 1962. All of the arrangements were made: a Stockmaster unit was installed in the Bache brokerage office on Rue Royale, and all of the American stockbrokers and the press and television were ready for this historic event. About two hours before the pass the stock transmission was cancelled, because U.S. president John Kennedy was going to use the pass to send his speech to France concerning the Cuban Missile Crisis.

Am-Quote
In 1964 Teleregister introduced the Am-Quote system, with which a broker could enter code numbers into a standard telephone; a second later a pleasant (prerecorded) voice repeated the code numbers and provided the required price information. This system, like Ultronics', made use of magnetic drums.

Computerized quoting
NASDAQ, founded in 1971, was the first electronic stock market. It was originally designed only as an electronic quotation system, with no ability to perform electronic trades. Other systems soon followed, and by the turn of the century every exchange was using this model.
See also Stock Market Stock trader Stock Exchange Financial data vendor Electronic trading Securities Stockbrokers Ticker tape Trading room References External links Stock Ticker History Stock market Telephony
30110697
https://en.wikipedia.org/wiki/.NET%20Framework%20version%20history
.NET Framework version history
Microsoft started development on the .NET Framework in the late 1990s originally under the name of Next Generation Windows Services (NGWS). By late 2001 the first beta versions of .NET 1.0 were released. The first version of .NET Framework was released on 13 February 2002, bringing managed code to Windows NT 4.0, 98, 2000, ME and XP. Since the first version, Microsoft has released nine more upgrades for .NET Framework, seven of which have been released along with a new version of Visual Studio. Two of these upgrades, .NET Framework 2.0 and 4.0, have upgraded Common Language Runtime (CLR). New versions of .NET Framework replace older versions when the CLR version is the same. The .NET Framework family also includes two versions for mobile or embedded device use. A reduced version of the framework, the .NET Compact Framework, is available on Windows CE platforms, including Windows Mobile devices such as smartphones. Additionally, the .NET Micro Framework is targeted at severely resource-constrained devices. .NET Framework 4.8 was the final version of .NET Framework, future work going into the rewritten and cross-platform .NET Core platform, which shipped as .NET 5 in November 2020. Overview .NET Framework 1.0 The first version of the .NET Framework was released on 13 February 2002 for Windows 98, ME, NT 4.0, 2000, and XP. Mainstream support for this version ended on 10 July 2007, and extended support ended on 14 July 2009, with the exception of Windows XP Media Center and Tablet PC editions. On 19 June 2001, the tenth anniversary of the release of Visual Basic, .NET Framework 1.0 Beta 2 was released. .NET Framework 1.0 is supported on Windows 98, ME, NT 4.0, 2000, XP, and Server 2003. Applications utilizing .NET Framework 1.0 will also run on computers with .NET Framework 1.1 installed, which supports additional operating systems. Service Pack 1 The .NET Framework 1.0 Service Pack 1 was released on 18 March 2002. Service Pack 2 The .NET Framework 1.0 Service Pack 2 was released on 7 February 2005. Service Pack 3 The .NET Framework 1.0 Service Pack 3 was released on 30 August 2004. .NET Framework 1.1 Version 1.1 is the first minor .NET Framework upgrade. It is available on its own as a redistributable package or in a software development kit, and was published on 3 April 2003. It is also part of the second release of Visual Studio .NET 2003. This is the first version of the .NET Framework to be included as part of the Windows operating system, shipping with Windows Server 2003. Mainstream support for .NET Framework 1.1 ended on 14 October 2008, and extended support ended on 8 October 2013. .NET Framework 1.1 is the last version to support Windows NT 4.0, and provides full backward compatibility to version 1.0, except in rare instances where an application will not run because it checks the version number of a library. Changes in 1.1 include: Built-in support for mobile ASP.NET controls, which was previously available as an add-on Enables Windows Forms assemblies to execute in a semi-trusted manner from the Internet Enables Code Access Security in ASP.NET applications Built-in support for ODBC and Oracle Database, which was previously available as an add-on .NET Compact Framework, a version of the .NET Framework for small devices Internet Protocol version 6 (IPv6) support .NET Framework 1.1 is supported on Windows 98, ME, NT 4.0, 2000, XP, Server 2003, Vista, and Server 2008. Service Pack 1 The .NET Framework 1.1 Service Pack 1 was released on 30 August 2004. 
.NET Framework 2.0 Version 2.0 was released on 22 January 2006. It was also released along with Visual Studio 2005, Microsoft SQL Server 2005, and BizTalk 2006. A software development kit for this version was released on 29 November 2006. It was the last version to support Windows 98 and Windows Me. Not to be confused with .NET Standard 2.0, announced in August 14th, 2017. Changes in 2.0 include: Full 64-bit computing support for both the x64 and the IA-64 hardware platforms Microsoft SQL Server integration: Instead of using T-SQL, one can build stored procedures and triggers in any of the .NET-compatible languages A new hosting API for native applications wishing to host an instance of the .NET runtime: The new API gives a fine grain control on the behavior of the runtime with regards to multithreading, memory allocation and assembly loading. It was initially developed to efficiently host the runtime in Microsoft SQL Server, which implements its own scheduler and memory manager. New personalization features for ASP.NET, such as support for themes, skins, master pages and webparts .NET Micro Framework, a version of the .NET Framework related to the Smart Personal Objects Technology initiative Membership provider Partial classes Nullable types Anonymous methods Iterators Data tables Common Language Runtime (CLR) 2.0 Language support for generics built directly into the .NET CLR .NET Framework 2.0 is supported on Windows 98, ME, 2000, XP, Server 2003, Vista, Server 2008, and Server 2008 R2. Applications utilizing .NET Framework 2.0 will also run on computers with .NET Framework 3.0 or 3.5 installed, which supports additional operating systems. Service Pack 1 The .NET Framework 2.0 Service Pack 1 was released on 19 November 2007. Service Pack 2 The .NET Framework 2.0 Service Pack 2 was released on 16 January 2009. It requires Windows 2000 with SP4 plus KB835732 or KB891861 update, Windows XP with SP2 plus Windows Installer 3.1. It is the last version to support Windows 2000 although there have been some unofficial workarounds to use a subset of the functionality from Version 3.5 in Windows 2000. .NET Framework 3.0 .NET Framework 3.0, formerly called WinFX, was released on 21 November 2006. It includes a new set of managed code APIs that are an integral part of Windows Vista and Windows Server 2008. It is also available for Windows XP SP2 and Windows Server 2003 as a download. There are no major architectural changes included with this release; .NET Framework 3.0 uses the same CLR as .NET Framework 2.0. Unlike the previous major .NET releases there was no .NET Compact Framework release made as a counterpart of this version. Version 3.0 of the .NET Framework shipped with Windows Vista. It also shipped with Windows Server 2008 as an optional component (disabled by default). 
.NET Framework 3.0 consists of four major new components:
Windows Presentation Foundation (WPF), formerly code-named Avalon: a new user interface subsystem and API based on the XAML markup language, which uses 3D computer graphics hardware and Direct3D technologies
Windows Communication Foundation (WCF), formerly code-named Indigo: a service-oriented messaging system which allows programs to interoperate locally or remotely, similar to web services
Windows Workflow Foundation (WF): allows building task automation and integrated transactions using workflows
Windows CardSpace, formerly code-named InfoCard: a software component which securely stores a person's digital identities and provides a unified interface for choosing the identity for a particular transaction, such as logging into a website

.NET Framework 3.0 is supported on Windows XP, Server 2003, Vista, Server 2008, and Server 2008 R2. Applications utilizing .NET Framework 3.0 will also run on computers with .NET Framework 3.5 installed, which supports additional operating systems.

Service Pack 1
The .NET Framework 3.0 Service Pack 1 was released on 19 November 2007.

Service Pack 2
The .NET Framework 3.0 Service Pack 2 was released on 22 February 2010.

.NET Framework 3.5
Version 3.5 of the .NET Framework was released on 19 November 2007. As with .NET Framework 3.0, version 3.5 uses Common Language Runtime (CLR) 2.0, that is, the same version as .NET Framework version 2.0. In addition, .NET Framework 3.5 also installs .NET Framework 2.0 SP1 and 3.0 SP1 (with the later 3.5 SP1 instead installing 2.0 SP2 and 3.0 SP2), which adds some methods and properties to the BCL classes in version 2.0 which are required for version 3.5 features such as Language Integrated Query (LINQ). These changes do not affect applications written for version 2.0, however. As with previous versions, a new .NET Compact Framework 3.5 was released in tandem with this update in order to provide support for additional features on Windows Mobile and Windows Embedded CE devices. The source code of the Framework Class Library in this version has been partially released (for debugging reference only) under the Microsoft Reference Source License. .NET Framework 3.5 is supported on Windows XP, Server 2003, Vista, Server 2008, 7, Server 2008 R2, 8, Server 2012, 8.1, Server 2012 R2, 10, and Server 2016. Starting from Windows 8, .NET Framework 3.5 is an optional feature that can be turned on or off in the Control Panel. Although .NET Framework 3.5 is over 10 years old, it is also shipped as a Windows container image, allowing old applications based on .NET Framework 2.0–3.5 to run in a container environment.

Service Pack 1
The .NET Framework 3.5 Service Pack 1 was released on 11 August 2008. This release adds new functionality and provides performance improvements under certain conditions, especially with WPF, where 20–45% improvements are expected. Two new data service components have been added, the ADO.NET Entity Framework and ADO.NET Data Services. Two new assemblies for web development, System.Web.Abstractions and System.Web.Routing, have been added; these are used in the ASP.NET MVC framework and, reportedly, will be used in future releases of ASP.NET Web Forms applications. Service Pack 1 is included with SQL Server 2008 and Visual Studio 2008 Service Pack 1. It also featured a new set of controls called "Visual Basic Power Packs" which brought back Visual Basic controls such as "Line" and "Shape." Version 3.5 SP1 of the .NET Framework shipped with Windows 7.
It also shipped with Windows Server 2008 R2 as an optional component (disabled by default). .NET Framework 3.5 SP1 Client Profile For the .NET Framework 3.5 SP1 there is also a new variant of the .NET Framework, called the ".NET Framework Client Profile", which at 28 MB is significantly smaller than the full framework and only installs components that are the most relevant to desktop applications. However, the Client Profile amounts to this size only if using the online installer on Windows XP SP2 when no other .NET Frameworks are installed or using Windows Update. When using the off-line installer or any other OS, the download size is still 250 MB. .NET Framework 4.0 Key focuses for this release are: Parallel Extensions to improve support for parallel computing, which target multi-core or distributed systems. To this end, technologies like PLINQ (Parallel LINQ), a parallel implementation of the LINQ engine, and Task Parallel Library, which exposes parallel constructs via method calls, are included. New Visual Basic .NET and C# language features, such as implicit line continuations, dynamic dispatch, named parameters, and optional parameters Support for Code Contracts Inclusion of new types to work with arbitrary-precision arithmetic (System.Numerics.BigInteger) and complex numbers (System.Numerics.Complex) Introduced Common Language Runtime (CLR) 4.0 .NET Framework 4.0 is supported on Windows XP (with Service Pack 3), Windows Server 2003, Vista, Server 2008, 7 and Server 2008 R2. Applications utilizing .NET Framework 4.0 will also run on computers with .NET Framework 4.5 or 4.6 installed, which supports additional operating systems. .NET Framework 4.0 is the last version to support Windows XP and Windows Server 2003. History Microsoft announced the intention to ship .NET Framework 4 on 29 September 2008. The Public Beta was released on 20 May 2009. On 28 July 2009, a second release of the .NET Framework 4 beta was made available with experimental software transactional memory support. This functionality is not available in the final version of the framework. On 19 October 2009, Microsoft released Beta 2 of the .NET Framework 4. At the same time, Microsoft announced the expected launch date for .NET Framework 4 as 22 March 2010. This launch date was subsequently delayed to 12 April 2010. On 10 February 2010, a release candidate was published: Version:RC. On 12 April 2010, the final version of .NET Framework 4.0 was launched alongside the final release of Microsoft Visual Studio 2010. On 18 April 2011, version 4.0.1 was released supporting some customer-demanded fixes for Windows Workflow Foundation. Its design-time component, which requires Visual Studio 2010 SP1, adds a workflow state machine designer. On 27 October 2011, version 4.0.2 was released supporting some new features of Microsoft SQL Server. On 5 March 2012, version 4.0.3 was released. Windows Server AppFabric After the release of the .NET Framework 4, Microsoft released a set of enhancements, named Windows Server AppFabric, for application server capabilities in the form of AppFabric Hosting and in-memory distributed caching support. .NET Framework 4.5 .NET Framework 4.5 was released on 15 August 2012; a set of new or improved features were added into this version. The .NET Framework 4.5 is only supported on Windows Vista or later. The .NET Framework 4.5 uses Common Language Runtime 4.0, with some additional runtime features. 
.NET Framework 4.5 is supported on Windows Vista, Server 2008, 7, Server 2008 R2, 8, Server 2012, 8.1 and Server 2012 R2. Applications utilizing .NET Framework 4.5 will also run on computers with .NET Framework 4.6 installed, which supports additional operating systems.

.NET for Metro-style apps
Metro-style apps were originally designed for specific form factors and leverage the power of the Windows operating system. Two subsets of the .NET Framework are available for building Metro-style apps using C# or Visual Basic:
One for Windows 8 and Windows 8.1, called .NET APIs for Windows 8.x Store apps.
Another for the Universal Windows Platform (UWP), called .NET APIs for UWP.
This version of the .NET Framework, as well as the runtime and libraries used for Metro-style apps, is a part of Windows Runtime, the new platform and development model for Metro-style apps. It is an ecosystem that houses many platforms and languages, including .NET Framework, C++ and HTML5 with JavaScript.

Core features
Ability to limit how long the regular expression engine will attempt to resolve a regular expression before it times out.
Ability to define the culture for an application domain.
Console support for Unicode (UTF-16) encoding.
Support for versioning of cultural string ordering and comparison data.
Better performance when retrieving resources.
Native support for Zip compression (previous versions supported the compression algorithm, but not the archive format).
Ability to customize a reflection context to override default reflection behavior through the CustomReflectionContext class.
New asynchronous features were added to the C# and Visual Basic languages. These features add a task-based model for performing asynchronous operations, implementing futures and promises.

Managed Extensibility Framework (MEF)
The Managed Extensibility Framework or MEF is a library for creating lightweight, extensible applications. It allows application developers to discover and use extensions with no configuration required. It also lets extension developers easily encapsulate code and avoid fragile hard dependencies. MEF not only allows extensions to be reused within applications, but across applications as well.

ASP.NET
Support for new HTML5 form types.
Support for model binders in Web Forms. These let you bind data controls directly to data-access methods, and automatically convert user input to and from .NET Framework data types.
Support for unobtrusive JavaScript in client-side validation scripts.
Improved handling of client script through bundling and minification for improved page performance.
Integrated encoding routines from the Anti-XSS library (previously an external library) to protect from cross-site scripting attacks.
Support for the WebSocket protocol.
Support for reading and writing HTTP requests and responses asynchronously.
Support for asynchronous modules and handlers.
Support for content distribution network (CDN) fallback in the ScriptManager control.

Networking
Provides a new programming interface for HTTP applications: the System.Net.Http and System.Net.Http.Headers namespaces are added.
Improved internationalization and IPv6 support.
RFC-compliant URI support.
Support for internationalized domain name (IDN) parsing.
Support for Email Address Internationalization (EAI).

.NET Framework 4.5.1
The release of .NET Framework 4.5.1 was announced on 17 October 2013 alongside Visual Studio 2013. This version requires Windows Vista SP2 and later and is included with Windows 8.1 and Windows Server 2012 R2.
New features of .NET Framework 4.5.1: Debugger support for X64 edit and continue (EnC) Debugger support for seeing managed return values Async-aware debugging in the Call Stack and Tasks windows Debugger support for analyzing .NET memory dumps (in the Visual Studio Ultimate SKU) Tools for .NET developers in the Performance and Diagnostics hub Code Analysis UI improvements ADO.NET idle connection resiliency .NET Framework 4.5.2 The release of .NET Framework 4.5.2 was announced on 5 May 2014. This version requires Windows Vista SP2 and later. For Windows Forms applications, improvements were made for high DPI scenarios. For ASP.NET, higher reliability HTTP header inspection and modification methods are available as is a new way to schedule background asynchronous worker tasks. .NET Framework 4.6 .NET Framework 4.6 was announced on 12 November 2014. It was released on 20 July 2015. It supports a new just-in-time compiler (JIT) for 64-bit systems called RyuJIT, which features higher performance and support for SSE2 and AVX2 instruction sets. WPF and Windows Forms both have received updates for high DPI scenarios. Support for TLS 1.1 and TLS 1.2 has been added to WCF. This version requires Windows Vista SP2 or later. The cryptographic API in .NET Framework 4.6 uses the latest version of Windows CNG cryptography API. As a result, NSA Suite B Cryptography is available to .NET Framework. Suite B consists of AES, the SHA-2 family of hashing algorithms, elliptic curve Diffie–Hellman, and elliptic curve DSA. .NET Framework 4.6 is supported on Windows Vista, Server 2008, 7, Server 2008 R2, 8, Server 2012, 8.1, Server 2012 R2, 10 and Server 2016. However, .NET Framework 4.6.1 and 4.6.2 drops support for Windows Vista and Server 2008, and .NET Framework 4.6.2 drops support for Windows 8. .NET Framework 4.6.1 The release of .NET Framework 4.6.1 was announced on 30 November 2015. This version requires Windows 7 SP1 or later. New features and APIs include: WPF improvements for spell check, support for per-user custom dictionaries and improved touch performance. Enhanced support for Elliptic Curve Digital Signature Algorithm (ECDSA) X509 certificates. Added support in SQL Connectivity for AlwaysOn, Always Encrypted and improved connection open resiliency when connecting to Azure SQL Database. Azure SQL Database now supports distributed transactions using the updated System.Transactions APIs . Many other performance, stability, and reliability related fixes in RyuJIT, GC, WPF and WCF. .NET Framework 4.6.2 The preview of .NET Framework 4.6.2 was announced on 30 March 2016. It was released on 2 August 2016. This version requires Windows 7 SP1 or later. New features include: Support for paths longer than 260 characters Support for FIPS 186-3 DSA in X.509 certificates TLS 1.1/1.2 support for ClickOnce Support for localization of data annotations in ASP.NET Enabling .NET desktop apps with Project Centennial Soft keyboard and per-monitor DPI support for WPF .NET Framework 4.6.2 is also shipped as Windows container image. .NET Framework 4.7 On 5 April 2017, Microsoft announced that .NET Framework 4.7 was integrated into Windows 10 Creators Update, promising a standalone installer for other Windows versions. An update for Visual Studio 2017 was released on this date to add support for targeting .NET Framework 4.7. The promised standalone installer for Windows 7 and later was released on 2 May 2017, but it had prerequisites not included with the package. 
New features in .NET Framework 4.7 include:
Enhanced cryptography with elliptic curve cryptography
Improved TLS support, especially for version 1.2
Support for high-DPI awareness in Windows Forms
More support for touch and stylus in Windows Presentation Foundation (WPF)
New print APIs for WPF

.NET Framework 4.7 is supported on Windows 7, Server 2008 R2, Server 2012, 8.1, Server 2012 R2, 10, Server 2016 and Server 2019. .NET Framework 4.7 is also shipped as a Windows container image.

.NET Framework 4.7.1
.NET Framework 4.7.1 was released on 17 October 2017. Amongst the fixes and new features, it corrects a d3dcompiler dependency issue. It also adds compatibility with .NET Standard 2.0 out of the box. .NET Framework 4.7.1 is also shipped as a Windows container image.

.NET Framework 4.7.2
.NET Framework 4.7.2 was released on 30 April 2018. Amongst the changes are improvements to ASP.NET, BCL, CLR, ClickOnce, Networking, SQL, WCF, Windows Forms, Workflow and WPF. This version is included with Server 2019. .NET Framework 4.7.2 is also shipped as a Windows container image.

.NET Framework 4.8
.NET Framework 4.8 was released on 18 April 2019. It was the final version of the .NET Framework, with all future work going into the .NET Core platform that will eventually become .NET 5 and onwards. This release included JIT enhancements ported from .NET Core 2.1, high-DPI enhancements for WPF applications, accessibility improvements, performance updates, and security enhancements. It supported Windows 7, Server 2008 R2, Server 2012, 8.1, Server 2012 R2, 10, Server 2016 and Server 2019, and also shipped as a Windows container image. The most recent release is 4.8.0 Build 4115, with an offline installer size of 115 MB and a digital signature date of May 1, 2021.

References

2002 software
Microsoft application programming interfaces
Microsoft development tools
Software version histories
3407145
https://en.wikipedia.org/wiki/Out%20of%20memory
Out of memory
Out of memory (OOM) is an often undesired state of computer operation where no additional memory can be allocated for use by programs or the operating system. Such a system will be unable to load any additional programs, and since many programs may load additional data into memory during execution, these will cease to function correctly. This usually occurs because all available memory, including disk swap space, has been allocated. History Historically, the out of memory condition was more common than it is now, since early computers and operating systems were limited to small amounts of physical random-access memory (RAM) due to the inability of early processors to address large amounts of memory, as well as cost considerations. Since the advent of virtual memory opened the door for the usage of swap space, the condition is less frequent. Almost all modern programs expect to be able to allocate and deallocate memory freely at run-time, and tend to fail in uncontrolled ways (crash) when that expectation is not met; older ones often allocated memory only once, checked whether they got enough to do all their work, and then expected no more to be forthcoming. Therefore, they would either fail immediately with an "out of memory" error message, or work as expected. Early operating systems such as MS-DOS lacked support for multitasking. Programs were allocated physical memory that they could use as they needed. Physical memory was often a scarce resource, and when it was exhausted by applications such as those with Terminate and Stay Resident functionality, no further applications could be started until running applications were closed. Modern operating systems provide virtual memory, in which processes are given a range of memory, but where the memory does not directly correspond to actual physical RAM. Virtual memory can be backed by physical RAM, a disk file via mmap (on Unix-derivatives) or MapViewOfFile (on Windows), or swap space, and the operating system can move virtual memory pages around as it needs. Because virtual memory does not need to be backed by physical memory, exhaustion of it is rare, and usually there are other limits imposed by the operating system on resource consumption. As predicted by Moore's law, the amount of physical memory in all computers has grown almost exponentially, although this is offset to some degree by programs and files themselves becoming larger. In some cases, a computer with virtual memory support where the majority of the loaded data resides on the hard disk may run out of physical memory but not virtual memory, thus causing excessive paging. This condition, known as thrashing, usually renders the computer unusable until some programs are closed or the machine is rebooted. Due to these reasons, an out-of-memory message is rarely encountered by applications with modern computers. It is, however, still possible to encounter an OOM condition with a modern computer. The typical OOM case in modern computers happens when the operating system is unable to create any more virtual memory, because all of its potential backing devices have been filled or the end-user has disabled them. The condition may arise because of copy-on-write after fork(). Out of memory management The kernels of operating systems such as Linux will attempt to recover from this type of OOM condition by terminating one or more processes, a mechanism known as the OOM Killer. 
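On Linux, a process can influence how the OOM killer ranks it. The sketch below is an illustrative aside rather than something taken from this article: it writes to the /proc/self/oom_score_adj interface (accepted range -1000 to 1000), which the kernel consults when choosing a victim. The helper name set_oom_score_adj and the value 500 are arbitrary examples marking the process as a comparatively willing victim; lowering the value typically requires the CAP_SYS_RESOURCE capability.

```c
/* Minimal, Linux-specific sketch: volunteer this process as an early
 * OOM-killer victim by raising its oom_score_adj. The value 500 is an
 * arbitrary example; -1000 to 1000 is the accepted range. */
#include <stdio.h>
#include <stdlib.h>

static int set_oom_score_adj(int adj)
{
    FILE *f = fopen("/proc/self/oom_score_adj", "w");
    if (f == NULL)
        return -1;                 /* e.g. not running on Linux */
    int rc = (fprintf(f, "%d\n", adj) < 0) ? -1 : 0;
    if (fclose(f) != 0)
        rc = -1;
    return rc;
}

int main(void)
{
    if (set_oom_score_adj(500) != 0)
        perror("oom_score_adj");

    /* ... rest of the program, e.g. a large but expendable cache ... */
    return EXIT_SUCCESS;
}
```

Userspace OOM daemons such as earlyoom (mentioned below) consult /proc/&lt;pid&gt;/oom_score, which incorporates this adjustment, when deciding which process to terminate first.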
Linux 4.6 (released in May 2016) introduced changes in OOM situations, improving detection and reliability. Cgroup awareness in the OOM killer was implemented in Linux kernel 4.19, released in October 2018, which adds the ability to kill a cgroup as a single unit.

Because the OOM killer activates late on some Linux systems, there are several daemons and kernel patches that help to recover memory from an OOM condition before it is too late:
earlyoom
nohang
PSI (pressure stall information) kernel patches and the accompanying oomd daemon; the patches were merged in Linux kernel 4.20.

Per-process memory limits
Apart from the system-wide physical memory limits, some systems limit the amount of memory each process can use. Usually a matter of policy, such a limitation can also happen when the OS has a larger address space than is available at the process level. Some high-end 32-bit systems (such as those with Physical Address Extension enabled) come with 8 gigabytes or more of system memory, even though any single process can only access 4 GB of it in a 32-bit flat memory model. A process that exceeds its per-process limit and then attempts to allocate further memory will encounter an error condition. For example, the C standard function for allocating memory, malloc(), will return NULL and a well-behaved application should handle this situation.

References

External links
Linux OOM Killer
Out of Memory handling
Article "Minimizing Memory Usage for Creating Application Subprocesses" by Greg Nakhimovsky
Article "Taming the OOM killer" by Goldwyn Rodrigues
Article "When Linux Runs Out of Memory" by Mulyadi Santosa
Paper "Handling "Out Of Memory" Errors" by John Boyland

Memory management
Computer error messages
386803
https://en.wikipedia.org/wiki/Chodzie%C5%BC
Chodzież
Chodzież () is a town in northwestern Poland with 20,400 inhabitants (1995). It is situated in Chodzież County, Greater Poland Voivodeship (since 1999); previously it was in Piła Voivodeship (1975–1998).

Geography
Chodzież is located in the northern part of Greater Poland (western Poland), in the Chodzieskie lakelands. The most important characteristics of this lakeland area are its typical postglacial landforms, forests of pines and mixed woodlands, and lakes. For this reason, the city's surroundings are known as "the Switzerland of Chodzież". Five kilometers west of Chodzież, at the edge of the Chodzieskie lakelands, Mt. Gontyniec rises 192 meters above sea level as the highest peak in a chain of moraine hills and, at the same time, the highest elevation in northern Greater Poland. Deep valleys and ridges covered with a 100-year-old beech forest ensure diversified surroundings. Within the town's five square miles (13 km2) of area, there are three lakes: Miejskie (Town Lake; 1 km2, or 0.4 sq mi), Karczewnik, and Strzeleckie, which together make up about 13% of the total town area.

History
A burial mound, estimated to date from 2000 B.C., is located in the area of the town where today's Słoneczna street lies. From about 1500 BC, tribes belonging to the Lusatian culture dominated the area for ten centuries. In 1904–1914, two burial grounds dating to those times were discovered in the area of old Rzeźnicka street. In the early Middle Ages (400–700 AD), a little settlement existed on the south part of Lake Miejskie. Chodzież's beginnings go back at least to the 15th century. The first written mention is from 1403, in which the name Chodzies is recorded along with that of the priest of the local Catholic parish. Researchers believe, however, that the town's roots go back to the 13th century, when it already had its first church. On 3 March 1434, King Władysław II Jagiello issued a privilege that vested Chodzież with Magdeburg town rights for Trojan of Łękno. For many centuries it was a privately owned city, located within the Kalisz Voivodeship (from 1768 in the Gniezno Voivodeship) in the Greater Poland Province of the Polish Crown. The Łękiscy–Granowski family were the first owners; then, from the mid-15th century, Chodzież belonged to the Potulicki family. From 1648 to 1830 the Grudziński family were the owners of Chodzież. The family's Grzymała coat of arms has been the town's crest since that time. St. Florian's church, situated at the Market Square, is the oldest monument in Chodzież. Its probable founder was the first owner of the settlement, Trojan of Łękno. During the 17th century, various parts of Poland were invaded by Swedish troops. The arrival around 1656 of a group of German clothmakers from Leszno, which had suffered a fire, influenced the development of Chodzież. A new town, the home of weavers and clothmakers, was erected in the mid-18th century next to the old medieval site in the city, which contained the Market. Today, this part of the city (Kościuszki Street) is marked by the characteristic gables of houses situated on narrow, rectangular plots of land. Each lot formerly had wooden sheds in the rear to store wool and cloth. As a result of the First Partition of Poland in 1772, the town was annexed by the Kingdom of Prussia and became a part of the newly established Netze River District. In 1805, Chodzież's weavers imported a weaving machine from Berlin. Shortly afterwards, Napoleon defeated Prussia (1807) and, under the Treaty of Tilsit, this part of Poland became part of the Duchy of Warsaw.
In 1815, Prussia and its allies defeated Napoleon, and this area became Prussian again as the Grand Duchy of Posen. It was supposed to be a Polish province within the Prussian Kingdom; in reality, it was essentially a Prussian province. The local weaving industry declined about 1812–1815, when a customs frontier between the Duchy of Poznań and the Kingdom of Poland (Congress Poland) was demarcated. The tariff priced the Posen weavers out of their major eastern markets, so they either migrated to other textile producing areas (e.g. Łódź) or turned to other types of work, like farming. In 1818, Chodzież became the administrative center of a county-like district (German: Kreis; see Kreis Kolmar in Posen) that was formed from parts of the following Kreise: Wirsitz, Wongrowitz, Obornik and Czarnikau (Polish: Wyrzysk, Wągrowiec, Oborniki and Czarnków). Over the years, it gained the character of a local administrative center, which it remained until 1975, when the division of Poland was reorganized into larger units. In 1849, the Duchy was formalized as the Prussian province of Posen. Chodzież's important place in the ceramics industry began when two German businessmen, Ludwig Schnorr and Hermann Müller from Frankfurt an der Oder, purchased the ruins of the burned out manor house from Otto Königsmarck in 1855 and built the first faience factory. In 1897 the merchant Hein, a former faience factory owner, built a porcelain factory. Since then, Chodzież has always been an important center of the pottery industry. The German Empire was created in 1871, and in October 1874 a system of civil registration offices was created; Chodziesen was chosen as the office for its area (see Standesamt Kolmar). In 1879, the Poznań–Chodzież–Piła railroad line was opened and the name of Chodzież was changed from "Chodziesen" to "Kolmar in Posen". This name was in honor of Axel von Colmar-Meyenburg, who was extremely influential in the building of the railway, which was beneficial to the town's economy.

Interbellum
After World War I, Poland regained independence and the Greater Poland Uprising, the aim of which was to reintegrate the region with Poland, spread to Chodzież. Polish insurgents captured the town on January 6, 1919, and then, despite prior agreements, the Germans recaptured it the next day. After bloody fighting the insurgents again captured the town on January 8, 1919. The Versailles Treaty in 1919 eventually confirmed the restoration of Chodzież to Poland. On 19 January 1920, Polish military and political authorities marched into the city and a Polish administration was established. Unemployment and living conditions deteriorated, leading to a wave of strikes starting in 1921. In the 1930s, the years of the great world economic crisis, workers from the Chodzież porcelain factory started a new series of protests. In the period between the two world wars, Chodzież was considered an important administrative center in the border area between Poland and Germany. It had a working-class character, which was related to the development of the faience factory. Since the town was located near the border, 16% of the population was German, while 83% was Polish, as of 1939. In the 1920s, a tuberculosis sanatorium was established here because of the special climate; it has more recently been converted into a hospital for railroadmen. In 1921 Stanisław Mańczak bought the porcelain factory from the Annaburger Steingutfabrik firm.

World War II
In September 1939 the town was invaded by Nazi Germany.
Early on, the SS-Totenkopf-Standarte Brandenburg entered the town to commit various atrocities against the Polish population. The German occupation, both in Chodzież and the whole country, was a period of terror directed against Polish citizens. In one notable example, on 7 November 1939, 44 Polish men, including the town's mayor Tadeusz Koppe and the gmina's wójt Marian Weyhan, were killed on the Morzewskie Hills near the village of Morzewo. Germans carried out mass arrests of Poles as part of the Intelligenzaktion, who were then imprisoned in the local prison. Local Poles were also subjected to expulsions and deportations to forced labour in Germany, and a transit camp for Poles expelled from the region was located in the town. Hundreds of Poles were expelled already in December 1939. Houses of expelled Poles were handed over to German colonists as part of the Lebensraum policy, and as a result, Germans formed 56% of the town's populace in 1943. Under Nazi German occupation, the town under the Germanized name Kolmar was made part of Reichsgau Wartheland, and the seat of the county (kreis) of Kolmar. The Rynek (Market Square) was renamed the Adolf Hitler Square. Despite such circumstances, the Polish resistance movement was still formed and operated in the town and area. Among its local leaders were Marcel Krzycki and pre-war Polish mayor Bronisław Maron. In August 1944, the Germans carried out mass arrests of local members of the Home Army, the leading Polish underground resistance organization. Local Polish resistance leaders were imprisoned and tortured in the local prison and in the Gestapo station in Poznań. Bronisław Maron, his wife, daughter, and Marcel Krzycki were then imprisoned in the Nazi prison camp in Żabikowo (present-day district of Luboń). Bronisław Maron, tortured, died there in 1944, while his wife and daughter were deported to the Ravensbrück concentration camp, where his wife was also murdered. Krzycki's fate is unknown, although he probably also died in Żabikowo. The population of Chodzież during the war years was reduced by almost half. Liberation came on the night of 22/23 January 1945, when Soviet troops captured the town. Recent period The first years after World War II were a period of restoration and an intensive development of the pottery industry. In 1946 Chodziez had a population of 7,694. The city administration has received prizes and awards on several occasions to recognize the city's cleanliness and aesthetics. In 1974, the city was awarded the title of "the Polish Master of Economics." Later, in 1979, it was awarded the Labor Medal, 1st Class, by the Council of State for the city's achievements in production. The current construction of an urban purification plant will help transform Chodzież into an ecologically clean center for tourism and relaxation. In recent years, the rate of economic development in the city have decreased somewhat, with industry playing a smaller role and the economic development of Chodzież and the region becoming more associated with recreation. Chodzież's natural environment attracts tourists. Sports The local football club is Polonia Chodzież. The town's sports facilities include an indoor swimming pool, a football stadium and tennis courts. Sailing and motorboat contests take place each year on the municipal lakes. These lakes have European and world-class rank: in 1993, motorboat contests took place in the class 0..350. In addition, every May, the Grzmylita Run promotes sport for the masses. 
Culture A brass orchestra was founded immediately after the end of the German occupation. At first it was attached to the ceramics factory; it now operates under the Chodzież cultural institute. In the 1970s, annual jazz workshops began, allowing new talent to be discovered through encounters between young people and artists from Poland and abroad. The annual National Children's Song Festivals began in 1991. In 1995, Chodzież was the co-organizer of the XIII National Voluntary Fire Department Brass Orchestra Festival. Twin towns Nottuln, Germany People Trojan of Łękno, chief judge for the province of Kalisz between 1434 and 1450. Dagobert Friedländer (1826–1904), Jewish banker and member of the House of Lords of Prussia representing Bromberg. Hugo Friedlander (1850–1928), mayor of Ashburton, New Zealand 1879–1881, 1890–1892, and 1898–1901. Leo Maximilian Baginski (1891–1964), German entrepreneur, inventor and marketing specialist Adam Harasiewicz (born 1932), Polish classical pianist Zdzisław Szlapkin (born 1961), Polish former Olympic racewalker References External links Current official website Cities and towns in Greater Poland Voivodeship Chodzież County Poznań Voivodeship (1921–1939) Nazi war crimes in Poland
36410653
https://en.wikipedia.org/wiki/1974%20USC%20Trojans%20baseball%20team
1974 USC Trojans baseball team
The 1974 USC Trojans baseball team represented the University of Southern California in the 1974 NCAA Division I baseball season. The team was coached Rod Dedeaux in his 33rd season. The Trojans won the College World Series, defeating the Miami Hurricanes in the championship game, completing their run of five consecutive national championships. Roster Schedule ! style="background:#FFCC00;color:#990000;"| Regular Season |- |- align="center" bgcolor="ddffdd" | February 10 || at || 6–1 || 1–0 || – |- align="center" bgcolor="ddffdd" | February 18 || || 5–4 || 2–0 || – |- align="center" bgcolor="ddffdd" | February 18 || San Diego State || 5–3 || 3–0 || – |- align="center" bgcolor="ddffdd" | February 20 || at || 18–5 || 4–0 || – |- align="center" bgcolor="ddffdd" | February 23 || vs. || 5–1 || 5–0 || – |- align="center" bgcolor="ddffdd" | February 24 || at || 9–7 || 6–0 || – |- align="center" bgcolor="ddffdd" | February 24 || at UNLV || 10–2 || 7–0 || – |- align="center" bgcolor="ddffdd" | February 26 || || 9–2 || 8–0 || – |- |- align="center" bgcolor="#ffdddd" | March 1 || || 3–9 || 8–1 || – |- align="center" bgcolor="ddffdd" | March 5 || at || 5–3 || 9–1 || – |- align="center" bgcolor="ddffdd" | March 10 || at || 6–3 || 10–1 || – |- align="center" bgcolor="#ddffdd" | March 12 || at UC Irvine || 10–4 || 11–1 || – |- align="center" bgcolor="ddffdd" | March 13 || at || 9–6 || 12–1 || – |- align="center" bgcolor="ddffdd" | March 14 || at || 13–7 || 13–1 || – |- align="center" bgcolor="ddffdd" | March 15 || || 19–3 || 14–1 || – |- align="center" bgcolor="ddffdd" | March 15 || Air Force || 16–8 || 15–1 || – |- align="center" bgcolor="#ddffdd" | March 19 || || 8–6 || 16–1 || – |- align="center" bgcolor="#ffdddd" | March 20 || Santa Clara || 2–10 || 16–2 || – |- align="center" bgcolor="ffdddd" | March 22 || || 10–12 || 16–3 || – |- align="center" bgcolor="ddffdd" | March 23 || vs. 
Arizona State || 14–9 || 17–3 || – |- align="center" bgcolor="ddffdd" | March 25 || at || 6–3 || 18–3 || – |- align="center" bgcolor="ddffdd" | March 26 || || 9–8 || 19–3 || – |- align="center" bgcolor="ddffdd" | March 30 || || 7–0 || 20–3 || 1–0 |- align="center" bgcolor="ddffdd" | March 30 || California || 11–4 || 21–3 || 2–0 |- align="center" bgcolor="ddffdd" | March 31 || California || 3–1 || 22–3 || 3–0 |- |- align="center" bgcolor="ffdddd" | April 5 || at || 2–3 || 22–4 || 3–1 |- align="center" bgcolor="ddffdd" | April 6 || at Stanford || 7–5 || 23–4 || 4–1 |- align="center" bgcolor="ffdddd" | April 6 || at Stanford || 5–6 || 23–5 || 4–2 |- align="center" bgcolor="ddffdd" | April 7 || at Arizona State || 6–4 || 24–5 || – |- align="center" bgcolor="#ffdddd" | April 9 || at || 6–7 || 24–6 || – |- align="center" bgcolor="#ffdddd" | April 9 || at Oklahoma || 4–5 || 24–7 || – |- align="center" bgcolor="#ddffdd" | April 12 || at || 8–2 || 25–7 || – |- align="center" bgcolor="#ffdddd" | April 13 || at Tulsa || 4–5 || 25–8 || – |- align="center" bgcolor="#ffdddd" | April 13 || at Tulsa || 7–11 || 25–9 || – |- align="center" bgcolor="#ddffdd" | April 14 || at Tulsa || 11–4 || 26–9 || – |- align="center" bgcolor="ddffdd" | April 15 || || 5–4 || 27–9 || – |- align="center" bgcolor="#ddffdd" | April 16 || Cal State Northridge || 7–6 || 28–9 || – |- align="center" bgcolor="#ddffdd" | April 19 || at || 8–2 || 29–9 || 5–2 |- align="center" bgcolor="#ddffdd" | April 20 || UCLA || 11–5 || 30–9 || 6–2 |- align="center" bgcolor="#ddffdd" | April 20 || UCLA || 6–5 || 31–9 || 7–2 |- align="center" bgcolor="#ffdddd" | April 23 || Pepperdine || 5–6 || 31–10 || – |- align="center" bgcolor="#ddffdd" | April 24 || || 9–6 || 32–10 || – |- align="center" bgcolor="#ddffdd" | April 26 || at California || 4–1 || 33–10 || 8–2 |- align="center" bgcolor="#ffdddd" | April 27 || at California || 2–4 || 33–11 || 8–3 |- align="center" bgcolor="#ddffdd" | April 27 || at California || 8–3 || 34–11 || 9–3 |- align="center" bgcolor="#ddffdd" | April 28 || at Santa Clara || 9–2 || 35–11 || – |- align="center" bgcolor="#ffdddd" | April 30 || Cal Poly Pomona || 4–6 || 35–12 || – |- |- align="center" bgcolor="#ffdddd" | May 3 || Stanford || 1–4 || 35–13 || 9–4 |- align="center" bgcolor="#ffdddd" | May 4 || Stanford || 0–7 || 35–14 || 9–5 |- align="center" bgcolor="#ffdddd" | May 4 || Stanford || 5–6 || 35–15 || 9–6 |- align="center" bgcolor="#ddffdd" | May 6 || Chapman || 5–2 || 36–15 || – |- align="center" bgcolor="#ffdddd" | May 7 || Loyola Marymount || 4–6 || 36–16 || – |- align="center" bgcolor="#ddffdd" | May 8 || at Long Beach State || 7–5 || 37–16 || – |- align="center" bgcolor="#ddffdd" | May 10 || UCLA || 10–0 || 38–16 || 10–6 |- align="center" bgcolor="#ffdddd" | May 11 || at UCLA || 5–6 || 38–17 || 10–7 |- align="center" bgcolor="#ddffdd" | May 11 || at UCLA || 22–2 || 39–17 || 11–7 |- |- ! style="background:#FFCC00;color:#990000;"| Post–Season |- |- |- align="center" bgcolor="ddffdd" | May 18 || vs. || Dedeaux Field || 11–6 || 40–17 |- align="center" bgcolor="ddffdd" | May 18 || vs. Oregon || Dedeaux Field || 14–1 || 41–17 |- |- align="center" bgcolor="ddffdd" | May 25 || vs. Cal State Los Angeles || Reeder Field || 9–2 || 42–17 |- align="center" bgcolor="ffdddd" | May 26 || vs. Cal State Los Angeles || Reeder Field || 6–7 || 42–18 |- align="center" bgcolor="ddffdd" | May 26 || vs. Cal State Los Angeles || Reeder Field || 11–9 || 43–18 |- align="center" bgcolor="ffdddd" | June 1 || vs. 
Pepperdine || Dedeaux Field || 2–4 || 43–19 |- align="center" bgcolor="ddffdd" | June 2 || vs. Pepperdine || Dedeaux Field || 4–1 || 44–19 |- align="center" bgcolor="ddffdd" | June 2 || vs. Pepperdine || Dedeaux Field || 12–1 || 45–19 |- |- align="center" bgcolor="ddffdd" | June 8 || vs. Texas || Rosenblatt Stadium || 9–2 || 46–19 |- align="center" bgcolor="ddffdd" | June 10 || vs. || Rosenblatt Stadium || 5–3 || 47–19 |- align="center" bgcolor="ffdddd" | June 12 || vs. Miami (FL) || Rosenblatt Stadium || 3–7 || 47–20 |- align="center" bgcolor="ddffdd" | June 13 || vs. Texas || Rosenblatt Stadium || 5–3 || 48–20 |- align="center" bgcolor="ddffdd" | June 14 || vs. Southern Illinois || Rosenblatt Stadium || 7–2 || 49–20 |- align="center" bgcolor="ddffdd" | June 15 || vs. Miami (FL) || Rosenblatt Stadium || 7–3 || 50–20 |- Awards and honors Rob Adolph College World Series All-Tournament Team Mike Barr College World Series All-Tournament Team Marvin Cobb College World Series All-Tournament Team Rich Dauer All-America First Team College World Series All-Tournament Team All-Pacific-8 First Team Russ McQueen All-Pacific-8 First Team George Milke College World Series Most Outstanding Player Bob Mitchell College World Series All-Tournament Team Creighton Tevlin All-Pacific-8 First Team Trojans in the 1974 MLB Draft The following members of the USC baseball program were drafted in the 1974 Major League Baseball Draft. June regular draft References USC USC Trojans baseball seasons College World Series seasons NCAA Division I Baseball Championship seasons Pac-12 Conference baseball champion seasons USC Trojans baseball
35542603
https://en.wikipedia.org/wiki/Pebble%20%28watch%29
Pebble (watch)
Pebble is a discontinued smartwatch developed by Pebble Technology Corporation. Funding was conducted through a Kickstarter campaign running from April 11, 2012, to May 18, 2012, which raised $10.3 million and was, at the time, the most-funded project in Kickstarter history. Pebble began shipping watches to Kickstarter backers in January 2013. Pebble watches can be connected to Android and iOS devices to show notifications and messages. An online app store distributes Pebble-compatible apps from many developers including ESPN, Uber, Runkeeper, and GoPro. A steel-bodied variant of the original Pebble, the Pebble Steel, was announced at CES 2014 and released in February 2014. It has a thinner body, tactile metal buttons, and a Corning Gorilla Glass screen. It comes in two variants, a matte black finish and a brushed stainless steel finish, each with both a black leather band and a matching steel band. In 2015, Pebble launched its second generation of smartwatches: the Pebble Time and Time Steel. The devices were similarly funded through Kickstarter, raising $20.3 million from over 75,000 backers and breaking records for the site. In 2016, citing financial issues, Pebble cancelled its subsequent Time 2 series of watches and refunded Kickstarter backers. On December 7, 2016, Pebble officially announced that the company would be shut down and would no longer manufacture or continue support for any devices, nor honor any existing warranties. Pebble's intellectual property was purchased by Fitbit, a wearable technology company specializing in fitness tracking, which also hired some of the Pebble staff. Further clarification on the transition timeline and efforts to render Pebble OS and its watchfaces/apps more self-sufficient was posted to the Pebble Dev Blog on December 14, 2016. Support for the Pebble app store, online forum, cloud development tool, voice recognition, and voice replies ceased in June 2018, although support for some online services was restored by the unofficial "Rebble" community. History Development The original Pebble Smartwatch was designed based on a concept by Eric Migicovsky describing a watch that could display messages from a smartphone and select Android devices. Migicovsky successfully took his idea through the Y Combinator business incubator program, and unusually for a startup company at Y Combinator, Migicovsky's business actually generated revenue during the program. Migicovsky was able to raise US$375,000 from angel investors such as Tim Draper of Draper Fisher Jurvetson, but was unable to raise additional funds. Discussing his inability to raise further funds, Migicovsky told the Los Angeles Times, "I wasn't extremely surprised... hardware is much harder to raise money for. We were hoping we could convince some people to our vision, but it didn't work out." Funding After raising venture capital for the product under its former name, Allerta (which had already developed and sold the inPulse smartwatch for BlackBerry devices), the company failed to attract traditional investors under its new Pebble brand name, so it pursued crowdfunding in April 2012. Migicovsky's company, Pebble Technology, launched its Kickstarter campaign on April 11, 2012, with an initial fundraising target of $100,000. Backers spending $115 would receive a Pebble when they became available ($99 for the first 200), effectively pre-ordering the $150 Pebble at a discounted price. 
Within two hours of going live, the project had met its $100,000 goal, and within six days, the project had become the most funded project in the history of Kickstarter to that point, raising over $4.7 million with 30 days left in the campaign. Pebble Technology later announced that it was limiting the number of pre-orders. On May 18, 2012, funding closed with $10,266,845 pledged by 68,929 people. At the time, the product held the world record for the most money raised for a Kickstarter project. Production Pebble worked with consulting firm Dragon Innovation to identify suppliers and manufacturers. After overcoming manufacturability difficulties with the prototype design, Pebble started mass production with manufacturer Foxlink Group in January 2013 with an initial production of 15,000 watches per week. Shipping was originally expected to begin September 2012, but Pebble Technology encountered manufacturing difficulties and began shipping units in January 2013. Pebble shipped 300,000 units during its first year of production; cumulative shipments later passed 400,000, then 450,000, then 1 million, reaching 2 million by December 7, 2017. Features Hardware The watch has a 144 × 168 pixel black-and-white memory LCD, an ultra-low-power "transflective LCD" manufactured by Sharp; it contains a backlight, vibrating motor, magnetometer, ambient light sensors, and a three-axis accelerometer. It can communicate with an Android or iOS device using both Bluetooth 2.1 and Bluetooth 4.0 (Bluetooth Low Energy) through Stonestreet One's Bluetopia+MFi software stack. Bluetooth 4.0 with low energy (LE) was not initially supported, but was later added through a firmware update in November 2013. The watch is charged through a modified USB cable that attaches magnetically to the watch to maintain water-resistance capability, and has a reported seven-day battery life. Water resistance was added during development based on feedback from Kickstarter backers. The Pebble has a waterproof rating of 5 atm and has been tested in both fresh and salt water, allowing the user to shower, dive or swim while wearing the watch. Software The Pebble app store eventually contained over 1,000 applications. Applications included notification support for emails, calls, text messages and social media activity; stock prices; activity tracking (movement, sleep, estimates of calories burned); remote controls for smartphones, cameras and home appliances; turn-by-turn directions (using the GPS receiver in a smartphone or tablet); display of RSS or JSON feeds; and hundreds of custom watch faces. The Pebble was originally slated to ship with its apps pre-installed, including a cycling app to measure speed, distance, and pace through GPS, and a golf rangefinder app supporting more than 25,000 courses. These apps use data received from a connected phone for distance, speed and range information. More apps are downloadable via a mobile phone or tablet, and an SDK is freely available. Not all apps were pre-installed when the watch was originally shipped, but CEO Eric Migicovsky announced on January 9, 2013, that updates for the watch's operating system would be released every 2–3 weeks until all features were added. Pebble integrates with any phone or tablet application that sends out native iOS or Android notifications, including text messaging and phone call apps. The watch's firmware operating system is based on a FreeRTOS kernel and uses Newlib, the STM32 Peripheral Lib, the Ragel state machine compiler, and an unnamed UTF-8 decoder. 
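Watch apps and faces of the kind described above were written in C against the Pebble SDK. The following is an illustrative sketch rather than code from Pebble's documentation: a minimal watch face written to roughly SDK 2.x-era conventions, with arbitrary names such as s_time_layer.

#include <pebble.h>

static Window *s_window;          // top-level window for the watch face
static TextLayer *s_time_layer;   // text layer that displays the time

// Called by the tick timer service once per minute.
static void tick_handler(struct tm *tick_time, TimeUnits units_changed) {
  static char s_buffer[8];
  strftime(s_buffer, sizeof(s_buffer),
           clock_is_24h_style() ? "%H:%M" : "%I:%M", tick_time);
  text_layer_set_text(s_time_layer, s_buffer);
}

static void init(void) {
  s_window = window_create();
  Layer *root = window_get_root_layer(s_window);
  GRect bounds = layer_get_bounds(root);

  // Place a centred text layer roughly in the middle of the 144 x 168 display.
  s_time_layer = text_layer_create(GRect(0, 60, bounds.size.w, 50));
  text_layer_set_text_alignment(s_time_layer, GTextAlignmentCenter);
  layer_add_child(root, text_layer_get_layer(s_time_layer));

  // Redraw once a minute and show the window.
  tick_timer_service_subscribe(MINUTE_UNIT, tick_handler);
  window_stack_push(s_window, true /* animated */);
}

static void deinit(void) {
  tick_timer_service_unsubscribe();
  text_layer_destroy(s_time_layer);
  window_destroy(s_window);
}

int main(void) {
  init();
  app_event_loop();   // blocks until the app exits
  deinit();
  return 0;
}

A face of this sort would typically be built with the SDK's command-line tools and installed on the watch through the companion phone app.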
Gadgetbridge is an alternative companion application for Android. It is open source, does not require account creation, and supports features such as notifications, music playback and watch application installation/removal. Linux users can access the Pebble using libpebble's tools, which enable experimental alpha-level services on several Linux distributions, including the Nokia N900 running Maemo. There is also a commercial app called Rockwatch for the Nokia N9 running MeeGo that provides services including managing the Pebble's firmware and the apps running on the watch. Pebble SDK Pebble Technology announced that an open Pebble software development kit (SDK) would be released before shipment of the watches began. A proof-of-concept watchface SDK and documentation were released on April 12, 2013. The released SDK was limited to development for watch faces, simple applications, and games. The second release of the SDK (renamed PebbleKit) was released on May 17, 2013, and added support for two-way communication between Pebbles and smartphones running iOS or Android via the AppMessage framework. The 2.0 Pebble SDK included APIs to access Bluetooth messaging, background workers, the accelerometer, the compass, and JavaScript apps. Applications written with the second PebbleKit SDK were not backwards compatible with 1.x apps, and developers were required to port their apps to the second-generation firmware. Reception The original Pebble Smartwatch was released to mixed reviews. The design was acclaimed for being innovative. CNET praised the design, readability, and water resistance of the Pebble Steel, but criticized the limit of eight user-installed apps and the lack of a heart-rate monitor. Later watches in the Pebble series were described similarly: as simple and effective but lacking some features of competitors like the Apple Watch. Later generations Pebble Time On February 24, 2015, Pebble announced the Pebble Time, its second-generation smartwatch, via a Kickstarter campaign. The Pebble Time Steel is a stainless steel variant of the Pebble Time smartwatch, available in multiple finishes: silver, black or gold, with either a leather or steel band. Pebble claims it has a 10-day battery life. The Pebble Time Round is also made of stainless steel and 2.5D Gorilla Glass, with five finishes. Pebble claims it has a two-day battery life, dramatically shorter because of its shape and size but still significantly longer than the Apple Watch's 16-hour battery life. Hardware Pebble's second generation comes with various improvements over its predecessors, such as a 64-colour e-paper display with Gorilla Glass, a thinner and more ergonomic chassis, a plastic casing and a microphone. The Pebble Time retains the seven-day battery life and water resistance found on the previous two Pebble watches. It has a 150 mAh battery. Alongside the Pebble Time Steel, Pebble announced its open hardware platform called "Smartstraps". This lets developers create third-party straps that connect to a special port at the back of the watch and add new features such as GPS, heart-rate monitoring and extended battery life. The platform was intended to avoid the feature bloat and bulkiness of many competing smartwatches. Software The Pebble Time also included a new interface designed around a timeline, which is similar to what is found in Google Now on Android Wear. 
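The AppMessage framework mentioned in the Pebble SDK section above exchanges dictionaries of key–value tuples between the watch and the phone (via PebbleKit on iOS/Android or PebbleKit JS). The sketch below is illustrative only and not taken from Pebble's documentation: a watch-side C receive handler under assumed SDK 2.x conventions, where KEY_TEMPERATURE is a hypothetical key agreed with the phone-side code.

#include <pebble.h>

#define KEY_TEMPERATURE 0   // hypothetical message key shared with the phone side

// Called when a dictionary of tuples arrives from the connected phone.
static void inbox_received_handler(DictionaryIterator *iter, void *context) {
  Tuple *t = dict_find(iter, KEY_TEMPERATURE);
  if (t) {
    // Log the received value; a real watchapp would update a TextLayer instead.
    APP_LOG(APP_LOG_LEVEL_INFO, "Temperature: %d", (int)t->value->int32);
  }
}

static void inbox_dropped_handler(AppMessageResult reason, void *context) {
  APP_LOG(APP_LOG_LEVEL_WARNING, "Message dropped: %d", (int)reason);
}

static void init(void) {
  // A complete app would also create and push a Window, as in the earlier sketch.
  app_message_register_inbox_received(inbox_received_handler);
  app_message_register_inbox_dropped(inbox_dropped_handler);
  app_message_open(64, 64);   // small inbox/outbox buffers suffice for a few tuples
}

int main(void) {
  init();
  app_event_loop();
  return 0;
}

On the phone side, a PebbleKit JS script would send the corresponding dictionary with Pebble.sendAppMessage, using the same key.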
In December 2015, all remaining Pebbles received a firmware update enabling support for the timeline and removing the eight-app limit, letting additional apps load directly from the connected phone. The Pebble Time is backwards compatible with all previous apps and watch faces. Third parties have created apps for Pebble Time, such as contactless payment (tap to pay). Funding and records The Pebble Time retailed for $199. The project reached its Kickstarter funding goal of $500,000 in 17 minutes. The project took 49 minutes to reach $1 million, a Kickstarter record. The project raised $10.3 million in 48 hours, another Kickstarter record. On March 3, 2015, Pebble Time became the most-funded Kickstarter ever with nearly $14 million pledged, while having 24 days left in its campaign. At the end of the funding, on March 27, 2015, Pebble Time had received pledges of $20,338,986 from 78,471 backers. Pebble 2 Pebble 2, the company's third-generation smartwatch, launched on Kickstarter on May 24, 2016, with an offer period of 36 days at discounted introductory pricing, and shipment of the new models anticipated in the October–November 2016 timeframe. Among the new features were a heart-rate monitor (on +HR models), a microphone, and water resistance rated for 30 m (98 ft) depth, which was 10 m less than the original Pebble because of the Pebble 2's microphone. Many new features were documented as part of the Kickstarter prospectus, while other technical specifications of the forthcoming products were not disclosed at the time. The Pebble 2 product line added a new device called the Pebble Core, "a tiny wearable computer with Android 5.0" featuring a 3G modem, GPS, and Spotify integration backed by an open development community. Pebble 2 was officially released in September 2016, with a new design and functions, at $129. When Pebble sold parts of its company to Fitbit in late 2016, Gizmodo criticized the company for collecting $12.8 million in the product's Kickstarter and delaying shipments for half a year without being forthright with its supporters. Kickstarter backers who had not received the product were expected to receive refunds in 2017. Closing of Pebble On December 7, 2016, Pebble Technology filed for insolvency, with Fitbit acquiring much of the company's assets and employees. The sale of the Pebble brand to Fitbit was credited to Charles River Ventures, which had invested $15 million in the company in 2013. The purchase excluded Pebble's hardware, as stated by Fitbit. The deal was focused on Pebble's software engineers and testers, and the acquisition of intellectual property such as the Pebble watch's operating system, watch apps, cloud services, and its patents. Fitbit paid $23 million for Pebble's intellectual property, despite Pebble's debt and other obligations exceeding that. Fitbit did not take on Pebble's debt. The remainder of Pebble's assets, including product inventory and server equipment, was set to be sold off separately. Following the acquisition, Pebble's offices were closed and Fitbit held control over the use of the Pebble brand. The former Pebble engineers were relocated to Fitbit's offices in San Francisco. As a result, Pebble was forced to cancel shipments of its Pebble 2, Time 2, and Pebble Core smartwatches, refunding Kickstarter backers. Rebble An unofficial developer group called Rebble was created to extend support for the Pebble watches' online services that were discontinued on June 30, 2018. 
Pebble users and enthusiasts created the Rebble.io website in December 2016 following the announcement of Pebble's shutdown. Users can switch their devices from the original Pebble web services to the Rebble Web Services to restore some of the lost features; some features require a US$3 monthly subscription to cover the costs. See also Pebble Time Wearable computer References External links Kickstarter-funded products Products introduced in 2013 Products and services discontinued in 2016 Smartwatches Watch brands Wearable devices
5771
https://en.wikipedia.org/wiki/Christopher%20Marlowe
Christopher Marlowe
Christopher Marlowe, also known as Kit Marlowe (baptised 26 February 1564 – 30 May 1593), was an English playwright, poet and translator of the Elizabethan era. Marlowe is among the most famous of the Elizabethan playwrights. Based upon the "many imitations" of his play Tamburlaine, modern scholars consider him to have been the foremost dramatist in London in the years just before his mysterious early death. Some scholars also believe that he greatly influenced William Shakespeare, who was baptised in the same year as Marlowe and later succeeded him as the pre-eminent Elizabethan playwright. Marlowe was the first to achieve critical reputation for his use of blank verse, which became the standard for the era. His plays are distinguished by their overreaching protagonists. Themes found within Marlowe's literary works have been noted as humanistic with realistic emotions, which some scholars find difficult to reconcile with Marlowe's "anti-intellectualism" and his catering to the prurient tastes of his Elizabethan audiences for generous displays of extreme physical violence, cruelty, and bloodshed. Events in Marlowe's life were sometimes as extreme as those found in his plays. Differing sensational reports of Marlowe's death in 1593 abounded after the event and are contested by scholars today owing to a lack of good documentation. There have been many conjectures as to the nature and reason for his death, including a vicious bar-room fight, blasphemous libel against the church, homosexual intrigue, betrayal by another playwright, and espionage from the highest level: the Privy Council of Elizabeth I. An official coroner's account of Marlowe's death was revealed only in 1925, and it did little to persuade all scholars that it told the whole story, nor did it eliminate the uncertainties present in his biography. Early life Christopher Marlowe, the second of nine children, and the oldest child after the death of his sister Mary in 1568, was born to Canterbury shoemaker John Marlowe and his wife Katherine, daughter of William Arthur of Dover. He was baptised at St George's Church, Canterbury, on 26 February 1564 (1563 in the old style dates in use at the time, which placed the new year on 25 March). Marlowe's birth was likely to have been a few days before, making him about two months older than William Shakespeare, who was baptised on 26 April 1564 in Stratford-upon-Avon. By age 14, Marlowe attended The King's School, Canterbury, on a scholarship, and two years later Corpus Christi College, Cambridge, where he also studied on a scholarship and received his Bachelor of Arts degree in 1584. Marlowe mastered Latin during his schooling, reading and translating the works of Ovid. In 1587, the university hesitated to award his Master of Arts degree because of a rumour that he intended to go to the English seminary at Rheims in northern France, presumably to prepare for ordination as a Roman Catholic priest. If true, such an action on his part would have been a direct violation of a royal edict issued by Queen Elizabeth I in 1585 criminalising any attempt by an English citizen to be ordained in the Roman Catholic Church. Large-scale violence between Protestants and Catholics on the European continent has been cited by scholars as the impetus for the Protestant English Queen's defensive anti-Catholic laws issued from 1581 until her death in 1603. 
Despite the dire implications for Marlowe, his degree was awarded on schedule when the Privy Council intervened on his behalf, commending him for his "faithful dealing" and "good service" to the Queen. The nature of Marlowe's service was not specified by the Council, but its letter to the Cambridge authorities has provoked much speculation by modern scholars, notably the theory that Marlowe was operating as a secret agent for Privy Council member Sir Francis Walsingham. The only surviving evidence of the Privy Council's correspondence is found in their minutes, the letter being lost. There is no mention of espionage in the minutes, but its summation of the lost Privy Council letter is vague in meaning, stating that "it was not Her Majesties pleasure" that persons employed as Marlowe had been "in matters touching the benefit of his country should be defamed by those who are ignorant in th'affaires he went about." Scholars agree the vague wording was typically used to protect government agents, but they continue to debate what the "matters touching the benefit of his country" actually were in Marlowe's case and how they affected the 23-year-old writer as he launched his literary career in 1587. Adult life and legend As with other Elizabethans, little is known about Marlowe's adult life. All available evidence, other than what can be deduced from his literary works, is found in legal records and other official documents. This has not stopped writers of fiction and non-fiction from speculating about his professional activities, private life and character. Marlowe has often been described as a spy, a brawler and a heretic, as well as a "magician", "duellist", "tobacco-user", "counterfeiter" and "rakehell". While J. A. Downie and Constance Kuriyama have argued against the more lurid speculations, it is the usually circumspect J. B. Steane who remarked, "it seems absurd to dismiss all of these Elizabethan rumours and accusations as 'the Marlowe myth. To understand his brief adult life, from 1587 to 1593, much has been written, including speculation of: his involvement in royally sanctioned espionage; his vocal declaration as an atheist; his private, and possibly same-sex, sexual interests; and the puzzling circumstances surrounding his death. Spying Marlowe is alleged to have been a government spy. Park Honan and Charles Nicholl speculate that this was the case and suggest that Marlowe's recruitment took place when he was at Cambridge. In 1587, when the Privy Council ordered the University of Cambridge to award Marlowe his degree as Master of Arts, it denied rumours that he intended to go to the English Catholic college in Rheims, saying instead that he had been engaged in unspecified "affaires" on "matters touching the benefit of his country". Surviving college records from the period also indicate that, in the academic year 1584–1585, Marlowe had had a series of unusually lengthy absences from the university which violated university regulations. Surviving college buttery accounts, which record student purchases for personal provisions, show that Marlowe began spending lavishly on food and drink during the periods he was in attendance; the amount was more than he could have afforded on his known scholarship income. It has been speculated that Marlowe was the "Morley" who was tutor to Arbella Stuart in 1589. This possibility was first raised in a Times Literary Supplement letter by E. 
St John Brooks in 1937; in a letter to Notes and Queries, John Baker has added that only Marlowe could have been Arbella's tutor owing to the absence of any other known "Morley" from the period with an MA and not otherwise occupied. If Marlowe was Arbella's tutor, it might indicate that he was there as a spy, since Arbella, niece of Mary, Queen of Scots, and cousin of James VI of Scotland, later James I of England, was at the time a strong candidate for the succession to Elizabeth's throne. Frederick S. Boas dismisses the possibility of this identification, based on surviving legal records which document Marlowe's "residence in London between September and December 1589". Marlowe had been party to a fatal quarrel involving his neighbours and the poet Thomas Watson in Norton Folgate and was held in Newgate Prison for a fortnight. In fact, the quarrel and his arrest occurred on 18 September, he was released on bail on 1 October and he had to attend court, where he was acquitted on 3 December, but there is no record of where he was for the intervening two months. In 1592 Marlowe was arrested in the English garrison town of Flushing (Vlissingen) in the Netherlands, for alleged involvement in the counterfeiting of coins, presumably related to the activities of seditious Catholics. He was sent to the Lord Treasurer (Burghley), but no charge or imprisonment resulted. This arrest may have disrupted another of Marlowe's spying missions, perhaps by giving the resulting coinage to the Catholic cause. He was to infiltrate the followers of the active Catholic plotter William Stanley and report back to Burghley. Philosophy Marlowe was reputed to be an atheist, which held the dangerous implication of being an enemy of God and the state, by association. With the rise of public fears concerning The School of Night, or "School of Atheism" in the late 16th century, accusations of atheism were closely associated with disloyalty to the Protestant monarchy of England. Some modern historians consider that Marlowe's professed atheism, as with his supposed Catholicism, may have been no more than a sham to further his work as a government spy. Contemporary evidence comes from Marlowe's accuser in Flushing, an informer called Richard Baines. The governor of Flushing had reported that each of the men had "of malice" accused the other of instigating the counterfeiting and of intending to go over to the Catholic "enemy"; such an action was considered atheistic by the Church of England. Following Marlowe's arrest in 1593, Baines submitted to the authorities a "note containing the opinion of one Christopher Marly concerning his damnable judgment of religion, and scorn of God's word". Baines attributes to Marlowe a total of eighteen items which "scoff at the pretensions of the Old and New Testament" such as, "Christ was a bastard and his mother dishonest [unchaste]", "the woman of Samaria and her sister were whores and that Christ knew them dishonestly", "St John the Evangelist was bedfellow to Christ and leaned always in his bosom" (cf. John 13:23–25) and "that he used him as the sinners of Sodom". He also implied that Marlowe had Catholic sympathies. Other passages are merely sceptical in tone: "he persuades men to atheism, willing them not to be afraid of bugbears and hobgoblins". 
The final paragraph of Baines's document reads: Similar examples of Marlowe's statements were given by Thomas Kyd after his imprisonment and possible torture (see above); Kyd and Baines connect Marlowe with the mathematician Thomas Harriot's and Sir Walter Raleigh's circle. Another document claimed about that time that "one Marlowe is able to show more sound reasons for Atheism than any divine in England is able to give to prove divinity, and that ... he hath read the Atheist lecture to Sir Walter Raleigh and others". Some critics believe that Marlowe sought to disseminate these views in his work and that he identified with his rebellious and iconoclastic protagonists. Plays had to be approved by the Master of the Revels before they could be performed and the censorship of publications was under the control of the Archbishop of Canterbury. Presumably these authorities did not consider any of Marlowe's works to be unacceptable other than the Amores. Sexuality It has been claimed that Marlowe was homosexual. Some scholars argue that the identification of an Elizabethan as gay or homosexual in the modern sense is "anachronistic," claiming that for the Elizabethans the terms were more likely to have been applied to sexual acts rather than to what we currently understand to be exclusive sexual orientations and identities. Other scholars argue that the evidence is inconclusive and that the reports of Marlowe's homosexuality may be rumours produced after his death. Richard Baines reported Marlowe as saying: "all they that love not Tobacco & Boies were fools". David Bevington and Eric C. Rasmussen describe Baines's evidence as "unreliable testimony" and "These and other testimonials need to be discounted for their exaggeration and for their having been produced under legal circumstances we would now regard as a witch-hunt". J. B. Steane considered there to be "no evidence for Marlowe's homosexuality at all". Other scholars point to the frequency with which Marlowe explores homosexual themes in his writing: in Hero and Leander, Marlowe writes of the male youth Leander: "in his looks were all that men desire..." Edward the Second contains the following passage enumerating homosexual relationships: Marlowe wrote the only play about the life of Edward II up to his time, taking the humanist literary discussion of male sexuality much further than his contemporaries. The play was extremely bold, dealing with a star-crossed love story between Edward II and Piers Gaveston. Though it was a common practice at the time to reveal characters as gay to give audiences reason to suspect them as culprits in a crime, Christopher Marlowe's Edward II is portrayed as a sympathetic character. The decision to start the play Dido, Queen of Carthage with a homoerotic scene between Jupiter and Ganymede that bears no connection to the subsequent plot has long puzzled scholars. Arrest and death In early May 1593, several bills were posted about London threatening Protestant refugees from France and the Netherlands who had settled in the city. One of these, the "Dutch church libel", written in rhymed iambic pentameter, contained allusions to several of Marlowe's plays and was signed, "Tamburlaine". On 11 May the Privy Council ordered the arrest of those responsible for the libels. The next day, Marlowe's colleague Thomas Kyd was arrested, his lodgings were searched and a three-page fragment of a heretical tract was found. 
In a letter to Sir John Puckering, Kyd asserted that it had belonged to Marlowe, with whom he had been writing "in one chamber" some two years earlier. In a second letter, Kyd described Marlowe as blasphemous, disorderly, holding treasonous opinions, being an irreligious reprobate and "intemperate & of a cruel hart". They had both been working for an aristocratic patron, probably Ferdinando Stanley, Lord Strange. A warrant for Marlowe's arrest was issued on 18 May, when the Privy Council apparently knew that he might be found staying with Thomas Walsingham, whose father was a first cousin of the late Sir Francis Walsingham, Elizabeth's principal secretary in the 1580s and a man more deeply involved in state espionage than any other member of the Privy Council. Marlowe duly presented himself on 20 May but there apparently being no Privy Council meeting on that day, was instructed to "give his daily attendance on their Lordships, until he shall be licensed to the contrary". On Wednesday, 30 May, Marlowe was killed. Various accounts of Marlowe's death were current over the next few years. In his Palladis Tamia, published in 1598, Francis Meres says Marlowe was "stabbed to death by a bawdy serving-man, a rival of his in his lewd love" as punishment for his "epicurism and atheism". In 1917, in the Dictionary of National Biography, Sir Sidney Lee wrote that Marlowe was killed in a drunken fight and this is still often stated as fact today. The official account came to light only in 1925, when the scholar Leslie Hotson discovered the coroner's report of the inquest on Marlowe's death, held two days later on Friday 1 June 1593, by the Coroner of the Queen's Household, William Danby. Marlowe had spent all day in a house in Deptford, owned by the widow Eleanor Bull and together with three men: Ingram Frizer, Nicholas Skeres and Robert Poley. All three had been employed by one or other of the Walsinghams. Skeres and Poley had helped snare the conspirators in the Babington plot and Frizer would later describe Thomas Walsingham as his "master" at that time, although his role was probably more that of a financial or business agent, as he was for Walsingham's wife Audrey a few years later. These witnesses testified that Frizer and Marlowe had argued over payment of the bill (now famously known as the 'Reckoning') exchanging "divers malicious words" while Frizer was sitting at a table between the other two and Marlowe was lying behind him on a couch. Marlowe snatched Frizer's dagger and wounded him on the head. In the ensuing struggle, according to the coroner's report, Marlowe was stabbed above the right eye, killing him instantly. The jury concluded that Frizer acted in self-defence and within a month he was pardoned. Marlowe was buried in an unmarked grave in the churchyard of St. Nicholas, Deptford immediately after the inquest, on 1 June 1593. The complete text of the inquest report was published by Leslie Hotson in his book, The Death of Christopher Marlowe, in the introduction to which Prof. George Kittredge said "The mystery of Marlowe's death, heretofore involved in a cloud of contradictory gossip and irresponsible guess-work, is now cleared up for good and all on the authority of public records of complete authenticity and gratifying fullness" but this confidence proved fairly short-lived. 
Hotson had considered the possibility that the witnesses had "concocted a lying account of Marlowe's behaviour, to which they swore at the inquest, and with which they deceived the jury" but came down against that scenario. Others began to suspect that this was indeed the case. Writing to the TLS shortly after the book's publication, Eugénie de Kalb disputed that the struggle and outcome as described were even possible and Samuel A. Tannenbaum insisted the following year that such a wound could not have possibly resulted in instant death, as had been claimed. Even Marlowe's biographer John Bakeless acknowledged that "some scholars have been inclined to question the truthfulness of the coroner's report. There is something queer about the whole episode" and said that Hotson's discovery "raises almost as many questions as it answers". It has also been discovered more recently that the apparent absence of a local county coroner to accompany the Coroner of the Queen's Household would, if noticed, have made the inquest null and void. One of the main reasons for doubting the truth of the inquest concerns the reliability of Marlowe's companions as witnesses. As an agent provocateur for the late Sir Francis Walsingham, Robert Poley was a consummate liar, the "very genius of the Elizabethan underworld" and is on record as saying "I will swear and forswear myself, rather than I will accuse myself to do me any harm". The other witness, Nicholas Skeres, had for many years acted as a confidence trickster, drawing young men into the clutches of people in the money-lending racket, including Marlowe's apparent killer, Ingram Frizer, with whom he was engaged in such a swindle. Despite their being referred to as "generosi" (gentlemen) in the inquest report, the witnesses were professional liars. Some biographers, such as Kuriyama and Downie, take the inquest to be a true account of what occurred but in trying to explain what really happened if the account was not true, others have come up with a variety of murder theories. Jealous of her husband Thomas's relationship with Marlowe, Audrey Walsingham arranged for the playwright to be murdered. Sir Walter Raleigh arranged the murder, fearing that under torture Marlowe might incriminate him. With Skeres the main player, the murder resulted from attempts by the Earl of Essex to use Marlowe to incriminate Sir Walter Raleigh. He was killed on the orders of father and son Lord Burghley and Sir Robert Cecil, who thought that his plays contained Catholic propaganda. He was accidentally killed while Frizer and Skeres were pressuring him to pay back money he owed them. Marlowe was murdered at the behest of several members of the Privy Council who feared that he might reveal them to be atheists. The Queen ordered his assassination because of his subversive atheistic behaviour. Frizer murdered him because he envied Marlowe's close relationship with his master Thomas Walsingham and feared the effect that Marlowe's behaviour might have on Walsingham's reputation. Marlowe's death was faked to save him from trial and execution for subversive atheism. Since there are only written documents on which to base any conclusions and since it is probable that the most crucial information about his death was never committed to paper, it is unlikely that the full circumstances of Marlowe's death will ever be known. Reputation among contemporary writers For his contemporaries in the literary world, Marlowe was above all an admired and influential artist. 
Within weeks of his death, George Peele remembered him as "Marley, the Muses' darling"; Michael Drayton noted that he "Had in him those brave translunary things / That the first poets had" and Ben Jonson wrote of "Marlowe's mighty line". Thomas Nashe wrote warmly of his friend, "poor deceased Kit Marlowe," as did the publisher Edward Blount in his dedication of Hero and Leander to Sir Thomas Walsingham. Among the few contemporary dramatists to say anything negative about Marlowe was the anonymous author of the Cambridge University play The Return from Parnassus (1598) who wrote, "Pity it is that wit so ill should dwell, / Wit lent from heaven, but vices sent from hell". The most famous tribute to Marlowe was paid by Shakespeare in As You Like It, where he not only quotes a line from Hero and Leander ("Dead Shepherd, now I find thy saw of might, 'Who ever lov'd that lov'd not at first sight?) but also gives to the clown Touchstone the words "When a man's verses cannot be understood, nor a man's good wit seconded with the forward child, understanding, it strikes a man more dead than a great reckoning in a little room". This appears to be a reference to Marlowe's murder which involved a fight over the "reckoning", the bill, as well as to a line in Marlowe's Jew of Malta; "Infinite riches in a little room". Shakespeare was much influenced by Marlowe in his work, as can be seen in the use of Marlovian themes in Antony and Cleopatra, The Merchant of Venice, Richard II and Macbeth (Dido, Jew of Malta, Edward II and Doctor Faustus, respectively). In Hamlet, after meeting with the travelling actors, Hamlet requests the Player perform a speech about the Trojan War, which at 2.2.429–32 has an echo of Marlowe's Dido, Queen of Carthage. In Love's Labour's Lost Shakespeare brings on a character "Marcade" (three syllables) in conscious acknowledgement of Marlowe's character "Mercury", also attending the King of Navarre, in Massacre at Paris. The significance, to those of Shakespeare's audience who were familiar with Hero and Leander, was Marlowe's identification of himself with the god Mercury. Shakespeare authorship theory An argument has arisen about the notion that Marlowe faked his death and then continued to write under the assumed name of William Shakespeare. Academic consensus rejects alternative candidates for authorship of Shakespeare's plays and sonnets, including Marlowe. Literary career Plays Six dramas have been attributed to the authorship of Christopher Marlowe either alone or in collaboration with other writers, with varying degrees of evidence. The writing sequence or chronology of these plays is mostly unknown and is offered here with any dates and evidence known. Among the little available information we have, Dido is believed to be the first Marlowe play performed, while it was Tamburlaine that was first to be performed on a regular commercial stage in London in 1587. Believed by many scholars to be Marlowe's greatest success, Tamburlaine was the first English play written in blank verse and, with Thomas Kyd's The Spanish Tragedy, is generally considered the beginning of the mature phase of the Elizabethan theatre. Works (The dates of composition are approximate).: Dido, Queen of Carthage (c. 1585–1587; possibly co-written with Thomas Nashe; printed 1594) Tamburlaine; Part I (c. 1587), Part II (c. 1587–1588; printed 1590) The Jew of Malta (c. 1589–1590; printed 1633) Doctor Faustus (c. 1588–1592; printed 1604 & 1616) Edward II (c. 1592; printed 1594) The Massacre at Paris (c. 
1593; printed c. 1594) The play Lust's Dominion was attributed to Marlowe upon its initial publication in 1657, though scholars and critics have almost unanimously rejected the attribution. He may also have written or co-written Arden of Faversham. Poetry and translations Publication and responses to the poetry and translations credited to Marlowe primarily occurred posthumously, including: Amores, first book of Latin elegiac couplets by Ovid with translation by Marlowe (c. 1580s); copies publicly burned as offensive in 1599. The Passionate Shepherd to His Love, by Marlowe. (c. 1587–1588); a popular lyric of the time. Hero and Leander, by Marlowe (c. 1593, unfinished; completed by George Chapman, 1598; printed 1598). Pharsalia, Book One, by Lucan with translation by Marlowe. (c. 1593; printed 1600) Collaborations Modern scholars still look for evidence of collaborations between Marlowe and other writers. In 2016, one publisher was the first to endorse the scholarly claim of a collaboration between Marlowe and the playwright William Shakespeare: Henry VI by William Shakespeare is now credited as a collaboration with Marlowe in the New Oxford Shakespeare series, published in 2016. Marlowe appears as co-author of the three Henry VI plays, though some scholars doubt any actual collaboration. Contemporary reception Marlowe's plays were enormously successful, possibly because of the imposing stage presence of his lead actor, Edward Alleyn. Alleyn was unusually tall for the time and the haughty roles of Tamburlaine, Faustus and Barabas were probably written for him. Marlowe's plays were the foundation of the repertoire of Alleyn's company, the Admiral's Men, throughout the 1590s. One of Marlowe's poetry translations did not fare as well. In 1599, Marlowe's translation of Ovid was banned and copies were publicly burned as part of Archbishop Whitgift's crackdown on offensive material. Chronology of dramatic works This is a possible chronology of composition for the dramatic works of Christopher Marlowe based upon dates previously cited. The dates of composition are approximate. There are other chronologies for Marlowe, including one based upon dates of printing, as was used in the 2004 Cambridge Companion to Christopher Marlowe, edited by Patrick Cheney. Dido, Queen of Carthage (c. 1585–1587) First official record: 1594. First published: 1594; posthumously. First recorded performance: between 1587 and 1593 by the Children of the Chapel, a company of boy actors in London. Additional information (title and synopsis): Full title The Tragedie of Dido, Queene of Carthage; 17-character cast plus other additional Trojans, Carthaginians, servants and attendants. In this short play, believed to be based on books 1, 2 and 4 of Virgil's Aeneid, the Trojan soldier Aeneas leaves the fallen city of Troy to the conquering Greeks and finds shelter for his fellow Trojan survivors with Dido, Queen of Carthage. The gods interfere with the love lives of Dido and Aeneas, with Venus using Cupid to trick Dido into falling in love with Aeneas, rather than Iarbas, her Carthaginian suitor. Dido and Aeneas pledge their love to each other, but the Trojans warn Aeneas that their future is in Italy, which is also where Mercury and the other gods order Aeneas to go. The play ends when Aeneas leaves for Italy with the Trojans and as Dido sets off a triple suicide by throwing herself on a funeral pyre in despair, followed by her despairing suitor Iarbus and then by Anna, who loves Iarbus. 
Additional information (significance): This play is believed by many scholars to be the first play by Christopher Marlowe to be performed. Additional information (attribution): The title page attributes the play to Marlowe and Thomas Nashe, yet some scholars question how much of a contribution Nashe made to the play. Evidence: No manuscripts by Marlowe exist for this play. Tamburlaine, Part I (c. 1587); Part II (c. 1587–1588) First official record: 1587, Part I. First published: 1590, Parts I and II in one octavo, London. No author named. First recorded performance: 1587, Part I, by the Admiral's Men, London. Additional information (title and synopsis): Full title, as it appears on the 1590 octavo for Part I, Tamburlaine the Great. Who, from a Scythian Shephearde, by his rare and woonderfull Conquests, became a most puissant and mightye Monarque. And (for his tyranny, and terrour in Warre) was tearmed, The Scourge of God., and for Part II, The Second Part of The bloody Conquests of mighty Tamburlaine. With his impassionate fury, for the death of his Lady and loue faire Zenocrate; his fourme of exhortacion and discipline to his three sons, and the maner of his own death.; large 26-character cast for each of the two parts. Part I concerns the conqueror Timur (Tamerlane), as he rises from nomadic shepherd and bandit to warlord and emperor of Persia, conquering the Persians, the Turks, the Egyptians, and all of Africa in the process. Part II concerns Tamerlaine as he raises his sons to become conquerors like himself through acts of extreme and heartless savagery against everyone, including the killing of one of his own sons who disappoints him. After he visits extraordinary barbarism upon the Babylonians, Tamerlaine burns the Quran with contempt and later falls ill and dies. Additional information (significance): Tamburlaine is the first example of blank verse used in the dramatic literature of the Early Modern English theatre. Additional information (attribution): Author name is missing from first printing in 1590. Attribution of this work by scholars to Marlowe is based upon comparison to his other verified works. Passages and character development in Tamburlane are similar to many other Marlowe works. Evidence: No manuscripts by Marlowe exist for this play. Parts I and II were entered into the Stationers' Register on 14 August 1590. The two parts were published together by the London printer, Richard Jones, in 1590; a second edition in 1592, and a third in 1597. The 1597 edition of the two parts were published separately in quarto by Edward White; part I in 1605, and part II in 1606. The Jew of Malta (c. 1589–1590) First official record: 1592. First published: 1592; earliest extant edition, 1633. First recorded performance: 26 February 1592, by Lord Strange's acting company. Additional information (title and synopsis): First published as The Famous Tragedy of the Rich Jew of Malta; a large 25-character cast plus other additional citizens of Malta, Turkish janizaries, guards, attendants and slaves. The play begins with the ghost of a fictionalised Machiavelli, who introduces Barabas, the Jew of Malta, in his counting house. The Governor of Malta has seized the wealth of all Jewish citizens to pay the Turks not to invade. As a consequence, Barabas designs and executes a homicidal tirade of events in retaliation against the governor and is assisted by his slave, Ithamore. 
Barabas' murderous streak includes: the governor's son dying in a duel; frightening his own daughter, who joins a nunnery for safety but is afterward poisoned by her father; the strangling of an old friar and the framing of another friar for the murder; and, the death of Ithamore, a prostitute and her friend, who had threatened to expose him. Finally, Barabas betrays Malta by planning another invasion by the Turks, but is outwitted when the Christians and Turks resolve the conflict and leave him to burn alive in a trap he has set for others, but has mistakenly fallen into himself. Additional information (significance): The performances of the play were a success and it remained popular for the next fifty years. This play helps to establish the strong theme of "anti-authoritarianism" that is found throughout Marlowe's works. Evidence: No manuscripts by Marlowe exist for this play. The play was entered in the Stationers' Register on 17 May 1594 but the earliest surviving printed edition is from 1633. Doctor Faustus (c. 1588–1592) First official record: 1594–1597. First published: 1601, no extant copy; first extant copy, 1604 (A text) quarto; 1616 (B text) quarto. First recorded performance: 1594–1597; 24 revival performances occurred between these years by the Lord Admiral's Company, Rose Theatre, London; earlier performances probably occurred around 1589 by the same company. Additional information (title and synopsis): Full title, The Tragical History of the Life and Death of Doctor Faustus; a very large 35-character cast, plus other additional scholars, cardinals, soldiers, and devils. Based on the German Faustbuch, which itself can be traced to a fourth-century tale known as "The Devil's Pact," Marlowe's play opens with a Prologue, where the Chorus introduces Doctor Faustus and his story. Faustus is a brilliant scholar who leaves behind the study of logic, medicine, law and divinity to study magic and necromancy, the art of speaking to the dead. When he is approached by a Good and Bad Angel, it is the Bad Angel who wins his attentions by promising that he will become a great magician. Faustus ignores his other scholarly duties and attempts to summon a devil. By revoking his own baptism he attracts the attention of Lucifer, Mephistopheles and other devils. Faustus strikes a pact with Lucifer, allowing him 24 years with Mephistopheles as his assistant, but after the pact begins Mephistopheles will not answer Faustus' questions. The two angels return, but even though Faustus waffles, coercion from the devils has him again swear allegiance to Lucifer. Faustus achieves nothing worthwhile with his pact, warns other scholars of his folly, and the play ends with Faustus dragged off to Hell by Mephistopheles as the Chorus attempts a moral summation of events with an Epilogue. Additional information (significance): This is the first dramatised version of the Faust legend of a scholar's dealing with the devil. Marlowe deviates from earlier versions of "The Devil's Pact" significantly: Marlowe's protagonist is unable to "burn his books" or repent to a merciful God to have his contract annulled at the end of the play; he is carried off by demons; and, in the 1616 quarto, his mangled corpse is found by the scholar characters. Additional information (attribution): The 'B text' was highly edited and censored, owing in part to the shifting theatre laws regarding religious words onstage during the seventeenth-century. 
Because it contains several additional scenes believed to be the additions of other playwrights, particularly Samuel Rowley and William Bird (alias Borne), a recent edition attributes the authorship of both versions to "Christopher Marlowe and his collaborator and revisers." This recent edition has tried to establish that the 'A text' was assembled from the work of Marlowe and another writer, with the 'B text' as a later revision. Evidence: No manuscripts by Marlowe exist for this play. The two earliest-printed extant versions of the play, A and B, form a textual problem for scholars. Both were published after Marlowe's death and scholars disagree which text is more representative of Marlowe's original. Some editions are based on a combination of the two texts. Late-twentieth-century scholarly consensus identifies 'A text' as more representative because it contains irregular character names and idiosyncratic spelling, which are believed to reflect the author's handwritten manuscript or "foul papers". In comparison, 'B text' is highly edited with several additional scenes possibly written by other playwrights. Edward the Second (c. 1592) First official record: 1593. First published: 1594; earliest extant edition 1594 octavo. First recorded performance: 1592, performed by the Earl of Pembroke's Men. Additional information (title and synopsis): Full title of the earliest extant edition, The troublesome reigne and lamentable death of Edward the second, King of England, with the tragicall fall of proud Mortimer; a very large 35-character cast plus other additional lords, monks, poor men, mower, champion, messengers, soldiers, ladies and attendants. An English history play partly based on Holinshed's Chronicles of England, Scotland, and Ireland (1577; revised 1587) about the deposition of King Edward II by his barons and the Queen, who resent the undue influence the king's favourites have in court and state affairs. Additional information (significance): Considered by recent scholars as Marlowe's "most modern play" because of its probing treatment of the private life of a king and unflattering depiction of the power politics of the time. The 1594 editions of Edward II and of Dido are the first published plays with Marlowe's name appearing as the author. Additional information (attribution): Earliest extant edition of 1594. Evidence: The play was entered into the Stationers' Register on 6 July 1593, five weeks after Marlowe's death. The Massacre at Paris (c. 1589–1593) First official record: c. 1593, alleged foul sheet by Marlowe of "Scene 15"; although authorship by Marlowe is contested by recent scholars, the manuscript is believed to have been written while the play was first being performed, for an unknown purpose. First published: undated, c. 1594 or later, octavo, London; while this is the most complete surviving text, it is near half the length of Marlowe's other works and possibly a reconstruction. The printer and publisher credit, "E.A. for Edward White," also appears on the 1605/06 printing of Marlowe's Tamburlaine. First recorded performance: 26 January 1593, by Lord Strange's Men, at Henslowe's Rose Theatre, London, under the title The Tragedy of the Guise; 1594, in the repertory of the Admiral's Men. Additional information (title and synopsis): Full title, The Massacre at Paris: With the Death of the Duke of Guise; very large 36-character cast, plus other additional guards, Protestants, schoolmasters, soldiers, murderers, attendants, etc. 
A short play that compresses the events prior to and following the Saint Bartholomew's Day Massacre in 1572 into a "curious comic strip history" that reduces seventeen years of religious war into twelve. Considered by some to be Protestant propaganda, English Protestants of the time invoked these events as the blackest example of Catholic treachery. Generally, the extant text is in two parts: the Massacre; and the murder of the Duke of Guise. The prelude to the Massacre begins with the wedding of the sister of France's Catholic king, Charles IX, to the Protestant King of Navarre, which places a Protestant in line for the crown of France. Navarre knows Guise "seeks to murder all the Protestants" in Paris for the wedding, but he trusts the protections promised by Charles IX and the Queen Mother, Catharine (de Medici). The Queen Mother, however, is secretly funding the homicidal plots of Guise, shown to us in murder vignettes executed by Guise henchmen. In a soliloquy, Guise tells how all Catholics—even priests—will help murder Protestants. After the first deaths, Charles IX is persuaded to support Guise out of fear of Protestant retaliation. Catholic killers at the Massacre will wear visored helmets marked with a white cross and murder Protestants until the bells cease ringing. Charles IX feels great guilt for the Massacre. As the bells toll, Protestants are chased by soldiers, murder vignettes reveal cruelties and offstage massacres are retold by their killers. The death of Guise is a series of intrigues. Queen Mother Catherine vows to kill and replace her unreliable son Charles IX with her son Henry. When Charles IX dies of a broken heart (historically, of tuberculosis), a series of events unfold: Henry III is crowned king of France, but his Queen Mother will replace him as well if he dares to stop the killing of "Puritans"; Henry III makes Duke Joyeux the General of his army against Navarre, whose army is outside Paris and will later slay Joyeux; meanwhile, Guise becomes an unhinged, jealous husband who brings his army and popularity to Paris, whereupon the King has him assassinated for treason; with Guise gone, Navarre pledges his support to Henry III; the Queen Mother mourns the loss of Guise as his brother, the Cardinal, is assassinated; and finally, Henry III is stabbed with a poisoned knife by a friar sent by Guise's other brother, the Duke of Dumaine. The final scene is of the death of Henry III and the rise of Navarre as the first Protestant King of France. Additional information (significance): The Massacre at Paris is considered Marlowe's most dangerous play, as agitators in London seized on its theme to advocate the murders of refugees from the Low Countries of the Spanish Netherlands, and it warns Elizabeth I of this possibility in its last scene. It features the silent "English Agent", whom tradition has identified with Marlowe and his connexions to the secret service. It was the highest-grossing play for Lord Strange's Men in 1593. Additional information (attribution): A 1593 loose manuscript sheet of the play, called a foul sheet, is alleged to be by Marlowe and has been claimed by some scholars as the only extant play manuscript by the author. It could also provide an approximate date of composition for the play. When compared with the extant printed text and his other work, other scholars reject the attribution to Marlowe. The only surviving printed text of this play is possibly a reconstruction from memory of Marlowe's original performance text. 
Current scholarship notes that there are only 1147 lines in the play, half the amount of a typical play of the 1590s. Other evidence that the extant published text may not be Marlowe's original is the uneven style throughout, with two-dimensional characterisations, deteriorating verbal quality and repetitions of content. Evidence: Never appeared in the Stationer's Register. Memorials A Marlowe Memorial in the form of a bronze sculpture of The Muse of Poetry by Edward Onslow Ford was erected by subscription in Buttermarket, Canterbury in 1891. In July 2002, a memorial window to Marlowe, a gift of the Marlowe Society, was unveiled in Poets' Corner in Westminster Abbey. Controversially, a question mark was added to the generally accepted date of death. On 25 October 2011 a letter from Paul Edmondson and Stanley Wells was published by The Times newspaper, in which they called on the Dean and Chapter to remove the question mark on the grounds that it "flew in the face of a mass of unimpugnable evidence". In 2012, they renewed this call in their e-book Shakespeare Bites Back, adding that it "denies history" and again the following year in their book Shakespeare Beyond Doubt. The Marlowe Theatre in Canterbury, Kent, UK, was named after the town's "most famous" resident in 1949. Originally housed in a former 1920s cinema on St. Margaret's Street, the Marlowe Theatre later moved to a newly converted 1930's era Odeon Cinema in the city. After a 2011 reopening with a newly enhanced state-of-the-art theatre facility, the Marlowe now enjoys some of the country's finest touring companies including, Glyndebourne Opera, the Royal Shakespeare Company, the Royal National Theatre as well as many major West End musicals. Marlowe in fiction Marlowe has been used as a character in books, theatre, film, television and radio. Modern compendiums There are at least two major modern scholarly editions of the collected works of Christopher Marlowe: The Complete Works of Christopher Marlowe (edited by Roma Gill in 1986; Clarendon Press published in partnership with Oxford University Press) The Complete Plays of Christopher Marlowe (edited by J. B. Steane in 1969; edited by Frank Romany and Robert Lindsey, Revised Edition, 2004, Penguin) There are also notable scholarly collections of essays concerning the collected works of Christopher Marlowe, including: The Cambridge Companion to Christopher Marlowe (edited by Patrick Cheney in 2004; Cambridge University Press) Works of Marlowe in performance Modern productions of the plays of Christopher Marlowe have increased in frequency throughout the twentieth and twenty-first centuries, including the following notable productions: BBC Radio Broadcast of all six Marlowe plays, May to October, 1993. Royal Shakespeare Company, Stratford-on-Avon Dido, Queen of Carthage, directed by Kimberly Sykes, with Chipo Chung as Dido, Swan Theatre, 2017. Tamburlaine the Great, directed by Terry Hands, with Anthony Sher as Tamburlaine, Swan Theatre, 1992; Barbican Theatre (London), 1993. directed by Michael Boyd, with Jude Owusu as Tamburlaine, Swan Theatre, 2018. The Jew of Malta, directed by Barry Kyle, with Jasper Britton as Barabas, Swan Theatre, 1987; People's Theatre (Newcastle-upon-Tyne) and Barbican Theatre (London), 1988. directed by Justin Audibert, with Jasper Britton as Barabas, Swan Theatre, 2015. Edward II, directed by Gerard Murphy, with Simon Russell Beale as Edward, Swan Theatre, 1990. 
Doctor Faustus, directed by John Barton, with Ian McKellen as Faustus, Nottingham Playhouse (Nottingham) and Aldwych Theatre (London), 1974; Royal Shakespeare Theatre, 1975. directed by Barry Kyle with Gerard Murphy as Faustus, Swan Theatre and Pit Theatre (London), 1989. directed by Maria Aberg with Sandy Grierson and Oliver Ryan sharing the roles of Faustus and Mephistophilis, Swan Theatre and Barbican Theatre (London), 2016. Royal National Theatre, London Tamburlaine, directed by Peter Hall, with Albert Finney as Tamburlaine, Olivier Theatre premier production, 1976. Dido, Queen of Carthage, directed by James McDonald with Anastasia Hille as Dido, Cottesloe Theatre, 2009. Edward II, directed by Joe Hill-Gibbins, with John Heffernan as Edward, Olivier Theatre, 2013. Shakespeare's Globe, London Dido, Queen of Carthage, directed by Tim Carroll, with Rakie Ayola as Dido, 2003. Edward II, directed by Timothy Walker, with Liam Brennan as Edward, 2003. Other noteworthy productions Tamburlaine, performed at Yale University, New Haven, US, 1919; directed by Tyrone Guthrie, with Donald Wolfit as Tamburlaine, Old Vic, London, 1951. Doctor Faustus, co-directed by Orson Welles and John Houseman, with Welles as Faustus and Jack Carter as Mephistopheles, New York, 1937; directed by Adrian Noble, Royal Exchange, Manchester, 1981. Edward II directed by Toby Robertson, with John Barton as Edward, Cambridge, 1951; directed by Toby Robertson, with Derek Jacobi as Edward, Cambridge, 1958; directed by Toby Robertson, with Ian McKellen as Edward, Assembly Hall, Edinburgh International Festival, 1969; directed by Jim Stone, Washington Stage Company, US, 1993; directed by Jozsef Ruszt, Budapest, 1998; directed by Michael Grandage, with Joseph Fiennes as Edward, Sheffield Crucible Theatre, UK, 2001. The Massacre in Paris, directed by Patrice Chéreau, France, 1972. Adaptations Edward II, Phoenix Society, London, 1923. Leben Eduards des Zweiten von England, by Bertolt Brecht (the first play he directed), Munich Chamber Theatre, Germany, 1924. The Life of Edward II of England, by Marlowe and Brecht, directed by Frank Dunlop, National Theatre, UK, 1968. Edward II, adapted as a ballet, choreographed by David Bintley, Stuttgart Ballet, Germany, 1995. Doctor Faustus, additional text by Colin Teevan, directed by Jamie Lloyd, with Kit Harington as Faustus, Duke of York's Theatre London, 2016. Faustus, That Damned Woman by Chris Bush, directed by Caroline Byrne, at Lyric Theatre, Hammersmith, London, 2020. Film Doctor Faustus, based on Nevill Coghill's 1965 production, adapted for Richard Burton and Elizabeth Taylor, 1967. Edward II, directed by Derek Jarman, 1991. Faust, with some Marlowe dialogue, directed by Jan Švankmajer, 1994. Notes References Sources Via Google Books Further reading Bevington, David, and Eric Rasmussen, eds. Doctor Faustus and Other Plays. Oxford English Drama. Oxford University Press, 1998. Brooke, C. F. Tucker. "The Life of Marlowe and 'The Tragedy of Dido, Queen of Carthage,'" The works and life of Christopher Marlowe. Vol. 1, ed. R.H. Case, London: Methuen, 1930. (pp. 107, 114, 99, 98) Chambers, E. K. The Elizabethan Stage. 4 Volumes, Oxford: Clarendon Press, 1923. Conrad, B. Der wahre Shakespeare: Christopher Marlowe. (German non-Fiction book) 5th Edition, 2016. Cornelius R. M. Christopher Marlowe's Use of the Bible. New York: P. Lang, 1984. Downie J. A.; Parnell J. T., eds. Constructing Christopher Marlowe, Cambridge 2000. Honan, Park. Christopher Marlowe Poet and Spy. 
Oxford University Press, 2005. Kuriyama, Constance. Christopher Marlowe: A Renaissance Life. Cornell University Press, 2002. Logan, Robert A. Shakespeare's Marlowe: The Influence of Christopher Marlowe on Shakespeare's Artistry. Aldershot, Hants: Ashgate, 2007. Logan, Terence P., and Denzell S. Smith, eds. The Predecessors of Shakespeare: A Survey and Bibliography of Recent Studies in English Renaissance Drama. Lincoln, Nebraska: University of Nebraska Press, 1973. Marlowe, Christopher. Complete Works. Vol. 3: Edward II., ed. R. Rowland. Oxford: Clarendon Press, 1994. (pp. xxii–xxiii) Nicholl, Charles. The Reckoning: The Murder of Christopher Marlowe, Vintage, 2002 (revised edition). Oz, Avraham, ed. Marlowe. New Casebooks. Houndmills, Basingstoke and London: Palgrave/Macmillan, 2003. Parker, John. The Aesthetics of Antichrist: From Christian Drama to Christopher Marlowe. Ithaca: Cornell University Press, 2007. Riggs, David. The World of Christopher Marlowe, Henry Holt and Co., 2005. Shepard, Alan. Marlowe's Soldiers: Rhetorics of Masculinity in the Age of the Armada, Ashgate, 2002. Sim, James H. Dramatic Uses of Biblical Allusions in Marlowe and Shakespeare, Gainesville: University of Florida Press, 1966. Trow, M. J., and Taliesin Trow. Who Killed Kit Marlowe?: a contract to murder in Elizabethan England, Stroud: Sutton, 2002. Wraight A. D.; Stern, Virginia F. In Search of Christopher Marlowe: A Pictorial Biography, London: Macdonald, 1965. External links The Marlowe Society The works of Marlowe at Perseus Project The complete works, with modernised spelling, on Peter Farey's Marlowe page. BBC audio file. In Our Time Radio 4 discussion programme on Marlowe and his work The Marlowe Studies an online library of books claiming that Marlowe was Shakespeare The Marlowe Bibliography Online is an initiative of the Marlowe Society of America and the University of Melbourne. Its purpose is to facilitate scholarship on the works of Christopher Marlowe by providing a searchable annotated bibliography of relevant scholarship. The only true Shakespeare: Christopher Marlowe (German Non-Fiction Book:) English Summary:: 1564 births 1593 crimes 1593 deaths 16th-century English dramatists and playwrights 16th-century English poets 16th-century spies Pre–17th-century atheists 16th-century translators Alumni of Corpus Christi College, Cambridge Deaths by stabbing in England English male dramatists and playwrights English male poets English murder victims English Renaissance dramatists English spies People educated at The King's School, Canterbury People from Canterbury People murdered in England People of the Elizabethan era University Wits Latin–English translators
11042797
https://en.wikipedia.org/wiki/Dynamic%20simulation
Dynamic simulation
Dynamic simulation (or dynamic system simulation) is the use of a computer program to model the time-varying behavior of a dynamical system. The systems are typically described by ordinary differential equations or partial differential equations. A simulation run solves the state-equation system to find the behavior of the state variables over a specified period of time. The equation is solved through numerical integration methods to produce the transient behavior of the state variables. Simulation of dynamic systems predicts the values of model-system state variables, as they are determined by the past state values. This relationship is found by creating a model of the system. Overview Simulation models are commonly obtained from discrete-time approximations of continuous-time mathematical models. As mathematical models incorporate real-world constraints, like gear backlash and rebound from a hard stop, equations become nonlinear. This requires numerical methods to solve the equations. A numerical simulation is done by stepping through a time interval and calculating the integral of the derivatives through numerical integration. Some methods use a fixed step through the interval, and others use an adaptive step that can shrink or grow automatically to maintain an acceptable error tolerance. Some methods can use different time steps in different parts of the simulation model. There are two types of system models to be simulated: difference-equation models and differential-equation models. Classical physics is usually based on differential equation models. This is why most old simulation programs are simply differential equation solvers and delegate solving difference equations to "procedural program segments." Some dynamic systems are modeled with differential equations that can only be presented in an implicit form. These differential-algebraic-equation systems require special mathematical methods for simulation. Some complex systems' behavior can be quite sensitive to initial conditions, which can lead to large errors in the computed values. To avoid these possible errors, a rigorous approach can be applied, where an algorithm is found which can compute the value up to any desired precision. For example, the constant e is a computable number because there is an algorithm that is able to produce the constant up to any given precision. Applications The first applications of computer simulations for dynamic systems were in the aerospace industry. Commercial uses of dynamic simulation are many and range from nuclear power, steam turbines, 6 degrees of freedom vehicle modeling, electric motors, econometric models, biological systems, robot arms, mass-spring-damper systems, hydraulic systems, and drug dose migration through the human body to name a few. These models can often be run in real time to give a virtual response close to the actual system. This is useful in process control and mechatronic systems for tuning the automatic control systems before they are connected to the real system, or for human training before they control the real system. Simulation is also used in computer games and animation and can be accelerated by using a physics engine, the technology used in many powerful computer graphics software programs, like 3ds Max, Maya, Lightwave, and many others to simulate physical characteristics. In computer animation, things like hair, cloth, liquid, fire, and particles can be easily modeled, while the human animator animates simpler objects. 
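The fixed-step numerical integration described above can be illustrated with a short Python sketch. The mass-spring-damper model, parameter values and step size below are illustrative assumptions rather than a reference implementation from any particular simulation package.

```python
# Minimal fixed-step simulation of a mass-spring-damper system,
# x'' = (F - c*x' - k*x) / m, written as two first-order state equations.
# Parameter values and the step size are illustrative assumptions.

def derivatives(state, m=1.0, c=0.4, k=2.0, force=0.0):
    """Return d(state)/dt for state = (position, velocity)."""
    x, v = state
    a = (force - c * v - k * x) / m   # Newton's second law
    return (v, a)

def rk4_step(state, dt):
    """Advance the state by one fixed step using classical Runge-Kutta."""
    k1 = derivatives(state)
    k2 = derivatives(tuple(s + 0.5 * dt * d for s, d in zip(state, k1)))
    k3 = derivatives(tuple(s + 0.5 * dt * d for s, d in zip(state, k2)))
    k4 = derivatives(tuple(s + dt * d for s, d in zip(state, k3)))
    return tuple(s + dt / 6.0 * (a + 2 * b + 2 * c_ + d)
                 for s, a, b, c_, d in zip(state, k1, k2, k3, k4))

def simulate(state=(1.0, 0.0), dt=0.01, t_end=5.0):
    """Step through the time interval and record the transient response."""
    t, history = 0.0, [(0.0, state)]
    while t < t_end:
        state = rk4_step(state, dt)
        t += dt
        history.append((t, state))
    return history

if __name__ == "__main__":
    for t, (x, v) in simulate()[::100]:        # print roughly once per simulated second
        print(f"t={t:4.1f}s  position={x:+.4f}  velocity={v:+.4f}")
```

Halving the step size and comparing the results is a simple way to check whether a fixed step is small enough; an adaptive-step method essentially automates that check by adjusting the step to stay within an error tolerance.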
Computer-based dynamic animation was first used at a very simple level in the 1989 Pixar short film Knick Knack to move the fake snow in the snowglobe and pebbles in a fish tank. Example of dynamic simulation This animation was made with system dynamics software and a 3D modeler. The calculated values are associated with parameters of the rod and crank. In this example the crank is the driver: as we vary its speed of rotation, its radius, and the length of the rod, the piston follows, as sketched in the example below. See also Comparison of system dynamics software — includes packages not listed below Simulink — A MATLAB-based graphical programming environment for modeling, simulating and analyzing dynamical systems MSC Adams — A multibody dynamics simulation software SimulationX — Software for simulating multi-domain dynamic systems AMESim — Software for simulating multi-domain dynamic systems AGX Multiphysics — A physics engine for simulating multi-domain dynamic systems EcosimPro — A simulation tool for modeling continuous-discrete systems Hopsan — Software for simulating multi-domain dynamic systems MapleSim — Software for simulating multi-domain dynamic systems Modelica — A non-proprietary, object-oriented, equation-based language for dynamic simulation Physics engine VisSim — A visual language for nonlinear dynamic simulation EICASLAB — A software suite allowing nonlinear dynamic simulation PottersWheel — A Matlab toolbox to calibrate parameters of dynamic systems Simcad Pro — A dynamic and interactive discrete event simulation software References External links Textbook and lectures on dynamic simulation Dynamic System Simulation Computer physics engines Control theory Electromechanical engineering Embedded systems Gears Ordinary differential equations Partial differential equations Systems engineering Industrial automation Simulation software
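As a worked illustration of the slider-crank example described above, the sketch below computes the piston position from the crank angle using the standard slider-crank relation. The crank radius, rod length and rotation speed are assumed values, not taken from the animation itself.

```python
import math

# Slider-crank kinematics: the crank drives, the piston follows.
# r = crank radius, l = connecting-rod length, omega = rotation speed.
# All numerical values are illustrative assumptions.

def piston_position(theta, r=0.05, l=0.15):
    """Distance from the crank centre to the piston pin for crank angle theta."""
    return r * math.cos(theta) + math.sqrt(l**2 - (r * math.sin(theta))**2)

def sweep(omega=2.0 * math.pi, t_end=1.0, steps=8):
    """Sample the piston position over one revolution of the crank."""
    for i in range(steps + 1):
        t = t_end * i / steps
        theta = omega * t
        print(f"t={t:.3f}s  crank angle={math.degrees(theta):6.1f} deg  "
              f"piston={piston_position(theta) * 1000:6.2f} mm")

if __name__ == "__main__":
    sweep()
```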
8736181
https://en.wikipedia.org/wiki/Electronic%20nose
Electronic nose
An electronic nose is an electronic sensing device intended to detect odors or flavors. The expression "electronic sensing" refers to the capability of reproducing human senses using sensor arrays and pattern recognition systems. Since 1982, research has been conducted to develop technologies, commonly referred to as electronic noses, that could detect and recognize odors and flavors. The stages of the recognition process are similar to human olfaction and are performed for identification, comparison, quantification and other applications, including data storage and retrieval. Some such devices are used for industrial purposes. Other techniques to analyze odors In all industries, odor assessment is usually performed by human sensory analysis, by chemosensors, or by gas chromatography. The latter technique gives information about volatile organic compounds but the correlation between analytical results and mean odor perception is not direct due to potential interactions between several odorous components. In the Wasp Hound odor detector, the mechanical element is a video camera and the biological element consists of five parasitic wasps that have been conditioned to swarm in response to the presence of a specific chemical. History Scientist Alexander Graham Bell popularized the notion that it was difficult to measure a smell, remarking on the problem in 1914. In the decades since Bell made this observation, no such science of odor materialised, and it was not until the 1950s and beyond that any real progress was made. A common problem in odor detection is that it involves measuring not energy but physical particles. Working principle The electronic nose was developed in order to mimic human olfaction, which functions as a non-separative mechanism: i.e. an odor/flavor is perceived as a global fingerprint. Essentially the instrument consists of headspace sampling, a chemical sensor array, and pattern recognition modules, to generate signal patterns that are used for characterizing odors. Electronic noses include three major parts: a sample delivery system, a detection system, and a computing system. The sample delivery system enables the generation of the headspace (volatile compounds) of a sample, which is the fraction analyzed. The system then injects this headspace into the detection system of the electronic nose. The sample delivery system is essential to guarantee constant operating conditions. The detection system, which consists of a sensor set, is the "reactive" part of the instrument. When in contact with volatile compounds, the sensors react, which means they experience a change of electrical properties. In most electronic noses, each sensor is sensitive to all volatile molecules but each in their specific way. However, in bio-electronic noses, receptor proteins which respond to specific odor molecules are used. Most electronic noses use chemical sensor arrays that react to volatile compounds on contact: the adsorption of volatile compounds on the sensor surface causes a physical change of the sensor. A specific response is recorded by the electronic interface transforming the signal into a digital value. Recorded data are then computed based on statistical models. Bio-electronic noses use olfactory receptors - proteins cloned from biological organisms, e.g. humans, that bind to specific odor molecules. One group has developed a bio-electronic nose that mimics the signaling systems used by the human nose to perceive odors at a very high sensitivity: femtomolar concentrations. 
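A minimal sketch of the fingerprint idea described in the working-principle section above: each sensor's change of electrical properties relative to a clean-air baseline is collected into one feature vector. The number of sensors and the resistance values below are hypothetical.

```python
# Sketch of turning raw chemical-sensor readings into an odor "fingerprint".
# A common, simple feature is each sensor's relative change from a clean-air
# baseline once the sample headspace reaches the array.
# Sensor count and resistance values are hypothetical.

def fingerprint(baseline, response):
    """Relative change of each sensor signal, forming a global fingerprint."""
    return [(r - b) / b for b, r in zip(baseline, response)]

if __name__ == "__main__":
    clean_air = [210.0, 180.0, 95.0, 310.0]   # baseline resistances (kOhm), assumed
    sample    = [160.0, 150.0, 90.0, 240.0]   # readings with sample headspace, assumed
    print([round(x, 3) for x in fingerprint(clean_air, sample)])
```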
The more commonly used sensors for electronic noses include:
metal–oxide–semiconductor field-effect transistor (MOSFET) devices - transistors used for amplifying or switching electronic signals. This works on the principle that molecules entering the sensor area will be charged either positively or negatively, which should have a direct effect on the electric field inside the MOSFET. Thus, introducing each additional charged particle will directly affect the transistor in a unique way, producing a change in the MOSFET signal that can then be interpreted by pattern recognition computer systems. So essentially each detectable molecule will have its own unique signal for a computer system to interpret.
conducting polymers - organic polymers that conduct electricity.
polymer composites - similar in use to conducting polymers but formulated of non-conducting polymers with the addition of conducting material such as carbon black.
quartz crystal microbalance (QCM) - a way of measuring mass per unit area by measuring the change in frequency of a quartz crystal resonator. This can be stored in a database and used for future reference.
surface acoustic wave (SAW) - a class of microelectromechanical systems (MEMS) which rely on the modulation of surface acoustic waves to sense a physical phenomenon.
Mass spectrometers can be miniaturised to form general-purpose gas analysis devices. Some devices combine multiple sensor types in a single device, for example polymer-coated QCMs. The independent information leads to vastly more sensitive and efficient devices. Studies of airflow around canine noses and tests on life-size models have indicated that a cyclic 'sniffing action' similar to that of a real dog is beneficial in terms of improved range and speed of response. In recent years, other types of electronic noses have been developed that utilize mass spectrometry or ultra-fast gas chromatography as a detection system. The computing system works to combine the responses of all of the sensors, which represents the input for the data treatment. This part of the instrument performs global fingerprint analysis and provides results and representations that can be easily interpreted. Moreover, the electronic nose results can be correlated to those obtained from other techniques (sensory panel, GC, GC/MS). Many of the data interpretation systems are used for the analysis of results. These systems include artificial neural networks (ANN), fuzzy logic, pattern recognition modules, etc. Artificial intelligence, including artificial neural networks (ANN), is a key technique for environmental odour management. Performing an analysis As a first step, an electronic nose needs to be trained with qualified samples so as to build a database of reference. Then the instrument can recognize new samples by comparing a volatile compound's fingerprint to those contained in its database, as sketched in the example below. Thus they can perform qualitative or quantitative analysis. This, however, may also pose a problem, as many odors are made up of multiple different molecules, which may be wrongly interpreted by the device as it will register them as different compounds, resulting in incorrect or inaccurate results depending on the primary function of a nose. An example e-nose dataset is also available. This dataset can be used as a reference for e-nose signal processing, notably for meat quality studies. The two main objectives of this dataset are multiclass beef classification and microbial population prediction by regression. 
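Continuing the sketch, the pattern-recognition stage can be approximated with a tiny nearest-neighbour comparison against a reference database built from training samples. Real instruments use statistical models or artificial neural networks; the reference odors and feature values here are invented.

```python
import math

# Simplified pattern-recognition stage: a reference database of fingerprints is
# built from qualified training samples, and a new sample is recognised by
# comparing its fingerprint to the database. Nearest neighbour stands in for the
# statistical or neural-network models used in real instruments; all odors and
# feature vectors below are hypothetical.

REFERENCE_DB = {
    "coffee":  [(-0.24, -0.17, -0.05, -0.23), (-0.22, -0.15, -0.06, -0.21)],
    "banana":  [(-0.05, -0.30, -0.12, -0.02), (-0.06, -0.28, -0.11, -0.03)],
    "ethanol": [(-0.40, -0.08, -0.02, -0.35), (-0.38, -0.09, -0.03, -0.33)],
}

def classify(sample):
    """Return the reference odor whose training fingerprints lie closest."""
    best_label, best_d = None, float("inf")
    for label, examples in REFERENCE_DB.items():
        d = min(math.dist(sample, e) for e in examples)   # Euclidean distance
        if d < best_d:
            best_label, best_d = label, d
    return best_label, best_d

if __name__ == "__main__":
    unknown = (-0.23, -0.16, -0.05, -0.22)   # fingerprint of a new headspace sample
    print(classify(unknown))                 # expected: ('coffee', small distance)
```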
Applications Electronic nose instruments are used by research and development laboratories, quality control laboratories and process & production departments for various purposes:
In quality control laboratories
Conformity of raw materials, intermediate and final products
Batch to batch consistency
Detection of contamination, spoilage, adulteration
Origin or vendor selection
Monitoring of storage conditions
Meat quality monitoring.
In process and production departments
Managing raw material variability
Comparison with a reference product
Measurement and comparison of the effects of manufacturing process on products
Following-up cleaning in place process efficiency
Scale-up monitoring
Cleaning in place monitoring.
In product development phases
Sensory profiling and comparison of various formulations or recipes
Benchmarking of competitive products
Evaluation of the impact of a change of process or ingredient on sensory features.
Possible and future applications in the fields of health and security
The detection of dangerous and harmful bacteria, for example with software that has been specifically developed to recognise the smell of MRSA (methicillin-resistant Staphylococcus aureus). It is also able to recognise methicillin-susceptible S. aureus (MSSA) among many other substances. It has been theorised that if carefully placed in hospital ventilation systems, it could detect and therefore prevent contamination of other patients or equipment by many highly contagious pathogens.
The detection of lung cancer or other medical conditions by detecting the VOCs (volatile organic compounds) that indicate the medical condition.
The detection of viral and bacterial infections in COPD exacerbations.
The quality control of food products as it could be conveniently placed in food packaging to clearly indicate when food has started to rot or used in the field to detect bacterial or insect contamination.
Nasal implants could warn of the presence of natural gas, for those who have anosmia or a weak sense of smell.
The Brain Mapping Foundation used the electronic nose to detect brain cancer cells.
Possible and future applications in the field of crime prevention and security
The ability of the electronic nose to detect odorless smells makes it ideal for use in the police force, such as the ability to detect bomb odors despite other airborne odors capable of confusing police dogs. However, this is unlikely in the near term as the cost of the electronic nose is quite high.
It may also be used as a drug detection method in airports. Through careful placement of several electronic noses and effective computer systems, one could triangulate the location of drugs to within a few metres in less than a few seconds.
Demonstration systems that detect the vapours given off by explosives exist, but are currently some way behind a well trained sniffer dog.
In environmental monitoring
For identification of volatile organic compounds in air, water and soil samples.
For environmental protection.
Various application notes describe analysis in areas such as flavor and fragrance, food and beverage, packaging, pharmaceutical, cosmetic and perfumes, and chemical companies. More recently they can also address public concerns in terms of olfactive nuisance monitoring with networks of on-field devices. Since emission rates on a site can be extremely variable for some sources, the electronic nose can provide a tool to track fluctuations and trends and assess the situation in real time. 
It improves understanding of critical sources, leading to pro-active odor management. Real time modeling will present the current situation, allowing the operator to understand which periods and conditions are putting the facility at risk. Also, existing commercial systems can be programmed to have active alerts based on set points (odor concentration modeled at receptors/alert points or odor concentration at a nose/source) to initiate appropriate actions. See also Chemical field-effect transistor Chemiresistor Electronic skin Electronic tongue Fluctuation-enhanced sensing Machine olfaction Olfactometer References External links NASA researchers are developing an exquisitely sensitive artificial nose for space exploration BBC News on Electronic Nose based bacterial detection Laboratory equipment Olfaction Emerging technologies Nose
70370
https://en.wikipedia.org/wiki/Linus%27s%20law
Linus's law
In software development, Linus's law is the assertion that "given enough eyeballs, all bugs are shallow". The law was formulated by Eric S. Raymond in his essay and book The Cathedral and the Bazaar (1999), and was named in honor of Linus Torvalds. A more formal statement is: "Given a large enough beta-tester and co-developer base, almost every problem will be characterized quickly and the fix obvious to someone." Presenting the code to multiple developers with the purpose of reaching consensus about its acceptance is a simple form of software reviewing. Researchers and practitioners have repeatedly shown the effectiveness of reviewing processes in finding bugs and security issues. Validity In Facts and Fallacies about Software Engineering, Robert Glass refers to the law as a "mantra" of the open source movement, but calls it a fallacy due to the lack of supporting evidence and because research has indicated that the rate at which additional bugs are uncovered does not scale linearly with the number of reviewers; rather, there is a small maximum number of useful reviewers, between two and four, and additional reviewers above this number uncover bugs at a much lower rate. While closed-source practitioners also promote stringent, independent code analysis during a software project's development, they focus on in-depth review by a few and not primarily the number of "eyeballs". The persistence of the Heartbleed security bug in a critical piece of code for two years has been considered as a refutation of Raymond's dictum. Larry Seltzer suspects that the availability of source code may cause some developers and researchers to perform less extensive tests than they would with closed source software, making it easier for bugs to remain. In 2015, the Linux Foundation's executive director Jim Zemlin argued that the complexity of modern software has increased to such levels that specific resource allocation is desirable to improve its security. Regarding some of 2014's largest global open source software vulnerabilities, he says, "In these cases, the eyeballs weren't really looking". Large scale experiments or peer-reviewed surveys to test how well the mantra holds in practice have not been performed. Empirical support of the validity of Linus's law was obtained by comparing popular and unpopular projects of the same organization. Popular projects are projects with the top 5% of GitHub stars (7,481 stars or more). Bug identification was measured using the corrective commit probability, the ratio of commits determined to be related to fixing bugs. The analysis showed that popular projects had a higher ratio of bug fixes (e.g., Google's popular projects had a 27% higher bug fix rate than Google's less popular projects). Since it is unlikely that Google lowered its code quality standards in more popular projects, this is an indication of increased bug detection efficiency in popular projects. See also Code audit Crowdsourcing List of eponymous laws Software peer review Wisdom of the crowd References Further reading Computer architecture statements Computer-related introductions in 1999 Computing culture Free software culture and documents Linus Torvalds Linux
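The corrective commit probability used in the empirical comparison described in the Linus's law article above can be roughly estimated from a project's commit messages. The keyword heuristic below is a simplified stand-in for the classification used in the published analysis, and the sample log is invented.

```python
# Rough estimate of the "corrective commit probability": the share of commits
# whose messages indicate a bug fix. A keyword heuristic stands in for the
# linguistic model used in the published analysis; sample messages are invented.

BUG_KEYWORDS = ("fix", "bug", "defect", "patch", "fault", "error", "crash")

def is_corrective(message):
    """Heuristically decide whether a commit message describes a bug fix."""
    m = message.lower()
    return any(word in m for word in BUG_KEYWORDS)

def corrective_commit_probability(messages):
    """Fraction of commits classified as corrective."""
    if not messages:
        return 0.0
    return sum(is_corrective(m) for m in messages) / len(messages)

if __name__ == "__main__":
    log = [
        "Add streaming API",
        "Fix crash when config file is missing",
        "Refactor scheduler",
        "Bug: off-by-one in pagination, patched",
        "Update documentation",
    ]
    print(f"corrective commit probability: {corrective_commit_probability(log):.2f}")
```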
39266078
https://en.wikipedia.org/wiki/Sheelagh%20Carpendale
Sheelagh Carpendale
Sheelagh Carpendale is a Canadian artist and computer scientist working in the field of Information Visualization and Human-Computer Interaction. Profession Carpendale is a Professor at the School of Computing Science at Simon Fraser University where she holds an NSERC/SMART Industrial Research Chair in Interactive Technologies. She was previously a professor at University of Calgary where she held a Canada Research Chair in Information Visualization and an NSERC/AITF/SMART Industrial Research Chair in Interactive Technologies. She directs the Innovations in Visualization (InnoVis.) research group. At University of Calgary, she founded the interdisciplinary graduate group, Computational Media Design. Her research on information visualization, large interactive displays, and new media art draws on her dual background in Computer Science (BSc. and Ph.D. Simon Fraser University) and Visual Arts (Sheridan College, School of Design and Emily Carr, College of Art). College Carpendale left high school with science scholarships but instead initially opted for fine arts, attending Sheridan College, School of Design and Emily Carr, Institute of Art and Design. For ten years she worked professionally in the arts. During this time she was part of establishing the Harbourfront Arts Centre at York Quay, in Toronto. Subsequently, she has reconnected with her interests in math and science and studied computer science at Simon Fraser University. Her research expertise focuses on information visualization, interaction design, and qualitative empirical work and includes such projects as: visualizing ecological dynamics, using visualization to integrate scientific results and sounds from Antarctica to create a tool to inspire musical composition, visualizing uncertainty, visualizing social activities, and multi-touch and tabletop interaction. She has found the combined visual arts and computer science background invaluable in her research. Recognition She is the recipient of several major awards including the British Academy of Film and Television Arts Award (BAFTA) for Off-line Learning as well as academic and industrial grants from Natural Sciences and Engineering Research Council, Intel Inc., Canada Foundation for Innovation, and Forest Renewal British Columbia. In 2012 she was awarded the NSERC E.W.R. Steacie Memorial Fellowship, which is given to the six top scientists nationally across all NSERC research areas and within 12 years of their PhD. She will also be featured in Canada's Science, Technology and Innovation Council State of the Nation 2012 report. In 2013, she was awarded the Canadian Human Computer Communications Society (CHCCS) Achievement Award, which is presented periodically to a Canadian researcher who has made a substantial contribution to the fields of computer graphics, visualization, or human- computer interaction. She was elected to the CHI Academy and received the IEEE VGTC Visualization Career Award in 2018. In 2021, she was inducted as a fellow of the Royal Society of Canada, a top honour for Canadian researchers. References External links Sheelagh Carpendale's website Living people Canadian computer scientists Canadian women computer scientists Human–computer interaction researchers Year of birth missing (living people) Fellows of the Royal Society of Canada
6855405
https://en.wikipedia.org/wiki/Hz-program
Hz-program
Hz-program was a proprietary, patented typographic composition computer program, created by German typeface designer Hermann Zapf. The goal of this program was "to produce the perfect grey type area without the rivers and holes of too-wide word spacing." History In a 1993 essay, Zapf explained the history of Hz-program, which included work at Harvard University prior to his current work at the Rochester Institute of Technology, the first university in the world to establish a chair for research and development on the basic structures of typographic computer programs. He cited the development of the Macintosh as a big step: ... in 1984 Steve Jobs with his Macintosh started in a completely new direction. New software was needed, and typographic presentation on the screen could be more varied and easier to handle. The possibility of getting various typefaces without any big investment enlarged the typographic palette very quickly in the following years. More and more quality was wanted, and plenty of computer space was now available and cheap for everybody. Software was offered for all kinds of solutions from many new companies. This was the time formed to begin work again on a high-level typographic computer program. People now took such ideas seriously and not just as the dreams of a perfectionist. What was tailored at RIT in the seventies has been refined in a final version together with URW in Hamburg since 1988. Our goal was to include all the digital developments available. How it works Little is known about the composition algorithm created by Zapf and implemented in Hz-program; in the same essay, Zapf stated it is "partly based on a typographically acceptable expansion or condensing of letters, called scaling. Connected with this is a kerning program, which calculates kerning values at 100 pairs per second. The kerning is not limited only to negative changes of space between two critical characters, but also allows in some cases positive kerning, which means the addition of space." The Hz-program was patented by URW (the patent expired in July 2010). Later, it was acquired by Adobe Systems for inclusion as the composition engine in Adobe InDesign application. It is not known if the Hz-program algorithm is still included in latest releases of InDesign. According to Zapf, Hàn Thế Thành made a detailed analysis of the Hz-program for microtypography extensions to the TeX typesetting system and implemented them in pdfTeX. These are available as part of the LaTeX and ConTeXt typesetting packages. Myth The quality of the text composition produced by Hz-program, together with the lack of details of its inner working, created some mythology about it. Zapf greatly contributed to this, claiming to have reached the same level as Johannes Gutenberg. The particular technique of condensing and expanding characters (glyph scaling) which is an essential part of the Hz-program, and which is now an option in Adobe InDesign and pdfTeX, has aroused critique from well-known designers like Ari Rafaeli. Typographer Torbjørn Eng has raised serious doubts about the validity of referencing the glyph scaling to Gutenberg. References Typography
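The glyph-scaling component of the Hz-program described above, condensing or expanding letters within a small tolerance so that word spacing stays even, can be sketched as follows. This is a toy illustration under assumed widths and limits, not Zapf's patented algorithm.

```python
# Toy illustration of glyph scaling for justification: instead of stretching
# only the interword spaces, each glyph may be scaled by a small percentage
# (here at most 2%) so the remaining space adjustment is less visible.
# This is not Zapf's patented method; the widths and limits are invented.

MAX_GLYPH_SCALE = 0.02   # at most +/- 2% expansion or condensation

def justify(word_widths, space_width, line_width):
    """Return (glyph_scale, adjusted_space) that fills the line exactly."""
    n_spaces = len(word_widths) - 1
    text_width = sum(word_widths)
    natural = text_width + n_spaces * space_width
    shortfall = line_width - natural

    # First absorb as much as allowed by scaling the glyphs themselves ...
    scale = max(-MAX_GLYPH_SCALE, min(MAX_GLYPH_SCALE, shortfall / text_width))
    remaining = line_width - text_width * (1 + scale) - n_spaces * space_width

    # ... then distribute whatever is left over the interword spaces.
    adjusted_space = space_width + remaining / n_spaces
    return scale, adjusted_space

if __name__ == "__main__":
    words = [42.0, 55.0, 31.0, 60.0]   # advance widths in points, invented
    scale, space = justify(words, space_width=6.0, line_width=210.0)
    print(f"glyph scale: {scale:+.3%}, word space: {space:.2f} pt")
```

The design point the sketch tries to capture is that a barely perceptible change to letter widths can absorb most of a line's shortfall, leaving the visible word spaces almost uniform and the "grey" of the page more even.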
51555039
https://en.wikipedia.org/wiki/Meizu%20U20
Meizu U20
The Meizu U20 is a smartphone designed and produced by the Chinese manufacturer Meizu, which runs on Flyme OS, Meizu's modified Android operating system. It was introduced together with the Meizu U10 as part of the new U series of Meizu on August 24, 2016. Release There was little information known to the public before the launch of the device on August 24, 2016. Unlike previous Meizu devices, the U series was silently launched through the official Meizu website without an actual launch event. Features Flyme The Meizu U20 was released with an updated version of Flyme OS, a modified operating system based on Android Marshmallow. It features an alternative, flat design and improved one-handed usability. Hardware and design The Meizu U20 features a MediaTek Helio P10 system-on-a-chip with an array of eight ARM Cortex-A53 CPU cores, an ARM Mali-T860 MP2 GPU and 2 or 3 GB of RAM. The U20 is available in four different colors (white, black, champagne gold and rose gold) and comes with either 2 GB of RAM and 16 GB of internal storage or with 3 GB of RAM and 32 GB of internal storage. The Meizu U20 has a full-metal frame, while the front and the back are made out of glass. The U20 measures x x and weighs . It has a slate form factor, being rectangular with rounded corners and has only one central physical button at the front. Unlike most other Android smartphones, the U20 has neither capacitive buttons nor on-screen buttons. The functionality of these keys is implemented using a technology called mBack, which makes use of gestures with the physical button. The U20 further extends this button with a fingerprint sensor called mTouch. The U20 features a fully laminated 5.5-inch multi-touch capacitive touchscreen display with a full HD resolution of 1080 by 1920 pixels. The pixel density of the display is 400 ppi. In addition to the touchscreen input and the front key, the device has volume/zoom control buttons and the power/lock button on the right side, a 3.5mm TRS audio jack on the top and a microUSB (Micro-B type) port on the bottom for charging and connectivity. The Meizu U20 has two cameras. The rear camera has a resolution of 13 MP, a ƒ/2.2 aperture, a 5-element lens, phase-detection autofocus and an LED flash. The front camera has a resolution of 5 MP, a ƒ/2.0 aperture and a 4-element lens. See also Meizu Meizu U10 Comparison of smartphones References External links Official product page Meizu Android (operating system) devices Mobile phones introduced in 2016 Meizu smartphones Discontinued smartphones
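The 400 ppi figure quoted for the Meizu U20 above can be checked from the stated resolution and screen diagonal using the standard pixel-density formula; the sketch below simply applies that arithmetic.

```python
import math

# Pixel density check: ppi = diagonal resolution in pixels / diagonal size in inches.

def pixels_per_inch(width_px, height_px, diagonal_in):
    return math.hypot(width_px, height_px) / diagonal_in

if __name__ == "__main__":
    ppi = pixels_per_inch(1080, 1920, 5.5)
    print(f"{ppi:.1f} ppi")   # ~400.5 ppi, consistent with the ~400 ppi figure above
```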
51708172
https://en.wikipedia.org/wiki/Yahoo%21%20data%20breaches
Yahoo! data breaches
The Internet service company Yahoo! was subject to the largest data breach on record. Two major data breaches of user account data to hackers were revealed during the second half of 2016. The first announced breach, reported in September 2016, had occurred sometime in late 2014, and affected over 500 million Yahoo! user accounts. A separate data breach, occurring earlier around August 2013, was reported in December 2016. Initially believed to have affected over 1 billion user accounts, Yahoo! later affirmed in October 2017 that all 3 billion of its user accounts were impacted. Both breaches are considered the largest discovered in the history of the Internet. Specific details of material taken include names, email addresses, telephone numbers, encrypted or unencrypted security questions and answers, dates of birth, and hashed passwords. Further, Yahoo! reported that the late 2014 breach likely used manufactured web cookies to falsify login credentials, allowing hackers to gain access to any account without a password. Yahoo! has been criticized for their late disclosure of the breaches and their security measures, and is currently facing several lawsuits as well as investigation by members of the United States Congress. The breaches impacted Verizon Communications's July 2016 plans to acquire Yahoo! for about $4.8 billion, which resulted in a decrease of $350 million in the final price on the deal closed in June 2017. Description July 2016 discovery Around July 2016, account names and passwords for about 200 million Yahoo! accounts were presented for sale on the darknet market site, "TheRealDeal". The seller, known as "Peace_of_Mind" or simply "Peace", stated in confidential interviews with Vice and Wired, that he had the data for some time and had been selling it privately since about late 2015. Peace has previously been connected to sales of similar private information data from other hacks including that from the 2012 LinkedIn hack. Peace stated the data likely dates back to 2012, and security experts believed it may have been parts of other data hacks at that time; while some of the sample accounts were still active, they lacked necessary information to fully login properly, reflecting their age. Experts believe that Peace is only a broker of the information that hackers obtain and sell through him. Yahoo! stated they were aware of the data and were evaluating it, cautioning users about the situation but did not reset account passwords at that time. Late 2014 breach The first reported data breach in 2016 had taken place sometime in late 2014, according to Yahoo! The hackers had obtained data from over 500 million user accounts, including account names, email addresses, telephone numbers, dates of birth, hashed passwords, and in some cases, encrypted or unencrypted security questions and answers. Security experts noted that the majority of Yahoo!'s passwords used the bcrypt hashing algorithm, which is considered difficult to crack, with the rest using the older MD5 algorithm, which can be broken rather quickly. Such information, especially security questions and answers, could help hackers break into victims' other online accounts. Computer security experts cautioned that the incident could have far-reaching consequences involving privacy, potentially including finance and banking as well as personal information of people's lives, including information pulled from any other accounts that can be hacked with the gained account data. 
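The point made above about hashing algorithms (bcrypt being difficult to crack, MD5 quick to break) comes down to how many password guesses per second an attacker can try against a stolen hash. The sketch below compares a fast digest with a deliberately slow, work-factored derivation from the Python standard library; PBKDF2 is used here only as a stand-in for bcrypt, and the measured rates depend on the machine.

```python
import hashlib
import time

# Why the hashing algorithm matters for leaked password databases: a fast digest
# like MD5 allows enormous guessing rates, while work-factored schemes (bcrypt in
# Yahoo!'s case; PBKDF2 used below as a standard-library stand-in) make each
# guess deliberately expensive. Figures vary by machine.

def guesses_per_second(hash_once, trials):
    start = time.perf_counter()
    for i in range(trials):
        hash_once(f"password{i}".encode())
    return trials / (time.perf_counter() - start)

if __name__ == "__main__":
    md5_rate = guesses_per_second(lambda pw: hashlib.md5(pw).hexdigest(), trials=200_000)
    slow_rate = guesses_per_second(
        lambda pw: hashlib.pbkdf2_hmac("sha256", pw, b"per-user-salt", 100_000),
        trials=50,
    )
    print(f"MD5:    ~{md5_rate:,.0f} guesses/second")
    print(f"PBKDF2: ~{slow_rate:,.0f} guesses/second (bcrypt is in the same slow class)")
```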
Experts also noted that there may be millions of people with Flickr, Sky and/or BT accounts who do not realize that they indirectly have a Yahoo! account as a result of past acquisitions and agreements made with Yahoo!, or even Yahoo! users who stopped using their accounts years earlier. Yahoo! reported the breach to the public on September 22, 2016. Yahoo! believes the breach was committed by "state-sponsored" hackers, but did not name any country. Yahoo! affirmed the hacker was no longer in their systems and that the company was fully cooperating with law enforcement. The Federal Bureau of Investigation (FBI) confirmed that it was investigating the affair. In its November 2016 SEC filing, Yahoo! reported they had been aware of an intrusion into their network in 2014, but had not understood the extent of the breach until it began investigation of a separate data breach incident around July 2016. Wired believes this separate data breach involved the Peace data from July 2016. Yahoo!'s previous SEC filing on September 9, prior to the breach announcement, had stated that it was not aware of any "security breaches" or "loss, theft, unauthorized access or acquisition" of user data. The November 2016 SEC filing noted that the company believed the data breach had been conducted through a cookie-based attack that allowed hackers to authenticate as any other user without their password. Yahoo! and its outside security analysts confirmed this was the method of intrusion in their December 2016 announcement of the August 2013 data breach, and had invalidated all previous cookies to eliminate this route. In a regulatory filing in 2017, Yahoo! reported that 32 million accounts were accessed through this cookie-based attack through 2015 and 2016. Multiple experts believe that the security breach was the largest such incident made public in the history of the Internet at the time. August 2013 breach The first data breach occurred on Yahoo! servers in August 2013; Yahoo! stated this was a separate breach from the late 2014 one and was conducted by an "unauthorized third party". Similar data as from the late 2014 breach had been taken from over 1 billion user accounts, including unencrypted security questions and answers. Yahoo! reported the breach on December 14, 2016, and forced all affected users to change passwords, and to reenter any unencrypted security questions and answers to make them encrypted in the future. In February 2017, Yahoo! notified some users that data from the breach and forged cookies could have been used to access these accounts. This breach is now considered the largest known breach of its kind on the Internet. In October 2017, Yahoo! updated its assessment of the hack, and stated that it believes all of its 3 billion accounts at the time of the August 2013 breach were affected. According to Yahoo! this new breach was discovered while it was reviewing data given to them from law enforcement from an unnamed third-party hacker about a month prior. They had been able to identify the method by which data were taken from the last 2014 hack using fake cookies during this investigation, but the method of the August 2013 breach was not clear to them upon their announcement. Andrew Komarov, chief intelligence officer of the cybersecurity firm InfoArmor, had been helping Yahoo! and law enforcement already in response to the Peace data. 
In trying to track down the source of Peace's data, he discovered evidence of this latest breach from a dark web seller offering a list of more than one billion Yahoo! accounts for about $300,000 in August 2015. While two of the three buyers of this data were found to be underground spammers, the third buyer had specifically asked the seller of the Yahoo! data to affirm if ten names of United States and foreign government officials were on the offered list and information associated with them. Suspecting that this buyer may have been related to a foreign intelligence agency, Komarov discovered that the offered data included the accounts of over 150,000 names of people working for the United States government and military, as well as additional accounts associated with European Union, Canadian, British, and Australian governments. Komarov alerted the appropriate agencies about this new data set and began working with them directly. Komarov noted that while U.S. government policies have changed to keep key intelligence employees as low-key as possible, these affected users likely set up Yahoo! accounts for personal use well before such policies were in place, and included their work details as part of their profiles, making this information highly valuable for foreign intelligence groups. Komarov had opted not to go to Yahoo! about the data, as they had previously been dismissive of InfoArmor's services in the past, and Komarov believed that Yahoo! would not thoroughly investigate the situation as it would threaten their Verizon buyout. In addition to government issues, Komarov and other security firms warned that the data from this breach can be used to attempt access to other accounts, since it included backup email contact addresses and security questions. Such data, these experts warn, could be used to create phishing attacks to lure users into revealing sensitive information which can then be used for malicious purposes. Hold Security, another cybersecurity firm, observed that some darkweb sellers were still selling this database for up to $200,000 as late as October 2016; Komarov found that the data continues to be available at a much lower price since the passwords have been forced changed, but the data can still be valuable for phishing attacks and gaining access to other accounts. Attribution and motivation According to Yahoo!, the 2014 breach was carried out by a "state-sponsored actor" and the organization claims that such "intrusions and thefts by state-sponsored actors have become increasingly common across the technology industry". While Yahoo! did not name any country, some suspect China or Russia to be behind the hack, while others doubt Yahoo's claim of any state actor. U.S. intelligence officials, who declined to give their names to the media, highlighted similarities between the attack and previous breaches linked to the Russian government. Yahoo! in fall 2014 detected what it believed was a small breach "involving 30 to 40 accounts", carried out by hackers believed to be "working on behalf of the Russian government", according to Yahoo! executives, because it was launched from computers in that country. Yahoo! reported the incident to the FBI in late 2014 and notified affected users. Sean Sullivan, a security adviser at cyber security firm F-Secure Labs, declared China to be his top suspect and said that "there have been no past cases of a service provider like Yahoo! 
being targeted [by Russia]," whose hackers tend to perpetrate targeted attacks, either in areas important for their economy, such as the energy sector, or to undermine politicians, while "China likes to vacuum up all kinds of information" and "has a voracious appetite for personal information". Examples of state-sponsored data breaches with China in suspicion include the massive data breach of 18 million people from the United States Office of Personnel Management and the attacks on Google in 2010, dubbed Operation Aurora. Others expressed doubt about Yahoo's claim of the attack being state-sponsored, as it would be less embarrassing for Yahoo! to attribute an attack to a nation state, which typically have the most sophisticated hacking capabilities, than to attribute it to a cybercriminal group or individual—particularly as Yahoo! was in the middle of being acquired by Verizon. Senior research scientist Kenneth Geers from Comodo, however, noted that "Yahoo! is a strategic player on the World Wide Web, which makes it a good—and valid—target for nation-state intelligence collection". One of the effects, if not the direct goal, of the breaches was the use of the stolen usernames and passwords for credential stuffing attacks. InfoArmor issued a report that challenged Yahoo's claim that a nation-state orchestrated the heist after reviewing a small sample of compromised accounts. InfoArmor had been able to obtain the list of affected accounts for analysis. InfoArmor determined that the breach was likely the work of an Eastern European criminal gang that later sold the entire hacked database to at least three clients, including one state-sponsored group. According to InfoArmor, by early 2015, the group no longer offered to sell the full database, but sought "to extract something from the dump for significant amounts of money." The report noted that it was difficult to determine who the ultimate mastermind of a hack might be, as criminal hackers sometimes provide information to government intelligence agencies or offer their services for hire. Komarov said the hackers may be related to Group E, who have had a track record of selling stolen personal data on the dark web, primarily to underground spammers, and were previously linked to breaches at LinkedIn, Tumblr, and MySpace. InfoArmor had linked Group E as the source of the data that were offered by Peace, and believed that Group E was brokering the data to dark web sellers. While InfoArmor did not believe a state-sponsored agency committed the breach, they warned of implications on foreign intelligences, as the breaches "opens the door to significant opportunities for cyber-espionage and targeted attacks," and may be the key in several targeted attacks against U.S. government personnel, which resulted after the disclosed contacts of the affected high-level officials of intelligence community in October 2015. Yahoo! stated that the 2013 breach is connected "to the same state-sponsored actor believed to be responsible for the data theft the company disclosed on September 22, 2016." White House spokespersons stated that the FBI is currently investigating this breach, though the scope of its impact is unclear. A United States official, speaking to CBS News, says that government investigators agree with Yahoo! that the hack was sponsored by a foreign state, possibly Russia. 
Security experts speculated that because little of the data from the 2013 breach had been made available on the black market, the breach was likely targeted to find information on specific people. Prosecution On March 15, 2017, the FBI officially charged four men over the 2014 breach, including two who worked for Russia's Federal Security Service (FSB). In its statement, the FBI said "The criminal conduct at issue, carried out and otherwise facilitated by officers from an FSB unit that serves as the FBI's point of contact in Moscow on cybercrime matters, is beyond the pale." The four accused were Alexsey Belan, a hacker on the FBI Ten Most Wanted Fugitives list; FSB agents Dmitry Dokuchaev and Igor Sushchin, whom the FBI accused of paying Belan and other hackers to conduct the hack; and Canadian hacker Karim Baratov, whom the FBI claimed was paid by Dokuchaev and Sushchin to use data obtained from the Yahoo! breaches to break into about 80 non-Yahoo! accounts of specific targets. Baratov, the only one of the four to be arrested, was extradited to the United States and pleaded not guilty to the charges in August 2017. However, he later pleaded guilty, admitting to hacking into at least 80 email accounts on behalf of Russian contacts. He was charged with nine counts of hacking, and in May 2018 was sentenced to five years in prison and ordered to pay restitution to his victims. Legal and commercial responses Yahoo! Yahoo!'s delay in discovering and reporting these breaches, as well as in implementing improved security features, became a point of criticism. Yahoo! was taken to task for having a seemingly lax attitude towards security: the company reportedly did not implement new security features as fast as other Internet companies, and after Yahoo! was identified by Edward Snowden as a frequent target for state-sponsored hackers in 2013, it took the company a full year before hiring a dedicated chief information security officer, Alex Stamos. While Stamos' hiring was praised by technology experts as showing Yahoo!'s commitment to better security, Yahoo! CEO Marissa Mayer had reportedly denied Stamos and his security team sufficient funds to implement recommended stronger security measures, and he departed the company by 2015. Experts pointed out that, until the most recent breaches, Yahoo! had not forced affected users to change their passwords, a move that Mayer and her team believed would drive users away from the service. Some experts stated that implementing stronger security measures requires monetary resources, and that Yahoo!'s financial situation had not allowed the company to invest in cybersecurity. Yahoo!'s internal review of the situation found that Mayer and other key executives knew of the intrusions but failed to inform the company or take steps to prevent further breaches. The review led to the resignation of the company's principal lawyer, Ronald S. Bell, by March 2017, and Mayer's equity compensation bonus for 2016 and 2017 was pulled. Verizon Communications merger deal In July 2016, prior to the announcement of the breaches, Verizon Communications had negotiated and approved an agreement to purchase a portion of the Yahoo! properties for $4.8 billion, with the deal set to close in March 2017. Verizon became aware of the 2014 breach only two days prior to Yahoo!'s September announcement.
Verizon CEO Lowell McAdam said he was not shocked by the hack, saying "we all live in an internet world, it's not a question of if you're going to get hacked but when you are going to get hacked". He left the door open to possibly renegotiating the $4.83 billion price tag. Craig Silliman, Verizon's general counsel, told reporters in Washington that Verizon had "a reasonable basis to believe right now that the impact is material" and that the company was "looking to Yahoo to demonstrate [...] the full impact". Yahoo!'s online reputation suffered in the months that followed, according to an analysis by marketing firm Spredfast: about 90 percent of the Twitter comments about Yahoo! were negative in October, up from 68 percent in August, before news of the hack. Following the announcement of the August 2013 breach, Verizon was reportedly seeking to change the terms of the deal to reflect the impact of these breaches, including lowering its offer or potentially seeking court action to terminate the deal. Verizon stated that it would "review the impact of this new development before reaching any final conclusions". In February 2017, Verizon and Yahoo! announced that the deal would still go forward, but with the sale price reduced by $350 million, down to $4.48 billion. The deal officially closed at this reduced price in June 2017, with Mayer stepping down as CEO following the closure. Under the new terms, Verizon and Yahoo! agreed to share the ongoing costs of the government investigation of the breaches. The remaining properties of Yahoo! not purchased by Verizon, which included the stake in Alibaba Group, were renamed Altaba in June 2017. United States government Members of the U.S. government were critical of Yahoo!'s reactions to these breaches. In a letter to Yahoo! CEO Marissa Mayer, six Democratic U.S. Senators (Elizabeth Warren, Patrick Leahy, Al Franken, Richard Blumenthal, Ron Wyden and Ed Markey) demanded answers on when Yahoo! discovered the late 2014 breach and why it took so long to disclose it to the public, calling the time lag between the security breach and its disclosure "unacceptable". On September 26, 2016, Democratic Senator Mark Warner asked the U.S. Securities and Exchange Commission (SEC) to investigate whether Yahoo! and its senior executives had fulfilled their obligations under federal securities laws to properly disclose the attack. In his letter, Warner also asked the SEC to evaluate whether the current disclosure regime was adequate. Jacob Olcott, a former Senate Commerce Committee counsel who helped develop the SEC's data breach disclosure rules, noted that due to the size of the breach, intense public scrutiny and uncertainty over the timing of Yahoo!'s discovery, the hack could become a test case of the SEC's guidelines. Following the announcement of the August 2013 breach, Senator Warner called for a full investigation of the situation, asking "why its cyber defenses have been so weak as to have compromised over a billion users". In April 2018, the SEC announced that it had reached a settlement with Altaba, the company that holds the assets of Yahoo! not purchased by Verizon, over Altaba's failure to disclose the 2014 breach in a timely manner. Class action lawsuits By November 9, 2016, it was reported that 23 lawsuits related to the late 2014 breach had been filed against Yahoo!. In one lawsuit, filed in the U.S. District Court for the Southern District of California in San Diego, the plaintiffs contended that the hack caused an "intrusion into personal financial matters."
In another lawsuit, filed in the U.S. District Court for the Northern District of California in San Jose, the plaintiff contended that Yahoo! acted with gross negligence in dealing with and reporting the security breach. Yahoo! declined to comment on ongoing litigation. Five of these 23 cases were combined into a single suit in early December 2016, to be heard in San Jose in March 2017. The presiding judge authorized the class-action lawsuit to go forward in August 2017, ruling that those affected by the breach had the right to sue Yahoo! over the breach of contract and unfair competition claims made in the original filing. The case was later amended to include the updated breach information following Yahoo!'s announcement about the August 2013 breach. By March 2018, Verizon, which had completed its acquisition of Yahoo!, sought to dismiss much of the case, but Judge Lucy H. Koh refused, allowing claims related to breach of contract and negligence to proceed to trial. Before trial could commence, Verizon and Altaba agreed in October 2018 to split the cost of a settlement with those in the class action (an estimated 200 million total users), along with providing two years of free credit monitoring through AllClear ID, pending approval by Judge Koh. Under the settlement, class members who could document identity theft damage from the breach could seek a larger payout, while those with known affected Yahoo! accounts could claim a smaller capped amount. Judge Koh rejected the settlement offer, questioning the lack of transparency about the settlement's details, as well as the high costs recouped by the lawyers through the settlement. Yahoo! eventually agreed to settle for $117.5 million in April 2019, again offering affected users credit monitoring or a cash payout dependent on the number of respondents in the class. Following the December 14 announcement of the August 2013 hacks, another class-action lawsuit was filed against Yahoo! in New York state on behalf of all affected United States residents, stating that "Yahoo! failed, and continues to fail, to provide adequate protection of its users' personal and confidential information." International Foreign governments also expressed concern about the data breaches. On October 28, the European privacy regulators of the Article 29 Working Party outlined, in a letter to Yahoo!, concerns about the 2014 data breach as well as allegations that the company had built a system that scanned customers' incoming emails at the request of U.S. intelligence services. They asked Yahoo! to communicate all aspects of the data breach to the EU authorities, to notify the affected users of the "adverse effects" and to cooperate with all "upcoming national data protection authorities' enquiries and/or investigations". In late November, Ireland's Data Protection Commissioner (DPC), the lead European regulator on privacy issues for Yahoo!, whose European headquarters are in Dublin, said that it had stepped up its examination of the breach, that it was awaiting information from Yahoo! on allegations that it helped the U.S. government scan users' emails, and that it was not formally investigating the breach but examining it. Germany's Federal Office for Information Security criticized Yahoo! following the December 2016 announcement, stating "security is not a foreign concept", and warned government and other German users to seek email and Internet solutions from companies with better security approaches.
Voices hack Corporate warfare Database security Information security Internet security List of data breaches Multi-factor authentication Security hacker Web literacy References External links Yahoo's Account Security Issue FAQs 2013 crimes in the United States 2014 crimes in the United States 2015 crimes in the United States 2016 crimes in the United States Data breaches in the United States Hacking in the 2010s Identity theft incidents Internet privacy August 2013 crimes September 2016 events in the United States December 2016 events in the United States Email hacking
1642046
https://en.wikipedia.org/wiki/AlphaSmart
AlphaSmart
The AlphaSmart was a brand of portable, battery powered, word-processing keyboards manufactured by NEO Direct, Inc. (formerly Renaissance Learning, Inc, formerly AlphaSmart, Inc., formerly Intelligent Peripheral Devices, Inc.). Originally released in 1993, the first AlphaSmart models were intended for writing on-the-go and could be plugged into a computer to transfer saved written text. The units' portability and long battery life made them valuable to journalists, writers, and students. Later models expanded functionality by spell-checking, running applications, and accessing wireless printers. The last model, Neo 2, was released in 2007, and production was discontinued by the company in late September 2013. AlphaSmart devices have a cult following amongst writers, who claim to use them for distraction-free writing. Background The AlphaSmart was a keyboarding device that enabled a person to work on the go, much like a laptop computer, but it was strictly for word processing, as it functioned essentially like a simple digital typewriter. The Dana (one of the last devices made by AlphaSmart) was an exception, as this device also ran Palm OS applications. Since the AlphaSmart, Dana, and NEO were specialized for limited purposes, they were generally much cheaper than a standard laptop computer. All of these devices were meant to be plugged into an ADB, PS/2, or USB port for transferring the written text into a computer's word processing document for further editing (such as indentation and font preference) or printing if so desired. The AlphaSmart saved every keystroke directly to the machine's RAM, which was maintained by a battery backup even when powered down. AlphaSmarts could transfer data either by a special program that communicated with the AlphaSmart or by the simpler method of transmitting the keystrokes of the written text as if it were the computer's keyboard. When not transferring text, the AlphaSmart could be used as a standard keyboard. AlphaSmarts were very popular in schools for their affordability and durability. Elementary schools and high schools used them; and they were particularly popular among special education departments for use by students with graphomotor challenges. The machines were also popular among journalists and writers, who found them easy to carry and appreciated the full-size keyboard and long battery life. AlphaSmarts continue to be popular with small groups of writers, despite attempts by other companies as early as 2014 to produce other low-distraction writing tools. Company Intelligent Peripheral Devices, Inc. was founded in 1992 by two previous Apple Computer engineers, Ketan Kothari and Joe Barrus, with the mission to "develop and market affordable, portable personal learning solutions for the classroom" and to "deliver affordable, lightweight, rugged portable computing devices that are expandable, easy to use and manage, and provide exceptional battery life." Shortly after its founding they were joined by Ketan's brother Manish. Later, they changed the name of the company to AlphaSmart, Inc. Barrus and Kothari also hold a US patent on a "portable keyboard computer", applied for in 1992 and granted in 1995. AlphaSmart, Inc. completed its initial public offering (IPO) on the NASDAQ on February 6, 2004 and started trading under the symbol ALSM. In June 2005, it was acquired by Renaissance Learning (NASDAQ: RLRN). The name changed again in the Spring of 2009, this time to NEO Direct, Inc. 
They went on to release the Neo2 and 2Know Responder hardware products. AlphaSmart products AlphaSmart The original AlphaSmart computer companion was shipped in August 1993, and worked only with Apple Macintosh and Apple IIGS computers, plugging into the Apple Desktop Bus (ADB) port. This model provided customers with 16 "pages" of memory (32,000 bytes) for eight separate files (2 pages per file), that were accessed by pressing the corresponding function key. The AlphaSmart took on the aesthetics of the computer it was intended to be partnered with — it had a boxy, durable beige plastic case like the IIGS and Macintoshes of that era. It had a four-line LCD character display similar to what one would find on some appliances. Each character was displayed in its own LCD "box," making the point size and font type fixed. The AlphaSmart could not display graphics, except for ASCII art. It ran on 2 AA batteries and could be used for days at a time due to a power-saving technique, that effectively allowed it to "sleep" in between keystrokes. There was a rechargeable nickel-cadmium battery (NiCad) pack add-on that a customer could purchase separately. The early AlphaSmart models included a couple of jokes, including a reference to The Hitchhiker's Guide to the Galaxy. If, while using the calculator, the answer is 42, the words "The answer to Life, the Universe, and Everything" appear. Or, if the input was 1+1, the calculator would say, "That's too easy." AlphaSmart Pro In February 1995, the AlphaSmart Pro was launched. This looked almost identical to the original but had a PS/2 port as well as an ADB port, making it compatible with both Windows PCs, as well as the Apple IIGS and Macintoshes. Second, the Pro had a "find" feature to search stored text. Third, the AlphaSmart Pro was able to receive text from a computer through "Get Utility" software installed on a Mac or Windows PC. Lastly, it included a password feature for securing content. The Pro model was able to store up to 64 pages of text (128,000 bytes), holding 16 pages in the first file, 8 pages in files two through five, 6 in files six & seven, and 4 pages on file number eight. The original rechargeable NiCad battery pack could also be used in the Pro model. AlphaSmart 2000 In October 1997, AlphaSmart introduced the third generation of the AlphaSmart family, the AlphaSmart 2000. Along with a more ergonomic design, the case of the AlphaSmart 2000 was curvy and blue. New features added were spell-checking, direct printing (allowing a user to plug into a printer directly, bypassing a computer), auto-off power save, and a keyboarding timer. A year later, the company added infrared capability to the 2000, enabling users to transfer text to a computer or another AlphaSmart without a cable. This model needed 3 AA batteries, but could still use the original rechargeable NiCd battery pack. Like the AlphaSmart Pro, it had a 128 kB memory. AlphaSmart 3000 In January 2000, the AlphaSmart 3000 was released. The 3000 used the same chassis as the AlphaSmart 2000, but it was now encased in translucent bondi-blue plastic, matching Apple Computer's first generation iMac. This was meant to be a visual indication that the AlphaSmart 3000 was a USB native device, as many other USB devices were patterned using the iMac's design in the same way. Designers removed the ADB and PS/2 ports, replacing them with a USB port and a mini-DIN-8 serial port. 
Also new was the SmartApplet architecture: miniature software applications, called SmartApplets, that extended the device's functionality beyond basic word processing. For example, it included a simple 5-function calculator. Additionally, the battery life and memory were increased (although it still ran on 3 AA batteries), and cut/copy/paste functions were introduced. The original rechargeable NiCd battery pack was not compatible with this model. Instead, it used a new optional nickel metal hydride battery (NiMH) pack that lasted longer and eliminated the memory effect of NiCd batteries. The AlphaSmart 3000 had the customary 8 files, each with a capacity of 12.5 pages (about 25 kilobytes), for a total of 100 pages altogether. AlphaSmart announced the discontinuation of the AlphaSmart 3000 on April 30, 2006. Dana In June 2002, AlphaSmart released the Dana, a product which was a radical departure from its standard product line. Similar to Apple Computer's 1997 Newton eMate 300 (a laptop running the Newton PDA operating system), the Dana, FCC ID KV2DANA001, was a fully fledged Palm OS device complete with a touch-screen, allowing a user to write directly on the screen via Graffiti in addition to typing on the built-in, full-size keyboard. The Dana's screen had a backlight and was capable of displaying complex graphics (though only in 4-bit grayscale), unlike the original AlphaSmart line. It had 8 mebibytes (MiB) of storage and two expansion slots for cards in Secure Digital (SD) or Multimedia Card formats. It was compatible with nearly every Palm OS application, and some Palm apps could take advantage of the Dana's extra-wide screen, which was 3.5 times the norm (560 x 160 pixels). The Dana's primary software was the built-in Alphawrite word processor. This was a licensed version of Wordsmith for Palm OS by Blue Nomad, customized for the Dana's wider screen. Up to eight Alphawrite documents could be resident at one time, each instantly accessible via the Dana's eight function keys. It was also a simple matter to switch between the Alphawrite documents and any of the four standard built-in applications native to Palm OS (Memo Pad, Datebook, Todo, Address Book). Larger font sizes could be selected within Alphawrite to compensate for the low-contrast display, which could be somewhat difficult to read. The screen could be used in either landscape mode or portrait mode, though there was no auto-detection of how the Dana was positioned; the user had to tap a menu selection to choose the mode. The screen was taller than that of the original AlphaSmart products, and the Dana's casing was made from opaque dark-blue plastic—a change from the iMac-esque clear blue of the AlphaSmart 3000. It used either a Ni-MH rechargeable battery or 3 AA batteries for up to 25 hours of usage. Danas produced near the end of the production run were modified because many users complained that their Danas were frequently turning themselves on when carried in a container, such as a backpack, depleting the battery charge. This was because the on-off switch was getting depressed and switching the device on. Version 1.5 of the Dana OS provided a way to require both the Enter and On/Off keys to power it up, making it less likely that both keys would be depressed accidentally. This was accessed through the system Keyboard App. The Dana had an IrDA-compatible infrared port for transferring documents and files.
This was a convenient way to back up files for those who had access to multiple Danas. Dana Wireless One year later, in 2003, AlphaSmart added the Dana Wireless model (FCC ID KV2DANA002) which added built-in Wi-Fi connectivity for internet use & interaction with other Danas, doubled the RAM capacity from 8 MiB to 16 MiB, doubled flash ROM from 4 MiB to 8 MiB, quadrupled the number of shades of gray displayed from 4 to 16, and added SDIO support to the SD card slots. It used 3 AA batteries (standard or Ni-MH or Ni-Cad) for up to 20 hours of usage. Neo The Neo model was introduced in August 2004 and could hold more than 200 pages of text. Its LCD was 50% larger than the AlphaSmart 3000's display. Unlike the 3000, it didn't use fixed blocks for each character and therefore, could display different font/point sizes, along with simple graphics. The Neo also ran a newer operating system that allowed for modular control of SmartApplets and a new version of AlphaWord (the word processing SmartApplet), which allowed dynamic file resizing. The CPU was a 33MHz DragonballVZ, which is a 68000-based processor made by Freescale/Motorola. The Neo's chassis was a dark opaque shade of green with its form factor based on the Dana. It used the same optional NiMH battery pack as the AlphaSmart Dana. Initially, the Neo had several software bugs, such as a hard-to-see cursor and a text-stacking file corruption problem. In 2007, the Neo 2 added several minor upgrades to the original Neo and was the first unit released after AlphaSmart was acquired by Renaissance Learning. It added quiz functionality, using the 2Know! Toolbar, which was developed for the 2Know! Classroom Response System. Teachers could create, distribute, and score quizzes using the Neo 2. Neo 2 could also access Accelerated Reader quizzes and allow students to use network printers, when using the Renaissance Receiver accessory. Both the Neo and Neo 2 were discontinued by Renaissance Learning in late September 2013, although the company still offers support and software to existing users . References External links Renaissance Learning, Inc., parent company Dana hacking FAQ AlphaSmart Forum - flickr AlphaSmart Photos - flickr AlphaSmart Neo review Portable computers Palm OS devices Educational hardware Products introduced in 1993
17175853
https://en.wikipedia.org/wiki/EJBCA
EJBCA
EJBCA is a free-software public key infrastructure (PKI) certificate authority package maintained and sponsored by the Swedish for-profit company PrimeKey Solutions AB, which holds the copyright to most of the codebase. The project's source code is available under the terms of the GNU Lesser General Public License (LGPL). The EJBCA software package is used to install a privately operated certificate authority. This is in contrast to commercial certificate authorities that are operated by a trusted third party. Since its inception, EJBCA has been used as certificate authority software in a range of use cases, including eGovernment, endpoint management, research, energy, eIDAS, telecom, networking, and SMEs. See also Public Key Infrastructure References Further reading Research and application of EJBCA based on J2EE; Liyi Zhang, Qihua Liu and Min Xu; IFIP International Federation for Information Processing Volume 251/2008 External links Public key infrastructure Cryptographic software Free security software Software using the LGPL license Products introduced in 2001 Java enterprise platform Java platform software
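To make the contrast with a commercial, third-party certificate authority concrete, the sketch below shows what a privately operated CA does at its core: it keeps a self-signed root certificate and uses the root's private key to sign certificates for its own users and servers. This is a generic illustration using the Python cryptography package, not EJBCA's own API (EJBCA is Java-based and is administered through its web interface, command-line tools and web services); the names and values below are hypothetical.

```python
# Minimal sketch of a privately operated CA: a self-signed root signs an
# end-entity certificate. Generic illustration only; not EJBCA's API.
from datetime import datetime, timedelta

from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa

now = datetime.utcnow()

# 1. Generate the CA's key pair and a self-signed root certificate.
ca_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
ca_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Example Private Root CA")])
ca_cert = (
    x509.CertificateBuilder()
    .subject_name(ca_name)
    .issuer_name(ca_name)                       # self-signed: issuer == subject
    .public_key(ca_key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + timedelta(days=3650))
    .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
    .sign(ca_key, hashes.SHA256())
)

# 2. Issue an end-entity certificate for a hypothetical internal server.
server_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
server_cert = (
    x509.CertificateBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "intranet.example.org")]))
    .issuer_name(ca_cert.subject)               # issued by the private CA
    .public_key(server_key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + timedelta(days=365))
    .sign(ca_key, hashes.SHA256())              # the CA's key signs, establishing trust
)
print(server_cert.issuer.rfc4514_string())      # CN=Example Private Root CA
```

Clients that install only the private root certificate will trust certificates issued this way, which is the trust model a privately operated CA such as an EJBCA installation provides within an organization.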
2365049
https://en.wikipedia.org/wiki/GPRS%20Tunnelling%20Protocol
GPRS Tunnelling Protocol
GPRS Tunnelling Protocol (GTP) is a group of IP-based communications protocols used to carry general packet radio service (GPRS) within GSM, UMTS and LTE networks. In 3GPP architectures, GTP and Proxy Mobile IPv6 based interfaces are specified on various interface points. GTP can be decomposed into separate protocols, GTP-C, GTP-U and GTP'. GTP-C is used within the GPRS core network for signaling between gateway GPRS support nodes (GGSN) and serving GPRS support nodes (SGSN). This allows the SGSN to activate a session on a user's behalf (PDP context activation), to deactivate the same session, to adjust quality of service parameters, or to update a session for a subscriber who has just arrived from another SGSN. GTP-U is used for carrying user data within the GPRS core network and between the radio access network and the core network. The user data transported can be packets in any of IPv4, IPv6, or PPP formats. GTP' (GTP prime) uses the same message structure as GTP-C and GTP-U, but has an independent function. It can be used for carrying charging data from the charging data function (CDF) of the GSM or UMTS network to the charging gateway function (CGF). In most cases, this should mean from many individual network elements such as the GGSNs to a centralized computer that delivers the charging data more conveniently to the network operator's billing center. Different GTP variants are implemented by RNCs, SGSNs, GGSNs and CGFs within 3GPP networks. GPRS mobile stations (MSs) are connected to a SGSN without being aware of GTP. GTP can be used with UDP or TCP. UDP is either recommended or mandatory, except for tunnelling X.25 in version 0. GTP version 1 is used only on UDP. General features All variants of GTP have certain features in common. The structure of the messages is the same, with a GTP header following the UDP/TCP header. Header GTP version 1 GTPv1 headers contain the following fields: Version It is a 3-bit field. For GTPv1, this has a value of 1. Protocol Type (PT) a 1-bit value that differentiates GTP (value 1) from GTP' (value 0). Reserved a 1-bit reserved field (must be 0). Extension header flag (E) a 1-bit value that states whether there is an extension header optional field. Sequence number flag (S) a 1-bit value that states whether there is a Sequence Number optional field. N-PDU number flag (PN) a 1-bit value that states whether there is a N-PDU number optional field. Message Type an 8-bit field that indicates the type of GTP message. Different types of messages are defined in 3GPP TS 29.060 section 7.1 Message Length a 16-bit field that indicates the length of the payload in bytes (rest of the packet following the mandatory 8-byte GTP header). Includes the optional fields. Tunnel endpoint identifier (TEID) A 32-bit(4-octet) field used to multiplex different connections in the same GTP tunnel. Sequence number an (optional) 16-bit field. This field exists if any of the E, S, or PN bits are on. The field must be interpreted only if the S bit is on. N-PDU number an (optional) 8-bit field. This field exists if any of the E, S, or PN bits are on. The field must be interpreted only if the PN bit is on. Next extension header type an (optional) 8-bit field. This field exists if any of the E, S, or PN bits are on. The field must be interpreted only if the E bit is on. Next Extension Headers are as follows: Extension length an 8-bit field. 
This field states the length of this extension header, including the length, the contents, and the next extension header field, in 4-octet units, so the length of the extension must always be a multiple of 4. Contents the contents of the extension header. Next extension header an 8-bit field. It states the type of the next extension, or 0 if no next extension exists. This permits chaining several next extension headers. GTP version 2 It is also known as evolved-GTP or eGTP. GTPv2-C headers contain the following fields: There is no GTPv2-U protocol; GTP-U in LTE also uses GTPv1-U. Version It is a 3-bit field. For GTPv2, this has a value of 2. Piggybacking flag If this bit is set to 1 then another GTP-C message with its own header shall be present at the end of the current message. There are restrictions as to what type of message can be piggybacked, depending on what the top-level GTP-C message is. TEID flag If this bit is set to 1 then the TEID field will be present between the message length and the sequence number. All messages except Echo Request and Echo Response require a TEID to be present. Message length This field shall indicate the length of the message in octets, excluding the mandatory part of the GTP-C header (the first 4 octets). The TEID (if present) and the Sequence Number shall be included in the length count. Connectivity mechanisms Apart from the common message structure, there is also a common mechanism for verifying connectivity from one GSN to another GSN. This uses two messages: echo request and echo response. As often as every 60 seconds, a GSN can send an echo request to every other GSN with which it has an active connection. If the other end does not respond, it can be treated as down and the active connections to it will be deleted. Apart from the two messages previously mentioned, there are no other messages common across all GTP variants, meaning that, for the most part, they effectively form three completely separate protocols. GTP-C - GTP control The GTP-C protocol is the control section of the GTP standard. When a subscriber requests a PDP context, the SGSN will send a create PDP context request GTP-C message to the GGSN giving details of the subscriber's request. The GGSN will then respond with a create PDP context response GTP-C message which will either give details of the PDP context actually activated or will indicate a failure and give a reason for that failure. This is a UDP message on port 2123. The eGTP-C (or GTPv2-C) protocol is responsible for creating, maintaining and deleting tunnels on multiple Sx interfaces. It is used for control plane path management, tunnel management and mobility management. It also handles the forwarding of relocation messages and SRNS contexts, and the creation of forward tunnels during inter-LTE handovers. GTP-U - GTP user data tunnelling GTP-U is, in effect, a relatively simple IP-based tunnelling protocol which permits many tunnels between each set of end points. When used in UMTS, each subscriber will have one or more tunnels, one for each active PDP context, as well as possibly having separate tunnels for specific connections with different quality of service requirements. The separate tunnels are identified by a TEID (Tunnel Endpoint Identifier) in the GTP-U messages, which should be a dynamically allocated random number. If this random number is of cryptographic quality, then it will provide a measure of security against certain attacks.
Even so, the requirement of the 3GPP standard is that all GTP traffic, including user data, should be sent within secure private networks, not directly connected to the Internet. This happens on UDP port 2152. The GTPv1-U protocol is used to exchange user data over GTP tunnels across the Sx interfaces. An IP packet for a UE (user endpoint) is encapsulated in a GTPv1-U packet and tunnelled between the P-GW and the eNodeB for transmission to and from the UE over the S1-U and S5/S8 interfaces. GTP' - charging transfer The GTP' protocol is used to transfer charging data to the Charging Gateway Function. GTP' uses TCP/UDP port 3386. Within the GPRS core network GTP is the primary protocol used in the GPRS core network. It is the protocol which allows end users of a GSM or UMTS network to move from place to place whilst continuing to connect to the Internet as if from one location at the GGSN. It does this by carrying the subscriber's data from the subscriber's current SGSN to the GGSN which is handling the subscriber's session. Three forms of GTP are used by the GPRS core network: GTP-U for transfer of user data in separated tunnels for each PDP context; GTP-C for control reasons, including setup and deletion of PDP contexts, verification of GSN reachability, and updates, e.g., as subscribers move from one SGSN to another; and GTP' for transfer of charging data from GSNs to the charging function. GGSNs and SGSNs (collectively known as GSNs) listen for GTP-C messages on UDP port 2123 and for GTP-U messages on port 2152. This communication happens within a single network or may, in the case of international roaming, happen internationally, probably across a GPRS roaming exchange (GRX). The Charging Gateway Function (CGF) listens for GTP' messages sent from the GSNs on TCP/UDP port 3386. The core network sends charging information to the CGF, typically including PDP context activation times and the quantity of data which the end user has transferred. However, this communication, which occurs within one network, is less standardized and may, depending on the vendor and configuration options, use proprietary encoding or even an entirely proprietary system. Use on the IuPS interface GTP-U is used on the IuPS interface between the GPRS core network and the RAN; however, the GTP-C protocol is not used. In this case, RANAP is used as a control protocol and establishes GTP-U tunnels between the SGSN and the radio network controller (RNC). Protocol Stack GTP can be used with UDP or TCP. GTP version 1 is used only on UDP. There are three versions defined: versions 0, 1 and 2. Version 0 and version 1 differ considerably in structure. In version 0, the signalling protocol (the protocol which sets up the tunnels by activating the PDP context) is combined with the tunnelling protocol on one port. Versions 1 and 2 are each effectively two protocols, one for control (called GTP-C) and one for user data tunnelling (called GTP-U). GTP version 2 differs from version 1 only in GTP-C. This is due to 3GPP defining enhancements to GTP-C for EPS in version 2 to improve bearer handling. GTP-U is also used to transport user data from the RNC to the SGSN in UMTS networks. However, in this case signalling is done using RANAP instead of GTP-C.
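As a rough illustration of the header layout described above, the sketch below packs and parses the mandatory 8-byte GTPv1 header (flags, message type, length, TEID) for a GTP-U G-PDU. It is a simplified Python sketch rather than a complete GTP implementation; the TEID and the inner packet bytes are made-up values, and the optional sequence number, N-PDU number and extension header fields are omitted.

```python
# Minimal sketch of the mandatory 8-byte GTPv1 header described above.
import struct

GTP_V1_FLAGS = 0x30      # version=1 (binary 001), protocol type=1 (GTP), no optional fields
MSG_G_PDU = 0xFF         # G-PDU: carries an encapsulated user packet (GTP-U)
MSG_ECHO_REQUEST = 0x01  # echo request, used for the path-alive check described above

def build_gtpv1(msg_type: int, teid: int, payload: bytes = b"") -> bytes:
    """Prepend the mandatory GTPv1 header to a payload."""
    # The length field counts everything after the first 8 octets (here: just the payload).
    return struct.pack("!BBHI", GTP_V1_FLAGS, msg_type, len(payload), teid) + payload

def parse_gtpv1(datagram: bytes):
    """Split a GTPv1 datagram into (version, msg_type, teid, payload)."""
    flags, msg_type, length, teid = struct.unpack("!BBHI", datagram[:8])
    version = flags >> 5
    return version, msg_type, teid, datagram[8 : 8 + length]

# Encapsulate a made-up user IP packet in a tunnel identified by TEID 0x1A2B3C4D;
# in a real network this datagram would be carried over UDP port 2152.
tunnel_pdu = build_gtpv1(MSG_G_PDU, 0x1A2B3C4D, payload=b"\x45\x00...user IP packet...")
print(parse_gtpv1(tunnel_pdu)[:3])   # (1, 255, 439041101)
```

The same mandatory header is reused, with different message types and optional fields, for signalling messages such as the echo request and echo response described in the connectivity mechanism above.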
Historical GTP versions The original version of GTP (version 0) had considerable differences from the current versions (versions 1,2): the tunnel identification was non-random; options were provided for transporting X.25; the fixed port number 3386 was used for all functions (not just charging as in GTPv1); TCP was allowed as a transport option instead of UDP, but support for this was optional; subscription-related fields such as quality of service were more limited. The non-random TEID in version 0 represented a security problem if an attacker had access to any roaming partner's network, or could find some other way to remotely send packets to the GPRS backbone. Version 0 is going out of use and being replaced by version 1 in almost all networks. Fortunately, however the use of different port numbers allows easy blocking of version 0 through simple IP access lists. GTP standardization GTP was originally standardized within ETSI (GSM standard 09.60 ). With the creation of the UMTS standards this was moved over to the 3GPP which, maintains it as 3GPP standard 29.060. GTP' uses the same message format, but its special uses are covered in standard 32.295 along with the standardized formats for the charging data it transfers. Later versions of TS 29.060 deprecate GTPv1/v0 interworking such that there is no fallback in the event that the GSN does not support the higher version. GTPv2 (for evolved packet services) went into draft in early 2008 and was released in December of that year. GTPv2 offers fallback to GTPv1 via the earlier "Version Not Supported" mechanism but explicitly offers no support for fallback to GTPv0. See also Proxy Mobile IPv6 Mobile IP PFCP RANAP Notes References GSM standard 09.60, ETSI, 1996–98, this standard covers the original version 0 of GTP. 3GPP TS 29.060 V6.9.0 (2005-06), 3rd Generation Partnership Project, 650 Route des Lucioles - Sophia Antipolis, Valbonne - FRANCE, 2005-06. This is the primary standard defining all of the GTP variants for GTP version 1. 3GPP TS 32.295 V6.1.0 (2005-06), 3rd Generation Partnership Project, 650 Route des Lucioles - Sophia Antipolis, Valbonne - FRANCE, 2005-06. This standard covers using GTP for charging. 3GPP TS 29.274 V8.1.0 (2009-03), 3rd Generation Partnership Project, 650 Route des Lucioles - Sophia Antipolis, Valbonne - FRANCE, 2009-03. GTPv2 for evolved GPRS. External links The 3GPP web site, home of the GTP standard Free and open source implementation of GPRS Tunnelling Protocol version 2 (GTPv2) or Evolved GTP (eGTP) Network protocols Mobile_telecommunications_standards GSM_standard 3GPP_standards Tunneling protocols
23916629
https://en.wikipedia.org/wiki/Ebook
Ebook
An ebook (short for electronic book), also known as an e-book or eBook, is a book publication made available in digital form, consisting of text, images, or both, readable on the flat-panel display of computers or other electronic devices. Although sometimes defined as "an electronic version of a printed book", some e-books exist without a printed equivalent. E-books can be read on dedicated e-reader devices, but also on any computer device that features a controllable viewing screen, including desktop computers, laptops, tablets and smartphones. In the 2000s, there was a trend of print and e-book sales moving to the Internet, where readers buy traditional paper books and e-books on websites using e-commerce systems. With print books, readers are increasingly browsing through images of the covers of books on publisher or bookstore websites and selecting and ordering titles online; the paper books are then delivered to the reader by mail or another delivery service. With e-books, users can browse through titles online, and then when they select and order titles, the e-book can be sent to them online or the user can download the e-book. By the early 2010s, e-books had begun to overtake hardcover by overall publication figures in the U.S. The main reasons for people buying e-books are possibly lower prices, increased comfort (as they can buy from home or on the go with mobile devices) and a larger selection of titles. With e-books, "electronic bookmarks make referencing easier, and e-book readers may allow the user to annotate pages." "Although fiction and non-fiction books come in e-book formats, technical material is especially suited for e-book delivery because it can be digitally searched" for keywords. In addition, for programming books, code examples can be copied. The amount of e-book reading is increasing in the U.S.; by 2014, 28% of adults had read an e-book, compared to 23% in 2013; and by 2014, 50% of American adults had an e-reader or a tablet, compared to 30% owning such devices in 2013. Terminology E-books are also referred to as "ebooks", "eBooks", "Ebooks", "e-Books", "e-journals", "e-editions", or "digital books". A device that is designed specifically for reading e-books is called an "e-reader", "ebook device", or "eReader". History The Readies (1930) Some trace the concept of an e-reader, a device that would enable the user to view books on a screen, to a 1930 manifesto by Bob Brown, written after watching his first "talkie" (movie with sound). He titled it The Readies, playing off the idea of the "talkie". In his book, Brown says movies have outmaneuvered the book by creating the "talkies" and, as a result, reading should find a new medium: Brown's notion, however, was much more focused on reforming orthography and vocabulary, than on medium ("It is time to pull out the stopper" and begin "a bloody revolution of the word."): introducing huge numbers of portmanteau symbols to replace normal words, and punctuation to simulate action or movement; so it is not clear whether this fits into the history of "e-books" or not. Later e-readers never followed a model at all like Brown's; however, he correctly predicted the miniaturization and portability of e-readers. 
In an article, Jennifer Schuessler writes, "The machine, Brown argued, would allow readers to adjust the type size, avoid paper cuts and save trees, all while hastening the day when words could be 'recorded directly on the palpitating ether.'" Brown believed that the e-reader (and his notions for changing text itself) would bring a completely new life to reading. Schuessler correlates it with a DJ spinning bits of old songs to create a beat or an entirely new song, as opposed to just a remix of a familiar song. Inventor The inventor of the first e-book is not widely agreed upon. Some notable candidates include the following: Roberto Busa (1946–1970) The first e-book may be the Index Thomisticus, a heavily annotated electronic index to the works of Thomas Aquinas, prepared by Roberto Busa, S.J. beginning in 1946 and completed in the 1970s. Although originally stored on a single computer, a distributable CD-ROM version appeared in 1989. However, this work is sometimes omitted; perhaps because the digitized text was a means for studying written texts and developing linguistic concordances, rather than as a published edition in its own right. In 2005, the Index was published online. Ángela Ruiz Robles (1949) In 1949, Ángela Ruiz Robles, a teacher from Ferrol, Spain, patented the Enciclopedia Mecánica, or the Mechanical Encyclopedia, a mechanical device which operated on compressed air where text and graphics were contained on spools that users would load onto rotating spindles. Her idea was to create a device which would decrease the number of books that her pupils carried to school. The final device was planned to include audio recordings, a magnifying glass, a calculator and an electric light for night reading. Her device was never put into production but a prototype is kept in the National Museum of Science and Technology in A Coruña. Douglas Engelbart and Andries van Dam (1960s) Alternatively, some historians consider electronic books to have started in the early 1960s, with the NLS project headed by Douglas Engelbart at Stanford Research Institute (SRI), and the Hypertext Editing System and FRESS projects headed by Andries van Dam at Brown University. FRESS documents ran on IBM mainframes and were structure-oriented rather than line-oriented; they were formatted dynamically for different users, display hardware, window sizes, and so on, as well as having automated tables of contents, indexes, and so on. All these systems also provided extensive hyperlinking, graphics, and other capabilities. Van Dam is generally thought to have coined the term "electronic book", and it was established enough to use in an article title by 1985. FRESS was used for reading extensive primary texts online, as well as for annotation and online discussions in several courses, including English Poetry and Biochemistry. Brown's faculty made extensive use of FRESS; for example the philosopher Roderick Chisholm used it to produce several of his books. Thus in the Preface to Person and Object (1979) he writes "The book would not have been completed without the epoch-making File Retrieval and Editing System..." Brown University's work in electronic book systems continued for many years, including US Navy funded projects for electronic repair-manuals; a large-scale distributed hypermedia system known as InterMedia; a spinoff company Electronic Book Technologies that built DynaText, the first SGML-based e-reader system; and the Scholarly Technology Group's extensive work on the Open eBook standard. Michael S. 
Hart (1971) Despite the extensive earlier history, several publications report Michael S. Hart as the inventor of the e-book. In 1971, the operators of the Xerox Sigma V mainframe at the University of Illinois gave Hart extensive computer-time. Seeking a worthy use of this resource, he created his first electronic document by typing the United States Declaration of Independence into a computer in plain text. Hart planned to create documents using plain text to make them as easy as possible to download and view on devices. Early implementations After Hart first adapted the U.S. Declaration of Independence into an electronic document in 1971, Project Gutenberg was launched to create electronic copies of more texts, especially books. Another early e-book implementation was the desktop prototype for a proposed notebook computer, the Dynabook, in the 1970s at PARC: a general-purpose portable personal computer capable of displaying books for reading. In 1980, the U.S. Department of Defense began concept development for a portable electronic delivery device for technical maintenance information called project PEAM, the Portable Electronic Aid for Maintenance. Detailed specifications were completed in FY 1981/82, and prototype development began with Texas Instruments that same year. Four prototypes were produced and delivered for testing in 1986, and tests were completed in 1987. The final summary report was produced in 1989 by the U.S. Army Research Institute for the Behavioral and Social Sciences, authored by Robert Wisher and J. Peter Kincaid. A patent application for the PEAM device, titled "Apparatus for delivering procedural type instructions", was submitted by Texas Instruments on December 4, 1985, listing John K. Harkins and Stephen H. Morriss as inventors. In 1992, Sony launched the Data Discman, an electronic book reader that could read e-books that were stored on CDs. One of the electronic publications that could be played on the Data Discman was called The Library of the Future. Early e-books were generally written for specialty areas and a limited audience, meant to be read only by small and devoted interest groups. The scope of the subject matter of these e-books included technical manuals for hardware, manufacturing techniques, and other subjects. In the 1990s, the general availability of the Internet made transferring electronic files much easier, including e-books. In 1993, Paul Baim released a freeware HyperCard stack, called EBook, that allowed easy import of any text file to create a pageable version similar to an electronic paperback book. A notable feature was automatic tracking of the last page read so that on returning to the 'book' you were taken back to where you had previously left off reading. The title of this stack may have been the first instance of the term 'ebook' used in the modern context. E-book formats As e-book formats emerged and proliferated, some garnered support from major software companies, such as Adobe with its PDF format that was introduced in 1993. Unlike most other formats, PDF documents are generally tied to a particular dimension and layout, rather than adjusting dynamically to the current page, window, or another size. Different e-reader devices followed different formats, most of them accepting books in only one or a few formats, thereby fragmenting the e-book market even more. 
Due to the exclusiveness and limited readerships of e-books, the fractured market of independent publishers and specialty authors lacked consensus regarding a standard for packaging and selling e-books. Meanwhile, scholars formed the Text Encoding Initiative, which developed consensus guidelines for encoding books and other materials of scholarly interest for a variety of analytic uses as well as reading, and countless literary and other works have been developed using the TEI approach. In the late 1990s, a consortium formed to develop the Open eBook format as a way for authors and publishers to provide a single source-document which many book-reading software and hardware platforms could handle. Several scholars from the TEI were closely involved in the early development of Open eBook . Focused on portability, Open eBook as defined required subsets of XHTML and CSS; a set of multimedia formats (others could be used, but there must also be a fallback in one of the required formats), and an XML schema for a "manifest", to list the components of a given e-book, identify a table of contents, cover art, and so on. This format led to the open format EPUB. Google Books has converted many public domain works to this open format. In 2010, e-books continued to gain in their own specialist and underground markets. Many e-book publishers began distributing books that were in the public domain. At the same time, authors with books that were not accepted by publishers offered their works online so they could be seen by others. Unofficial (and occasionally unauthorized) catalogs of books became available on the web, and sites devoted to e-books began disseminating information about e-books to the public. Nearly two-thirds of the U.S. Consumer e-book publishing market are controlled by the "Big Five". The "Big Five" publishers are: Hachette, HarperCollins, Macmillan, Penguin Random House and Simon & Schuster. Libraries U.S. libraries began to offer free e-books to the public in 1998 through their websites and associated services, although the e-books were primarily scholarly, technical or professional in nature, and could not be downloaded. In 2003, libraries began offering free downloadable popular fiction and non-fiction e-books to the public, launching an e-book lending model that worked much more successfully for public libraries. The number of library e-book distributors and lending models continued to increase over the next few years. From 2005 to 2008, libraries experienced a 60% growth in e-book collections. In 2010, a Public Library Funding and Technology Access Study by the American Library Association found that 66% of public libraries in the U.S. were offering e-books, and a large movement in the library industry began to seriously examine the issues relating to e-book lending, acknowledging a "tipping point" when e-book technology would become widely established. Content from public libraries can be downloaded to e-readers using application software like Overdrive and Hoopla. The U.S. National Library of Medicine has for many years provided PubMed, a comprehensive bibliography of medical literature. In early 2000, NLM set up the PubMed Central repository, which stores full-text e-book versions of many medical journal articles and books, through cooperation with scholars and publishers in the field. Pubmed Central also now provides archiving and access to over 4.1 million articles, maintained in a standard XML format known as the Journal Article Tag Suite (or "JATS"). 
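As an illustration of the packaging conventions behind Open eBook and EPUB described above, the sketch below assembles a minimal EPUB-style container in Python: a ZIP archive whose first entry is an uncompressed mimetype file, plus an XML container pointer and an OPF manifest and spine listing the book's components. The title, identifier and chapter content are hypothetical, and a real EPUB would also include a table of contents (NCX or navigation document) and richer metadata.

```python
# Minimal sketch of an EPUB-style package: mimetype first (stored, not compressed),
# then the container pointer, OPF manifest/spine, and one XHTML content document.
import zipfile

CONTAINER = """<?xml version="1.0"?>
<container version="1.0" xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
  <rootfiles>
    <rootfile full-path="content.opf" media-type="application/oebps-package+xml"/>
  </rootfiles>
</container>"""

OPF = """<?xml version="1.0"?>
<package xmlns="http://www.idpf.org/2007/opf" version="2.0" unique-identifier="id">
  <metadata xmlns:dc="http://purl.org/dc/elements/1.1/">
    <dc:title>Example E-book</dc:title>
    <dc:identifier id="id">example-0001</dc:identifier>
    <dc:language>en</dc:language>
  </metadata>
  <manifest>
    <item id="ch1" href="chapter1.xhtml" media-type="application/xhtml+xml"/>
  </manifest>
  <spine>
    <itemref idref="ch1"/>
  </spine>
</package>"""

CHAPTER = """<?xml version="1.0"?>
<html xmlns="http://www.w3.org/1999/xhtml"><head><title>Chapter 1</title></head>
<body><h1>Chapter 1</h1><p>Hello, e-book.</p></body></html>"""

with zipfile.ZipFile("example.epub", "w") as epub:
    # The mimetype entry must come first and be stored without compression.
    epub.writestr("mimetype", "application/epub+zip", compress_type=zipfile.ZIP_STORED)
    epub.writestr("META-INF/container.xml", CONTAINER, compress_type=zipfile.ZIP_DEFLATED)
    epub.writestr("content.opf", OPF, compress_type=zipfile.ZIP_DEFLATED)
    epub.writestr("chapter1.xhtml", CHAPTER, compress_type=zipfile.ZIP_DEFLATED)
```

Because the content documents are ordinary XHTML, reading systems can reflow them to the current screen and font size, which is the key difference from fixed-layout formats such as PDF noted earlier.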
Despite the widespread adoption of e-books, some publishers and authors have not endorsed the concept of electronic publishing, citing issues with user demand, copyright infringement and challenges with proprietary devices and systems. In a survey of interlibrary loan (ILL) librarians, it was found that 92% of libraries held e-books in their collections and that 27% of those libraries had negotiated ILL rights for some of their e-books. This survey found significant barriers to conducting interlibrary loan for e-books. Patron-driven acquisition (PDA) has been available for several years in public libraries, allowing vendors to streamline the acquisition process by offering to match a library's selection profile to the vendor's e-book titles. The library's catalog is then populated with records for all of the e-books that match the profile. The decision to purchase the title is left to the patrons, although the library can set purchasing conditions such as a maximum price and purchasing caps so that the dedicated funds are spent according to the library's budget. The 2012 meeting of the Association of American University Presses included a panel on the PDA of books produced by university presses, based on a preliminary report by Joseph Esposito, a digital publishing consultant who has studied the implications of PDA with a grant from the Andrew W. Mellon Foundation. Challenges Although the demand for e-book services in libraries has grown in the first two decades of the 21st century, difficulties keep libraries from providing some e-books to clients. Publishers will sell e-books to libraries, but in most cases they will only give libraries a limited license to the title, meaning that the library does not own the electronic text but is allowed to circulate it for either a certain period of time, or a certain number of check outs, or both. When a library purchases an e-book license, the cost is at least three times what it would be for a personal consumer. E-book licenses are more expensive than paper-format editions because publishers are concerned that an e-book that is sold could theoretically be read and/or checked out by a huge number of users, potentially damaging sales. However, some studies have found the opposite effect to be true (for example, Hilton and Wikey 2010). Archival storage The Internet Archive and Open Library offer more than six million fully accessible public domain e-books. Project Gutenberg has over 52,000 freely available public domain e-books. Dedicated hardware readers and mobile software An e-reader, also called an e-book reader or e-book device, is a mobile electronic device that is designed primarily for the purpose of reading e-books and digital periodicals. An e-reader is similar in form, but more limited in purpose than a tablet. In comparison to tablets, many e-readers are better than tablets for reading because they are more portable, have better readability in sunlight and have longer battery life. In July 2010, online bookseller Amazon.com reported sales of e-books for its proprietary Kindle outnumbered sales of hardcover books for the first time ever during the second quarter of 2010, saying it sold 140 e-books for every 100 hardcover books, including hardcovers for which there was no digital edition. By January 2011, e-book sales at Amazon had surpassed its paperback sales. 
In the overall US market, paperback book sales are still much larger than either hardcover or e-book; the American Publishing Association estimated e-books represented 8.5% of sales as of mid-2010, up from 3% a year before. At the end of the first quarter of 2012, e-book sales in the United States surpassed hardcover book sales for the first time. Until late 2013, use of an e-reader was not allowed on airplanes during takeoff and landing by the FAA. In November 2013, the FAA allowed use of e-readers on airplanes at all times if it is in Airplane Mode, which means all radios turned off, and Europe followed this guidance the next month. In 2014, The New York Times predicted that by 2018 e-books will make up over 50% of total consumer publishing revenue in the United States and Great Britain. Applications Some of the major book retailers and multiple third-party developers offer free (and in some third-party cases, premium paid) e-reader software applications (apps) for the Mac and PC computers as well as for Android, Blackberry, iPad, iPhone, Windows Phone and Palm OS devices to allow the reading of e-books and other documents independently of dedicated e-book devices. Examples are apps for the Amazon Kindle, Barnes & Noble Nook, iBooks, Kobo eReader and Sony Reader. Timeline Before the 1980s c. 1949 Ángela Ruiz Robles patents the idea of the electronic book, called the Mechanical Encyclopedia, in Galicia, Spain. Roberto Busa begins planning the Index Thomisticus. c. 1963 Douglas Engelbart starts the NLS (and later Augment) projects. c. 1965 Andries van Dam starts the HES (and later FRESS) projects, with assistance from Ted Nelson, to develop and use electronic textbooks for humanities and in pedagogy. 1971 Michael S. Hart types the US Declaration of Independence into a computer to create the first e-book available on the Internet and launches Project Gutenberg in order to create electronic copies of more books. 1978 The Hitchhiker's Guide to the Galaxy radio series launches (novel published in 1979), featuring an electronic reference book containing all knowledge in the Galaxy. This vast amount of data could be fit into something the size of a large paperback book, with updates received over the "Sub-Etha". c. 1979 Roberto Busa finishes the Index Thomisticus, a complete lemmatisation of the 56 printed volumes of Saint Thomas Aquinas and of a few related authors. 1980s and 1990s 1986 Judy Malloy writes and programmes the first online hypertext fiction, Uncle Roger, with links that take the narrative in different directions depending on the reader's choice. 1989 Franklin Computer releases an electronic edition of the Bible that can only be read with a stand-alone device. 1990 Eastgate Systems publishes the first hypertext fiction released on floppy disk, afternoon, a story, by Michael Joyce. Electronic Book Technologies releases DynaText, the first SGML-based system for delivering large-scale books such as aircraft technical manuals. It was later tested on a US aircraft carrier as replacement for paper manuals. Sony launches the Data Discman e-book player. 1991 Voyager Company develops Expanded Books, which are books on CD-ROM in a digital format. 1992 F. Crugnola and I. Rigamonti design and create the first e-reader, called Incipit, as a thesis project at the Polytechnic University of Milan. Apple starts using its DocViewer format "to distribute documentation to developers in an electronic form", which effectively meant Inside Macintosh books. 
1993 Peter James publishes his novel Host on two floppy disks, which at the time was called the "world's first electronic novel"; a copy of it is stored at the Science Museum. Hugo Award and Nebula Award nominee works are included on a CD-ROM by Brad Templeton. Launch of Bibliobytes, a website for obtaining e-books, both for free and for sale on the Internet. Paul Baim releases the EBook 1.0 HyperCard stack that allows the user to easily convert any text file into a HyperCard based pageable book. 1994 C & M Online is founded in Raleigh, North Carolina and begins publishing e-books through its imprint, Boson Books; authors include Fred Chappell, Kelly Cherry, Leon Katz, Richard Popkin, and Robert Rodman. More than two dozen volumes of Inside Macintosh are published together on a single CD-ROM in Apple DocViewer format. Apple subsequently switches to using Adobe Acrobat. The popular format for publishing e-books changes from plain text to HTML. 1995 Online poet Alexis Kirke discusses the need for wireless internet electronic paper readers in his article "The Emuse". 1996 Project Gutenberg reaches 1,000 titles. Joseph Jacobson works at MIT to create electronic ink, a high-contrast, low-cost, read/write/erase medium to display e-books. 1997 E Ink Corporation is co-founded by MIT undergraduates J.D. Albert, Barrett Comiskey, MIT professor Joseph Jacobson, as well as Jeremy Rubin and Russ Wilcox to create an electronic printing technology. This technology is later used on the displays of the Sony Reader, Barnes & Noble Nook, and Amazon Kindle. 1998 NuvoMedia releases the first handheld e-reader, the Rocket eBook. SoftBook launches its SoftBook reader. This e-reader, with expandable storage, could store up to 100,000 pages of content, including text, graphics and pictures. The Cybook is sold and manufactured at first by Cytale (1998–2003) and later by Bookeen. 1999 The NIST releases the Open eBook format based on XML to the public domain; most future e-book formats derive from Open eBook. Publisher Simon & Schuster creates a new imprint called iBooks and becomes the first trade publisher to simultaneously publish some of its titles in e-book and print format. Oxford University Press makes a selection of its books available as e-books through netLibrary. Publisher Baen Books opens up the Baen Free Library to make available Baen titles as free e-books. Kim Blagg, via her company Books OnScreen, begins selling multimedia-enhanced e-books on CDs through retailers including Amazon, Barnes & Noble and Borders Books. 2000s 2000 Joseph Jacobson, Barrett O. Comiskey and Jonathan D. Albert are granted US patents related to displaying electronic books, these patents are later used in the displays for most e-readers. Stephen King releases his novella Riding the Bullet exclusively online and it became the first mass-market e-book, selling 500,000 copies in 48 hours. Microsoft releases the Microsoft Reader with ClearType for increased readability on PCs and handheld devices. Microsoft and Amazon work together to sell e-books that can be purchased on Amazon, and using Microsoft software downloaded to PCs and handhelds. A digitized version of the Gutenberg Bible is made available online at the British Library. 2001 Adobe releases Adobe Acrobat Reader 5.0 allowing users to underline, take notes and bookmark. 2002 Palm, Inc and OverDrive, Inc make Palm Reader e-books available worldwide, offering over 5,000 e-books in several languages; these could be read on Palm PDAs or using a computer application. 
Random House and HarperCollins start to sell digital versions of their titles in English. 2004 Sony Librie, the first e-reader using an E Ink display is released; it has a six-inch screen. Google announces plans to digitize the holdings of several major libraries, as part of what would later be called the Google Books Library Project. 2005 Amazon buys Mobipocket, the creator of the mobi e-book file format and e-reader software. Google is sued for copyright infringement by the Authors Guild for scanning books still in copyright. 2006 Sony Reader PRS-500, with an E Ink screen and two weeks of battery life, is released. LibreDigital launches BookBrowse as an online reader for publisher content. 2007 The International Digital Publishing Forum releases EPUB to replace Open eBook. In November, Amazon.com releases the Kindle e-reader with 6-inch E Ink screen in the US and it sells outs in 5.5 hours. Simultaneously, the Kindle Store opens, with initially more than 88,000 e-books available. Bookeen launches Cybook Gen3 in Europe; it can display e-books and play audiobooks. 2008 Adobe and Sony agree to share their technologies (Adobe Reader and DRM) with each other. Sony sells the Sony Reader PRS-505 in UK and France. 2009 Bookeen releases the Cybook Opus in the US and Europe. Sony releases the Reader Pocket Edition and Reader Touch Edition. Amazon releases the Kindle 2 that includes a text-to-speech feature. Amazon releases the Kindle DX that has a 9.7-inch screen in the U.S. Barnes & Noble releases the Nook e-reader in the US. Amazon releases the Kindle for PC application in late 2009, making the Kindle Store library available for the first time outside Kindle hardware. 2010s 2010 January – Amazon releases the Kindle DX International Edition worldwide. April – Apple releases the iPad bundled with an e-book app called iBooks. May – Kobo Inc. releases its Kobo eReader to be sold at Indigo/Chapters in Canada and Borders in the United States. July – Amazon reports that its e-book sales outnumbered sales of hardcover books for the first time during the second quarter of 2010. August – PocketBook expands its line with an Android e-reader. August – Amazon releases the third generation Kindle, available in Wi-Fi and 3G & Wi-Fi versions. October – Bookeen reveals the Cybook Orizon at CES. October – Kobo Inc. releases an updated Kobo eReader, which includes Wi-Fi capability. November – The Sentimentalists wins the prestigious national Giller Prize in Canada; due to the small scale of the novel's publisher, the book is not widely available in printed form, so the e-book edition becomes the top-selling title on Kobo devices for 2010. November – Barnes & Noble releases the Nook Color, a color LCD tablet. December – Google launches Google eBooks offering over 3 million titles, becoming the world's largest e-book store to date. 2011 May – Amazon.com announces that its e-book sales in the US now exceed all of its printed book sales. June – Barnes & Noble releases the Nook Simple Touch e-reader and Nook Tablet. August – Bookeen launches its own e-books store, BookeenStore.com, and starts to sell digital versions of titles in French. September – Nature Publishing releases the pilot version of Principles of Biology, a customizable, modular textbook, with no corresponding paper edition. June/November – As the e-reader market grows in Spain, companies like Telefónica, Fnac, and Casa del Libro launch their e-readers with the Spanish brand "bq readers". 
November – Amazon launches the Kindle Fire and Kindle Touch, both devices designed for e-reading. 2012 E-book sales in the US market collect over three billion in revenue. January – Apple releases iBooks Author, software for creating iPad e-books to be directly published in its iBooks bookstore or to be shared as PDF files. January – Apple opens a textbook section in its iBooks bookstore. February – Nature Publishing announces the worldwide release of Principles of Biology, following the success of the pilot version some months earlier. February – Library.nu (previously called ebooksclub.org and gigapedia.com, a popular linking website for downloading e-books) is accused of copyright infringement and closed down by court order. March – The publishing companies Random House, Holtzbrinck, and arvato bring to market an e-book library called Skoobe. March – US Department of Justice prepares anti-trust lawsuit against Apple, Simon & Schuster, Hachette Book Group, Penguin Group, Macmillan, and HarperCollins, alleging collusion to increase the price of books sold on Amazon. March – PocketBook releases the PocketBook Touch, an E Ink Pearl e-reader, winning awards from German magazines Tablet PC and Computer Bild. June – Kbuuk releases the cloud-based e-book self-publishing SaaS platform on the Pubsoft digital publishing engine. September – Amazon releases the Kindle Paperwhite, its first e-reader with built-in front LED lights. 2013 April – Kobo releases the Kobo Aura HD with a 6.8-inch screen, which is larger than the current models produced by its US competitors. May – Mofibo launches the first Scandinavian unlimited access e-book subscription service. June – Association of American Publishers announces that e-books now account for about 20% of book sales. Barnes & Noble estimates it has a 27% share of the US e-book market. June – Barnes & Noble announces its intention to discontinue manufacturing Nook tablets, but to continue producing black-and-white e-readers such as the Nook Simple Touch. June – Apple executive Keith Moerer testifies in the e-book price fixing trial that the iBookstore held approximately 20% of the e-book market share in the United States within the months after launch – a figure that Publishers Weekly reports is roughly double many of the previous estimates made by third parties. Moerer further testified that iBookstore acquired about an additional 20% by adding Random House in 2011. Five major US e-book publishers, as part of their settlement of a price-fixing suit, are ordered to refund about $3 for every electronic copy of a New York Times best-seller that they sold from April 2010 to May 2012. This could equal $160 million in settlement charges. Barnes & Noble releases the Nook Glowlight, which has a 6-inch touchscreen using E Ink Pearl and Regal, with built-in front LED lights. July – US District Court Judge Denise Cote finds Apple guilty of conspiring to raise the retail price of e-books and schedules a trial in 2014 to determine damages. August – Kobo releases the Kobo Aura, a baseline touchscreen six-inch e-reader. September – Oyster launches its unlimited access e-book subscription service. November – US District Judge Chin sides with Google in Authors Guild v. Google, citing fair use. The authors said they would appeal. December – Scribd launches the first public unlimited access subscription service for e-books. 2014 April – Kobo releases the Aura H₂0, the world's first waterproof commercially produced e-reader. 
June – US District Court Judge Cote grants class action certification to plaintiffs in a lawsuit over Apple's alleged e-book price conspiracy; the plaintiffs are seeking $840 million in damages. Apple appeals the decision. June – Apple settles the e-book antitrust case that alleged Apple conspired to e-book price fixing out of court with the States; however if Judge Cote's ruling is overturned in appeal the settlement would be reversed. July – Amazon launches Kindle Unlimited, an unlimited-access e-book and audiobook subscription service. 2015 June – The 2nd US Circuit Court of Appeals with a 2:1 vote concurs with Judge Cote that Apple conspired to e-book price fixing and violated federal antitrust law. Apple appealed the decision. June – Amazon releases the Kindle Paperwhite (3rd generation) that is the first e-reader to feature Bookerly, a font exclusively designed for e-readers. September – Oyster announces its unlimited access e-book subscription service would be shut down in early 2016 and that it would be acquired by Google. September – Malaysian e-book company, e-Sentral, introduces for the first time geo-location distribution technology for e-books via bluetooth beacon. It was first demonstrated in a large scale at Kuala Lumpur International Airport. October – Amazon releases the Kindle Voyage that has a 6-inch, 300 ppi E Ink Carta HD display, which was the highest resolution and contrast available in e-readers as of 2014. It also features adaptive LED lights and page turn sensors on the sides of the device. October – Barnes & Noble releases the Glowlight Plus, its first waterproof e-reader. October – The US appeals court sides with Google instead of the Authors' Guild, declaring that Google did not violate copyright law in its book scanning project. December – Playster launches an unlimited-access subscription service including e-books and audiobooks. By the end of 2015, Google Books scanned more than 25 million books. By 2015, over 70 million e-readers had been shipped worldwide. 2016 March – The Supreme Court of the United States declines to hear Apple's appeal against the court's decision of July 2013 that the company conspired to e-book price fixing, hence the previous court decision stands, obliging Apple to pay $450 million. April – The Supreme Court declines to hear the Authors Guild's appeal of its book scanning case, so the lower court's decision stands; the result means that Google can scan library books and display snippets in search results without violating US copyright law. April – Amazon releases the Kindle Oasis, its first e-reader in five years to have physical page turn buttons and, as a premium product, it includes a leather case with a battery inside; without including the case, it is the lightest e-reader on the market to date. August – Kobo releases the Aura One, the first commercial e-reader with a 7.8-inch E Ink Carta HD display. By the end of the year, smartphones and tablets have both individually overtaken e-readers as methods for reading an e-book, and paperback book sales are now higher than e-book sales. 2017 February – The Association of American Publishers releases data showing that the US adult e-book market declined 16.9% in the first nine months of 2016 over the same period in 2015, and Nielsen Book determines that the e-book market had an overall total decline of 16% in 2016 over 2015, including all age groups. 
This decline is partly due to widespread e-book price increases by major publishers, which has increased the average e-book price from $6 to almost $10. February – The US version of Kindle Unlimited comprises more than 1.5 million titles, including over 290,000 foreign language titles. March – The Guardian reports that sales of physical books are outperforming digital titles in the UK, since it can be cheaper to buy the physical version of a book when compared to the digital version due to Amazon's deal with publishers that allows agency pricing. April – The Los Angeles Times reports that, in 2016, sales of hardcover books were higher than e-books for the first time in five years. October – Amazon releases the Oasis 2, the first Kindle to be IPX8 rated meaning that it is water resistant up to 2 meters for up to 60 minutes; it is also the first Kindle to enable white text on a black background, a feature that may be helpful for nighttime reading. 2018 January – U.S. public libraries report record-breaking borrowing of OverDrive e-books over the course of the year, with more than 274 million e-books loaned to card holders, a 22% increase over the 2017 figure. October – The EU allowed its member countries to charge the same VAT for ebooks as for paper books. 2019 May – Barnes & Noble releases the GlowLight Plus e-reader, the largest Nook e-reader to date with a 7.8-inch E Ink screen. Formats Writers and publishers have many formats to choose from when publishing e-books. Each format has advantages and disadvantages. The most popular e-readers and their natively supported formats are shown below: Digital rights management Most e-book publishers do not warn their customers about the possible implications of the digital rights management tied to their products. Generally, they claim that digital rights management is meant to prevent illegal copying of the e-book. However, in many cases, it is also possible that digital rights management will result in the complete denial of access by the purchaser to the e-book. The e-books sold by most major publishers and electronic retailers, which are Amazon.com, Google, Barnes & Noble, Kobo Inc. and Apple Inc., are DRM-protected and tied to the publisher's e-reader software or hardware. The first major publisher to omit DRM was Tor Books, one of the largest publishers of science fiction and fantasy, in 2012. Smaller e-book publishers such as O'Reilly Media, Carina Press and Baen Books had already forgone DRM previously. Production Some e-books are produced simultaneously with the production of a printed format, as described in electronic publishing, though in many instances they may not be put on sale until later. Often, e-books are produced from pre-existing hard-copy books, generally by document scanning, sometimes with the use of robotic book scanners, having the technology to quickly scan books without damaging the original print edition. Scanning a book produces a set of image files, which may additionally be converted into text format by an OCR program. Occasionally, as in some projects, an e-book may be produced by re-entering the text from a keyboard. Sometimes only the electronic version of a book is produced by the publisher. It is possible to release an e-book chapter by chapter as each chapter is written. This is useful in fields such as information technology where topics can change quickly in the months that it takes to write a typical book. It is also possible to convert an electronic book to a printed book by print on demand. 
However, these are exceptions as tradition dictates that a book be launched in the print format and later if the author wishes an electronic version is produced. The New York Times keeps a list of best-selling e-books, for both fiction and non-fiction. Reading data All of the e-readers and reading apps are capable of tracking e-book reading data, and the data could contain which e-books users open, how long the users spend reading each e-book and how much of each e-book is finished. In December 2014, Kobo released e-book reading data collected from over 21 million of its users worldwide. Some of the results were that only 44.4% of UK readers finished the bestselling e-book The Goldfinch and the 2014 top selling e-book in the UK, "One Cold Night", was finished by 69% of readers; this is evidence that while popular e-books are being completely read, some e-books are only sampled. Comparison to printed books Advantages In the space that a comparably sized physical book takes up, an e-reader can contain thousands of e-books, limited only by its memory capacity. Depending on the device, an e-book may be readable in low light or even total darkness. Many e-readers have a built-in light source, can enlarge or change fonts, use text-to-speech software to read the text aloud for visually impaired, elderly or dyslexic people or just for convenience. Additionally, e-readers allow readers to look up words or find more information about the topic immediately using an online dictionary. Amazon reports that 85% of its e-book readers look up a word while reading. Printed books use three times more raw materials and 78 times more water to produce when compared to e-books. A 2017 study found that even when accounting for the emissions created in manufacturing the e-reader device, substituting more than 4.7 print books a year resulted in less greenhouse gas emissions than print. While an e-reader costs more than most individual books, e-books may have a lower cost than paper books. E-books may be made available for less than the price of traditional books using on-demand book printers. Moreover, numerous e-books are available online free of charge on sites such as Project Gutenberg. For example, all books printed before 1923 are in the public domain in the United States, which enables websites to host ebook versions of such titles for free. Depending on possible digital rights management, e-books (unlike physical books) can be backed up and recovered in the case of loss or damage to the device on which they are stored, a new copy can be downloaded without incurring an additional cost from the distributor. Readers can synchronize their reading location, highlights and bookmarks across several devices. Disadvantages There may be a lack of privacy for the user's e-book reading activities; for example, Amazon knows the user's identity, what the user is reading, whether the user has finished the book, what page the user is on, how long the user has spent on each page, and which passages the user may have highlighted. One obstacle to wide adoption of the e-book is that a large portion of people value the printed book as an object itself, including aspects such as the texture, smell, weight and appearance on the shelf. Print books are also considered valuable cultural items, and symbols of liberal education and the humanities. 
Kobo found that 60% of e-books that are purchased from their e-book store are never opened and found that the more expensive the book is, the more likely the reader would at least open the e-book. Joe Queenan has written about the pros and cons of e-books: Apart from all the emotional and habitual aspects, there are also some readability and usability issues that need to be addressed by publishers and software developers. Many e-book readers who complain about eyestrain, lack of overview and distractions could be helped if they could use a more suitable device or a more user-friendly reading application, but when they buy or borrow a DRM-protected e-book, they often have to read the book on the default device or application, even if it has insufficient functionality. While a paper book is vulnerable to various threats, including water damage, mold and theft, e-books files may be corrupted, deleted or otherwise lost as well as pirated. Where the ownership of a paper book is fairly straightforward (albeit subject to restrictions on renting or copying pages, depending on the book), the purchaser of an e-book's digital file has conditional access with the possible loss of access to the e-book due to digital rights management provisions, copyright issues, the provider's business failing or possibly if the user's credit card expired. Market share United States According to the Association of American Publishers 2018 annual report, ebooks accounted for 12.4% of the total trade revenue. Publishers of books in all formats made $22.6 billion in print form and $2.04 billion in e-books, according to the Association of American Publishers’ annual report 2019. Canada Spain In 2013, Carrenho estimates that e-books would have a 15% market share in Spain in 2015. UK According to Nielsen Book Research, e-book share went up from 20% to 33% between 2012 and 2014, but down to 29% in the first quarter of 2015. Amazon-published and self-published titles accounted for 17 million of those books (worth £58m) in 2014, representing 5% of the overall book market and 15% of the digital market. The volume and value sales, although similar to 2013, had seen a 70% increase since 2012. Germany The Wischenbart Report 2015 estimates the e-book market share to be 4.3%. Brazil The Brazilian e-book market is only emerging. Brazilians are technology savvy, and that attitude is shared by the government. In 2013, around 2.5% of all trade titles sold were in digital format. This was a 400% growth over 2012 when only 0.5% of trade titles were digital. In 2014, the growth was slower, and Brazil had 3.5% of its trade titles being sold as e-books. China The Wischenbart Report 2015 estimates the e-book market share to be around 1%. Public domain books Public domain books are those whose copyrights have expired, meaning they can be copied, edited, and sold freely without restrictions. Many of these books can be downloaded for free from websites like the Internet Archive, in formats that many e-readers support, such as PDF, TXT, and EPUB. Books in other formats may be converted to an e-reader-compatible format using e-book writing software, for example Calibre. See also Accessible publishing Book scanning Blook Cell phone novel Digital library Braille e-book Electronic publishing List of digital library projects Networked book Online book TeX and LaTeX Web fiction Braille translator Perkins Brailler Comparison of e-readers References External links James, Bradley (November 20, 2002). 
The Electronic Book: Looking Beyond the Physical Codex, SciNet Cory Doctorow (February 12, 2004). Ebooks: Neither E, Nor Books, O'Reilly Emerging Technologies Conference Lynch, Clifford (May 28, 2001). The Battle to Define the Future of the Book in the Digital World, First Monday – peer-reviewed journal. Dene Grigar & Stuart Moulthrop (2013–2016) "Pathfinders: Documenting the Experience of Early Digital Literature", Washington State University Vancouver, July 1, 2013. Book formats Electronic publishing Electronic paper technology Web fiction New media Fiction forms
13356100
https://en.wikipedia.org/wiki/Convention%20over%20configuration
Convention over configuration
Convention over configuration (also known as coding by convention) is a software design paradigm used by software frameworks that attempts to decrease the number of decisions a developer using the framework is required to make, without necessarily losing flexibility or violating the don't repeat yourself (DRY) principle. The concept was introduced by David Heinemeier Hansson to describe the philosophy of the Ruby on Rails web framework, but it is related to earlier ideas such as "sensible defaults" and the principle of least astonishment in user interface design. The phrase essentially means that a developer only needs to specify the unconventional aspects of the application. For example, if there is a class Sales in the model, the corresponding table in the database is called "sales" by default. Only if one deviates from this convention, such as by naming the table "product_sales", does one need to write code regarding these names. When the convention implemented by the tool matches the desired behavior, the application behaves as expected without any configuration files; explicit configuration is required only when the desired behavior deviates from the implemented convention. Ruby on Rails' use of the phrase is particularly focused on its default project file and directory structure, which spares developers from writing XML configuration files to specify which modules the framework should load, as was common in many earlier frameworks. Disadvantages of the convention over configuration approach can arise from conflicts with other software design principles, such as the Zen of Python's "explicit is better than implicit". A software framework based on convention over configuration often involves a domain-specific language with a limited set of constructs, or an inversion of control in which the developer can only affect behavior through a limited set of hooks; both can make behaviors that are not easily expressed by the provided conventions harder to implement than with a software library that neither tries to reduce the number of decisions developers make nor requires inversion of control. Other methods of decreasing the number of decisions a developer needs to make include programming idioms and configuration libraries with a multilayered architecture. Motivation Some frameworks need multiple configuration files, each with many settings. These provide information specific to each project, ranging from URLs to mappings between classes and database tables, and many configuration files with many parameters are difficult to maintain. For example, early versions of the Java persistence mapper Hibernate mapped entities and their fields to the database by describing these relationships in XML files. Most of this information could have been derived by conventionally mapping class names to identically named database tables and fields to their columns. Later versions did away with the XML configuration file and instead employed these very conventions, with deviations from them indicated through Java annotations (see the JavaBeans specification, linked below). Usage Many modern frameworks use a convention over configuration approach. The concept is older, however, dating back to the concept of a default, and can be seen in the roots of Java libraries; for example, the JavaBeans specification, quoted below, relies on it heavily.
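The annotation-based mapping described in the Motivation section above can be illustrated with a minimal sketch. The example below is illustrative rather than taken from any particular framework's documentation: it assumes a JPA-style persistence provider (annotations from the javax.persistence package; newer releases use jakarta.persistence instead), the entity, field and column names are hypothetical, and the exact defaults applied depend on the provider and its configured naming strategy.

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;

// By convention, an entity class named Sales maps to a table named "sales"
// (or "SALES", depending on the naming strategy), and each persistent field
// maps to a column with the same name. None of that needs to be configured.
@Entity
public class Sales {

    @Id
    private Long id;                        // column "id" by convention

    private String region;                  // column "region" by convention

    // Only the deviation from the convention is spelled out explicitly:
    // this field is stored in a column whose name differs from the field name.
    @Column(name = "total_amount")
    private java.math.BigDecimal total;

    protected Sales() {
        // JPA requires a no-argument constructor for entity instantiation
    }
}

Adding a new field or entity that follows the conventions therefore requires no additional mapping configuration; only renamed tables or columns have to be described explicitly, which is the direction Hibernate moved in when it replaced its XML mapping files with annotations.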
To quote the JavaBeans specification 1.01: "As a general rule we don't want to invent an enormous java.beans.everything class that people have to inherit from. Instead we'd like the JavaBeans runtimes to provide default behaviour for 'normal' objects, but to allow objects to override a given piece of default behaviour by inheriting from some specific java.beans.something interface." See also Comparison of web frameworks Convention over Code References Bachle, M., & Kirchberg, P. (2007). "Ruby on rails". IEEE Software, 24(6), 105–108. DOI 10.1109/BCI.2009.31. Miller, J. (2009). "Design For Convention Over Configuration". Microsoft, Retrieved 18 April 2010. Chen, Nicholas (2006). "Convention over configuration". External links Detailed information on CoC Object-oriented programming Software design
51714892
https://en.wikipedia.org/wiki/MLT%20%28hacktivist%29
MLT (hacktivist)
MLT, real name Matthew Telfer, (born 1994) is a cybersecurity researcher, former grey hat computer hacker and former member of TeaMp0isoN. MLT was arrested in May 2012 in relation to his activities within TeaMp0isoN, a computer-hacking group which claimed responsibility for many high-profile attacks, including website vandalism of the United Nations, Facebook, NATO, BlackBerry, T-Mobile USA and several other large sites in addition to high-profile denial-of-service attacks and leaks of confidential data. After his arrest, he reformed his actions and shifted his focus to activities as a white hat cybersecurity specialist. He was the founder of now-defunct Project Insecurity LTD. History Believed to be the former co-leader and spokesperson of TeaMp0isoN, MLT, along with Junaid Hussain and other hackers targeted many large websites and corporations over a two-year period, from 2010 up until 2012 when both individuals were arrested. The group first gained popularity after targeting infamous hacking collective LulzSec, releasing personal information on their members and purporting to have hacked their websites, they then went on to target sites such as NATO, and various government officials from the United Kingdom and United States of America . The arrests finally came as a result of the probe into the alleged hacking and wiretapping of the British Security Services Anti-Terrorism Hotline. MLT was the former hacking partner of Junaid Hussain, who later went on to join ISIS and was killed in a drone strike by the US Government after becoming the third highest target on their 'kill list' due to his role in inspiring international lone-wolf terrorism alongside his hacking activities for ISIS under the banner of Islamic State Hacking Division. It was reported by Vice that Junaid Hussain remained in contact with MLT while in Syria, and that he used to occasionally ask for advice relating to hacking or would sometimes even openly boast about his activities within ISIS to MLT. Arrest On 9 May 2012, MLT was arrested in Newcastle upon Tyne by the Metropolitan Police who released a statement saying: "The suspect, who is believed to use the online 'nic' 'MLT', is allegedly a member of and spokesperson for TeaMp0isoN ('TeamPoison')--a group which has claimed responsibility for more than 1,400 offences including denial of service and network intrusions where personal and private information has been illegally extracted from victims in the U.K. and around the world". It was reported that MLT could have faced up to 10 years in prison for the events leading to his arrest. Recent activity In May 2015, someone purporting to be MLT featured on CNN, speaking to them about Junaid Hussain and claiming that he witnessed him appear on video chat once as a 'black power ranger' while wielding an AK-47. In August 2015, MLT featured on Episode 5 of the TV show Viceland Cyberwar where he spoke about subjects ranging from the security of autonomous cars to the death of his former hacking partner. In 2016, Matthew identified and reported vulnerabilities to eBay and the U.S. Department of Defense. He has stated that he avoids illegal activities and instead dedicates his time to participating in bug bounty programs. In 2022, MLT appeared as a guest on Darknet Diaries. In this episode the history of TeaMp0isoN and some of the high profile hacks that MLT undertook are discussed, as well as the relationship between MLT and Junaid Hussain. Currently MLT works as a Bug bounty finder as well as a Zero-day exploit developer. 
Career Matthew was the founder and chief executive officer of now-defunct Project Insecurity LTD, an exploit research group and educational platform. References Living people Hackers Anonymous (hacker group) activists 1994 births
3292102
https://en.wikipedia.org/wiki/Certified%20Software%20Development%20Professional
Certified Software Development Professional
Certified Software Development Professional (CSDP) is a vendor-neutral professional certification in software engineering developed by the IEEE Computer Society for experienced software engineering professionals. The certification was offered globally from 2001 through December 2014. The certification program was one element of the Computer Society's major efforts in the area of software engineering professionalism, along with the IEEE-CS and ACM Software Engineering 2004 (SE2004) Undergraduate Curricula Recommendations and The Guide to the Software Engineering Body of Knowledge (SWEBOK Guide 2004), completed two years later. As a further development of these elements, and to facilitate the global portability of software engineering certification, the International Standard ISO/IEC 24773:2008 "Software engineering -- Certification of software engineering professionals -- Comparison framework" was developed between 2005 and 2008. (An overview of this ISO/IEC JTC1 and IEEE standardization effort is given in the article published by Stephen B. Seidman, CSDP.) The standard was formulated in such a way that the CSDP certification scheme could be recognized as basically aligned with it soon after the standard's release date of 2008-09-01, and several later revisions of the CSDP certification were undertaken with the aim of making the alignment more complete. In 2019, ISO/IEC 24773:2008 was withdrawn and revised (by ISO/IEC 24773-1:2019). The certification was initially offered by the IEEE Computer Society to experienced software engineering and software development practitioners globally in 2001, in the course of beta-testing the certification examination, and the CSDP certification program was officially approved in 2002. The program was discontinued after December 2014; all certificates issued remain valid indefinitely. A number of similar new certifications were introduced by the IEEE Computer Society, including the Professional Software Engineering Master (PSEM) and Professional Software Engineering Process Master (PSEPM) certifications (the latter soon discontinued). To become a Certified Software Development Professional, candidates had to have four years (initially six years) of professional software engineering experience, pass a three-and-a-half-hour, 180-question examination on various knowledge areas of software engineering, and possess at least a bachelor's degree in Computer Science or Software Engineering. The CSDP examination tested candidates' proficiency in internationally accepted, industry-standard software engineering principles and practices. CSDP credential holders were also obligated to adhere to the IEEE/ACM Software Engineering Code of Ethics and Professional Practice. As of 2021, the IEEE-CS offering that succeeds the CSDP is the Professional Software Engineering Master (PSEM) certification. Its exam is three hours long, is proctored remotely, and consists of 160 questions over the 11 SWEBOK knowledge areas: Software Requirements, Software Design, Software Construction, Software Testing, Software Maintenance, Software Configuration Management, Software Engineering Management, Software Engineering Process, Software Engineering Models and Methods, Software Quality, and Software Engineering Economics. (There is also the Professional Software Developer (PSD) certification, which covers only four knowledge areas: software requirements, software design, software construction, and software testing. The similarity of this certification's name to the CSDP can cause confusion; it is a reputable credential but not an equivalent of the CSDP.) History The IEEE Computer Society introduced the CSDP in 2002, and on October 27, 2008, it became the first certification to conform to the ISO/IEC 24773 standard for software engineering certification. Determination of eligibility Candidates had to undergo a peer review of their education and professional qualifications in order to receive authorization to take the CSDP examination. Candidates therefore had to submit an application to the IEEE Computer Society that provided verifiable information regarding their educational background and professional experience. The Certified Software Development Associate (CSDA) certification was available to graduating students and early-career software professionals who did not meet the eligibility requirements for the CSDP. CSDP examination content The CSDP examination content was based on the Guide to the Software Engineering Body of Knowledge. The examination covered content from all primary knowledge areas in the SWEBOK Guide Version 3. Below is a list of the topics tested in terms of their proportion of the total examination: Software requirements 11%, Software design 11%, Software construction 9%, Software testing 11%, Software maintenance 5%, Software configuration management 5%, Software engineering management 8%, Software engineering process 5%, Software engineering methods 4%, Software quality 7%, Software engineering professional practice 5%, Software engineering economics 5%, Computing foundations 5%, Mathematical foundations 3%, Engineering foundations 4%. External links IEEE Computer Society Certification home: Software Professional Certification Program References Software engineering Information technology qualifications Institute of Electrical and Electronics Engineers Professional titles and certifications
199351
https://en.wikipedia.org/wiki/Yahoo%21%20Messenger
Yahoo! Messenger
Yahoo! Messenger (sometimes abbreviated Y!M) was an advertisement-supported instant messaging client and associated protocol provided by Yahoo!. Yahoo! Messenger was provided free of charge and could be downloaded and used with a generic "Yahoo ID" which also allowed access to other Yahoo! services, such as Yahoo! Mail. The service also offered VoIP, file transfers, webcam hosting, a text messaging service, and chat rooms in various categories. Yahoo! Messenger dates back to Yahoo! Chat, which was a public chat room service. The actual client, originally called Yahoo! Pager, launched on March 9, 1998, and was renamed Yahoo! Messenger in 1999. The chat room service shut down in 2012. In addition to instant messaging features similar to those offered by ICQ, it also offered (on Microsoft Windows) features such as IMVironments (customizing the look of Instant Message windows, some of which included authorized themes of cartoons such as Garfield or Dilbert), address-book integration and Custom Status Messages. It was also the first major IM client to feature BUZZing and music status. A new Yahoo! Messenger was released in 2015, replacing the older client. Yahoo! Messenger was shut down entirely on July 17, 2018, and replaced by a new service titled Yahoo! Together, which was itself shut down in 2019. Features File sharing Yahoo! Messenger offered file sending capabilities to its users. Files could be up to 2 GB each. After the software's relaunch, only certain media files could be shared: photos, animated GIFs and videos. It also allowed album sharing, with multiple media files in one IM. The animated GIF feature integrated with Tumblr, owned by Yahoo!. Likes The new Yahoo! Messenger added a like button to messages and media. It was basic in functionality, adding a heart when clicked and listing contacts who added a like. Unsend The new Yahoo! Messenger allowed messages to be unsent, deleting them from both the sender's and the receiver's messaging page. Group conversations (formerly Yahoo! Chat) The new Yahoo! Messenger allowed private group conversations. Yahoo! Chat was a free online chat room service provided exclusively for Yahoo! users. Yahoo! Chat was first launched on January 7, 1997, as a separate vertical on Yahoo!. In its original form, Yahoo! Chat was a user-to-user text chat service used by millions worldwide. Soon after launch, Yahoo! Chat partnered with NBC and NewsCorp to produce moderated chat events, and it eventually developed broadcast partnerships with more than 100 entities and hosted more than 350 events a month. Yahoo's live chat with the music group Hanson on July 21, 1998, was the Internet's largest live event to date. Further high-profile events followed, including chats with three Beatles (Paul, George and Ringo), live coverage from Columbine during the tragedy (in partnership with Time Online), and live chats from outer space with John Glenn, among others. Yahoo! Chat events were discontinued in 2001, at the start of the social media era. On March 9, 1998, the first public version of Yahoo! Pager was released, with Yahoo! Chat among its features. It allowed users to create public chat rooms, send private messages, and use emoticons. In June 2005, with no advance warning, Yahoo disabled users' ability to create their own chat rooms. The move came after KPRC-TV in Houston, Texas reported that many of the user-created rooms were geared toward pedophilia.
The story prompted several advertisers, including Pepsi and Georgia-Pacific, to pull their ads from Yahoo. On November 30, 2012, Yahoo announced that among other changes that the public chat rooms would be discontinued as of December 14, 2012. quoting "This will enable us to refocus our efforts on modernizing our core Yahoo products experiences and of course, create new ones." Until the chat rooms became unavailable on December 14, 2012, all versions of Yahoo! Messenger could access Yahoo chat rooms. Yahoo has since closed down the chat.yahoo.com site (first having it redirect visitors to a section of the Yahoo! Messenger page, but as of June 2019 not even resolving that host name anymore) because the great majority of chat users accessed it through Messenger. The company worked for a while on a way to allow users to create their own rooms while providing safeguards against abuse. A greyed-out option to "create a room" was available until the release of version 11. Voice and video As of January 2014, the iOS version supported voice calls, with video calling on some devices. The Android version supported "voice & video calls (beta)". From September 2016, Yahoo! Messenger no longer offered webcam service on their computer application. Yahoo's software previously allowed users with newer versions (8 through 10) to use webcams. This option enabled users from distances all over the world to view others who had installed a webcam on their end. The service was free with provided speeds averaging from a range in between 1 and 2 frames per second. The resolution of the images could be seen starting at 320×240 pixels or 160×120. Protocol The Yahoo! Messenger Protocol (YMSG) was the client's underlying network protocol. It provided a language and series of conventions for software communicating with Yahoo!'s Instant Messaging service. In essence, YMSG performed the same role for Yahoo!'s IM as HTTP does for the World Wide Web. Unlike HTTP, however, YMSG was a proprietary protocol, a closed standard aligned only with the Yahoo! messaging service. Rival messaging services have their own protocols, some based on open standards, others proprietary, each effectively fulfilling the same role with different mechanics. One of the fundamental tenets of instant messaging is the notion that users can see when someone is connected to the network—known in the industry as 'presence'. The YMSG protocol used the mechanics of a standard internet connection to achieve presence—the same connection it used to send and receive data. In order for each user to remain 'visible' to other users on the service, and thereby signaling their availability, their Yahoo! IM client software maintained a functional, open, network connection linking the client to Yahoo!'s IM servers. URI scheme Yahoo! Messenger's installation process automatically installed an extra uniform resource identifier (URI) scheme handler for the Yahoo! Messenger Protocol into some web browsers, so that URIs beginning ymsgr could open a new Yahoo! Messenger window with specified parameters. This is similar in function to the mailto URI scheme, which creates a new e-mail message using the system's default mail program. 
For instance, a web page might include a link like the following in its HTML source to open a window for sending a message to the YIM user exampleuser: <a href="ymsgr:sendim?exampleuser">Send Message</a> To specify a message body, the m parameter was used, so that the link location might look like this: ymsgr:sendim?exampleuser&m=This+is+my+message Other commands were: ymsgr:sendim?yahooid ymsgr:addfriend?yahooid ymsgr:sendfile?yahooid ymsgr:call?yahooid ymsgr:callPhone?phonenumber ymsgr:im – opened the "Send an IM" window ymsgr:customstatus?A+custom+status – changed the status message ymsgr:getimv?imvname – loaded an IMVironment (example: ymsgr:getimv?doodle, ymsgr:getimv?yfighter) Interoperability On October 13, 2005, Yahoo and Microsoft announced plans to introduce interoperability between their two messengers, creating the second-largest real-time communications service userbase worldwide: 40 percent of all users. The announcement came after years of third-party interoperability success (most notably, Trillian and Pidgin) and criticisms that the major real-time communications services were locking their networks. Microsoft has also had talks with AOL in an attempt to introduce further interoperability, but AOL was unwilling to participate. Interoperability between Yahoo and Windows Live Messenger was launched July 12, 2006. This allowed Yahoo and Windows Live Messenger users to chat to each other without the need to create an account on the other service, provided both contacts used the latest versions of the clients. It was not possible to talk using the voice service between the two different messengers. As of December 14, 2012, the interoperability between Yahoo! Messenger and Windows Live Messenger ceased to exist. The Live Messenger contacts appeared as greyed out and it was not possible to send instant messages to them. Games There were various games and applications available that can be accessed via the conversation window by clicking the games icon and challenging your current contact. It requires Java to function. As of April 18, 2014, games were removed from Yahoo! Messenger. Plug-ins In version 8.0, Yahoo! Messenger featured the ability for users to create plug-ins, which are then hosted and showcased on the Yahoo chat room. Yahoo now no longer provides plugin development SDK. Yahoo! Messenger users could listen to free and paid Internet radio services, using the defunct Yahoo! Music Radio plug-in from within the messenger window. The plug-in also player functionality, such as play, pause, skip and rate this song. Adoption As of August 2000, according to Media Metrix, Yahoo! Messenger had about 10.6 million users in the U.S., about the same as MSN Messenger but trailing AOL Instant Messenger. However another analyst doubted the figures for Yahoo! and MSN. As of the month of September 2001, over five billion instant messages were sent on the network, up 115% from the year before, according to Media Metrix. Another study in August 2002 showed that it had a 16.7 percent share of IM work and home subscribers in the U.S., compared to 24.1 percent for MSN and 28.3 percent for AIM. In April 2002, 19.1 million people in the U.S. used Yahoo! Messenger, according to Media Metrix. Another study from Nielsen Net Ratings showed that as of 2002, Yahoo! Messenger had some 12 million users worldwide. This increased to 22 million by March 2006. Yahoo! Messenger was the dominant instant messaging platform among commodity traders until the platform was discontinued in August 2016. 
At the time of Yahoo! Messenger's closure in 2018, it remained popular in Vietnam. Software As of March 27, 2016, the only supported clients were the Android, iOS and web browser clients. The previous Windows, Mac, Unix and Solaris clients were not supported anymore, and their servers began shutting down on August 5, 2016, with the clients no longer working by August 31, 2016. It turned out that the servers for the legacy clients were finally shut down sometime between the mid-morning and early afternoon hours Eastern Standard Time on September 1, 2016, resulting in the legacy desktop clients no longer being able to access their buddy/contact lists. As of 2018 (with the last version), Yahoo! Messenger was available for computers as a web service, including both a messenger-only site and Yahoo! Mail integration. Apps were also available on Android and iOS. Pidgin could connect to Yahoo! Messenger by using the FunYahoo++ plugin. Mobile versions of Yahoo! Messenger were launched originally for Palm OS and Windows CE devices. In a deal signed March 2000, Yahoo! Messenger would come bundled on Palm handheld computers. It was also available for Verizon Wireless customers, through a deal with Yahoo! announced in March 2001, and through Sprint's MiniBrowser. A version for the T-Mobile Sidekick II was released in 2004. This was to be followed by versions for Symbian (via Yahoo! Go), BlackBerry, and then for iPhone in April 2009. A version called Yahoo! Messenger for SMS also existed, which allowed IM via SMS. History Yahoo! Pager launched on March 9, 1998, an instant messaging (IM) client integrated with Yahoo! services including Yahoo! Chat. It included basic messaging support, a buddy list with status message support, the ability to block other users, alerts when a buddies came online, and notifications when a new Yahoo! Mail message arrived. In 2000, the name changed to Yahoo! Messenger. Version 5.0, released November 2001, introduced IMVironments, an initiative that allowed users to play music and Flash Video clips inside the IM window. Yahoo! partnered with rock band Garbage that provided their single "Androgyny" available to share by users. Other partnerships also made IMVironments for the Monsters, Inc. movie, the Super Smash Bros Melee video game, and the Hello Kitty character, among others. In August 2002 with the release of version 5.5, the resolution for video calling was increased to a possible 320x240 and 20 frames per second (up from 160x120 and 1 frame per second). From October 2002, Yahoo! offered for corporate subscribers a more secure and better (SSL) encrypted IM client, called Yahoo! Messenger Enterprise Edition. It was released with a $30 yearly subscription package in 2003. Yahoo! Messenger version 6.0 was released in May 2004. It added games, music, photos, and Yahoo! Search, alongside a "stealth" mode. It also debuted Yahoo! Avatars. With the release of version 7.0 in August 2005, the client was now renamed to Yahoo! Messenger with Voice. It had several new features such as VoIP, voicemail, drag-and-drop file and photo sharing, Yahoo! 360° and LAUNCHcast integration, and others. It was seen as a challenger against Skype. On October 12, 2005, Yahoo! and Microsoft formed an alliance in which Yahoo! Messenger and MSN Messenger (later known as Windows Live Messenger) will be interconnected, allowing users of both communities to communicate and share emoticons and buddy lists with each other. The service was enabled on Yahoo! Messenger with Voice 8.0 in July 2006. 
As of version 8.1, the name switched back to just Yahoo! Messenger. Beginning in 2006, Yahoo made known its intention to provide a web interface for Yahoo! Messenger, culminating in the Gmail-like web archival and indexing of chat conversations through Yahoo! Mail. However, while Yahoo! Mail integrated much of the rudimentary features of Messenger beginning in 2007, Yahoo did not succeed initially in integrating archival of chat conversations into Mail. Instead, a separate Adobe Flex-based web messenger was released in 2007 with archival of conversations which take place inside the web messenger itself. At the Consumer Electronics Show in January 2007, Yahoo! Messenger for Vista was introduced, which is a version designed and optimized for Windows Vista. It exploited the new design elements of Vista's Windows Presentation Foundation (WPF) and introduced a new user interface and features. The application was in a preview beta until finally released for download on December 6, 2007. As of October 24, 2008, Yahoo! Messenger for Vista is no longer available. In May 2007, Yahoo! Messenger for the Web was launched, a browser-based client of the IM service. Yahoo! Messenger version 9 was released in September 2008. It allows the viewing of YouTube videos within the chat window, and integrates with other Yahoo! services such as Flickr. This version also saw the release of Pingbox, which embeds on a blog or website and allows visitors to send IM texts anonymously without needing Yahoo! Messenger software or to sign in. Version 10, released November 2009, incorporates many bug fixes and features high-quality video calling. The last major Windows client release, version 11 in 2011, featured integration with Facebook, Twitter and Zynga, allowing chat with Facebook friends and playing Zynga games within. It also archives past messages on an online server which is accessible through the client. Version 11.5 (released November 2011) added tabbed IMs. In December 2015, an all-new, rewritten Yahoo! Messenger was launched, only on mobile and through a browser. A desktop version of the "new" Messenger was later released, shortly before the "legacy" Messenger shut down on August 5, 2016. Yahoo! Together Yahoo! Together was a freeware and cross-platform messaging service, developed by Yahoo! for the Android and iOS mobile platforms. The software was introduced in beta on May 8, 2018, as Yahoo! Squirrel to replace Yahoo! Messenger and Verizon Media's AOL Instant Messenger. In October 2018, it was renamed to its present name. Yahoo! Together was targeted to families and the consumer market rather than enterprise. The app was compared to Slack. Less than a year after its public beta release, Yahoo! Together went offline on April 4, 2019. Third-party clients Third-party clients could also be used to access the original service. These included: Adium BitlBee Centericq Empathy Fire imeem IMVU Kopete meebo Meetro Miranda IM Paltalk Pidgin Trillian Trillian Astra Trillian Pro Windows Live Messenger SPIM Yahoo! Messenger users were subjected to unsolicited messages (SPIM). Yahoo's primary solution to the issue involved deleting such messages and placing the senders on an Ignore List. , it was estimated that at least 75% of all users who used Yahoo chat rooms were bots. Yahoo introduced a CAPTCHA system to help filter out bots from joining chat rooms, but such systems generally do little to prevent abuse by spammers. Security On November 4, 2014, the Electronic Frontier Foundation listed Yahoo! 
Messenger on its "Secure Messaging Scorecard". Yahoo! Messenger received 1 out of 7 points on the scorecard. It received a point for encryption during transit, but missed points because communications were not encrypted with a key the provider didn't have access to (i.e. the communications were not end-to-end encrypted), users couldn't verify contacts' identities, past messages were not secure if the encryption keys were stolen (i.e. the service did not provide forward secrecy), the code was not open to independent review (i.e. the source code was not open-source), the security design was not properly documented, and there had not been a recent independent security audit. The British intelligence agency Government Communications Headquarters (GCHQ)'s secret mass surveillance program Optic Nerve and National Security Agency (NSA) were reported to be indiscriminately collecting still images from Yahoo webcam streams from millions of mostly innocent Yahoo webcam users from 2008 to 2010, among other things creating a database for facial recognition for future use. Optic Nerve took a still image from the webcam stream every 5 minutes. In September 2016, The New York Times reported that Yahoo's security team, led by Alex Stamos, had pressed for Yahoo to adopt end-to-end encryption sometime between 2014 and 2015, but this had been resisted by Jeff Bonforte, Yahoo's senior vice president, "because it would have hurt Yahoo's ability to index and search message data". See also Comparison of instant messaging clients Comparison of instant messaging protocols Comparison of IRC clients Instant messaging Yahoo Together References External links Discontinued software BlackBerry software IOS software Classic Mac OS instant messaging clients MacOS instant messaging clients Symbian instant messaging clients Windows instant messaging clients Freeware Messenger Yahoo! instant messaging clients VoIP software Videotelephony 1998 software Android (operating system) software Yahoo! community websites Internet properties established in 1998 Internet properties disestablished in 2018 Messenger
181203
https://en.wikipedia.org/wiki/Paranoia%20%28role-playing%20game%29
Paranoia (role-playing game)
Paranoia is a dystopian science-fiction tabletop role-playing game originally designed and written by Greg Costikyan, Dan Gelber, and Eric Goldberg, and first published in 1984 by West End Games. Since 2004 the game has been published under license by Mongoose Publishing. The game won the Origins Award for Best Roleplaying Rules of 1984 and was inducted into the Origins Awards Hall of Fame in 2007. Paranoia is notable among tabletop games for being more competitive than co-operative, with players encouraged to betray one another for their own interests, as well as for keeping a light-hearted, tongue-in-cheek tone despite its dystopian setting. Several editions of the game have been published since the original version, and the franchise has spawned several spin-offs, novels and comic books based on the game. A Kickstarter crowdfunding campaign for a new edition was successfully funded. Delivery to backers began in March 2017. Premise The game is set in a dystopian future city controlled by the Computer (also known as "Friend Computer"), in which information (including the game rules) is restricted by color-coded "security clearance". Player characters are initially enforcers of the Computer's authority (known as Troubleshooters, mainly for the fact that they shoot trouble), and will be given missions to seek out and eliminate threats to the Computer's control. The player characters are also part of prohibited underground movements (which means that the players' characters are usually included among the aforementioned 'security threats'), and will have secret objectives including theft from and murder of other player characters. Tone Paranoia is a humorous role-playing game set in a dystopian future along the lines of Nineteen Eighty-Four, Brave New World, Logan's Run, and THX 1138; however, the tone of the game is rife with black humor, frequently tongue-in-cheek rather than dark and heavy. Most of the game's humor is derived from the players' (usually futile) attempts to complete their assignment while simultaneously adhering to the Computer's arbitrary, contradictory and often nonsensical security directives. The Paranoia rulebook is unusual in a number of ways; demonstrating any knowledge of the rules is forbidden, and most of the rulebook is written in an easy, conversational tone that often makes fun of the players and their characters, while occasionally taking digs at other notable role-playing games. Setting The game's main setting is an immense, futuristic city called Alpha Complex. Alpha Complex is controlled by the Computer, a civil service AI construct (a literal realization of the "Influencing Machine" that some schizophrenics fear). The Computer serves as the game's principal antagonist, and fears a number of threats to its 'perfect' society, such as the Outdoors, mutants, and secret societies (especially Communists). To deal with these threats, the Computer employs Troubleshooters, whose job is to go out, find trouble, and shoot it. Player characters are usually Troubleshooters, although later game supplements have allowed the players to take on other roles, such as High-Programmers of Alpha Complex. The player characters frequently receive mission instructions from the Computer that are incomprehensible, self-contradictory, or obviously fatal if adhered to, and side-missions (such as Mandatory Bonus Duties) that conflict with the main mission.
Failing a mission generally results in termination of the player character, but succeeding can just as often result in the same fate, after the character has been rewarded for successfully concluding the mission. Player characters are issued equipment that is uniformly dangerous, faulty, or "experimental" (i.e., almost certainly dangerous and faulty). Additionally, each player character is generally an unregistered mutant and a secret society member (which are both termination offenses in Alpha Complex), and has a hidden agenda separate from the group's goals, often involving stealing from or killing teammates. Thus, missions often turn into a comedy of errors, as everyone on the team seeks to double-cross everyone else while keeping their own secrets. The game's manual encourages suspicion between players, offering several tips on how to make the gameplay as paranoid as possible. Every player's character is assigned six clones, known as a six-pack, which are used to replace the preceding clone upon his or her death. The game lacks a conventional health system; most wounds the player characters can suffer are assumed to be fatal. As a result, Paranoia allows characters to be routinely killed, yet the player can continue instead of leaving the game. This easy spending of clones tends to lead to frequent firefights, gruesome slapstick, and the horrible yet humorous demise of most if not all of the player character's clone family. Additional clones can be purchased if one gains sufficient favour with the Computer. Security clearances Paranoia features a security clearance system based on colors of the visible spectrum which heavily restricts what the players can and cannot legally do; everything from corridors to food and equipment has security restrictions. The lowest rating is Infrared, but the lowest playable security clearance is Red; the game usually begins with the characters having just been promoted to Red grade. Interfering with anything which is above that player's clearance carries significant risk. The full order of clearances from lowest to highest is Infrared (visually represented by black), Red, Orange, Yellow, Green, Blue, Indigo, Violet, and Ultraviolet (visually represented by white). Within the game, Infrared-clearance citizens live dull lives of mindless drudgery and are heavily medicated, while higher-clearance characters may be allowed to demote or even summarily execute those of a lower rank. Those with Ultraviolet clearance are almost completely unrestricted and have a great deal of access to the Computer; they are the only citizens that may (legally) access and modify the Computer's programming, and thus Ultraviolet citizens are also referred to as "High Programmers". Security clearance is not related to competence but is instead the result of the Computer's often insane and unjustified calculus of trust concerning a citizen. It is suggested that it may in fact be the High Programmers' meddling with the Computer's programming that resulted in its insanity. Secret Societies In the game, secret societies tend to be based on sketchy and spurious knowledge of historical matters. For example, previous editions included societies such as the "Seal Club" that idolizes the Outdoors but is unsure what plants and animals actually look like. Other societies include the Knights of the Circular Object (based on the Knights of the Round Table), the Trekkies, and the First Church of Christ Computer Programmer.
In keeping with the theme of paranoia, many secret societies have spies or double agents in each other's organizations. The 1st edition also included secret societies such as Programs Groups (the personal agents and spies of the High Programmers at the apex of Alpha Complex society) and Spy For Another Alpha Complex. The actual societies which would be encountered in a game depend on the play style; some societies are more suited for more light-hearted games (Zap-style, or the lighter end of Classic), whereas others represent a more serious threat to Alpha Complex and are therefore more suitable for Straight or the darker sort of Classic games. Publication history Six editions have been published. Three of these were published by West End Games - the 1st, 2nd, and "Fifth" Editions - whereas the later three editions (Paranoia XP, the 25th Anniversary edition and the "Red Clearance" edition) were published by Mongoose Publishing. In addition to these six published editions, it is known that West End Games were working on a "Third Edition" - to replace the poorly received Fifth Edition - in the late 1990s, but their financial issues would prevent this edition from being published, except for being included in one tournament adventure. 1st edition 1st edition - written by Greg Costikyan, Dan Gelber, and Eric Goldberg - published in 1984 by West End Games. In 1985, this edition of Paranoia won the Origins Award for Best Roleplaying Rules of 1984. This edition, while encouraging dark humour in-game, took a fairly serious dystopian tone; the supplements and adventures released to accompany it emphasised the lighter side, however, establishing the freewheeling mix of slapstick, intra-team backstabbing and satire that is classically associated with a game of Paranoia. 2nd edition 2nd edition - written by Greg Costikyan, Dan Gelber, Eric Goldberg, Ken Rolston, and Paul Murphy - published in 1987 by West End Games. This edition can be seen as a response to the natural development of the line towards a rules-light, fast and entertaining play style. Here, the humorous possibilities of life in a paranoid dystopia are emphasised, and the rules are simplified. Metaplot and the Second Edition Many of the supplements released for the Second edition fall into a story arc set up by new writers and line editors that was intended to freshen up the game and broaden roleplay possibilities. Players could travel in space and time, play in a Computerless Alpha Complex, or an Alpha Complex in which the Computer battled for control with other factions. Some fans criticized the change to the default narrative. Second edition supplements can generally be divided into four eras: Classic: No metaplot. Secret Society Wars: Introduced in The DOA Sector Travelogue, and supported by a series of Secret Society Wars modules. Individual missions can be run in the Classic format, but running themes and conspiracies persist from book to book. The Crash: Detailed in The Crash Course Manual, and supported by the Vulture Warriors of Dimension X series of time-travelling modules. Adventures occur in a fractured Complex in which there is no Computer, possibly as a result of the Secret Society Wars, possibly not. Reboot: Detailed in The Paranoia Sourcebook, and supported by a few modules and supplements. The Computer returns, but does not control all of Alpha Complex - plays as a hybrid of the other eras, with players free to choose sides.
"Fifth" Edition "Fifth Edition" () - published in 1995 by West End Games - was in fact the third edition of the game released. (The game skipped two editions as a joke, and possibly also as a reference to the two major revisions to the game released during the lifetime of the Second Edition with the Crash Course Manual and the Paranoia Sourcebook.) It has since been declared an "un-product" (cf. "unperson") by the writers of the current edition, due to its extremely poor commercial and critical reception. Almost none of the original production staff were involved, and the books in this line focused less on the dark humor and oppressive nature of Alpha, and more on cheap pop culture spoofs, such as a Vampire: The Masquerade parody. It had a lighter and sillier atmosphere and fans and more cartoonish illustrations. In his introduction to Flashbacks, a compilation of Paranoia adventures from the West End Games era, Allen Varney details the management decisions which led, in the eyes of many, to the decline of the Paranoia line, and cites rumours that the line saw a 90% decline in sales before West End Games went into bankruptcy: Art director Larry Catalano left West End in 1986. Catalano’s successor fired (illustrator) Jim Holloway and brought in a succession of increasingly poor cartoonists. (Writer/editor) Ken Rolston left shortly thereafter for unrelated reasons. In Ken’s wake, developers Doug Kaufman and Paul Murphy in turn briefly supervised the PARANOIA line. After they too departed, editorial control fell to—how do I put this tactfully?—people with different views of the PARANOIA line. Unreleased West End Games Third Edition Following the unfavorable reception of the Fifth Edition, West End Games began planning a new edition of the game, which would be released as the "Third Edition". Pages from this planned edition were exhibited at Gen Con in 1997 - two years after the release of the Fifth Edition. Due to West End Games' financial problems this edition was never completed. In an interview in 1999 Scott Palter of West End expressed hopes that the Third Edition would be published that summer; however, he also disclosed that court proceedings had been begun by the original designers in order to reclaim the rights to the game. The designers would ultimately succeed in purchasing the rights to the game, putting an end to any possibility that the final West End Games edition would be released. A single adventure has surfaced which contained a summary of the third edition rules. Paranoia XP Paranoia XP () Following the bankruptcy of West End Games, the original designers of Paranoia banded together and purchased the rights to the game from West End in order to regain control of the line. The designers in turn granted a license to Mongoose Publishing to produce a new version of the game, with the result that Paranoia XP, written by Allen Varney, Aaron Allston, Paul Baldowski, Beth Fischi, Dan Curtis Johnson and Greg Costikyan, was published in 2004. In 2005, Microsoft requested that the XP be removed. As such, the name was shortened to just Paranoia. This edition of the game has received a much warmer critical reception, as well as selling well. This edition also introduced three different styles of play, with some game mechanics differing between the various modes to support the specific tone being sought-after: Zap is anarchic slapstick with no claims to making sense and little effort at satire. 
Zap represents Paranoia as popularly understood: troubleshooters who open fire on each other with little to no provocation. It is often associated with the "Fifth Edition". Best for a one-shot game of Paranoia. Classic is the atmosphere associated with the 2nd edition. Conflict within troubleshooter teams is less common and less lethal. Good for a one-shot game of Paranoia, but still suitable for an ongoing campaign. Straight represents a relatively new style. This is more serious and focuses more on dark, complex satire. Players are punished for executing other characters without first filing evidence of the other character's treason; this encourages slower, more careful gameplay and discourages random firefights and horseplay. Poor for one-shots, good for an ongoing campaign. Primary designer Allen Varney, in the designer's notes, explained that his aim with the new edition was to return to the game's roots whilst updating both the game system and the satirical setting to take account of twenty years of game design progress. In both the core rulebook and the Flashbacks supplement - a reprint of classic adventures originally published by West End Games - Varney was highly critical of West End Games' handling of the product line in its latter days. In a posting on RPG.net, he explained that the point of including the three playstyles in Paranoia XP was to counteract the impression that "Zap"-styled play was the default for Paranoia, an impression which had in part been created by the more cartoonish later supplements in the West End Games line (as well as "Fifth Edition"). In order to distance the new edition from the less commercially and critically successful aspects of the West End Games line, and to discourage new players from wasting time and money on what he considered to be inferior products, Varney additionally used the designer's notes to declare many West End products, including the "Fifth Edition" and everything published for the 2nd Edition after The People's Glorious Revolutionary Adventure, to be "unproducts" - no longer part of the game's continuity, and not recommended for use with the new edition. An upshot of this is that much of the poorly received metaplot established late in the West End Games line, from the Secret Society Wars to the Reboot and beyond, was disposed of. Varney has explained that this is due mainly to his distaste for the direction the metaplot took the game line in, a distaste he asserts is shared by the game's fan community. He has also stated that he personally has little affection for the "Zap" style, and therefore may have given it short shrift in the main rulebook, although later supplements for Paranoia XP did provide more support for Zap play. Long-time Paranoia artist Jim Holloway, called "the master of the fun-filled illustration", drew the cover art and much of the internal art for the game until 1986. His art for the series generally portrays comedic scenarios that capture the essential "deathtrap" feeling of Alpha Complex. Paranoia XP marked his return to the line as well; he has designed every cover of the XP edition, and many books contain both his classic and new Paranoia art. While Paranoia XP kept Communists as the big bad scapegoat in spite of the Cold War being long over, the updated edition integrates several 21st century themes into its satire. Troubleshooters carry PDCs (Personal Digital Companions) that are reminiscent of PDAs and smartphones and can try to acquire gear by bidding on CBay (an obvious pun on eBay).
New threats to Alpha Complex include file sharing, phishing scams, identity theft and weapons of mass destruction. Consumerism in Alpha Complex has been tooled into its economy and has taken on an element of patriotism, echoing sentiments expressed after 9/11. A mission pack released in 2009 titled War On (Insert Noun) lampoons government initiatives like the War on Drugs and the War on Terror. In writing the new edition, Varney, Goldberg and Costikyan reached out to and actively collaborated with the Paranoia online fan community through an official blog and through Paranoia-Live.net. In addition, Varney ran an online game, the Toothpaste Disaster, where players took the role of High Programmers documenting the titular disaster in a Lexicon format. Many ideas established in the Lexicon game were written into the rulebook. Later, some of the best players and writers from the game and a few other places were formally integrated as the Traitor Recycling Studio to write official Paranoia material; their first credited work was the mission supplement Crash Priority. In 2006, Varney's fellow Paranoia writer, Mongoose Publishing employee Gareth Hanrahan, took over as primary writer for the Paranoia line. During the lifetime of the XP line Mongoose released numerous supplements and adventures for the game. Notable amongst the supplements was Extreme Paranoia, which provided ideas for scenarios based around characters of security clearances Orange to Violet, with premises differing greatly from the standard Red-clearance Troubleshooter concept but remaining thematically appropriate to the game's setting and atmosphere. (This included an updated reprint of the 1st Edition supplement HIL Sector Blues, which focused on playing Blue-clearance IntSec agents.) The idea of devising new and varied concepts to base Paranoia adventures and campaigns around would be revisited for the next edition of the game. 25th Anniversary Editions In June 2009, Mongoose Publishing announced that they would be retiring the books in the XP line to clear the way for the 25th Anniversary Edition line - revealing a new edition of the rulebook as well as two new rulebooks, one casting the players as higher-clearance Internal Security investigators and one as Ultraviolet High Programmers. They stated that the XP material would "maintain a 90% compatibility rating with the new Paranoia books". Each of the three books is an entirely self-contained and playable game: Paranoia: Troubleshooters, Paranoia: Internal Security, and Paranoia: High Programmers. The Troubleshooters volume presents a slimmed-down version of the XP rules, the most notable difference being the removal of the Service Firms and the advanced economy of the XP edition, with the focus firmly on the game's traditional premise of casting the player characters as Red-clearance Troubleshooters. The Internal Security volume casts the player characters as Blue-clearance Internal Security agents, a refinement of the premise of the 1st edition supplement HIL Sector Blues (reprinted in the XP line as part of Extreme Paranoia). The third game, Paranoia: High Programmers, casts the player characters as the Ultraviolet-clearance elite of Alpha Complex society and focuses on the political plotting and infighting that dominates the High Programmers' lives, a premise not dissimilar to the Violet-level campaign ideas presented in Extreme Paranoia.
The Troubleshooters volume retains the play styles of the XP rulebook; however, the "Classic" playstyle is assumed by default, with "Zap" and "Straight" relegated to an appendix. Allen Varney, designer of the XP edition, explained in a posting on RPG.net that this decision came about as a result of the XP edition successfully convincing the wider gaming public that "Zap" was not the default playstyle for the game; since it was now generally accepted that Paranoia could have a variety of playstyles and each GM would interpret it somewhat differently, it was considered no longer necessary to emphasise the different playstyles in the main text. The Internal Security volume includes an appendix listing three new styles tailored for the game - "Heist", "Overkill" and "Horror". High Programmers does not specify playstyles. Red Clearance Edition The most recent edition (at the time of writing) from Mongoose Publishing was announced through Kickstarter 24 October 2014. In a departure from previous Mongoose editions, RED Clearance Edition utilises a d6-based dice pool system as well as using cards for equipment, mutant powers, secret societies, and combat actions. The base game was primarily designed by James Wallis, and released in March 2017. Additional writing for the new edition was initially provided by Gareth Hanrahan, while the first major expansion, Acute PARANOIA, was written by various writers and funded through Kickstarter in 2018 for an early 2019 release. Reception In the Jan-Feb 1985 edition of Space Gamer (Issue No. 72), the editorial staff were enthusiastic about the game, commenting "If you're likely to take it personally when your best friend's character plugs your character from behind, stay away from this game. But if you like high-tension suspense along with a slightly bent sense of humor, Paranoia is a unique and highly desirable experience." Marcus L. Rowland reviewed Paranoia for White Dwarf #65, giving it an overall rating of 7 out of 10, and stated that "I like Paranoia, but I'm not sure that I'd want to run it as a prolonged campaign. It's the sort of concept which works well as light relief from a 'serious' RPG campaign, and will definitely appeal to 'hack and slay' merchants. Dedicated rule lawyers and wargamers will hate it. Overall, a lot of fun for a minimum of three or four players." In the April 1988 edition of Dragon (Issue 132), Jim Bambra thought that the second edition had marked improvements compared to the first edition: "The first edition of Paranoia promised hilarious fun and a combat system that didn’t get bogged down in tedious mechanics. It soon found a following among gamers looking for something different in their role-playing adventures. Still, a close inspection of the combat system revealed that it was slow moving and cumbersome. The mechanics were hard to grasp in places, making it difficult to get into the freewheeling fun. Now, all that’s changed. The Paranoia game has been treated to a revamp, and this time the rules are slick. All that tricky stuff which made the combat system such a pain to run has been shelved off into optional rules. If you want the extra complications, you’re welcome to them, or you can do what most people did anyway and simply ignore them." Bambra did express reservations about the suitability of the game for an on-going campaign, saying "It doesn't lend itself easily to long-term campaign play. 
This game is best treated as a succession of short adventure sessions in which players get to enjoy themselves doing all those despicable things that would spoil a more 'serious’ game." However, Bambra concluded with a recommendation, saying "As a tongue-in-cheek science-fiction game, this one is hard to beat." In The Games Machine #3, John Wood enjoyed the "darkly humorous" artwork of the second edition, and complimented the writers for a better-organized set of rules. He concluded, "The new edition is far more suitable for those with little or no RPG experience, and is excellent value for a complete system (just add a 20-sided die)." In a 1996 reader poll conducted by Arcane magazine to determine the 50 most popular roleplaying games of all time, Paranoia was ranked 7th. Editor Paul Pettengale commented: "For players of games where character development and campaign continuity are a priority, Paranoia is an absolute no-no. If a character (of which there are six versions - each person in Alpha Complex has six clones) lives through an entire scenario then they're doing well. Hell, they're doing better than well, they're probably Jesus Christ reborn (er, no offence intended, all ye Christian types). Suffice to say that Paranoia is, and always will be, a complete laugh - it should be played for nothing more than fun". Paranoia was chosen for inclusion in the 2007 book Hobby Games: The 100 Best. Steve Jackson described the game as "the first sophisticated parody of the basic tropes of roleplaying. Paranoia didn't offer dungeons full of monsters with sillier names than those in D&D. It introduced something scarier... the futuristic tunnels of Alpha Complex, in which all the monsters were human and nobody ever got out. Paranoia held all of roleplaying, as it was then practiced, to a dark and twisted mirror. Then it threw cream pies." Awards Paranoia won the Origins Award for Best Roleplaying Rules of 1984. The game was inducted into the Origins Awards Hall of Fame in 2007. Other reviews Different Worlds #39 (May/June, 1985) Arcane #10 (September 1996) Casus Belli #24 (Feb 1985) Pyramid - XP Paranoia-related software JParanoia is freeware fan-made software specifically created for playing Paranoia over the Internet. It runs on the Java Virtual Machine and consists of a client and a server with built-in features for character and gameplay management. In September 2004, both the program and the Paranoia-Live.net fan site attracted some mainstream attention when the UK edition of PC Gamer magazine ran an article about Paranoia as one of their "Extra Life" columns and showcased JParanoia and Paranoia Live; coincidentally the publicity came right before the site was poised to celebrate the launch of the new Paranoia edition from Mongoose. Paranoia was also made into a video game called The Paranoia Complex released in 1989 by Magic Bytes. It was available for Amiga, Amstrad CPC, Commodore 64 and ZX Spectrum. It took the form of a top-down maze shooter dressed in a Paranoia plot and trappings; reviews of the game from hobby magazines of the period pegged it as mediocre to poor. A Paranoia mini-gamebook was published in issue #77 of SpaceGamer/FantasyGamer magazine in the late 1980s. Unauthorized automated versions of the story (a Troubleshooter's assignment to undermine the subversive activity known as Christmas) have circulated via machine-independent ports to C, Python, Go and Inform as well as to Adventure Game Toolkit and for Applix, CP/M and the Cybiko.
Paranoia: Happiness is Mandatory is a video game that was released on 5 December 2019 for PC on the Epic Games Store. It was developed by Cyanide and Black Shamrock studios and published by Bigben Interactive. It is an isometric real-time RPG. In mid-January 2020, the game was removed from the Epic Games Store without explanation from Cyanide or Bigben Interactive. As of 30 April 2020, no public explanation had been given. See also List of Paranoia books References External links Mongoose Publishing's Paranoia Homepage Artificial intelligence in fiction Campaign settings Comedy role-playing games Dystopian fiction Greg Costikyan games Mongoose Publishing games Origins Award winners Role-playing games introduced in 1984 Post-apocalyptic role-playing games West End Games games Games with concealed rules
49387
https://en.wikipedia.org/wiki/Deep%20Blue%20%28chess%20computer%29
Deep Blue (chess computer)
Deep Blue was a chess-playing expert system run on a unique purpose-built IBM supercomputer. It was the first computer to win a game, and the first to win a match, against a reigning world champion under regular time controls. Development began in 1985 at Carnegie Mellon University under the name ChipTest. It then moved to IBM, where it was first renamed Deep Thought, then again in 1989 to Deep Blue. It first played world champion Garry Kasparov in a six-game match in 1996, which it lost 2–4. In 1997 it was upgraded and, in a six-game re-match, it defeated Kasparov, winning two games, drawing three, and losing one. Deep Blue's victory was considered a milestone in the history of artificial intelligence and has been the subject of several books and films. History While a doctoral student at Carnegie Mellon University, Feng-hsiung Hsu began development of a chess-playing supercomputer under the name ChipTest. The machine won the World Computer Chess Championship in 1987 and Hsu and his team followed up with a successor, Deep Thought, in 1988. After receiving his doctorate in 1989, Hsu and Murray Campbell joined IBM Research to continue their project to build a machine that could defeat a world chess champion. Their colleague Thomas Anantharaman briefly joined them at IBM before leaving for the finance industry and being replaced by programmer Arthur Joseph Hoane. Jerry Brody, a long-time employee of IBM Research, subsequently joined the team in 1990. After Deep Thought's two-game 1989 loss to Kasparov, IBM held a contest to rename the chess machine: the winning name, "Deep Blue," submitted by Peter Fitzhugh Brown, was a play on IBM's nickname, "Big Blue." After a scaled-down version of Deep Blue played Grandmaster Joel Benjamin, Hsu and Campbell decided that Benjamin was the expert they were looking for to help develop Deep Blue's opening book, so they hired him to assist with the preparations for Deep Blue's matches against Garry Kasparov. In 1995, a Deep Blue prototype played in the eighth World Computer Chess Championship, playing Wchess to a draw before ultimately losing to Fritz in round five, despite playing as White. In 1997, the Chicago Tribune mistakenly reported that Deep Blue had been sold to United Airlines, a confusion based upon its physical resemblance to IBM's mainstream RS6000/SP2 systems. Today, one of the two racks that made up Deep Blue is held by the National Museum of American History, having previously been displayed in an exhibit about the Information Age, while the other rack was acquired by the Computer History Museum in 1997, and is displayed in the Revolution exhibit's "Artificial Intelligence and Robotics" gallery. Several books were written about Deep Blue, among them Behind Deep Blue: Building the Computer that Defeated the World Chess Champion by Deep Blue developer Feng-hsiung Hsu. Deep Blue versus Kasparov Subsequent to its predecessor Deep Thought's 1989 loss to Garry Kasparov, Deep Blue played Kasparov twice more. In the first game of the first match, which took place from 10 to 17 February 1996, Deep Blue became the first machine to win a chess game against a reigning world champion under regular time controls. However, Kasparov won three and drew two of the following five games, beating Deep Blue by 4–2 at the close of the match. Deep Blue's hardware was subsequently upgraded, doubling its speed before it faced Kasparov again in May 1997, when it won the six-game rematch 3½–2½.
Deep Blue won the deciding game after Kasparov failed to secure his position in the opening, thereby becoming the first computer system to defeat a reigning world champion in a match under standard chess tournament time controls. The version of Deep Blue that defeated Kasparov in 1997 typically searched to a depth of six to eight moves, and twenty or more moves in some situations. David Levy and Monty Newborn estimate that each additional ply (half-move) of forward insight increases the playing strength by between 50 and 70 Elo points. On the 44th move of the first game of their second match, unknown to Kasparov, a bug in Deep Blue's code led it to enter an unintentional loop, which it exited by taking a randomly-selected valid move. Kasparov did not take this possibility into account, and misattributed the seemingly pointless move to "superior intelligence." Kasparov's performance declined in the following game, though he denies this was due to anxiety in the wake of Deep Blue's inscrutable move. After his loss, Kasparov said that he sometimes saw unusual creativity in the machine's moves, suggesting that during the second game, human chess players had intervened on behalf of the machine. IBM denied this, saying the only human intervention occurred between games. Kasparov demanded a rematch, but IBM had dismantled Deep Blue after its victory and refused the rematch. The rules allowed the developers to modify the program between games, an opportunity they said they used to shore up weaknesses in the computer's play that were revealed during the course of the match. Kasparov requested printouts of the machine's log files, but IBM refused, although the company later published the logs on the Internet. Aftermath Chess Kasparov called Deep Blue an "alien opponent" but later belittled it, stating that it was "as intelligent as your alarm clock". According to Martin Amis, two grandmasters who played Deep Blue agreed that it was "like a wall coming at you". Hsu had the rights to use the Deep Blue design independently of IBM, but also independently declined Kasparov's rematch offer. In 2003 the documentary film Game Over: Kasparov and the Machine investigated Kasparov's claims that IBM had cheated. In the film, some interviewees describe IBM's investment in Deep Blue as an effort to boost its stock value. Other games Following Deep Blue's victory, AI specialist Omar Syed designed a new game, Arimaa, which was intended to be very simple for humans but very difficult for computers to master; in 2015, however, computers proved capable of defeating strong Arimaa players. Since Deep Blue's victory, computer scientists have developed software for other complex board games with competitive communities. AlphaGo defeated top Go players in the 2010s. Computer science Computer scientists such as Deep Blue developer Campbell believed that playing chess was a good measurement for the effectiveness of artificial intelligence, and by beating a world champion chess player, IBM showed that they had made significant progress. While Deep Blue, with its capability of evaluating 200 million positions per second, was the first computer to face a world chess champion in a formal match, it was a then-state-of-the-art expert system, relying upon rules and variables defined and fine-tuned by chess masters and computer scientists.
In contrast, current chess engines such as Leela Chess Zero typically use machine learning systems that train a neural network to play, developing their own internal logic rather than relying upon rules defined by human experts. In a November 2006 match between Deep Fritz and world chess champion Vladimir Kramnik, the program ran on a computer system containing a dual-core Intel Xeon 5160 CPU, capable of evaluating only 8 million positions per second, but searching to an average depth of 17 to 18 plies (half-moves) in the middlegame thanks to heuristics; it won 4–2. Design Software Deep Blue's evaluation function was initially written in a generalized form, with many to-be-determined parameters (e.g., how important is a safe king position compared to a space advantage in the center, etc.). Values for these parameters were determined by analyzing thousands of master games. The evaluation function was then split into 8,000 parts, many of them designed for special positions. The opening book encapsulated more than 4,000 positions and 700,000 grandmaster games, while the endgame database contained many six-piece endgames and all endgames with five or fewer pieces. An additional database named the "extended book" summarizes entire games played by Grandmasters. The system combines its searching ability of 200 million chess positions per second with summary information in the extended book to select opening moves. Before the second match, the program's rules were fine-tuned by grandmaster Joel Benjamin. The opening library was provided by grandmasters Miguel Illescas, John Fedorowicz, and Nick de Firmian. When Kasparov requested that he be allowed to study other games that Deep Blue had played so as to better understand his opponent, IBM refused, leading Kasparov to study many popular PC chess games to familiarize himself with computer gameplay. Hardware Deep Blue used custom VLSI chips to parallelize the alpha-beta search algorithm, an example of GOFAI (Good Old-Fashioned Artificial Intelligence). The system derived its playing strength mainly from brute force computing power. It was a massively parallel IBM RS/6000 SP Supercomputer with 30 PowerPC 604e processors and 480 custom 0.6 µm CMOS VLSI "chess chips" designed to execute the chess-playing expert system, as well as FPGAs intended to allow patching of the VLSIs (which ultimately went unused), all housed in two cabinets. Its chess playing program was written in C and ran under the AIX operating system. It was capable of evaluating 200 million positions per second, twice as fast as the 1996 version. In 1997 Deep Blue was upgraded again to become the 259th most powerful supercomputer according to the TOP500 list, achieving 11.38 GFLOPS on the parallel high performance LINPACK benchmark.
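For illustration only, the alpha-beta search that Deep Blue's hardware parallelized can be sketched in a few lines of Python. This is a generic, minimal version, not Deep Blue's actual program (which was written in C, ran across hundreds of chess chips, and used a far more elaborate, hand-tuned evaluation function); the state methods is_terminal, evaluate, legal_moves and apply_move are hypothetical placeholders for the chess-specific logic described above.

# Minimal, illustrative alpha-beta search (not Deep Blue's actual code).
# `state` is assumed to expose is_terminal(), evaluate(), legal_moves()
# and apply_move(move); these names are hypothetical placeholders.

INFINITY = float("inf")

def alphabeta(state, depth, alpha=-INFINITY, beta=INFINITY, maximizing=True):
    # At a leaf (or a finished position), fall back to the static
    # evaluation function, i.e. the parameterized scoring described
    # in the Software section above.
    if depth == 0 or state.is_terminal():
        return state.evaluate()
    if maximizing:
        best = -INFINITY
        for move in state.legal_moves():
            best = max(best, alphabeta(state.apply_move(move),
                                       depth - 1, alpha, beta, False))
            alpha = max(alpha, best)
            if alpha >= beta:   # beta cutoff: the opponent would avoid this line
                break
        return best
    else:
        best = INFINITY
        for move in state.legal_moves():
            best = min(best, alphabeta(state.apply_move(move),
                                       depth - 1, alpha, beta, True))
            beta = min(beta, best)
            if beta <= alpha:   # alpha cutoff: the maximizer has a better option
                break
        return best

The pruning performed at the two cutoff points is what lets a fixed computing budget be spent searching the most promising lines more deeply, which is the effect the ply-versus-Elo estimate above attempts to quantify.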
See also Anti-computer tactics, which exploit the repetitive habits of computers Mechanical Turk, an 18th- and 19th-century hoax purported to be a chess-playing machine Watson (computer), which could adeptly answer questions in human language X3D Fritz, which also tied Kasparov References Notes Citations Bibliography External links IBM.com, IBM Research pages on Deep Blue IBM.com, IBM page with the computer logs from the games Chesscenter.com, Open letter from Feng-hsiung Hsu on the aborted rematch with Kasparov, The Week in Chess Magazine, issue 270, 10 January 2000 Chesscenter.com, Open Letter from Owen Williams (Gary Kasparov's manager), responding to Feng-hsiung Hsu, 13 January 2000 Sjeng.org, Deep Blue system described by Feng-hsiung Hsu, Murray Campbell and A. Joseph Hoane Jr. (PDF) Chessclub.com, ICC Interview with Feng-Hsiung Hsu, an online interview with Hsu in 2002 (annotated) History of chess Chess computers One-of-a-kind computers IBM supercomputers PowerPC-based supercomputers
895608
https://en.wikipedia.org/wiki/GoldenEye%3A%20Rogue%20Agent
GoldenEye: Rogue Agent
GoldenEye: Rogue Agent is a first-person shooter video game in the James Bond franchise, developed by EA Los Angeles and published by Electronic Arts. The player takes the role of an ex-MI6 agent, who is recruited by Auric Goldfinger (a member of a powerful unnamed criminal organization based on Ian Fleming's SPECTRE) to assassinate his rival Dr. No. Several other characters from the Bond franchise make appearances throughout the game, including Pussy Galore, Oddjob, Xenia Onatopp and Francisco Scaramanga. Despite its name and being part of the James Bond franchise, this James Bond video game has no relation to the 1995 film or the 1997 video game of the same name, and is the first James Bond video game where the titular character does not appear in person. In this setting, the game's protagonist is given the name 'GoldenEye' after he loses his eye and receives a gold-colored cybernetic replacement. Electronic Arts has listed the title along with 007 Racing (2000) as spin-offs that do not form part of the canon they have built with Tomorrow Never Dies (1999). GoldenEye: Rogue Agent received a generally mixed reception from critics who praised the unique premise and multiplayer mode, but criticised the bland gameplay, plot, departure from the Bond canon, and misleading use of the GoldenEye name. Gameplay GoldenEye: Rogue Agent is a first-person shooter, played across eight levels. The player can use various types of handheld weapons throughout the game, as well as the GoldenEye, which has four abilities: MRI vision, allowing the player to see through walls; EM hack, allowing the player to hack electronic systems, machines, and enemies' weapons; a polarity shield, which deflects incoming bullets; and a magnetic field, which allows the player to send enemies flying to their deaths. Each ability is granted to the player as the game progresses. Each ability requires a certain amount of energy, seen on a meter. After using an ability, the meter slowly recharges itself. There are various weapons to collect, allowing the player to fight with a single one-handed weapon, dual-wield two such weapons, or use powerful two-handed weapons. The player's free hand can also be used to throw grenades or take hostages, using them as human shields to make it difficult for enemies to hit the player. Multiplayer The game featured a highly customizable multiplayer component with four-player split screen play, as well as online play on non-Nintendo versions, which supported up to 8 players. LAN support is exclusive to the Xbox release of the game, and up to 8 consoles can be linked together. On October 1, 2006, the servers for both of those versions were shut down due to online "inactivity". Players can unlock additional customization options, skins, maps, and gametype variations by playing through Story Mode and earning Octopus tokens based on performance. There are some locked skins such as Oddjob, Dr. No, and Xenia Onatopp. There are also locked maps such as the Pump Room, Carver's Press, the Bath House, the Vault Core, the Lower Turbine, Dr. No's reactor, the Fissure Platform, and GoldenEye's Retreat. Plot A recording by M (head of MI6) reveals that Dr. No shot the agent in his right eye during a mission. Consumed with vengeance, he frequently resorts to violence and brutality, and is considered no longer fit for service with MI6.
Three years after the incident that claimed the agent's right eye, he is evaluated through a holographic simulation in which he is paired with 007 to stop Auric Goldfinger, a member of a criminal organization, from detonating a suitcase nuke inside Fort Knox. He fails the test and is held directly responsible for the "death" of 007. Charged with "reckless brutality," he is dismissed from MI6. As he leaves the headquarters, he reads an offer by Goldfinger to enlist in his organization. The agent accepts Goldfinger's offer and is recruited as his enforcer, meeting with him at Auric Enterprises, where Goldfinger's scientists have developed a weapon known as the OMEN (Organic Mass Energy Neutralizer), which releases energy capable of breaking down organic matter on a nearly atomic level, resulting in disintegration. For his job of eliminating Dr. No, a fellow official of the criminal organization who has declared war on Goldfinger's branch of the organization, he is given a gold-hued cybernetic eye created by Francisco Scaramanga, another official of Goldfinger's organization (from which he receives his codename "GoldenEye"). Scaramanga provides upgrades for the eye, starting with MRI vision. During his first mission in Hong Kong, GoldenEye has to get a sniper rifle to take down Dr. No with the EM hack feature. At the Midas Casino, GoldenEye has to get to the vault to protect the OMEN with the magnetic polarity shield. The next mission takes place at the Hoover Dam, where GoldenEye has to destroy the dam and kill Xenia Onatopp. GoldenEye also tosses Oddjob over a rail into a pit inside the Hoover Dam after Oddjob betrays and attacks him for unknown reasons. At The Octopus, GoldenEye has to download the navigation coordinates to Crab Key (Dr. No's base) with the help of the generated force field from his golden eye. He is eventually sent to Crab Key, where he confronts Dr. No. During their duel, GoldenEye uses his mechanical eye to sabotage the island's nuclear reactor, causing it to electrocute Dr. No. Upon No's death, Goldfinger contacts GoldenEye and informs him that he believes GoldenEye is too dangerous to be left alive, and that he had contacted GoldenEye earlier and told him to activate a program which would shut down the Lair's defense grid. Goldfinger reveals that he is intent on taking over the Lair, and leaves GoldenEye to die in the impending nuclear meltdown. GoldenEye, however, manages to escape in Dr. No's osprey before the reactor overloads and the island is destroyed in a large explosion. GoldenEye returns to the Lair intent on confronting Goldfinger. Pussy Galore rendezvouses with GoldenEye and informs him that Goldfinger has used the OMEN to wipe out most of the Lair's guards, and taken control of it. Scaramanga provides the mechanical eye with a computer virus that he can use to overload the OMEN. GoldenEye fights his way through the Lair, implanting the computer virus in the process, eventually reaching Goldfinger and the OMEN. Goldfinger traps GoldenEye inside a chamber that he claims will soon be devoid of oxygen. The computer virus then activates the OMEN, causing it to explode in a burst of energy, killing Goldfinger and his troops. GoldenEye and Galore leave the Lair aboard Galore's chopper, and Scaramanga and Number One (Ernst Stavro Blofeld) later discuss what to do with GoldenEye and decide to simply see what he does next before proceeding.
Characters Both the Campaign missions and the multiplayer game feature characters based on characters in the film adaptations of Fleming's Bond novels. GoldenEye: A fearsome man who used to be in service with MI6, fired for his reckless brutality and recruited into the world's most powerful criminal organization under the employment of Auric Goldfinger. Shot in the right eye by Dr. No during an assignment that went awry in the past, he barely survived and was given a new gold-hued cybernetic eye, from which he gained his nickname, "GoldenEye". Even though he is the protagonist of the game, his face is rarely seen on screen and he never speaks. In the game's instruction manual, his surname is revealed to be Hunter. Auric Goldfinger: A very mysterious man who has his own firm called "Auric Enterprises", expresses an obsession with gold and wealth, and is determined to take down Dr. No and become the sole dominant operative in the organization he works for. He is modeled after Gert Fröbe but voiced by Enn Reitel. Dr. Julius No: A high-ranking officer in the world's most powerful criminal organization, who went freelance, seeking world domination for himself, therefore setting up his evil schemes on his own personal island, Crab Key. He is modeled after Joseph Wiseman and was voiced by Carlos Alazraqui in the game. Ernst Stavro Blofeld: The head of the criminal organization, whose face is never seen on the screen, and simply called "Number One" in the game's closing credits; the name he was known by in From Russia With Love and Thunderball. Official footage of character renders released by Electronic Arts feature him holding his Persian cat, with the likeness of Donald Pleasence. He was voiced by Gideon Emery. M: The head of MI6 is a woman who has been a veteran of the espionage business since the Cold War. She dismissed GoldenEye from duty for his "unwarranted brutality," revealing that "there is no place in the service for an agent like him". She was modeled after and was voiced by Judi Dench. Francisco Scaramanga: He is in charge of the operations and technological division within the criminal organization, and is often seen mentoring GoldenEye through an earpiece. He is modeled after and was voiced by Christopher Lee. Pussy Galore: Pussy is Goldfinger's personal pilot, who helps GoldenEye in his mission to thwart her employer's chaotic plot. She is modeled after Honor Blackman but voiced by Jeannie Elias. Xenia Onatopp: A deadly assassin and a femme fatale, who works for Dr. No, sent specifically to eliminate GoldenEye. Her likeness is based on that of Famke Janssen and she was voiced by Jenya Lano. Oddjob: Goldfinger's right-hand man, a martial arts master who is also very deadly with his razor-sharp bowler hat. He is the second character, after GoldenEye, who never speaks at all. He is modeled after Harold Sakata. 007: An agent of MI6 within the Double-O Division, who was tasked with re-evaluating GoldenEye, but lost hope when the latter proved to be a loose cannon and caused the "death" of 007 in a training simulation, thereby failing the test. Agent 007 is only seen in the first level of the game. A generic model was used for his likeness and he was voiced by Jason Carter. Development The game was announced in February 2004, under the working title of GoldenEye 2, and was scheduled for release in the fall of 2004.
It was also revealed by EA that the game takes place on the dark side of the 007 universe, in an alternate timeline, relocating the perspective to the underworld. In May 2004, the game was unveiled at E3 as GoldenEye: Rogue Agent. The game was developed using a modified version of the Medal of Honor game engine. Ken Adam, a production designer of the Bond films during the 1960s and 1970s, served as production designer for the game. Adam worked on several level designs that were based on locations from earlier Bond films. Kym Barrett was also involved in development, as well as Paul Oakenfold, who created the music for the game. The Studio CTO/COO was Steve Anderson and the Director of Quality Assurance was John Palmieri. Takayoshi Sato, who was known for building the character models and concept artworks for the Silent Hill video game series, served as associate art director. The Nintendo DS version, which is regarded as the first-ever first-person shooter designed for the platform, was co-developed by EA Tiburon and n-Space and runs on a custom game engine that maintains a framerate of 30fps. EA's original plan was to recast every classic character derived from the series with newer actors, leading them to consider casting Jessica Biel in the role of Pussy Galore, but this plan eventually fell apart. Instead, they based every classic character on the actors and actresses who portrayed them in the films, and hired voice actors to imitate the originals for the most part. A few exceptions were made, however, as famous screen veterans such as Judi Dench and Christopher Lee were brought in to reprise their roles, playing M and Scaramanga, respectively. The game's script was written by Danny Bilson, who had previously worked on the games James Bond 007: Agent Under Fire (2001) and James Bond 007: Nightfire (2002). Paul De Meo also wrote the script. Reception GoldenEye: Rogue Agent received "mixed or average" reviews, according to review aggregator Metacritic. Reviewers criticized the game's lack of innovation and personality, despite its unique premise, as well as its mediocre gameplay. Several reviewers also disliked its departure from James Bond canon in its introduction and killing off of characters. It was largely considered to be an attempt to recreate the success of one of the best-selling video games in recent history, GoldenEye 007, which was a first-person shooter for the Nintendo 64 based on the Bond film GoldenEye. Aside from the character Xenia Onatopp, the Uplink multiplayer level, and the fact that both involve a good agent going bad (although in the case of the original, not the protagonist), it had nothing to do with either the film GoldenEye or its video game adaptation, although the protagonist's scarred appearance considerably resembles Sean Bean's portrayal of rogue agent Alec Trevelyan. The game was, however, noted for showcasing certain levels and multiplayer maps based on locations from the Bond movies, such as Fort Knox from Goldfinger, the space shuttle base from Moonraker, and Scaramanga's hideout from The Man with the Golden Gun. According to Electronic Arts, GoldenEye: Rogue Agent was a commercial success, with sales above 1 million units worldwide by the end of 2004.
References External links 2004 video games Electronic Arts games MGM Interactive games First-person shooters GameCube games Rogue Agent Interactive Achievement Award winners James Bond video games Multiplayer and single-player video games Multiplayer online games Nintendo DS games PlayStation 2 games Video game spin-offs Video games developed in the United States Video games set in Hong Kong Video games set in Kentucky Video games set in Nevada Video games using Havok Xbox games
1218906
https://en.wikipedia.org/wiki/Rune%20%28video%20game%29
Rune (video game)
Rune is an action-adventure video game developed by Human Head Studios and released in 2000. The game is based on Ragnarok, showing the conflict between the gods Odin and Loki and the buildup to Ragnarok. Built on the Unreal Engine, the game casts the player as Ragnar, a young Viking warrior whose mettle is tested when Loki and his evil allies plot to destroy the world and bring about Ragnarok. Upon release, Rune received generally positive reviews. A standalone expansion pack for the game, Rune: Halls of Valhalla, was released in 2001. Both the base game and expansion were ported to Linux by Loki Software. Ryan C. Gordon, a former Loki employee, would also later port Human Head's 2006 title Prey. A port to the PlayStation 2 was also released under the title Rune: Viking Warlord in 2001. The game was re-released digitally under the name Rune Classic in 2012, with the expansion included. A sequel, Rune II, was released on November 12, 2019. A pen-and-paper adaptation of Rune was published by Atlas Games. Gameplay The game casts the player as Ragnar, a young Viking warrior. It follows a fantasy plot based on Norse mythology. The various enemies Ragnar faces include man-eating fish, goblins, zombies, Norse dwarves and other Vikings. As the game goes on, as in most games of its type, better weapons are accumulated. Late in the game the player wields weapons of enormous size, even though most weapons maintain their usefulness to the end. Runes are strewn around the game world. When collected, they add to the player's rune power. Weapons in Rune are divided into three categories: swords, axes, and maces/hammers. Each of the three classes has five weapons, increasing in size as the game progresses. Each weapon has a unique "Rune Power" that can be activated for a short period of time when the player has enough rune power. Shields may be equipped along with the first three weapons of each class. Weapons of tier 4 and 5 are two-handed, and may not be used along with shields. Besides these standard weapons, other items such as torches and severed limbs may also be equipped to be used as weapons. While high-tier weapons tend to be preferable in singleplayer games, all tiers are considered somewhat equal for multiplayer situations due to balancing factors such as speed. Depending on the direction of Ragnar's movement, weapons can be thrust, swung overhead, or slashed. Repeated strikes unleash a powerful spinning attack. All weapons may be thrown, and deal as much damage when thrown as with a melee attack. When Ragnar has killed enough enemies in a short span of time, he enters a brief "Berserk Mode", which allows him to resist damage and hit harder. There is also a special rune which instantly activates Berserk Mode. Although the game is rather linear, Ragnar does not need to kill everything in sight (common in games of the time) to travel from one level to the next. In some levels, players have found alternative ways of getting through to the next level. However, particular scripted pawns must be activated (killed, moved or tripped) in key zones to initiate certain actions to continue and move the story along. Multiplayer Rune features several multiplayer modes, typical for the time, such as Deathmatch, Team Deathmatch, and so on. The expansion, Halls of Valhalla, added one unique mode, which is inspired by football; the players are split into teams, and score points by dismembering players in the opposing team, picking up their body-parts, and throwing them into the goal.
It is a game of spatial orientation in which opponents manoeuvre around each other, swinging in and out of range and attempting to score hits on each other. There are a variety of attacks available to the player at any one time, dictated by the weapon they hold at that moment. The geometries of each swing are immutable – thus players are able to fine tune their movement to the precision of a few pixels, and accurately behead their opponents. Over the years, Rune developed a thriving and competitive clan community, with players from all over the world joining servers, playing together, and forming clans. Story The story begins when the player, as Ragnar, is initiated into the Odinsblade, an order of warriors sworn to protect the runestones, magical creations of Odin which bind the evil god, Loki and prevent him from unleashing Ragnarok – the end of the world. Ragnar has completed his initiation by beating the great warrior Ulf in combat, when a warrior bursts into the scene and informs the two that a Viking known as Conrack is leading a raid on an allied village. Ragnar and the rest of his village's warriors are assembled into a longship to do battle. They encounter Conrack's longship, and Ragnar's father is about to order his men to attack, when Conrack calls upon Loki and destroys the ship with a thunderbolt. The ship sinks, killing all on board but Ragnar, who receives a message from Odin that it is not his time to die. Recovered, he swims to safety in an underwater cave. Ragnar fights his way through the monster-riddled caverns. He eventually enters the land of the dead, domain of Loki's daughter Hel. Passing through the Underworld and facing the ghastly undead, Ragnar learns the enemy's plan: Conrack's carnage sends many dishonored souls to Hel's domain, who in turn gives them to Loki to transform into an army which will conquer the world. After fighting his way through Hel, Ragnar is captured by goblins and fights their beast in the trial pit. He defeats the beast and escapes goblin lands riding on a giant flying beetle. When he emerges from the caverns, he stands before Thorstadt, the mountain fortress of Conrack, and fights his way through it to a Temple of Loki. Inside, Sigurd – Conrack's right arm – confronts his master about all the destruction and asks him to drop the charade of worshipping Loki. Conrack states that Sigurd has outlived his usefulness, and sends two of the transformed dishonored to kill him, then escapes. Ragnar enters the scene and stands before the dying warrior. Sigurd informs Ragnar that he is the last of the Odinsblade, and saving the world is up to him, then dies. Ragnar follows Conrack, and ends up in the land of the Dwarves. In Rune, Dwarves are depicted as short, stocky, purple beings. He travels through the industrial powerhouse of the dwarven land and learns that the dwarves are supplying weaponry and armor for Loki's new sinister armies. Odin then tasks Ragnar to murder the dwarf king, whose will holds the dwarves' allegiance to Loki together. The king has apparently proclaimed himself a semi-god, and resides in a great temple dedicated to himself. Ragnar enters battle with the king, and he uses the great machine that gives the king his powers to destroy him. Ragnar travels deep below the earth and to the castle of Loki himself. Odin tells Ragnar that even he will not be able to contact him whilst he transverses through Loki's realm. Ragnar discovers that it is Loki's blood that transforms Hel's undead warriors to the monsters of Loki's armies. 
He passes through the castle and Loki's maze, arriving at the holding chamber of Loki himself. It is here Ragnar faces Conrack at last. Ragnar knocks the rogue Viking into a river of Loki's blood, which seems poised to kill him. However, the great stone snake which binds Loki drips acid onto his gaping chest wound and the green blood turns purple. Conrack rises out of the river, reborn in Loki's image as a hideous monster. Conrack reveals to Ragnar that Loki's armies are invading Midgard and destroying Odin's runestones left and right. He escapes with a great leap. After a tussle with some undead, Ragnar dives into the river, emerging as a mighty giant. Loki tries to persuade Ragnar to join his side. He refuses and goes after Conrack. He escapes from Loki's castle and makes his way through caverns out into the world above. He then stands witness to the devastation wrought by Loki's armies. Loki mocks him, but he presses on. His fellow warriors no longer recognize him, and attack him on sight. Ragnar finally arrives home, only to see it totally destroyed. Loki offers one last time to join him, and Conrack sends his men forward to destroy the runestone and Ragnar. There are two possible outcomes of the game, depending on what the player does here. In the canonical good ending, Ragnar bests Conrack and his men. Odin speaks to Ragnar, telling him that the people of his village are safe in the hands of his servant Bragi. He informs Ragnar that he has succeeded and Ragnarok has been averted. Loki, full of bitterness and rage, has his cave filled in by Odin, thwarted for the time being. Odin then opens up a portal in his last runestone, telling Ragnar to step through and join him at his side as the first living warrior to enter Asgard. Complying, Ragnar enters Odin's realm and finds himself restored to his human form. Beckoned by Odin, Ragnar runs over Bifröst and enters the Halls of Valhalla. In the evil ending, Ragnar strides up the hill toward the last runestone and shatters it. As soon as this happens, Loki is freed from his underground prison. The last we see of Ragnar is that he is crucified in Loki's lair. Loki then takes over all of Midgard. Development The game was developed with a 15 person team. The genesis of Rune occurred while Ted Halsted and Shane Gurno were working at Raven Software. Halsted, who had always been fascinated by Vikings, took inspiration from the Icelandic Völsunga saga. Halsted, Gurno and four others left Raven to found Human Head Studios, and soon found work developing a sequel to Daikatana using the Unreal Engine. When that project fell through, the Viking idea was revived and work began on Rune with the publisher Gathering of Developers. Epic Games allowed Human Head to keep using the Unreal Engine originally licensed for the Daikatana project. The two developers collaboratively made several enhancements to it, including a skeletal animation system, a new particle effects system, and an enhanced shadowing system. Although made using the Unreal Engine, Rune is a third-person perspective game without any shooting. The weapons used in the game include swords, axes, maces, and other medieval fantasy melee weapons. Despite using an engine made for shooting, the interface lends itself well to a playing style consisting of running, jumping, and hacking at opponents. Although the game includes no ranged weapons, any weapon can be thrown. An innovative feature of the game is that anything dropped by a dead opponent (body parts included) can be picked up and used. 
Limbs can be swung as clubs, and heads can be carried and used as weapons. Enemies whose sword arms have been chopped off will run away from battle. Both Rune and Rune: Halls of Valhalla were released with their own RuneEd toolkits, which the community quickly used to make several popular multiplayer mods (coop, CTT—capture the torch, 'bots, etc.). Although a few single-player add-ons have been made, it is Rune's multiplayer aspect that has been the focus of several mutators, skins, and hundreds of maps that are available through many clan and resource websites. In 2004, the source header files were released freely by Human Head. Release Rune shipped for Windows on October 27, 2000. A playable demo was released at the same time. The Mac OS version followed in December 2000 and Loki Software released the Linux port in June 2001. In October 2001, Rune was re-released with the HOV expansion included, as Rune Gold. A Dreamcast port of the game had been planned, but was eventually cancelled. Human Head Studios would also release new multiplayer levels for Rune online. A pen-and-paper adaptation was released by Atlas Games, whom Tim Gerritsen had approached at GenCon. Reception The original PC version of Rune received "generally favorable reviews" according to the review aggregation website Metacritic. Critics generally thought the game was good, but not great, and required the player to adjust their expectations. A common point of criticism was the enemy artificial intelligence: opponents would simply gang up on the player without bothering to use tactics. Enemy variety was also found wanting. Praise was, however, given to the graphics, especially the crisp textures, and the detailed limb-hacking violence was appreciated. The multiplayer was thought middling: although many appreciated the cathartic fun of running around lopping the heads off other players, the lack of game modes and problems with lag interfered with the enjoyment. Jim Preston of NextGen said of the game in its February 2001 issue, "Rune fails to impress. The shaky gameplay undermines the first-rate visuals and ripping good story." John Brandon of GameZone gave the game a score of eight out of ten, calling it "a brawny game without a huge emphasis on brains – although Viking warriors toward the end of the game are challenging enough. There's a lot of visceral, visual enjoyment here – a near-perfect antidote to a long workday." Michael Tresca of AllGame gave it three-and-a-half stars out of five, saying, "Despite good graphics and excellent sound, Rune falls short of its potential due to various flaws, glitches, and a lack of variation over the long haul." However, Benjamin E. Sones of Computer Games Strategy Plus gave it two-and-a-half stars out of five, saying that it was "too long and uninspiring to hold your attention. The game would have been far more entertaining with half as many levels. More can be better... but only if it's good." The game sold 49,000 units in the U.S. by October 2001. The game was nominated for the Action Game of the Year award at the CNET Gamecenter Computer Game Awards for 2000, which went to MechWarrior 4: Vengeance. 
The developers drew inspiration from the violent sport of the Aztecs, in which the losing team of a ball game was decapitated. In total, 37 new maps are included: 20 for deathmatch, 8 for Head Ball, and 9 for Arena. Some of these maps were the winners of a four-week competition in which fans of the game created their own maps using the level editor. Sixteen new character models are available, some of them female. The Wren Valkyrie model is based on a fan of Rune who ended up getting hired as site director of the official Rune website. The soundtrack is also expanded. Rune: Viking Warlord is the PlayStation 2 port of Rune, released in 2001 by Take-Two Interactive. It contains a few extra maps and enemies, but is otherwise a straight port. The PlayStation 2 port received "mixed" reviews according to Metacritic. It was criticized for graphics inferior to the original and for long loading times. Whatever their opinion of the multiplayer on PC, critics disliked the multiplayer on PS2. In NextGen's October 2001 issue, Preston called it "Another disc on the already enormous heap of mediocre PS2 games." Dylan Parrotta of GameZone gave it 4.5 out of 10, saying, "Though I could not find a stopwatch I would say that I spent around a quarter of my time in the confines of the load screen." Sequel Human Head was considering a sequel as early as 2000. Rune II had been negotiated with an unnamed publisher during the early-to-mid-2000s. In 2012, Human Head Studios indicated that it was considering making a sequel to Rune. A sequel titled Rune: Ragnarok was announced to be in production by Human Head in August 2017. In March 2018, the Ragnarok subtitle was dropped and the title was renamed Rune. By May 2019, Human Head had renamed the title again as Rune II, with a target release for Windows in mid-2019 through the Epic Games Store. The partnership with Epic Games enabled Human Head to acquire additional funding to finish out their more expansive version of the game, according to studio head Chris Rhinehart. The game was pushed back and was released on November 12, 2019, under publisher Ragnarok Game LLC. Immediately after the game's release, Human Head Studios was shuttered and a new studio, Roundhouse Studios, was formed from Human Head's staff under the management of Bethesda Softworks. Ragnarok stated they were unaware of Human Head's closure until that day but remained committed to providing ongoing support and releases outside the Epic Games Store through 2020. By December 2019, Ragnarok had filed a lawsuit against the former Human Head staff for failing, despite requests, to hand over the final source code and assets for Rune II following the studio's surprise closure, and sought damages relating to the poor state in which Rune II was released at launch and to cover the game's post-launch support period, which Human Head was to have provided. Ragnarok had received the source code back from the former Human Head staff by January 2020, but still intended to follow through with the lawsuit. By October 2020, Ragnarok had expanded the lawsuit to include both ZeniMax and Bethesda as parties to the suit, alleging that they had defrauded Ragnarok and sabotaged the development of two games. 
References External links 2000 video games Action-adventure games Cancelled Dreamcast games Classic Mac OS games Fictional Vikings Gathering of Developers games Hack and slash games Linux games Loki Entertainment games Multiplayer online games PlayStation 2 games Take-Two Interactive games Unreal Engine games Video games based on Norse mythology Video games scored by Rom Di Prisco Video games set in the Viking Age Video games developed in the United States Video games with expansion packs Windows games
2126855
https://en.wikipedia.org/wiki/Virtual%20IP%20address
Virtual IP address
A virtual IP address (VIP or VIPA) is an IP address that does not correspond to an actual physical network interface. Uses for VIPs include network address translation (especially one-to-many NAT), fault-tolerance, and mobility. Usage For one-to-many NAT, a VIP address is advertised from the NAT device (often a router), and incoming data packets destined for that VIP address are routed to different actual IP addresses (with address translation). These VIP addresses have several variations and implementation scenarios, including Common Address Redundancy Protocol (CARP) and Proxy ARP. In addition, if there are multiple actual IP addresses, load balancing can be performed as part of NAT. VIP addresses are also used for connection redundancy by providing alternative fail-over options for one machine. For this to work, the host has to run an interior gateway protocol like Open Shortest Path First (OSPF) and appear as a router to the rest of the network. It advertises virtual links connected via itself to all of its actual network interfaces. If one network interface fails, normal OSPF topology reconvergence will cause traffic to be sent via another interface. A VIP address can be used to provide nearly unlimited mobility. For example, if an application has an IP address on a physical subnet, that application can be moved only to a host on that same subnet. VIP addresses, by contrast, can be advertised on their own subnet, so an application using one can be moved anywhere on the reachable network without changing addresses. See also Anycast, single IP bound simultaneously to many, potentially geographically disparate, NICs IP network multipathing (IPMP), Solaris virtual IP implementation for fault-tolerance and load balancing Virtual LAN Notes References Cluster computing IP addresses
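The one-to-many NAT usage described above reduces to a mapping decision: a packet arrives addressed to the single advertised VIP, and the device selects one of several real addresses to translate it to. The Python sketch below is a conceptual illustration only, not the behaviour of any particular NAT implementation; the VIP, the server pool and the example flows are made-up values. It hashes the flow identifier so that packets belonging to the same flow always map to the same real server, while different flows spread across the pool.

import hashlib

# Hypothetical addresses: one advertised VIP fronting several real servers.
VIP = "203.0.113.10"
REAL_SERVERS = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]

def pick_real_server(src_ip: str, src_port: int) -> str:
    """Choose which real server a flow arriving at the VIP is translated to.

    Hashing the flow identifier keeps packets of one flow on the same
    server while spreading different flows across the whole pool.
    """
    key = f"{src_ip}:{src_port}".encode()
    digest = hashlib.sha256(key).digest()
    index = int.from_bytes(digest[:4], "big") % len(REAL_SERVERS)
    return REAL_SERVERS[index]

# Example flows addressed to the VIP; a NAT device would rewrite the
# destination from the VIP to the address chosen here.
for flow in [("198.51.100.7", 40001), ("198.51.100.7", 40002), ("192.0.2.55", 5000)]:
    print(VIP, flow, "->", pick_real_server(*flow))

A real device would also rewrite the destination address and track connection state; the sketch shows only the selection step that makes load balancing behind a VIP possible.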
34007442
https://en.wikipedia.org/wiki/ARM%20Cortex-R
ARM Cortex-R
The ARM Cortex-R is a family of 32-bit and 64-bit RISC ARM processor cores licensed by Arm Holdings. The cores are optimized for hard real-time and safety-critical applications. Cores in this family implement the ARM Real-time (R) profile, which is one of three architecture profiles, the other two being the Application (A) profile implemented by the Cortex-A family and the Microcontroller (M) profile implemented by the Cortex-M family. The ARM Cortex-R family of microprocessors currently consists of ARM Cortex-R4(F), ARM Cortex-R5(F), ARM Cortex-R7(F), ARM Cortex-R8(F), ARM Cortex-R52(F), and ARM Cortex-R82(F). Overview The ARM Cortex-R is a family of ARM cores implementing the R profile of the ARM architecture; that profile is designed for high performance hard real-time and safety critical applications. It is similar to the A profile for applications processing but adds features which make it more fault tolerant and suitable for use in hard real-time and safety critical applications. Real time and safety critical features added include: Tightly coupled memory (uncached memory with guaranteed fast access time) Increased exception handling in hardware Hardware division instructions Memory protection unit (MPU) Deterministic interrupt handling as well as fast non-maskable interrupts ECC on L1 cache and buses Dual-core lockstep for CPU fault tolerance The Armv8-R architecture includes virtualization features similar to those introduced in the Armv7-A architecture. Two stages of MPU-based translation are provided to enable multiple operating systems to be isolated from one another under the control of a hypervisor. Prior to the R82, introduced on 4 September 2020, the Cortex-R family did not have a memory management unit (MMU). Models prior to the R82 could not use virtual memory, which made them unsuitable for many applications, such as full-featured Linux. However, many real-time operating systems (RTOS), with an emphasis on total control, have traditionally regarded the lack of an MMU as a feature, not a bug. On the R82, it may be possible to run a traditional RTOS in parallel with a paged OS such as Linux, where Linux takes advantage of the MMU for flexibility, while the RTOS locks the MMU into a direct translation mode on pages assigned to the RTOS so as to retain full predictability for real-time functions. ARM license ARM Holdings neither manufactures nor sells CPU devices based on its own designs, but rather licenses the core designs to interested parties. ARM offers a variety of licensing terms, varying in cost and deliverables. To all licensees, ARM provides an integratable hardware description of the ARM core, as well as complete software development toolset and the right to sell manufactured silicon containing the ARM CPU. Silicon customization Integrated device manufacturers (IDM) receive the ARM Processor IP as synthesizable RTL (written in Verilog). In this form, they have the ability to perform architectural level optimizations and extensions. This allows the manufacturer to achieve custom design goals, such as higher clock speed, very low power consumption, instruction set extensions, optimizations for size, debug support, etc. To determine which components have been included in a particular ARM CPU chip, consult the manufacturer datasheet and related documentation. Applications The Cortex-R is suitable for use in computer-controlled systems where very low latency and/or a high level of safety is required. 
An example of a hard real-time, safety critical application would be a modern electronic braking system in an automobile. The system not only needs to be fast and responsive to a plethora of sensor data input, but is also responsible for human safety. A failure of such a system could lead to severe injury or loss of life. Other examples of hard real-time and/or safety critical applications include: Medical device Programmable logic controller (PLC) Electronic control units (ECU) for a wide variety of applications Robotics Avionics Motion control See also List of ARM Cortex-M development tools ARM architecture List of ARM architectures and cores JTAG, SWD Interrupt, Interrupt handler Real-time operating system, Comparison of real-time operating systems References External links ARM Cortex-R official documents {| class="wikitable" |- ! ARMCore !! BitWidth !! ARMWebsite !! ARM TechnicalReference Manual !! ARM ArchitectureReference Manual |- | style="background: LightCyan" | Cortex-R4(F) || 32 || Link || Link || ARMv7-R |- | style="background: LightCyan" | Cortex-R5(F) || 32 || Link || Link || ARMv7-R |- | style="background: LightCyan" | Cortex-R7(F) || 32 || Link || Link || ARMv7-R |- | style="background: LightCyan" | Cortex-R8(F) || 32 || Link || Link || ARMv7-R |- | style="background: LightCyan" | Cortex-R52(F) || 32 || Link || Link || ARMv8 ARMv8-R |- | style="background: LightCyan" | Cortex-R82(F) || 64 || Link || Link || ARMv8-R |} Migrating Migrating from MIPS to ARM – arm.com Migrating from PPC to ARM – arm.com Migrating from IA-32 (x86-32) to ARM – arm.com Other CORTEX-R versus CORTEX-M Cortex-R ARM cores Real-time computing
490767
https://en.wikipedia.org/wiki/Mark%20Spencer%20%28computer%20engineer%29
Mark Spencer (computer engineer)
Mark Spencer (born April 8, 1977) is an American computer engineer and is the original author of the GTK+-based instant messaging client Gaim (which has since been renamed to Pidgin), the L2TP daemon l2tpd and the Cheops Network User Interface. Mark Spencer is also the creator of Asterisk, a Linux-based open-sourced PBX. He is the founder, chairman and CTO of Digium, an open-source telecommunications supplier most notable for its development and sponsorship of Asterisk. Spencer shifted from CEO to Chairman and CTO in early 2007. Early life Spencer was born and raised in Auburn, Alabama. He attended Auburn University where both his parents were professors. In high school, he was mentored by another Auburn professor, Thaddeus Roppel, and Mark Smith, co-founder of Adtran. Career While attending Auburn University, Spencer co-oped at Adtran when he wrote l2tpd. He went on to start a Linux technical support business. Spencer did not have enough money to buy a PBX (private branch exchange) for his company so he decided to write Asterisk and later founded Digium. As a pilot, Mark founded Avilution, LLC. to create Android apps including QuickWeather and AviationMaps. AviationMaps was later spun out to FlightPro, then DroidEFB. Adapting a similar strategy as Asterisk, he developed the eXtensible Flight System, XFS, a cross-platform avionics architecture. XFS has already been integrated in the Zenith CH750 STOL aircraft in the form of both a three-screen panel and the "Unpanel," a portrait-orientation (also landscape) screen to replace the entire traditional glass cockpit. References External links Mark Spencer's previous homepage (copies at the Internet Archive) Interview with Mark Spencer at OSDir.com at CNet Linux Link Tech Show interview (audio), 2005 Global open source enthusiasts interview Mark Spencer December 2009 News story regarding changing roles at Digium 1977 births Living people People from Huntsville, Alabama Free software programmers Auburn High School (Alabama) alumni Auburn University alumni Asterisk (PBX)
7338348
https://en.wikipedia.org/wiki/Fontys%20University%20of%20Applied%20Sciences
Fontys University of Applied Sciences
Fontys University of Applied Sciences is a Dutch university of applied sciences with over 44,000 students in several campuses located in the southern Netherlands. The three largest Fontys campuses are located in the cities of Eindhoven, Tilburg and Venlo. The name Fontys comes from the Latin word "fons" which means "source". Thus, Fontys wants to highlight that it is a source of knowledge for students. Fontys offers 200 bachelor's and master's study programmes in the fields of economics, technology, health care, social work, sports and teacher training. A selection of these programmes is offered in German and English. The independent Dutch ranking Keuzegids ranks Fontys as one of the best large universities of applied sciences in the Netherlands. In 2014, former Fontys Chairperson Nienke Meijer was declared "Most influential Woman of the Netherlands". Ranking Fontys University of Applied Sciences is ranked among the best universities of applied sciences in the Netherlands. Here, Fontys scores above-average results in the study fields of engineering, IT, logistics as well as business administration and management (consisting of the study programmes International Business (IB), Marketing Management (MM), International Finance & Control (IFC) and International Fresh Business Management (IFBM)). The ranking is done annually by the Centre of Higher Education Information and published in the Keuzegids HBO. It takes into account the perspectives of experts, students and universities themselves. Next to factors such as student satisfaction with their university's lecturers and study content, student success rates, IT facilities and practice-orientation the ranking is based on the accreditation reports of the Dutch-Flemish Accreditation Organisation (NVAO). All Fontys study programmes are accredited by the Dutch-Flemish Accreditation Organisation (NVAO) or equivalent British accreditation bodies. About 11% (4,798) out of the 44,486 Fontys students come from abroad; they represent more than 70 countries around the world, with a large delegation from Germany. Many of these international students are enrolled as exchange students or degree students in English-taught bachelor and master programmes, often offered in cooperation with Fontys’ partner universities in other countries. Campuses Fontys campuses are equipped with computer rooms, libraries and student restaurants. Wireless internet is available throughout the campuses, enabling students to bring their laptop. In this way students can work, study, search for information whenever they want. The Student Facilities department at Fontys University of Applied Sciences offers advice and support about financial affairs, course-related matters, practical matters, sports facilities and student associations, supervises during the course of studies and provides information about events relating to the study process. Eindhoven Fontys International Campus Eindhoven is located in the South-East of the Netherlands. Tilburg Fontys has three locations in Tilburg. Fontys International Campus Stappegoor is located on the southern outskirts of the city. The campus population consists of diverse students, ranging from Economics students to Software engineering students. Fontys School of Fine and Performing Arts (Dutch: Fontys Hogeschool voor de Kunsten – FHK) groups all educational programs in the visual and performing arts under the same roof in a building known as the Kunstkluster ('Art cluster') located in the centre of the city. 
The Fontys Academy of Journalism (Dutch: Fontys Hogeschool voor Journalistiek – FHJ) is located in the Professor Gimbrèrelaan, in the west of the city, and is one of the four journalism schools in the Netherlands. Venlo Fontys Venlo is a young campus with an old history. At first, the Venlo University of Applied Sciences was established on the grounds of the former country estate De Wylderbeek (whose forest still contains protected artifacts from Roman times). In the late 1990s, the institution joined the growing Fontys network. Several renovations and expansions helped the campus buildings ("which were originally built by a congregation of nuns in 1965") remain "state of the art on the inside and outside". Today, the campus houses three institutes: Fontys Teacher Training Academy (FHKE), Fontys University of Applied Sciences for Technology and Logistics (FHTenL) and Fontys International Business School (FIBS). Although Venlo's city centre was rebuilt after the Second World War, it has buildings dating back to the 14th century. The city retained the historical significance of its monuments and old facades. Ald Weishoes (Old Orphanage), Romerhoes (Romer House), the Stadhuis (City Hall) and the St. Martinuskerk (St. Martin's Church) are just some of the many historic buildings in Venlo's city centre. From 2013 to 2015, Venlo, together with The Hague, was voted the best city centre in the Netherlands. Its proximity both to the river Meuse (one of the three major rivers in the Netherlands) and to Germany makes Venlo one of Europe's most important logistics hotspots. It is accessible via six surrounding airports: Eindhoven Airport, Maastricht Aachen Airport, Düsseldorf Airport, Weeze Airport, Cologne Bonn Airport and Dortmund Airport. Venlo's international focus and strong sense of entrepreneurship are reflected in the international Fontys campus in Venlo, whose students come from over 50 nationalities. Study programmes Fontys provides full-time study programmes. Several programmes are offered in English or German. 
English and German Bachelor's programmes Business and Management International Business (English) International Finance & Control (English or German) Marketing Management (English or German) International Fresh Business Management (English or German) International Lifestyle Studies (English) International Communication Management (English) Communication – International Event, Music and Entertainment Studies (English) Marketing Management – Digital Business Concepts (English) ICT and Engineering Automotive Engineering (English) Industrial Engineering & Management (English) Mechatronics Engineering (English or German) Electrical and Electronic Engineering (English) Mechanical Engineering (English or Dutch-German language mix) Information Technology (specializations: Software Engineering, Business Informatics) (English) Information & Communication Technology (specializations: ICT & Business, ICT & Media, ICT & Software Engineering, ICT & Technology, ICT & Media Design, ICT & Infrastructure) (English or Dutch) Industrial Design Engineering (English or Dutch-German language mix) Embrace TEC – Technology, Entrepreneurship & Creativity (English) (Part of Fontys | Pulsed) Logistics Logistics (specializations: Logistics Management, Logistics Engineering) (English or German) International Fresh Business Management (English or German) Physiotherapy Physiotherapy (English) Arts Circus and Performance Art (English) Dance Academy (English) English Master's programmes Master of Science in Business and Management (MBM) (in cooperation with University of Plymouth, UK) Master of Business Administration (MBA) (in cooperation with FOM University of Applied Sciences for Economics and Management, Germany) Master of Science in International Logistics/ Procurement/ Operations and Supply Chain Management (in cooperation with University of Plymouth, UK) Master of Architecture Master of Urbanism Master of Music Master in Performing Public Space Exchange programmes Fontys has over 100 partner universities around the world, including in Australia, Canada, China, Finland, Hong Kong, Italy, Mexico, Taiwan, the United Kingdom, the United States, Vietnam, and Zambia (to name only a few). In the third academic year, Fontys students are given the opportunity to study abroad for one semester at one of these partner universities. Alternatively, students can choose to stay at Fontys and take part in one of many academic minors (a secondary academic discipline) such as International Business Management, Trendwatching, Game Business or Event Management. Accommodation Through its own student dormitories and contracts with landlords, Fontys offers enough places to accommodate all its international students. All student apartments are furnished and located close to Fontys and/or the city centre. Student company/software factory Together with about 10 students from different study programmes, all business students at Fontys in Venlo start their own company, which is officially registered in the Dutch Commercial Registry. For nearly a year the students manage their company, including writing a business plan, selling shares to finance their entrepreneurial activities, developing and selling a product or service, and filing a tax report. All positions and functions within the company, such as General Manager, Marketing Manager or Finance Manager, are filled by students. At the end of the academic year the company is properly liquidated. 
Over the past years, several Fontys student companies participated in national and international business competitions and were awarded for their successful business. Engineering students at Fontys Venlo take part in the Software Factory where they develop customized software solutions for cooperating companies. Internships/work placements During their studies, all Fontys students do two internships (one semester in the third and fourth academic year). Internships can be done in the Netherlands or abroad. Students chose the company themselves but Fontys also provides a list of local and international partner companies. During both internships students work on a specific company project. They are supervised by one university lecturer and one company employee. Amongst others, these internships are useful for building a professional network and getting work experience which eases getting a job after studies. Partner companies Fontys has partnerships with more than 500 international companies including 3M, Adidas, Bayer, BMW, Coca-Cola, Daimler AG, Deloitte, Ernst & Young, Henkel, IKEA, KPMG, L'Oréal, Metro AG, Nike, Inc., Philips, Porsche, Robert Bosch GmbH, Siemens, Sony, Vodafone and Volkswagen. These companies either offer projects and/or internships to Fontys students. Student organizations Fontys has several student associations throughout its campuses: DaVinci – First student association in Venlo founded in 1999 FC FSV-Venlo – Official Fontys Venlo football club playing at Seacon stadium of VVV-Venlo Fontys4Fairtrade – Fontys Venlo sustainability committee working on projects to raise awareness for the environment and sustainability IMagine – Fontys Venlo student association organizing various events on campus (study programme: International Marketing) Knowledge Business Consulting (KBC) – student-run business consultancy headquartered in Fontys Venlo providing consultancy services to companies in both Germany and the Netherlands Omnia – Fontys Venlo student association organizing various events on campus (study programme: International Business) Proxy – Student association of ICT English stream in Fontys Eindhoven (study programme: Information and Communication Technology) Student Sports Venlo – Organizing sports for students such as football, basketball, volleyball, tennis, rowing, fitness and urban dance Notable alumni Elly Blanksma-van den Heuvel – Mayor of Helmond, Dutch politician and former banking manager at Rabobank Pieter Elbers – chief executive officer (CEO) and Chairman of the national flag carrier of the Netherlands KLM (Royal Dutch Airlines) Florence Kasumba – Ugandan-German actress Onno Hoes – Dutch politician and former Mayor of Maastricht Twan Huys – Dutch journalist, television presenter, and author Johannes Oerding – German singer-songwriter Hans Teeuwen – Dutch comedian, musician, actor and filmmaker Floor Jansen – Dutch singer, lead singer of Finnish symphonic metal band Nightwish References External links Fontys University of Applied Sciences Vocational universities in the Netherlands 1996 establishments in the Netherlands
31548332
https://en.wikipedia.org/wiki/Larry%20Irving
Larry Irving
Clarence "Larry" Irving, Jr. (born July 7, 1955 in Brooklyn, New York) is the former Vice President of Global Government Affairs for Hewlett-Packard Company. He joined the company on September 9, 2009 and left in 2011. Career Irving is president and CEO of the Irving Information Group, a telecommunication and information technology strategic planning and consulting business based in Washington, D.C. Irving launched his business in October 1999. Prior to starting his business, he was head of the National Telecommunications Infrastructure Administration (NTIA), an agency of the United States Department of Commerce that serves as the President's principal adviser on telecommunications policies pertaining to the United States' economic and technological advancement and to regulation of the telecommunications industry. He was a principal architect of President Bill Clinton's telecommunications, Internet and e-commerce policies and initiatives and acted as a senior adviser to the President and to Vice President Al Gore and the United States Secretary of Commerce during his tenure from 1993 to 1999. Irving was a member of the Clinton-Gore transition team focusing on telecommunications issues. More recently, he worked with President Barack Obama's transition team on science and tech agencies. Prior offices Before becoming head of NTIA, Irving was a Senior Counsel for the U.S. House of Representatives, Subcommittee on Telecommunications and Finance. While working for the committee, he helped draft and negotiate the Cable Television Consumer Protection Act, the Children's Television Act and Television Decoder Circuitry Act. He has also served as Legislative Director and Counsel for U.S. congressman Mickey Leland, a Democrat from the 18th District in Texas. Irving was active with the Congressional Black Caucus while Leland was its chair. He was an Associate with the law firm Hogan and Hartson, the oldest major law firm headquartered in Washington, D.C. Published reports While serving the NTIA, Irving authored three reports entitled "Falling Through the Net" that highlighted the scope and the consequences of inequities in access to information technology. He helped define the scope of and bring to public attention to the digital divide, a term referring to the gap between people with effective access to digital and information technology and those with very limited or no access at all. Other activities and honors He also sits on the board in an official or adviser capacity for a variety of organizations, including Internews, ReliabilityFirst Corporation, Waggener Edstrom, the Education Development Center, Annenberg School of the University of Southern California, Stanford Law School and the Weinberg College of Arts and Sciences of Northwestern University. He was named one of the 50 most influential persons in the "Year of the Internet" by Newsweek magazine in 1994. In 2019, he was inducted into the Internet Hall of Fame. Education Irving is a 1976 B.A. graduate of Northwestern University in Chicago, Illinois, and holds the degree of J.D. from Stanford University in Palo Alto, California, where he was class president. References External links Larry Irving Biography 1955 births American chief executives Northwestern University alumni Stanford Law School alumni Hewlett-Packard people Living people Businesspeople from New York City
61792424
https://en.wikipedia.org/wiki/Oppo%20F1
Oppo F1
The Oppo F1 is an Android smartphone manufactured by Oppo Electronics that was released in January 2016. The phone featured a touchscreen display, the Android 5.1 (Lollipop) operating system, and microSD card support for up to an additional 256GB of storage. This is the first phone in Oppo's F series. Specifications Hardware The Oppo F1 has a touchscreen display with 720 × 1280 resolution. It has a 13MP main camera and an 8MP selfie camera, along with a Screen Flash feature that uses the screen as a flash to illuminate selfies. The phone comes with 16GB of internal storage, 3GB of RAM, and expandable storage up to 256GB via a microSD card. It also has a Snapdragon 616 chipset. Software The Oppo F1 came with Android 5.1 (Lollipop) with ColorOS 2.1. See also Oppo phones Android (operating system) References F1 Android (operating system) devices Mobile phones introduced in 2016 Discontinued smartphones
2227982
https://en.wikipedia.org/wiki/Looking%20Glass%20server
Looking Glass server
Looking Glass servers (LG servers) are servers on the Internet running one of a variety of publicly available Looking Glass software implementations. They are commonly deployed by autonomous systems (AS) to offer access to their routing infrastructure in order to facilitate the debugging of network issues. A Looking Glass server is accessed remotely for the purpose of viewing routing information. Essentially, the server acts as a limited, read-only portal to the routers of whatever organization is running the LG server. Typically, Looking Glass servers are run by autonomous systems such as Internet service providers (ISPs), Network Service Providers (NSPs), and Internet exchange points (IXPs). Implementation Looking glasses are web scripts connected directly to routers' administrative interfaces such as telnet and SSH. These scripts are designed to relay textual commands from the web to the router and print back the response. They are often implemented in Perl, PHP, and Python, and are publicly available on GitHub. Security concerns A 2014 paper demonstrated the potential security concerns of Looking Glass servers, noting that even an "attacker with very limited resources can exploit such flaws in operators' networks and gain access to core Internet infrastructure", resulting in anything from traffic disruption to global Border Gateway Protocol (BGP) route injection. This is due in part to the fact that a looking glass server is "an often overlooked critical part of an operator infrastructure", since it sits at the intersection of the public internet and "restricted admin consoles". As of 2014, most Looking Glass software was small and old, having last been updated in the early 2000s. See also Autonomous system (Internet) Internet backbone References External links Source code for the *original* Multi-Router Looking Glass (MRLG) by John Fraizer @ OP-SEC.US Packet Clearing House Looking Glass servers around the world. Looking Glass server source code Clickable map of known Reverse Lookup and Looking Glass servers in the world Looking Glass Wiki - List of hundreds of Looking Glass servers, sorted by Autonomous System Number. IPv4 and IPv6 BGP Looking Glasses at BGP4.as BGP Looking Glass links collection at LookinGlass.org CSpace Hostings Looking Glass - a network service provider's looking glass example. RFC 8522: Looking Glass Command Set Servers (computing)
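To make the implementation description above concrete, the following Python sketch shows the core of a minimal looking-glass relay. It is an assumption-laden example rather than any real deployment: the router hostname, the read-only account and the use of the third-party paramiko SSH library are placeholders, and a production looking glass would add a web front end, rate limiting, input validation and output sanitisation. The command whitelist is the essential part: only a few read-only commands are ever forwarded to the router.

import paramiko  # third-party SSH library; assumed available

# Read-only commands a visitor may run; anything else is rejected.
ALLOWED_COMMANDS = {
    "show ip bgp summary",
    "show ip route",
    "ping",
    "traceroute",
}

def run_looking_glass_query(command: str, argument: str = "") -> str:
    """Relay a whitelisted command to the router and return its text output."""
    if command not in ALLOWED_COMMANDS:
        return "error: command not permitted"

    # A real deployment must also validate `argument` strictly to avoid
    # command injection; that step is omitted from this sketch.
    full_command = f"{command} {argument}".strip()

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    # Hypothetical router address and read-only account.
    client.connect("router.example.net", username="lg-readonly", password="secret")
    try:
        _stdin, stdout, _stderr = client.exec_command(full_command)
        return stdout.read().decode(errors="replace")
    finally:
        client.close()

# Example of what a web form handler might call:
# print(run_looking_glass_query("ping", "192.0.2.1"))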
12684962
https://en.wikipedia.org/wiki/Fisher%E2%80%93Yates%20shuffle
Fisher–Yates shuffle
The Fisher–Yates shuffle is an algorithm for generating a random permutation of a finite sequence—in plain terms, the algorithm shuffles the sequence. The algorithm effectively puts all the elements into a hat; it continually determines the next element by randomly drawing an element from the hat until no elements remain. The algorithm produces an unbiased permutation: every permutation is equally likely. The modern version of the algorithm is efficient: it takes time proportional to the number of items being shuffled and shuffles them in place. The Fisher–Yates shuffle is named after Ronald Fisher and Frank Yates, who first described it, and is also known as the Knuth shuffle after Donald Knuth. A variant of the Fisher–Yates shuffle, known as Sattolo's algorithm, may be used to generate random cyclic permutations of length n instead of random permutations. Fisher and Yates' original method The Fisher–Yates shuffle, in its original form, was described in 1938 by Ronald Fisher and Frank Yates in their book Statistical tables for biological, agricultural and medical research. Their description of the algorithm used pencil and paper; a table of random numbers provided the randomness. The basic method given for generating a random permutation of the numbers 1 through N goes as follows: Write down the numbers from 1 through N. Pick a random number k between one and the number of unstruck numbers remaining (inclusive). Counting from the low end, strike out the kth number not yet struck out, and write it down at the end of a separate list. Repeat from step 2 until all the numbers have been struck out. The sequence of numbers written down in step 3 is now a random permutation of the original numbers. Provided that the random numbers picked in step 2 above are truly random and unbiased, so will be the resulting permutation. Fisher and Yates took care to describe how to obtain such random numbers in any desired range from the supplied tables in a manner which avoids any bias. They also suggested the possibility of using a simpler method—picking random numbers from one to N and discarding any duplicates—to generate the first half of the permutation, and only applying the more complex algorithm to the remaining half, where picking a duplicate number would otherwise become frustratingly common. The modern algorithm The modern version of the Fisher–Yates shuffle, designed for computer use, was introduced by Richard Durstenfeld in 1964 and popularized by Donald E. Knuth in The Art of Computer Programming as "Algorithm P (Shuffling)". Neither Durstenfeld's article nor Knuth's first edition of The Art of Computer Programming acknowledged the work of Fisher and Yates; they may not have been aware of it. Subsequent editions of Knuth's The Art of Computer Programming mention Fisher and Yates' contribution. The algorithm described by Durstenfeld differs from that given by Fisher and Yates in a small but significant way. Whereas a naïve computer implementation of Fisher and Yates' method would spend needless time counting the remaining numbers in step 3 above, Durstenfeld's solution is to move the "struck" numbers to the end of the list by swapping them with the last unstruck number at each iteration. This reduces the algorithm's time complexity to O(n), compared to O(n^2) for the naïve implementation. This change gives the following algorithm (for a zero-based array). 
-- To shuffle an array a of n elements (indices 0..n-1): for i from n−1 downto 1 do j ← random integer such that 0 ≤ j ≤ i exchange a[j] and a[i] An equivalent version which shuffles the array in the opposite direction (from lowest index to highest) is: -- To shuffle an array a of n elements (indices 0..n-1): for i from 0 to n−2 do j ← random integer such that i ≤ j < n exchange a[i] and a[j] Examples Pencil-and-paper method As an example, we'll permute the letters from A to H using Fisher and Yates' original method. We'll start by writing the letters out on a piece of scratch paper: Now we roll a random number k from 1 to 8—let's make it 3—and strike out the kth (i.e. third) letter on the scratch pad and write it down as the result: Now we pick a second random number, this time from 1 to 7: it turns out to be 4. Now we strike out the fourth letter not yet struck off the scratch pad—that's letter E—and add it to the result: Now we pick the next random number from 1 to 6, and then from 1 to 5, and so on, always repeating the strike-out process as above: Modern method We'll now do the same thing using Durstenfeld's version of the algorithm: this time, instead of striking out the chosen letters and copying them elsewhere, we'll swap them with the last letter not yet chosen. We'll start by writing out the letters from A to H as before: For our first roll, we roll a random number from 1 to 8: this time it is 6, so we swap the 6th and 8th letters in the list: The next random number we roll from 1 to 7, and turns out to be 2. Thus, we swap the 2nd and 7th letters and move on: The next random number we roll is from 1 to 6, and just happens to be 6, which means we leave the 6th letter in the list (which, after the swap above, is now letter H) in place and just move to the next step. Again, we proceed the same way until the permutation is complete: At this point there's nothing more that can be done, so the resulting permutation is G E D C A H B F. Variants The "inside-out" algorithm The Fisher–Yates shuffle, as implemented by Durstenfeld, is an in-place shuffle. That is, given a preinitialized array, it shuffles the elements of the array in place, rather than producing a shuffled copy of the array. This can be an advantage if the array to be shuffled is large. To simultaneously initialize and shuffle an array, a bit more efficiency can be attained by doing an "inside-out" version of the shuffle. In this version, one successively places element number i into a random position among the first i positions in the array, after moving the element previously occupying that position to position i. In case the random position happens to be number i, this "move" (to the same place) involves an uninitialised value, but that does not matter, as the value is then immediately overwritten. No separate initialization is needed, and no exchange is performed. In the common case where source is defined by some simple function, such as the integers from 0 to n − 1, source can simply be replaced with the function since source is never altered during execution. To initialize an array a of n elements to a randomly shuffled copy of source, both 0-based: for i from 0 to n − 1 do j ← random integer such that 0 ≤ j ≤ i if j ≠ i a[i] ← a[j] a[j] ← source[i] The inside-out shuffle can be seen to be correct by induction. Assuming a perfect random number generator, every one of the n! 
different sequences of random numbers that could be obtained from the calls of random will produce a different permutation of the values, so all of these are obtained exactly once. The condition that checks if j ≠ i may be omitted in languages that have no problems accessing uninitialized array values. This eliminates n conditional branches at the cost of the Hn ≈ ln n + γ redundant assignments. Another advantage of this technique is that n, the number of elements in the source, does not need to be known in advance; we only need to be able to detect the end of the source data when it is reached. Below the array a is built iteratively starting from empty, and a.length represents the current number of elements seen. To initialize an empty array a to a randomly shuffled copy of source whose length is not known: while source.moreDataAvailable j ← random integer such that 0 ≤ j ≤ a.length if j = a.length a.append(source.next) else a.append(a[j]) a[j] ← source.next Sattolo's algorithm A very similar algorithm was published in 1986 by Sandra Sattolo for generating uniformly distributed cycles of (maximal) length n. The only difference between Durstenfeld's and Sattolo's algorithms is that in the latter, in step 2 above, the random number j is chosen from the range between 1 and i−1 (rather than between 1 and i) inclusive. This simple change modifies the algorithm so that the resulting permutation always consists of a single cycle. In fact, as described below, it is quite easy to accidentally implement Sattolo's algorithm when the ordinary Fisher–Yates shuffle is intended. This will bias the results by causing the permutations to be picked from the smaller set of (n−1)! cycles of length N, instead of from the full set of all n! possible permutations. The fact that Sattolo's algorithm always produces a cycle of length n can be shown by induction. Assume by induction that after the initial iteration of the loop, the remaining iterations permute the first n − 1 elements according to a cycle of length n − 1 (those remaining iterations are just Sattolo's algorithm applied to those first n − 1 elements). This means that tracing the initial element to its new position p, then the element originally at position p to its new position, and so forth, one only gets back to the initial position after having visited all other positions. Suppose the initial iteration swapped the final element with the one at (non-final) position k, and that the subsequent permutation of first n − 1 elements then moved it to position l; we compare the permutation π of all n elements with that remaining permutation σ of the first n − 1 elements. Tracing successive positions as just mentioned, there is no difference between π and σ until arriving at position k. But then, under π the element originally at position k is moved to the final position rather than to position l, and the element originally at the final position is moved to position l. From there on, the sequence of positions for π again follows the sequence for σ, and all positions will have been visited before getting back to the initial position, as required. As for the equal probability of the permutations, it suffices to observe that the modified algorithm involves (n−1)! distinct possible sequences of random numbers produced, each of which clearly produces a different permutation, and each of which occurs—assuming the random number source is unbiased—with equal probability. The (n−1)! 
different permutations so produced precisely exhaust the set of cycles of length n: each such cycle has a unique cycle notation with the value n in the final position, which allows for (n−1)! permutations of the remaining values to fill the other positions of the cycle notation. A sample implementation of Sattolo's algorithm in Python is:

from random import randrange

def sattolo_cycle(items) -> None:
    """Sattolo's algorithm."""
    i = len(items)
    while i > 1:
        i = i - 1
        j = randrange(i)  # 0 <= j <= i-1
        items[j], items[i] = items[i], items[j]

Comparison with other shuffling algorithms The asymptotic time and space complexity of the Fisher–Yates shuffle are optimal. Combined with a high-quality unbiased random number source, it is also guaranteed to produce unbiased results. Compared to some other solutions, it also has the advantage that, if only part of the resulting permutation is needed, it can be stopped halfway through, or even stopped and restarted repeatedly, generating the permutation incrementally as needed. Naïve method The naïve method of swapping each element with another element chosen randomly from all elements is biased and fundamentally broken. Different permutations will have different probabilities of being generated, for every n > 2, because the number of different permutations, n!, does not evenly divide the number of random outcomes of the algorithm, n^n. In particular, by Bertrand's postulate there will be at least one prime number between n/2 and n, and this number will divide n! but not divide n^n.

from random import randrange

def naive_shuffle(items) -> None:
    """A naive method. This is an example of what not to do -- use Fisher-Yates instead."""
    n = len(items)
    for i in range(n):
        j = randrange(n)  # 0 <= j <= n-1
        items[j], items[i] = items[i], items[j]

Sorting An alternative method assigns a random number to each element of the set to be shuffled and then sorts the set according to the assigned numbers. The sorting method has the same asymptotic time complexity as Fisher–Yates: although general sorting is O(n log n), numbers are efficiently sorted using Radix sort in O(n) time. Like the Fisher–Yates shuffle, the sorting method produces unbiased results. However, care must be taken to ensure that the assigned random numbers are never duplicated, since sorting algorithms typically don't order elements randomly in case of a tie. Additionally, this method requires asymptotically larger space: O(n) additional storage space for the random numbers, versus O(1) space for the Fisher–Yates shuffle. Finally, we note that the sorting method has a simple parallel implementation, unlike the Fisher–Yates shuffle, which is sequential. A variant of the above method that has seen some use in languages that support sorting with user-specified comparison functions is to shuffle a list by sorting it with a comparison function that returns random values. However, this is an extremely bad method: it is very likely to produce highly non-uniform distributions, which in addition depend heavily on the sorting algorithm used. For instance suppose quicksort is used as the sorting algorithm, with a fixed element selected as the first pivot element. The algorithm starts comparing the pivot with all other elements to separate them into those less and those greater than it, and the relative sizes of those groups will determine the final place of the pivot element. 
For a uniformly distributed random permutation, each possible final position should be equally likely for the pivot element, but if each of the initial comparisons returns "less" or "greater" with equal probability, then that position will have a binomial distribution for p = 1/2, which gives positions near the middle of the sequence a much higher probability than positions near the ends. Randomized comparison functions applied to other sorting methods like merge sort may produce results that appear more uniform, but are not quite so either, since merging two sequences by repeatedly choosing one of them with equal probability (until the choice is forced by the exhaustion of one sequence) does not produce results with a uniform distribution; instead the probability to choose a sequence should be proportional to the number of elements left in it. In fact no method that uses only two-way random events with equal probability ("coin flipping"), repeated a bounded number of times, can produce permutations of a sequence (of more than two elements) with a uniform distribution, because every execution path will have as probability a rational number with a power of 2 as denominator, while the required probability 1/n! for each possible permutation is not of that form. In principle this shuffling method can even result in program failures like endless loops or access violations, because the correctness of a sorting algorithm may depend on properties of the order relation (like transitivity) that a comparison producing random values will certainly not have. While this kind of behaviour should not occur with sorting routines that never perform a comparison whose outcome can be predicted with certainty (based on previous comparisons), there can be valid reasons for deliberately making such comparisons. For instance, the fact that any element should compare equal to itself allows using it as a sentinel value for efficiency reasons, and if this is the case, a random comparison function would break the sorting algorithm. Potential sources of bias Care must be taken when implementing the Fisher–Yates shuffle, both in the implementation of the algorithm itself and in the generation of the random numbers it is built on, otherwise the results may show detectable bias. A number of common sources of bias have been listed below. Implementation errors A common error when implementing the Fisher–Yates shuffle is to pick the random numbers from the wrong range. The flawed algorithm may appear to work correctly, but it will not produce each possible permutation with equal probability, and it may not produce certain permutations at all. For example, a common off-by-one error would be choosing the index j of the entry to swap in the example above to be always strictly less than the index i of the entry it will be swapped with. This turns the Fisher–Yates shuffle into Sattolo's algorithm, which produces only permutations consisting of a single cycle involving all elements: in particular, with this modification, no element of the array can ever end up in its original position. Similarly, always selecting j from the entire range of valid array indices on every iteration also produces a result which is biased, albeit less obviously so. This can be seen from the fact that doing so yields n^n distinct possible sequences of swaps, whereas there are only n! possible permutations of an n-element array. Since n^n can never be evenly divisible by n! 
when n > 2 (as the latter is divisible by n−1, which shares no prime factors with n), some permutations must be produced by more of the n^n sequences of swaps than others. As a concrete example of this bias, observe the distribution of possible outcomes of shuffling a three-element array [1, 2, 3]. There are 6 possible permutations of this array (3! = 6), but the algorithm produces 27 possible shuffles (3^3 = 27). In this case, [1, 2, 3], [3, 1, 2], and [3, 2, 1] each result from 4 of the 27 shuffles, while each of the remaining 3 permutations occurs in 5 of the 27 shuffles. Tabulating, for a list of length 7, the probability of each element ending up in each position shows that, for most elements, ending up in the original position (the main diagonal of such a probability matrix) has the lowest probability, and moving one slot backwards has the highest probability. Modulo bias Doing a Fisher–Yates shuffle involves picking uniformly distributed random integers from various ranges. Most random number generators, however — whether true or pseudorandom — will only directly provide numbers in a fixed range from 0 to RAND_MAX, and in some libraries, RAND_MAX may be as low as 32767. A simple and commonly used way to force such numbers into a desired range is to apply the modulo operator; that is, to divide them by the size of the range and take the remainder. However, the need in a Fisher–Yates shuffle to generate random numbers in every range from 0–1 to 0–n−1 almost guarantees that some of these ranges will not evenly divide the natural range of the random number generator. Thus, the remainders will not always be evenly distributed and, worse yet, the bias will be systematically in favor of small remainders. For example, assume that your random number source gives numbers from 0 to 99 (as was the case for Fisher and Yates' original tables), and that you wish to obtain an unbiased random number from 0 to 15. If you simply divide the numbers by 16 and take the remainder, you'll find that the numbers 0–3 occur about 17% more often than others. This is because 16 does not evenly divide 100: the largest multiple of 16 less than or equal to 100 is 6×16 = 96, and it is the numbers in the incomplete range 96–99 that cause the bias. The simplest way to fix the problem is to discard those numbers before taking the remainder and to keep trying again until a number in the suitable range comes up. While in principle this could, in the worst case, take forever, the expected number of retries will always be less than one. A related problem occurs with implementations that first generate a random floating-point number—usually in the range [0,1]—and then multiply it by the size of the desired range and round down. The problem here is that random floating-point numbers, however carefully generated, always have only finite precision. This means that there are only a finite number of possible floating point values in any given range, and if the range is divided into a number of segments that doesn't divide this number evenly, some segments will end up with more possible values than others. While the resulting bias will not show the same systematic downward trend as in the previous case, it will still be there.
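To illustrate the discard-and-retry fix described above, the following minimal Python sketch draws an unbiased integer in the range [0, k) from a fixed-range source. The 0–99 source mirrors the example above, and the names raw_rand and unbiased_below are illustrative only, not part of any standard library:

from random import randint

RAND_SOURCE_MAX = 99  # assumed fixed-range source yielding integers 0..99, as in the example above

def raw_rand():
    """Stand-in for a random source limited to 0..RAND_SOURCE_MAX."""
    return randint(0, RAND_SOURCE_MAX)

def unbiased_below(k):
    """Return an unbiased integer in [0, k) by rejection sampling."""
    span = RAND_SOURCE_MAX + 1
    limit = span - (span % k)  # largest multiple of k within the span; 96 when span=100 and k=16
    while True:
        r = raw_rand()
        if r < limit:          # discard draws from the incomplete top block (96-99 in the example)
            return r % k

Used inside a Fisher–Yates loop, a call such as unbiased_below(i + 1) would supply the swap index j without modulo bias; as noted above, the expected number of retries per draw is less than one.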
Pseudorandom generators An additional problem occurs when the Fisher–Yates shuffle is used with a pseudorandom number generator or PRNG: as the sequence of numbers output by such a generator is entirely determined by its internal state at the start of a sequence, a shuffle driven by such a generator cannot possibly produce more distinct permutations than the generator has distinct possible states. Even when the number of possible states exceeds the number of permutations, the irregular nature of the mapping from sequences of numbers to permutations means that some permutations will occur more often than others. Thus, to minimize bias, the number of states of the PRNG should exceed the number of permutations by at least several orders of magnitude. For example, the built-in pseudorandom number generator provided by many programming languages and/or libraries may often have only 32 bits of internal state, which means it can only produce 2^32 different sequences of numbers. If such a generator is used to shuffle a deck of 52 playing cards, it can only ever produce a very small fraction of the 52! ≈ 2^225.6 possible permutations. It is impossible for a generator with less than 226 bits of internal state to produce all the possible permutations of a 52-card deck. No pseudorandom number generator can produce more distinct sequences, starting from the point of initialization, than there are distinct seed values it may be initialized with. Thus, a generator that has 1024 bits of internal state but which is initialized with a 32-bit seed can still only produce 2^32 different permutations right after initialization. It can produce more permutations if one exercises the generator a great many times before starting to use it for generating permutations, but this is a very inefficient way of increasing randomness: supposing one can arrange to use the generator a random number of up to a billion, say 2^30 for simplicity, times between initialization and generating permutations, then the number of possible permutations is still only 2^62. A further problem occurs when a simple linear congruential PRNG is used with the divide-and-take-remainder method of range reduction described above. The problem here is that the low-order bits of a linear congruential PRNG with modulus 2^e are less random than the high-order ones: the low n bits of the generator themselves have a period of at most 2^n. When the divisor is a power of two, taking the remainder essentially means throwing away the high-order bits, such that one ends up with a significantly less random value. Different rules apply if the LCG has a prime modulus, but such generators are uncommon. This is an example of the general rule that a poor-quality RNG or PRNG will produce poor-quality shuffles. See also RC4, a stream cipher based on shuffling an array Reservoir sampling, in particular Algorithm R which is a specialization of the Fisher–Yates shuffle References External links An interactive example Combinatorial algorithms Randomized algorithms Permutations Monte Carlo methods Articles with example pseudocode Articles with example Python (programming language) code
32189987
https://en.wikipedia.org/wiki/I-MSCP
I-MSCP
i-MSCP (internet Multi Server Control Panel) is open-source software (OSS) for managing shared hosting environments on Linux servers. It provides modules for services such as Apache2, ProFTPd, Dovecot, Courier and Bind9, and can be extended through plugins or listener files using its event-based API. The latest stable release is version 1.5.3 (build 2018120800), released on 8 December 2018. Key people Laurent Declercq (France) - CEO, Lead Developer Glenn B. Jakobsen (Sweden) - Logistics, hosting (Kazi Network) Licensing i-MSCP has a dual license. Part of the base code is licensed under the Mozilla Public License. All new code and submissions to i-MSCP are licensed under the GNU Lesser General Public License Version 2.1 (LGPLv2). To resolve this license conflict, work is underway on a complete rewrite of i-MSCP licensed entirely under the LGPLv2. Features Supported Linux Distributions Debian ≥ Jessie (8.0) Devuan ≥ Jessie (1.0) Ubuntu Any LTS version ≥ Trusty Tahr (14.04 LTS) Supported Daemons / Services Web server: Apache (ITK, Fcgid and FastCGI/PHP-FPM), Nginx Name server: Bind9 MTA (Mail Transport Agent): Postfix MDA (Mail Delivery Agent): Courier, Dovecot Database: MySQL, MariaDB, Percona FTP-Server: ProFTPD, vsftpd Web statistics: AWStats Addons PhpMyAdmin Pydio, formerly AjaXplorer Net2ftp Roundcube Rainloop Plugins See i-MSCP plugin store Competing software cPanel DTC Froxlor ISPConfig ispCP Plesk SysCP Virtualmin External links Linux software Web applications Website management User interfaces Web hosting Web server management software
21298772
https://en.wikipedia.org/wiki/Kineo%20CAM
Kineo CAM
Kineo Computer Aided Motion ("Kineo CAM") was a computer software company based in Toulouse, France, that was awarded the European ICT Prize in 2007 in Hannover, Germany for KineoWorks, its automatic motion planning, path planning and pathfinding technology. It was acquired by Siemens PLM Software in 2012. KineoWorks is a core software component dedicated to motion planning that enables automatic motion of any mechanical system or virtual artifact in a 3D environment, ensuring collision avoidance and respecting kinematic constraints. Kineo Collision Detector (KCD) is a collision detection software library with an object-oriented API. It is included in KineoWorks and also exists as a standalone library. It works with a hierarchical architecture of heterogeneous data types based on the composite design pattern and is especially suited for large 3D models. Kineo CAM's main markets are PLM, DMU and CAD/CAM systems, robotics and coordinate-measuring machines (CMM). History Incorporated in December 2000, Kineo CAM benefited from a 15-year research legacy from the LAAS-CNRS. The company was acquired by Siemens PLM Software on October 8, 2012. Awards 2000: Winner of the national contest of innovation from the French Ministry of Research and Technology 2005: Kineo CAM received the IEEE/IFR Innovation Award for Outstanding Achievements in Commercializing Innovative Robot and Automation Technology 2006: Awarded by Daratech the title of emerging technology at the DaratechSUMMIT2006 with eight other innovative American companies 2007: Innovation ICT Prize from the European Commission and the European Council of Applied Sciences, Technologies & Engineering 2007: Innovation and International award from the Regional council of Midi-Pyrénées External links LAAS-CNRS French companies established in 2000 French companies disestablished in 2012 Software companies established in 2000 Software companies disestablished in 2012 Defunct software companies of France Companies based in Toulouse 2012 mergers and acquisitions
11690640
https://en.wikipedia.org/wiki/Operation%20Looking%20Glass
Operation Looking Glass
Looking Glass (or Operation Looking Glass) is the code name for an airborne command and control center operated by the United States. In more recent years it has been officially referred to as the ABNCP (Airborne Command Post). It provides command and control of U.S. nuclear forces in the event that ground-based command centers have been destroyed or otherwise rendered inoperable. In such an event, the general officer aboard the Looking Glass serves as the Airborne Emergency Action Officer (AEAO) and by law assumes the authority of the National Command Authority and could command execution of nuclear attacks. The AEAO is supported by a battle staff of approximately 20 people, with another dozen responsible for the operation of the aircraft systems. The name Looking Glass, which is another name for a mirror, was chosen for the Airborne Command Post because the mission operates in parallel with the underground command post at Offutt Air Force Base. History The code name "Looking Glass" came from the aircraft's ability to "mirror" the command and control functions of the underground command post at the U.S. Air Force's Strategic Air Command (SAC) headquarters at Offutt AFB, Nebraska. The SAC Airborne Command Post or "Looking Glass" was initiated in 1961, and operated by the 34th Air Refueling Squadron at Offutt AFB. The mission transferred to the 38th Strategic Reconnaissance Squadron in August 1966, to the 2nd Airborne Command and Control Squadron in April 1970, to the 7th Airborne Command and Control Squadron in July 1994, and to USSTRATCOM's Strategic Communications Wing One in October 1998. The Strategic Air Command began the Looking Glass mission on February 3, 1961, using five specially modified KC-135A aircraft from the 34th Air Refueling Squadron based at its headquarters at Offutt AFB, backed up by aircraft flying with the Second Air Force / 913th Air Refueling Squadron at Barksdale AFB, Louisiana, Eighth Air Force / 99th Air Refueling Squadron at Westover AFB, Massachusetts, and Fifteenth Air Force / 22d Air Refueling Squadron, March AFB, California. EC-135 Looking Glass aircraft were airborne 24 hours a day for over 29 years, until July 24, 1990, when "The Glass" ceased continuous airborne alert, but remained on ground or airborne alert 24 hours a day. Looking Glass mirrors ground-based command, control, and communications (C3 or C³) located at the USSTRATCOM Global Operations Center (GOC) at Offutt AFB. The EC-135 Looking Glass aircraft were equipped with the Airborne Launch Control System, capable of transmitting launch commands to U.S. ground-based intercontinental ballistic missiles (ICBMs) in the event that the ground launch control centers were rendered inoperable. The Looking Glass was also designed to help ensure continuity of government (COG), that is, the continuity and reconstitution of the US government, in the event of a nuclear attack on North America.
Although the two types of aircraft are distinct, the Doomsday Plane'' nickname is also frequently associated with the Boeing E-4 "Nightwatch" Advanced Airborne Command Post mission and aircraft. The Looking Glass was the anchor in what was known as the World Wide Airborne Command Post (WWABNCP) network. This network of specially equipped EC-135 aircraft would launch from ground alert status and establish air-to-air wireless network connections in the event of a U.S. national emergency. Members of the WWABNCP network included: Operation Silk Purse for the Commander in Chief, U.S. European Command (USCINCEUR), based at RAF Mildenhall in the United Kingdom (callsign Seabell); Operation "Scope Light" for the Commander in Chief, U.S. Atlantic Command (CINCLANT), based at Langley AFB, VA; Operation "Blue Eagle" for the Commander in Chief, U.S. Pacific Command (USCINCPAC), based at Hickam AFB, HI; and Operation "Nightwatch" which supported the President of the United States and were based at Andrews AFB, Maryland. In the early 1970s the E-4A aircraft replaced the EC-135Js on this mission. The Eastern Auxiliary (EAST Aux) and Western Auxiliary (West Aux) Command Posts were also part of the WWABNCP ("wah-bin-cop") network and were capable of assuming responsibility for Looking Glass as the anchor. The West Aux 906th Air Refueling Squadron was based at Minot AFB, North Dakota, and moved to the 4th Airborne Command and Control Squadron at Ellsworth AFB, South Dakota, in April 1970 and the East Aux mission 301st Air Refueling Squadron was based at Lockbourne AFB, Ohio, and moved to the 3rd Airborne Command & Control Squadron at Grissom AFB, Indiana, in April 1970. After 1975, East Aux was assumed from the Looking Glass backup ground alert aircraft launched from Offutt AFB. In June 1992, United States Strategic Command took over the Looking Glass mission from the Strategic Air Command, as SAC was disbanded and Strategic Command assumed the nuclear deterrence mission. Current status On October 1, 1998 the United States Navy fleet of E-6Bs replaced the EC-135C in performing the "Looking Glass" mission, previously carried out for 37 years by the U.S. Air Force. Unlike the original Looking Glass aircraft, the E-6Bs are modified Boeing 707 aircraft, not the military-only KC-135. The E-6B provides the National Command Authority with the same capability as the EC-135 fleet to control the nation's intercontinental ballistic missile (ICBM) force, nuclear-capable bombers and submarine-launched ballistic missiles (SLBM). With the assumption of this mission, a USSTRATCOM battle staff now flies with the TACAMO crew. If the USSTRATCOM Global Operations Center (GOC) is unable to function in its role, the E-6B Looking Glass can assume command of all U.S. nuclear-capable forces. Flying aboard each ABNCP is a crew of 22, which includes an air crew, a Communications Systems Officer and team, an Airborne Emergency Action Officer (an Admiral or General officer), a Mission Commander, a Strike Advisor, an Airborne Launch Control System/Intelligence Officer, a Meteorological Effects Officer, a Logistics Officer, a Force Status Controller, and an Emergency Actions NCO. In addition to being able to direct the launch of ICBMs using the Airborne Launch Control System, the E-6B can communicate Emergency Action Messages (EAM) to nuclear submarines running at depth by extending a two and a half-mile-long () trailing wire antenna (TWA) for use with the Survivable Low Frequency Communications System (SLFCS), as the EC-135C could. 
There was some speculation that the "mystery plane" seen flying over the White House on September 11, 2001, was some newer incarnation of Looking Glass. However, as indicated by retired Major General Donald Shepperd, speaking on CNN on September 12, 2007, the plane circling the White House on 9/11 resembled an E-4B which was likely launched from Nightwatch ground alert at Andrews Air Force Base. See also TACAMO Boeing EC-135 Boeing E-4 Advanced Airborne Command Post ("Nightwatch") E-6 Mercury Airborne Launch Control System Airborne Launch Control Center Decapitation strike Letters of last resort Dead Hand (Perimeter) Continuity of Operations Plan Single Integrated Operational Plan Nuclear utilization target selection Ghosts of the East Coast: Doomsday Ships Cold War museum References The History of the PACCS USSTRATCOM ABNCP Fact Sheet United States command and control aircraft Disaster preparedness in the United States Nuclear warfare United States nuclear command and control 1961 establishments in the United States
37938604
https://en.wikipedia.org/wiki/Narnarayan%20Shastri%20Institute%20of%20Technology
Narnarayan Shastri Institute of Technology
Narnarayan Shastri Institute of Technology is an engineering college in the town of Jetalpur, near the city of Ahmedabad, Gujarat, India. The academic program of Narnarayan Shastri Institute of Technology is approved by the All India Council of Technical Education (AICTE) and is affiliated with Gujarat Technological University (GTU). About The institute was established in 2008. The institute is blessed by Acharya Shree Koshlendraprasadji Maharaj and Acharya Shree Tejendraprasadji Maharaj and is connected with Shree Swaminarayan Temple - Kalupur. It functions under the "Swaminarayan Vividh Seva Niketan Trust - Jetalpur". It admits students from all over Gujarat as well as other states of India to its Bachelor of Engineering programs. The academic program of NSIT is approved by the All India Council of Technical Education (AICTE) and is affiliated with Gujarat Technological University (GTU). At its establishment in 2008, the institute comprised departments of Mechanical Engineering, Electrical Engineering, Electronics and Communication Engineering, and Computer Science Engineering. The institute was later expanded with the inclusion of two new departments in the fields of Civil Engineering and Automobile Engineering. Departments UG Programmes The institute offers undergraduate courses in several branches of engineering. The undergraduate programs use a pedagogy that combines practical learning through industry-based projects with an application-based approach to theoretical concepts. The college offers six undergraduate courses leading to Bachelor of Engineering (B.E.) degrees: Mechanical engineering Computer Science and Engineering (CSE) Electronics and Communications Engineering Electrical engineering Civil engineering Automobile Engineering PG Programmes The college offers two postgraduate courses leading to Master of Engineering (M.E.) degrees: Thermal Engineering Computer Science and Engineering (CSE) Campus The institute campus extends over a large area and is situated close to Swaminarayan Mandir - Jetalpur. The campus is divided into functional zones such as the hostel, main college building, and administration block. In addition to lecture rooms, tutorial rooms and drawing halls, the institute has an auditorium, library, computer centre, reading rooms, workshops and laboratories. The college has playgrounds and a gymnasium. Facilities The college has computer labs with more than 168 computers, a workshop, and laboratories. A library provides books to students and faculty, and there is a canteen as well as a sports room with table tennis, chess, carrom and a gymnasium. An auditorium is used for technical events, lectures and seminars. Students of the Computer Science and Engineering department have developed their own "Result" portal system for announcing results (the NSIT result portal system). References Engineering colleges in Gujarat
3636075
https://en.wikipedia.org/wiki/History%20of%20television
History of television
The concept of television was the work of many individuals in the late 19th and early 20th centuries, with its roots initially starting from back even in the 18th century. The first practical transmissions of moving images over a radio system used mechanical rotating perforated disks to scan a scene into a time-varying signal that could be reconstructed at a receiver back into an approximation of the original image. Development of television was interrupted by the Second World War. After the end of the war, all-electronic methods of scanning and displaying images became standard. Several different standards for addition of color to transmitted images were developed with different regions using technically incompatible signal standards. Television broadcasting expanded rapidly after World War II, becoming an important mass medium for advertising, propaganda, and entertainment. Television broadcasts can be distributed over the air by VHF and UHF radio signals from terrestrial transmitting stations, by microwave signals from Earth orbiting satellites, or by wired transmission to individual consumers by cable TV. Many countries have moved away from the original analog radio transmission methods and now use digital television standards, providing additional operating features and conserving radio spectrum bandwidth for more profitable uses. Television programming can also be distributed over the Internet. Television broadcasting may be funded by advertising revenue, by private or governmental organizations prepared to underwrite the cost, or in some countries, by television license fees paid by owners of receivers. Some services, especially carried by cable or satellite, are paid by subscriptions. Television broadcasting is supported by continuing technical developments such as long-haul microwave networks, which allow distribution of programming over a wide geographic area. Video recording methods allow programming to be edited and replayed for later use. Three-dimensional television has been used commercially but has not received wide consumer acceptance owing to the limitations of display methods. Mechanical television Facsimile transmission systems pioneered methods of mechanically scanning graphics in the early 19th century. The Scottish inventor Alexander Bain introduced the facsimile machine between 1843 and 1846. The English physicist Frederick Bakewell demonstrated a working laboratory version in 1851. The first practical facsimile system, working on telegraph lines, was developed and put into service by the Italian priest Giovanni Caselli from 1856 onward. Willoughby Smith, an English electrical engineer, discovered the photoconductivity of the element selenium in 1873. This led, among other technologies, towards telephotography, a way to send still images through phone lines, as early as in 1895, as well as any kind of electronic image scanning devices, both still and in motion, and ultimately to TV cameras. As a 23-year-old German university student, Paul Julius Gottlieb Nipkow, proposed and patented the Nipkow disk in 1884. This was a spinning disk with a spiral pattern of holes in it, so each hole scanned a line of the image. Although he never built a working model of the system, variations of Nipkow's spinning-disk "image rasterizer" became exceedingly common. Constantin Perskyi had coined the word television in a paper read to the International Electricity Congress at the International World Fair in Paris on August 24, 1900. 
Perskyi's paper reviewed the existing electromechanical technologies, mentioning the work of Nipkow and others. However, it was not until 1907 that developments in amplification tube technology, by Lee de Forest and Arthur Korn among others, made the design practical. The first demonstration of the instantaneous transmission of images was by Georges Rignoux and A. Fournier in Paris in 1909. A matrix of 64 selenium cells, individually wired to a mechanical commutator, served as an electronic retina. In the receiver, a type of Kerr cell modulated the light and a series of variously angled mirrors attached to the edge of a rotating disc scanned the modulated beam onto the display screen. A separate circuit regulated synchronization. The 8×8 pixel resolution in this proof-of-concept demonstration was just sufficient to clearly transmit individual letters of the alphabet. An updated image was transmitted "several times" each second. In 1911, Boris Rosing and his student Vladimir Zworykin created a system that used a mechanical mirror-drum scanner to transmit, in Zworykin's words, "very crude images" over wires to the "Braun tube" (cathode ray tube or "CRT") in the receiver. Moving images were not possible because, in the scanner, "the sensitivity was not enough and the selenium cell was very laggy". In May 1914 Archibald Low gave the first demonstration of his television system at the Institute of Automobile Engineers in London. He called his system 'Televista'. The events were widely reported worldwide and were generally entitled Seeing By Wireless. The demonstrations had so impressed Harry Gordon Selfridge that he included Televista in his 1914 Scientific and Electrical Exhibition at his store. It also interested Deputy Consul General Carl Raymond Loop who filled a US consular report from London containing considerable detail about Low's system. Low's invention employed a matrix detector (camera) and a mosaic screen (receiver/viewer) with an electro-mechanical scanning mechanism that moved a rotating roller over the cell contacts providing a multiplex signal to the camera/viewer data link. The receiver employed a similar roller. The two rollers were synchronised. Hence, it was unlike any of the intervening TV systems of the 20th Century and in many respects, Low had a digital TV system 80 years before the advent of today's digital TV. World War One began shortly after these demonstrations in London and Low became involved in sensitive military work, and so he did not apply for a patent until 1917. His "Televista" Patent No. 191,405 titled "Improved Apparatus for the Electrical Transmission of Optical Images" was finally published in 1923—delayed possibly for security reasons. The patent states that the scanning roller had a row of conductive contacts corresponding to the cells in each row of the array and arranged to sample each cell in turn as the roller rotated. The receivers roller was similarly constructed and each revolution addressed a row of cells as the rollers traversed over their array of cells. Loops report tells us that... "The roller is driven by a motor of 3,000 revolutions per minute, and the resulting variations of light are transmitted along an ordinary conducting wire." The cell-matrix shown in the patent is 22×22 (approaching an impressive 500 cells/pixels) and each 'camera' cell had a corresponding 'viewer' cell. Loop said it was a "screen divided into a large number of small squares cells of selenium" and the patent states "into each... space I place a selenium cell". 
Low covered the cells with a liquid dielectric and the roller connected with each cell in turn through this medium as it rotated and travelled over the array. The receiver used bimetallic elements that acted as shutters "transmitting more or less light according to the current passing through them..." as stated in the patent. Low said the main deficiency of the system was the selenium cells used for converting light waves into electric impulses, which responded too slowly thus spoiling the effect. Loop reported that "The system has been tested through a resistance equivalent to a distance of four miles, but in the opinion of Doctor Low there is no reason why it should not be equally effective over far greater distances. The patent states that this connection could be either wired or wireless. The cost of the apparatus is considerable because the conductive sections of the roller are made of platinum..." In 1914 the demonstrations certainly garnered a lot of media interest with The Times reporting on 30 May: On 29 May the Daily Chronicle reported: In 1927 Ronald Frank Tiltman asked Low to write the introduction to his book in which he acknowledged Low's work, referring to Low's various related patents with an apology that they were of 'too technical a nature for inclusion'. Later in his 1938 patent Low envisioned a much larger 'camera' cell density achieved by a deposition process of caesium alloy on an insulated substrate that was subsequently sectioned to divide it into cells, the essence of today's technology. Low's system failed for various reasons, mostly due to its inability to reproduce an image by reflected light and simultaneously depict gradations of light and shade. It can be added to the list of systems, like that of Boris Rosing, that predominantly reproduced shadows. With subsequent technological advances, many such ideas could be made viable decades later, but at the time they were impractical. In 1923, Scottish inventor John Logie Baird envisaged a complete television system that employed the Nipkow disk. Nipkow's was an obscure, forgotten patent and not at all obvious at the time. He created his first prototypes in Hastings, where he was recovering from a serious illness. In late 1924, Baird returned to London to continue his experiments there. On March 25, 1925, Baird gave the first public demonstration of televised silhouette images in motion, at Selfridge's Department Store in London. Since human faces had inadequate contrast to show up on his system at this time, he televised cut-outs and by mid-1925 the head of a ventriloquist's dummy he later named "Stooky Bill", whose face was painted to highlight its contrast. "Stooky Bill" also did not complain about the long hours of staying still in front of the blinding level of light used in these experiments. On October 2, 1925, suddenly the dummy's head came through on the screen with incredible clarity. On January 26, 1926, he demonstrated the transmission of images of real human faces for 40 distinguished scientists of the Royal Institution. This is widely regarded as being the world's first public television demonstration. Baird's system used Nipkow disks for both scanning the image and displaying it. A brightly illuminated subject was placed in front of a spinning Nipkow disk set with lenses that swept images across a static photocell. At this time, it is believed that it was a thallium sulphide (Thalofide) cell, developed by Theodore Case in the US, that detected the light reflected from the subject. 
This was transmitted by radio to a receiver unit, where the video signal was applied to a neon bulb behind a similar Nipkow disk synchronised with the first. The brightness of the neon lamp was varied in proportion to the brightness of each spot on the image. As each lens in the disk passed by, one scan line of the image was reproduced. With this early apparatus, Baird's disks had 16 lenses, yet in conjunction with the other discs used produced moving images with 32 scan-lines, just enough to recognize a human face. He began with a frame-rate of five per second, which was soon increased to a rate of 12 frames per second and 30 scan-lines. In 1927, Baird transmitted a signal over of telephone line between London and Glasgow. In 1928, Baird's company (Baird Television Development Company/Cinema Television) broadcast the first transatlantic television signal, between London and New York, and the first shore-to-ship transmission. In 1929, he became involved in the first experimental mechanical television service in Germany. In November of the same year, Baird and Bernard Natan of Pathé established France's first television company, Télévision-Baird-Natan. In 1931, he made the first outdoor remote broadcast, of the Derby. In 1932, he demonstrated ultra-short wave television. Baird Television Limited's mechanical systems reached a peak of 240 lines of resolution at the company's Crystal Palace studios, and later on BBC television broadcasts in 1936, though for action shots (as opposed to a seated presenter) the mechanical system did not scan the televised scene directly. Instead, a 17.5mm film was shot, rapidly developed, and then scanned while the film was still wet. The Scophony Company's success with their mechanical system in the 1930s enabled them to take their operations to the USA when World War II curtailed their business in Britain. An American inventor, Charles Francis Jenkins, also pioneered the television. He published an article on "Motion Pictures by Wireless" in 1913, but it was not until December 1923 that he transmitted moving silhouette images for witnesses. On June 13, 1925, Jenkins publicly demonstrated the synchronized transmission of silhouette pictures. In 1925, Jenkins used a Nipkow disk and transmitted the silhouette image of a toy windmill in motion, over a distance of five miles (from a naval radio station in Maryland to his laboratory in Washington, D.C.), using a lensed disk scanner with a 48-line resolution. He was granted U.S. patent 1,544,156 (Transmitting Pictures over Wireless) on June 30, 1925 (filed March 13, 1922). On December 25, 1926, Kenjiro Takayanagi demonstrated a television system with a 40-line resolution that employed a Nipkow disk scanner and CRT display at Hamamatsu Industrial High School in Japan. This prototype is still on display at the Takayanagi Memorial Museum at Shizuoka University, Hamamatsu Campus. By 1927, Takayanagi improved the resolution to 100 lines, which was not surpassed until 1931. By 1928, he was the first to transmit human faces in halftones. His work had an influence on the later work of Vladimir K. Zworykin. In Japan he is viewed as the man who completed the first all-electronic television. His research toward creating a production model was halted by the US after Japan lost World War II. In 1927, a team from Bell Telephone Laboratories demonstrated television transmission from Washington to New York, using a prototype flat panel plasma display to make the images visible to an audience. 
The monochrome display measured two feet by three feet and had 2500 pixels. Herbert E. Ives and Frank Gray of Bell Telephone Laboratories gave a dramatic demonstration of mechanical television on April 7, 1927. The reflected-light television system included both small and large viewing screens. The small receiver had a two-inch-wide by 2.5-inch-high screen. The large receiver had a screen 24 inches wide by 30 inches high. Both sets were capable of reproducing reasonably accurate, monochromatic moving images. Along with the pictures, the sets also received synchronized sound. The system transmitted images over two paths: first, a copper wire link from Washington to New York City, then a radio link from Whippany, New Jersey. Comparing the two transmission methods, viewers noted no difference in quality. Subjects of the telecast included Secretary of Commerce Herbert Hoover. A flying-spot scanner beam illuminated these subjects. The scanner that produced the beam had a 50-aperture disk. The disc revolved at a rate of 18 frames per second, capturing one frame about every 56 milliseconds. (Today's systems typically transmit 30 or 60 frames per second, or one frame every 33.3 or 16.7 milliseconds respectively.) Television historian Albert Abramson underscored the significance of the Bell Labs demonstration: "It was in fact the best demonstration of a mechanical television system ever made to this time. It would be several years before any other system could even begin to compare with it in picture quality." In 1928, WRGB (then W2XB) was started as the world's first television station. It broadcast from the General Electric facility in Schenectady, New York. It was popularly known as "WGY Television". Meanwhile, in the Soviet Union, Léon Theremin had been developing a mirror drum-based television, starting with 16-line resolution in 1925, then 32 lines and eventually 64 using interlacing in 1926. As part of his thesis on May 7, 1926, Theremin electrically transmitted and then projected near-simultaneous moving images on a five-foot square screen. By 1927 he achieved an image of 100 lines, a resolution that was not surpassed until 1931 by RCA, with 120 lines. Because only a limited number of holes could be made in the disks, and disks beyond a certain diameter became impractical, image resolution in mechanical television broadcasts was relatively low, ranging from about 30 lines up to about 120. Nevertheless, the image quality of 30-line transmissions steadily improved with technical advances, and by 1933 the UK broadcasts using the Baird system were remarkably clear. A few systems ranging into the 200-line region also went on the air. Two of these were the 180-line system that Compagnie des Compteurs (CDC) installed in Paris in 1935, and the 180-line system that Peck Television Corp. started in 1935 at station VE9AK in Montreal. Anton Codelli (22 March 1875 – 28 April 1954), a Slovenian nobleman, was a passionate inventor. Among other things, he had devised a miniature refrigerator for cars and a new rotary engine design. Intrigued by television, he decided to apply his technical skills to the new medium. At the time, the biggest challenge in television technology was to transmit images with sufficient resolution to reproduce recognizable figures. As recounted by media historian Melita Zajc, most inventors were determined to increase the number of lines used by their systems – some were approaching what was then the magic number of 100 lines. But Baron Codelli had a different idea. 
In 1929, he developed a television device with a single line – but one that formed a continuous spiral on the screen. Codelli based his ingenious design on his understanding of the human eye. He knew that objects seen in peripheral vision don't need to be as sharp as those in the center. The baron's mechanical television system, whose image was sharpest in the middle, worked well, and he was soon able to transmit images of his wife, Ilona von Drasche-Lazar, over the air. Despite the backing of the German electronics giant Telefunken, however, Codelli's television system never became a commercial reality. Electronic television ultimately emerged as the dominant system, and Codelli moved on to other projects. His invention was largely forgotten. The advancement of all-electronic television (including image dissectors and other camera tubes and cathode ray tubes for the reproducer) marked the beginning of the end for mechanical systems as the dominant form of television. Mechanical TV usually only produced small images. It was the main type of TV until the 1930s. The last mechanical television broadcasts ended in 1939 at stations run by a handful of public universities in the United States. Electronic television In 1897 J. J. Thomson, an English physicist, in his three famous experiments was able to deflect cathode rays, a fundamental function of the modern cathode-ray tube (CRT). The earliest version of the CRT was invented by the German physicist Karl Ferdinand Braun in 1897 and is also known as the Braun tube. It was a cold-cathode diode, a modification of the Crookes tube with a phosphor-coated screen. A cathode ray tube was successfully demonstrated as a displaying device by the German Professor Max Dieckmann in 1906, his experimental results were published by the journal Scientific American in 1909. In 1908 Alan Archibald Campbell-Swinton, fellow of the UK Royal Society, published a letter in the scientific journal Nature in which he described how "distant electric vision" could be achieved by using a cathode ray tube (or "Braun" tube) as both a transmitting and receiving device. He expanded on his vision in a speech given in London in 1911 and reported in The Times and the Journal of the Röntgen Society. In a letter to Nature published in October 1926, Campbell-Swinton also announced the results of some "not very successful experiments" he had conducted with G. M. Minchin and J. C. M. Stanton. They had attempted to generate an electrical signal by projecting an image onto a selenium-coated metal plate that was simultaneously scanned by a cathode ray beam. These experiments were conducted before March 1914, when Minchin died. They were later repeated in 1937 by two different teams, H. Miller and J. W. Strange from EMI, and H. Iams and A. Rose from RCA. Both teams succeeded in transmitting "very faint" images with the original Campbell-Swinton's selenium-coated plate. Although others had experimented with using a cathode ray tube as a receiver, the concept of using one as a transmitter was novel. The first cathode ray tube to use a hot cathode was developed by John B. Johnson (who gave his name to the term Johnson noise) and Harry Weiner Weinhart of Western Electric, and became a commercial product in 1922. The problem of low sensitivity to light resulting in low electrical output from transmitting or "camera" tubes would be solved with the introduction of charge-storage technology by the Hungarian engineer Kálmán Tihanyi in the beginning of 1924. 
In 1926, Tihanyi designed a television system utilizing fully electronic scanning and display elements and employing the principle of "charge storage" within the scanning (or "camera") tube. His solution was a camera tube that accumulated and stored electrical charges ("photoelectrons") within the tube throughout each scanning cycle. The device was first described in a patent application he filed in Hungary in March 1926 for a television system he dubbed "Radioskop". After further refinements included in a 1928 patent application, Tihanyi's patent was declared void in Great Britain in 1930, and so he applied for patents in the United States. Although his breakthrough would be incorporated into the design of RCA's "iconoscope" in 1931, the U.S. patent for Tihanyi's transmitting tube would not be granted until May 1939. The patent for his receiving tube had been granted the previous October. Both patents had been purchased by RCA prior to their approval. Tihanyi's charge storage idea remains a basic principle in the design of imaging devices for television to the present day. On December 25, 1926, Kenjiro Takayanagi demonstrated a TV system with a 40-line resolution that employed a CRT display at Hamamatsu Industrial High School in Japan. Takayanagi did not apply for a patent. On September 7, 1927, Philo Farnsworth's image dissector camera tube transmitted its first image, a simple straight line, at his laboratory at 202 Green Street in San Francisco. By September 3, 1928, Farnsworth had developed the system sufficiently to hold a demonstration for the press. This is widely regarded as the first electronic television demonstration. In 1929, the system was further improved by elimination of a motor generator, so that his television system now had no mechanical parts. That year, Farnsworth transmitted the first live human images with his system, including a three and a half-inch image of his wife Elma ("Pem") with her eyes closed (possibly due to the bright lighting required). Meanwhile, Vladimir Zworykin was also experimenting with the cathode ray tube to create and show images. While working for Westinghouse Electric in 1923, he began to develop an electronic camera tube. But in a 1925 demonstration, the image was dim, had low contrast and poor definition, and was stationary. Zworykin's imaging tube never got beyond the laboratory stage. But RCA, which acquired the Westinghouse patent, asserted that the patent for Farnsworth's 1927 image dissector was written so broadly that it would exclude any other electronic imaging device. Thus RCA, on the basis of Zworykin's 1923 patent application, filed a patent interference suit against Farnsworth. The U.S. Patent Office examiner disagreed in a 1935 decision, finding priority of invention for Farnsworth against Zworykin. Farnsworth claimed that Zworykin's 1923 system would be unable to produce an electrical image of the type to challenge his patent. Zworykin received a patent in 1928 for a color transmission version of his 1923 patent application, he also divided his original application in 1931. Zworykin was unable or unwilling to introduce evidence of a working model of his tube that was based on his 1923 patent application. In September 1939, after losing an appeal in the courts and determined to go forward with the commercial manufacturing of television equipment, RCA agreed to pay Farnsworth US$1 million over a ten-year period, in addition to license payments, to use Farnsworth's patents. 
In 1933 RCA introduced an improved camera tube that relied on Tihanyi's charge storage principle. Dubbed the Iconoscope by Zworykin, the new tube had a light sensitivity of about 75,000 lux, and thus was claimed to be much more sensitive than Farnsworth's image dissector. However, Farnsworth had overcome his power problems with his Image Dissector through the invention of a unique "multipactor" device that he began work on in 1930, and demonstrated in 1931. This small tube could amplify a signal reportedly to the 60th power or better and showed great promise in all fields of electronics. A problem with the multipactor, unfortunately, was that it wore out at an unsatisfactory rate. At the Berlin Radio Show in August 1931, Manfred von Ardenne gave a public demonstration of a television system using a CRT for both transmission and reception. However, Ardenne had not developed a camera tube, using the CRT instead as a flying-spot scanner to scan slides and film. Philo Farnsworth gave the world's first public demonstration of an all-electronic television system, using a live camera, at the Franklin Institute of Philadelphia on August 25, 1934, and for ten days afterwards. In Britain the EMI engineering team led by Isaac Shoenberg applied in 1932 for a patent for a new device they dubbed "the Emitron", which formed the heart of the cameras they designed for the BBC. In November 1936, a 405-line broadcasting service employing the Emitron began at studios in Alexandra Palace and transmitted from a specially built mast atop one of the Victorian building's towers. It alternated for a short time with Baird's mechanical system in adjoining studios but was more reliable and visibly superior. This was the world's first regular high-definition television service. The original American iconoscope was noisy, had a high ratio of interference to signal, and ultimately gave disappointing results, especially when compared to the high definition mechanical scanning systems then becoming available. The EMI team under the supervision of Isaac Shoenberg analyzed how the iconoscope (or Emitron) produces an electronic signal and concluded that its real efficiency was only about 5% of the theoretical maximum. They solved this problem by developing and patenting in 1934 two new camera tubes dubbed super-Emitron and CPS Emitron. The super-Emitron was between ten and fifteen times more sensitive than the original Emitron and iconoscope tubes and, in some cases, this ratio was considerably greater. It was used for an outside broadcasting by the BBC, for the first time, on Armistice Day 1937, when the general public could watch on a television set how the King laid a wreath at the Cenotaph. This was the first time that anyone could broadcast a live street scene from cameras installed on the roof of neighbor buildings, because neither Farnsworth nor RCA could do the same before the 1939 New York World's Fair. On the other hand, in 1934, Zworykin shared some patent rights with the German licensee company Telefunken. The "image iconoscope" ("Superikonoskop" in Germany) was produced as a result of the collaboration. This tube is essentially identical to the super-Emitron. 
The production and commercialization of the super-Emitron and image iconoscope in Europe were not affected by the patent war between Zworykin and Farnsworth, because Dieckmann and Hell had priority in Germany for the invention of the image dissector, having submitted a patent application for their Lichtelektrische Bildzerlegerröhre für Fernseher (Photoelectric Image Dissector Tube for Television) in Germany in 1925, two years before Farnsworth did the same in the United States. The image iconoscope (Superikonoskop) became the industrial standard for public broadcasting in Europe from 1936 until 1960, when it was replaced by the vidicon and plumbicon tubes. Indeed, it was the representative of the European tradition in electronic tubes competing against the American tradition represented by the image orthicon. The German company Heimann produced the Superikonoskop for the 1936 Berlin Olympic Games, later Heimann also produced and commercialized it from 1940 to 1955, finally the Dutch company Philips produced and commercialized the image iconoscope and multicon from 1952 to 1958. American television broadcasting at the time consisted of a variety of markets in a wide range of sizes, each competing for programming and dominance with separate technology, until deals were made and standards agreed upon in 1941. RCA, for example, used only Iconoscopes in the New York area, but Farnsworth Image Dissectors in Philadelphia and San Francisco. In September 1939, RCA agreed to pay the Farnsworth Television and Radio Corporation royalties over the next ten years for access to Farnsworth's patents. With this historic agreement in place, RCA integrated much of what was best about the Farnsworth Technology into their systems. In 1941, the United States implemented 525-line television. The world's first 625-line television standard was designed in the Soviet Union in 1944, and became a national standard in 1946. The first broadcast in 625-line standard occurred in 1948 in Moscow. The concept of 625 lines per frame was subsequently implemented in the European CCIR standard. In 1936, Kálmán Tihanyi described the principle of plasma display, the first flat panel display system. In 1978, James P. Mitchell described, prototyped and demonstrated what was perhaps the earliest monochromatic flat panel LED television display LED display targeted at replacing the CRT. Color television The basic idea of using three monochrome images to produce a color image had been experimented with almost as soon as black-and-white televisions had first been built. Older televisions have the RGB (Red-Green-Blue) color scheme while modern televisions focus on LEDs to create the image. Among the earliest published proposals for television was one by Maurice Le Blanc in 1880 for a color system, including the first mentions in television literature of line and frame scanning, although he gave no practical details. Polish inventor Jan Szczepanik patented a color television system in 1897, using a selenium photoelectric cell at the transmitter and an electromagnet controlling an oscillating mirror and a moving prism at the receiver. But his system contained no means of analyzing the spectrum of colors at the transmitting end, and could not have worked as he described it. Another inventor, Hovannes Adamian, also experimented with color television as early as 1907. The first color television project is claimed by him, and was patented in Germany on March 31, 1908, patent No. 197183, then in Britain, on April 1, 1908, patent No. 
7219, in France (patent No. 390326) and in Russia in 1910 (patent No. 17912). Scottish inventor John Logie Baird demonstrated the world's first color transmission on July 3, 1928, using scanning discs at the transmitting and receiving ends with three spirals of apertures, each spiral with filters of a different primary color; and three light sources at the receiving end, with a commutator to alternate their illumination. Baird also made the world's first color broadcast on February 4, 1938, sending a mechanically scanned 120-line image from Baird's Crystal Palace studios to a projection screen at London's Dominion Theatre. Mechanically scanned color television was also demonstrated by Bell Laboratories in June 1929 using three complete systems of photoelectric cells, amplifiers, glow-tubes and color filters, with a series of mirrors to superimpose the red, green and blue images into one full color image. The first practical hybrid system was again pioneered by John Logie Baird. In 1940 he publicly demonstrated a color television combining a traditional black-and-white display with a rotating colored disc. This device was very "deep", but was later improved with a mirror folding the light path into an entirely practical device resembling a large conventional console. However, Baird was not happy with the design, and as early as 1944 had commented to a British government committee that a fully electronic device would be better. Mexican inventor Guillermo González Camarena also played an important role in early TV. His experiments with TV (known as telectroescopía at first) began in 1931 and led to a patent for the "trichromatic field sequential system" color television in 1940. In 1939, Hungarian engineer Peter Carl Goldmark introduced an electro-mechanical system while at CBS, which contained an Iconoscope sensor. The CBS field-sequential color system was partly mechanical, with a disc made of red, blue, and green filters spinning inside the television camera at 1,200 rpm, and a similar disc spinning in synchronization in front of the cathode ray tube inside the receiver set. The system was first demonstrated to the Federal Communications Commission (FCC) on August 29, 1940, and shown to the press on September 4. CBS began experimental color field tests using film as early as August 28, 1940, and live cameras by November 12. NBC (owned by RCA) made its first field test of color television on February 20, 1941. CBS began daily color field tests on June 1, 1941. These color systems were not compatible with existing black-and-white television sets, and as no color television sets were available to the public at this time, viewing of the color field tests was restricted to RCA and CBS engineers and the invited press. The War Production Board halted the manufacture of television and radio equipment for civilian use from April 22, 1942, to August 20, 1945, limiting any opportunity to introduce color television to the general public. As early as 1940, Baird had started work on a fully electronic system he called the "Telechrome". Early Telechrome devices used two electron guns aimed at either side of a phosphor plate. Using cyan and magenta phosphors, a reasonable limited-color image could be obtained. He also demonstrated the same system using monochrome signals to produce a 3D image (called "stereoscopic" at the time). A demonstration on August 16, 1944 was the first example of a practical color television system. 
Work on the Telechrome continued and plans were made to introduce a three-gun version for full color. This used a patterned version of the phosphor plate, with the guns aimed at ridges on one side of the plate. However, Baird's untimely death in 1946 ended development of the Telechrome system. Similar concepts were common through the 1940s and 1950s, differing primarily in the way they re-combined the colors generated by the three guns. The Geer tube was similar to Baird's concept, but used small pyramids with the phosphors deposited on their outside faces, instead of Baird's 3D patterning on a flat surface. The Penetron used three layers of phosphor on top of each other and increased the power of the beam to reach the upper layers when drawing those colors. The Chromatron used a set of focusing wires to select the colored phosphors arranged in vertical stripes on the tube. One of the great technical challenges of introducing color broadcast television was the desire to conserve bandwidth, potentially three times that of the existing black-and-white standards, and not use an excessive amount of radio spectrum. In the United States, after considerable research, the National Television Systems Committee approved an all-electronic Compatible color system developed by RCA, which encoded the color information separately from the brightness information and greatly reduced the resolution of the color information in order to conserve bandwidth. The brightness image remained compatible with existing black-and-white television sets at slightly reduced resolution, while color televisions could decode the extra information in the signal and produce a limited-resolution color display. The higher resolution black-and-white and lower resolution color images combine in the brain to produce a seemingly high-resolution color image. The NTSC standard represented a major technical achievement. Although all-electronic color was introduced in the U.S. in 1953, high prices and the scarcity of color programming greatly slowed its acceptance in the marketplace. The first national color broadcast (the 1954 Tournament of Roses Parade) occurred on January 1, 1954, but during the following ten years most network broadcasts, and nearly all local programming, continued to be in black-and-white. It was not until the mid-1960s that color sets started selling in large numbers, due in part to the color transition of 1965 in which it was announced that over half of all network prime-time programming would be broadcast in color that fall. The first all-color prime-time season came just one year later. In 1972, the last holdout among daytime network programs converted to color, resulting in the first completely all-color network season. Early color sets were either floor-standing console models or tabletop versions nearly as bulky and heavy, so in practice they remained firmly anchored in one place. The introduction of GE's relatively compact and lightweight Porta-Color set in the spring of 1966 made watching color television a more flexible and convenient proposition. In 1972, sales of color sets finally surpassed sales of black-and-white sets. Color broadcasting in Europe was also not standardized on the PAL format until the 1960s. By the mid-1970s, the only stations broadcasting in black-and-white were a few high-numbered UHF stations in small markets and a handful of low-power repeater stations in even smaller markets, such as vacation spots. 
By 1979, even the last of these had converted to color and by the early 1980s, black-and-white sets had been pushed into niche markets, notably low-power uses, small portable sets, or use as video monitor screens in lower-cost consumer equipment. By the late 1980s, even these areas switched to color sets. Digital television Digital television (DTV) is the transmission of audio and video by a digitally processed and multiplexed signal, in contrast to the totally analog and channel-separated signals used by analog television. Digital TV can support more than one program in the same channel bandwidth. It is an innovative service that represents the first significant evolution in television technology since color television in the 1950s. Digital TV's roots have been tied very closely to the availability of inexpensive, high-performance computers. It was not until the 1990s that digital TV became a real possibility. In the mid-1980s, the Japanese consumer electronics firm Sony Corporation developed HDTV technology and the equipment to record at such resolution, and the MUSE analog format proposed by NHK, a Japanese broadcaster, was seen as a pacesetter that threatened to eclipse U.S. electronics companies. Sony's system produced images at 1125-line resolution (or, in digital terms, 1875x1125, close to the resolution of Full HD video). Until June 1990, the Japanese MUSE standard—based on an analog system—was the front-runner among the more than 23 different technical concepts under consideration. Then, an American company, General Instrument, demonstrated the feasibility of a digital television signal. This breakthrough was of such significance that the FCC was persuaded to delay its decision on an ATV standard until a digitally based standard could be developed. In March 1990, when it became clear that a digital standard was feasible, the FCC made a number of critical decisions. First, the Commission declared that the new ATV standard must be more than an enhanced analog signal, but be able to provide a genuine HDTV signal with at least twice the resolution of existing television images. Then, to ensure that viewers who did not wish to buy a new digital television set could continue to receive conventional television broadcasts, it dictated that the new ATV standard must be capable of being "simulcast" on different channels. The new ATV standard also allowed the new DTV signal to be based on entirely new design principles. Although incompatible with the existing NTSC standard, the new DTV standard would be able to incorporate many improvements. The final standard adopted by the FCC did not require a single standard for scanning formats, aspect ratios, or lines of resolution. This outcome resulted from a dispute between the consumer electronics industry (joined by some broadcasters) and the computer industry (joined by the film industry and some public interest groups) over which of the two scanning processes—interlaced or progressive—is superior. Interlaced scanning, which is used in televisions worldwide, scans even-numbered lines first, then odd-numbered ones. Progressive scanning, which is the format used in computers, scans lines in sequence from top to bottom. The computer industry argued that progressive scanning is superior because it does not "flicker" in the manner of interlaced scanning. It also argued that progressive scanning enables easier connections with the Internet, and is more cheaply converted to interlaced formats than vice versa.
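The contrast between the two scanning methods described above can be made concrete with a short sketch. The snippet below is purely illustrative and is not part of any broadcast standard or source cited in this article; the function names and the 480-line, 640-column frame size are assumptions chosen only for the example. It shows how an interlaced system splits a frame into an even field and an odd field, and how a receiver can weave the two fields back into a full frame.

import numpy as np

def split_into_fields(frame: np.ndarray):
    """Split a progressive frame (rows = scan lines) into two interlaced fields.

    The even field carries lines 0, 2, 4, ... and the odd field lines 1, 3, 5, ...
    In an interlaced system the two fields are transmitted one after the other,
    so each transmitted image contains only half the lines of the full frame.
    """
    even_field = frame[0::2]   # lines 0, 2, 4, ...
    odd_field = frame[1::2]    # lines 1, 3, 5, ...
    return even_field, odd_field

def weave_fields(even_field: np.ndarray, odd_field: np.ndarray) -> np.ndarray:
    """Reassemble a full frame from two fields (simple 'weave' deinterlacing)."""
    height = even_field.shape[0] + odd_field.shape[0]
    frame = np.empty((height,) + even_field.shape[1:], dtype=even_field.dtype)
    frame[0::2] = even_field
    frame[1::2] = odd_field
    return frame

# Tiny demonstration with a hypothetical 480-line monochrome frame.
frame = np.arange(480 * 640).reshape(480, 640)
even, odd = split_into_fields(frame)
assert even.shape[0] == odd.shape[0] == 240             # each field has half the lines
assert np.array_equal(weave_fields(even, odd), frame)   # weaving restores the frame

In a progressive system the split-and-weave step simply disappears: every frame is transmitted with all of its lines in top-to-bottom order, which avoids inter-field flicker at the cost of bandwidth.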
The film industry also supported progressive scanning because it offers a more efficient means of converting filmed programming into digital formats. For their part, the consumer electronics industry and broadcasters argued that interlaced scanning was the only technology that could transmit the highest quality pictures then feasible, that is, 1080 lines per picture and 1920 pixels per line. William F. Schreiber, who was a director of the Advanced Television Research Program at the Massachusetts Institute of Technology from 1983 until his retirement in 1990, thought that the continued advocacy of interlaced equipment originated from consumer electronics companies that were trying to recoup the substantial investments they had made in interlaced technology. The digital television transition started in the late 2000s, with governments across the world setting deadlines for analog shutdown during the 2010s. Initially the adoption rate was low, but soon more and more households converted to digital television, and the transition was expected to be complete worldwide by the mid to late 2010s. Smart television The advent of digital television allowed innovations such as smart TVs. A smart television, sometimes referred to as connected TV or hybrid television, is a television set with integrated Internet and Web 2.0 features, and is an example of technological convergence between computers and television sets and set-top boxes. Besides the traditional functions of television sets and set-top boxes provided through traditional broadcasting media, these devices can also provide Internet TV, online interactive media, over-the-top content, on-demand streaming media, and home networking access. These TVs come pre-loaded with an operating system. Smart TV should not be confused with Internet TV, IPTV or Web TV. Internet television refers to receiving television content over the internet instead of through traditional systems (terrestrial, cable and satellite), although the internet itself is received by these methods. Internet Protocol television (IPTV) is one of the emerging Internet television technology standards for use by television broadcasters. Web television (WebTV) is a term used for programs created by a wide variety of companies and individuals for broadcast on Internet TV. A first patent was filed in 1994 (and extended the following year) for an "intelligent" television system, linked with data processing systems, by means of a digital or analog network. Apart from being linked to data networks, one key point is its ability to automatically download the necessary software routines, according to a user's demands, and process them. In 2015, major TV manufacturers announced the production of smart TVs only for their mid-range and high-end models. 3D television Stereoscopic 3D television was demonstrated for the first time on August 10, 1928, by John Logie Baird in his company's premises at 133 Long Acre, London. Baird pioneered a variety of 3D television systems using electro-mechanical and cathode-ray tube techniques. The first 3D TV was produced in 1935. The advent of digital television in the 2000s greatly improved 3D TVs. Although 3D TV sets are quite popular for watching 3D home media such as Blu-ray discs, 3D programming has largely failed to make inroads among the public. Many 3D television channels that started in the early 2010s were shut down by the mid-2010s.
Terrestrial television Overview Programming is broadcast by television stations, sometimes called "channels", as stations are licensed by their governments to broadcast only over assigned channels in the television band. At first, terrestrial broadcasting was the only way television could be widely distributed, and because bandwidth was limited, i.e., there were only a small number of channels available, government regulation was the norm. Canada The Canadian Broadcasting Corporation (CBC) adopted the American NTSC 525-line B/W 60 field per second system as its broadcast standard. It began television broadcasting in Canada in September 1952. The first broadcast was on September 6, 1952 from its Montreal station CBFT. The premiere broadcast was bilingual, spoken in English and French. Two days later, on September 8, 1952, the Toronto station CBLT went on the air. This became the English-speaking flagship station for the country, while CBFT became the French-language flagship after a second English-language station was licensed to CBC in Montreal later in the decade. The CBC's first privately owned affiliate television station, CKSO in Sudbury, Ontario, launched in October 1953 (at the time, all private stations were expected to affiliate with the CBC, a condition that was relaxed in 1960–61 when CTV, Canada's second national English-language network, was formed). Czechoslovakia In former Czechoslovakia (now the Czech Republic and Slovakia), the first experimental television sets were produced in 1948. In the same year the first test television transmission was performed. Regular television broadcasts in the Prague area started on May 1, 1953. Television service expanded in the following years as new studios were built in Ostrava, Bratislava, Brno and Košice. By 1961 more than a million citizens owned a television set. The second channel of the state-owned Czechoslovak Television started broadcasting in 1970. Preparations for color transmissions in the PAL color system started in the second half of the 1960s. However, due to the Warsaw Pact invasion of Czechoslovakia and the following normalization period, the broadcaster was ultimately forced to adopt the SECAM color system used by the rest of the Eastern Bloc. Regular color transmissions eventually started in 1973, with television studios using PAL equipment and the output signal only being transcoded to SECAM at transmitter sites. After the Velvet Revolution, it was decided to switch to the PAL standard. The new OK3 channel was launched by Czechoslovak Television in May 1990 and broadcast in PAL from the very start. The remaining channels switched to PAL by July 1, 1992. Commercial television did not start broadcasting until after the dissolution of Czechoslovakia. France The first experiments in television broadcasting began in France in the 1930s, although the French did not immediately employ the new technology. In November 1929, Bernard Natan established France's first television company, Télévision-Baird-Natan. On April 14, 1931, René Barthélemy carried out the first French transmission using a thirty-line standard. On December 6, 1931, Henri de France created the Compagnie Générale de Télévision (CGT). In December 1932, Barthélemy carried out an experimental program in black and white (definition: 60 lines) one hour per week, "Paris Télévision", which gradually became daily from early 1933.
The first official channel of French television appeared on February 13, 1935, the date of the official inauguration of television in France, which was broadcast in 60 lines from 8:15 to 8:30 pm. The program showed the actress Béatrice Bretty in the studio of Radio-PTT Vision at 103 rue de Grenelle in Paris. The broadcast had a range of . On November 10, Georges Mandel, Minister of Posts, inaugurated the first broadcast in 180 lines from the transmitter of the Eiffel Tower. On the 18th, Susy Wincker, the first announcer since the previous June, carried out a demonstration for the press from 5:30 to 7:30 pm. Broadcasts became regular from January 4, 1937 from 11:00 to 11:30 am and 8:00 to 8:30 pm during the week, and from 5:30 to 7:30 pm on Sundays. In July 1938, a decree defined for three years a standard of 455 lines VHF (whereas three standards were used for the experiments: 441 lines for Gramont, 450 lines for the Compagnie des Compteurs and 455 for Thomson). In 1939, there were only about 200 to 300 individual television sets, some of which were also available in a few public places. With the entry of France into World War II the same year, broadcasts ceased and the transmitter of the Eiffel Tower was sabotaged. On September 3, 1940, French television was seized by the German occupation forces. A technical agreement was signed by the Compagnie des Compteurs and Telefunken, and a financing agreement for resuming the service was signed by the German Ministry of Post and Radiodiffusion Nationale (Vichy's radio). On May 7, 1943, the first broadcast of Fernsehsender Paris (Paris Télévision) was transmitted from rue Cognacq-Jay. These regular broadcasts (5 hours a day) lasted until August 16, 1944. One thousand 441-line sets, most of which were installed in soldiers' hospitals, picked up the broadcasts. These Nazi-controlled television broadcasts from the Eiffel Tower in Paris were able to be received on the south coast of England by R.A.F. and BBC engineers, who photographed the station identification image directly from the screen. In 1944, René Barthélemy developed an 819-line television standard. During the years of occupation, Barthélemy reached 1015 and even 1042 lines. On October 1, 1944, television service resumed after the liberation of Paris. The broadcasts were transmitted from the Cognacq-Jay studios. In October 1945, after repairs, the transmitter of the Eiffel Tower was back in service. On November 20, 1948, François Mitterrand decreed a broadcast standard of 819 lines; broadcasting began at the end of 1949 in this definition. Besides France, this standard was later adopted by Algeria, Monaco, and Morocco. Belgium and Luxembourg used a modified version of this standard with bandwidth narrowed to 7 MHz.
After the end of World War II, the victorious Allies imposed a general ban on all radio and television broadcasting in Germany. Radio broadcasts for information purposes were soon permitted again, but television broadcasting was allowed to resume only in 1948. In East Germany, the head of broadcasting in the Soviet occupation zone, Hans Mahler, predicted in 1948 that in the near future 'a new and important technical step forward in the field of broadcasting in Germany will begin its triumphant march: television.' In 1950, the plans for a nationwide television service got off the ground, and a Television Centre in Berlin was approved. Transmissions began on December 21, 1952 using the 625-line standard developed in the Soviet Union in 1944, although at that time there were probably no more than 75 television receivers capable of receiving the programming. In West Germany, the British occupation forces as well as NWDR (Nordwestdeutscher Rundfunk), which had started work in the British zone straight after the war, agreed to the launch of a television station. Even before this, German television specialists had agreed on 625 lines as the future standard. This standard had narrower channel bandwidth (7 MHz) compared to the Soviet specification (8 MHz), allowing three television channels to fit into the VHF I band. In 1963, a second broadcaster (ZDF) began broadcasting. Commercial stations began programming in the 1980s. When color was introduced, West Germany (1967) chose a variant of the NTSC color system, modified by Walter Bruch and called PAL. East Germany (1969) accepted the French SECAM system, which was used in Eastern European countries. With the reunification of Germany, it was decided to switch to the PAL color system. The system was changed in December 1990. Italy In Italy, the first experimental television broadcasts were made in Turin beginning in 1934. The city already hosted the Center for Management of the EIAR (later renamed RAI) at the premises of the Theatre of Turin. Subsequently, the EIAR established offices in Rome and Milan. On July 22, 1939, the first television transmitter came into operation in Rome at the EIAR station, performing regular broadcasts for about a year using a 441-line system developed in Germany. In September of the same year, a second television transmitter was installed in Milan, making experimental broadcasts during major events in the city. The broadcasts were abruptly ended on May 31, 1940, by order of the government, allegedly because of interference with early air navigation systems; Italy's imminent entry into the war is also believed to have played a role in this decision. EIAR transmitting equipment was relocated to Germany by German troops and was later returned to Italy. The first official television broadcast was made by the RAI on January 3, 1954. Japan Television broadcasting in Japan started on August 28, 1953, making the country one of the first in the world with an experimental television service. The first television tests were conducted as early as 1926 using a combined mechanical Nipkow disk and electronic Braun tube system, later switching to an all-electronic system in 1935 using a domestically developed iconoscope system. However, because of the beginning of World War II in the Pacific region, this first full-fledged TV broadcast experimentation lasted only a few months. Regular television broadcasts would eventually start in 1953.
In 1979, NHK first developed a consumer high-definition television with a 5:3 display aspect ratio. The system, known as Hi-Vision or MUSE after its Multiple sub-Nyquist sampling encoding for encoding the signal, required about twice the bandwidth of the existing NTSC system but provided about four times the resolution (1080i/1125 lines). Satellite test broadcasts started in 1989, with regular testing starting in 1991 and regular broadcasting of BS-9ch commenced on November 25, 1994, which featured commercial and NHK television programming. Sony first demonstrated a wideband analog high-definition television system HDTV capable video camera, monitor and video tape recorder (VTR) in April 1981 at an international meeting of television engineers in Algiers. The Sony HDVS range was launched in April 1984, with the HDC-100 camera, HDV-100 video recorder and HDS-100 video switcher all working in the 1125-line component video format with interlaced video and a 5:3 aspect ratio. Mexico The first testing television station in Mexico signed on in 1935. When KFMB-TV in San Diego signed on in 1949, Baja California became the first state to receive a commercial television station over the air. Within a year, the Mexican government would adopt the U.S. NTSC 525-line B/W 60-field-per-second system as the country's broadcast standard. In 1950, the first commercial television station within Mexico, XHTV in Mexico City, signed on the air, followed by XEW-TV in 1951 and XHGC in 1952. Those three were not only the first television stations in the country, but also the flagship stations of Telesistema Mexicano, which was formed in 1955. That year, Emilio Azcárraga Vidaurreta, who had signed on XEW-TV, entered into a partnership with Rómulo O'Farrill who had signed on XHTV, and Guillermo González Camarena, who had signed on XHGC. The earliest 3D television broadcasts in the world were broadcast over XHGC in 1954. Color television was introduced in 1962, also over XHGC-TV. One of Telesistema Mexicano's earliest broadcasts as a network, over XEW-TV, on June 25, 1955, was the first international North American broadcast in the medium's history, and was jointly aired with NBC in the United States, where it aired as the premiere episode of Wide Wide World, and the Canadian Broadcasting Corporation. Except for a brief period between 1969 and 1973, nearly every commercial television station in Mexico, with exceptions in the border cities, was expected to affiliate with a subnetwork of Telesistema Mexicano or its successor, Televisa (formed by the 1973 merger of Telesistema Mexicano and Television Independiente de Mexico). This condition would not be relaxed for good until 1993, when Imevision was privatized to become TV Azteca. Soviet Union (U.S.S.R.) The Soviet Union began offering 30-line electromechanical test broadcasts in Moscow on October 31, 1931, and a commercially manufactured television set in 1932. First electronic television system on 180 lines at 25 fps was created in the beginning of 1935 in Leningrad (St. Petersburg). In September 1937 the experimental Leningrad TV Center (OLTC) was put in action. OLTC worked with 240 lines at 25 fps progressive scan. In Moscow, experimental transmissions of electronic television took place on March 9, 1937, using equipment manufactured by RCA. Regular broadcasting began on December 31, 1938. 
It was quickly realized that the 343 lines of resolution offered by this format would become insufficient in the long run, so a specification for a 441-line format at 25 fps interlaced was developed in 1940. Television broadcasts were suspended during the Great Patriotic War. In 1944, while the war was still raging, a new standard offering 625 lines of vertical resolution was prepared. This format was ultimately accepted as a national standard. The transmissions in 625-line format started in Moscow on November 4, 1948. Regular broadcasting began on June 16, 1949. Details of this standard were formalized in the 1955 specification GOST 7845-55, which set the basic parameters for black-and-white television broadcasting. In particular, frame size was set to 625 lines, frame rate to 25 frames/s interlaced, and video bandwidth to 6 MHz. These basic parameters were accepted by most countries having 50 Hz mains frequency and became the foundation of television systems presently known as PAL and SECAM. Starting in 1951, broadcasting in the 625-line standard was introduced in other major cities of the Soviet Union. Color television broadcasts started in 1967, using the SECAM color system. Turkey The first Turkish television channel, ITU TV, was launched in 1952. The first national television channel, TRT 1, was launched in 1964. Color television was introduced in 1981. Before 1989, the only channel was run by the state broadcasting company TRT, which broadcast only at certain times of the day. Turkey's first private television channel, Star, started its broadcasts on May 26, 1989. Until then there was only one television channel controlled by the state, but with the wave of liberalization, privately owned broadcasting began. Turkey's television market is defined by a handful of big channels, led by Kanal D, ATV and Show, with 14%, 10% and 9.6% market share, respectively. The most important reception platforms are terrestrial and satellite, with almost 50% of homes using satellite (of these 15% were pay services) at the end of 2009. Three services dominate the multi-channel market: the satellite platforms Digitürk and D-Smart and the cable TV service Türksat. United Kingdom The first British television broadcast was made by Baird Television's electromechanical system over the BBC radio transmitter in September 1929. Baird provided a limited amount of programming five days a week by 1930. During this time, Southampton earned the distinction of broadcasting the first-ever live television interview, which featured Peggy O'Neil, an actress and singer from Buffalo, New York. On August 22, 1932, the BBC launched its own regular service using Baird's 30-line electromechanical system, continuing until September 11, 1935. On November 2, 1936, the BBC began transmitting the world's first public regular high-definition service from the Victorian Alexandra Palace in north London. It therefore claims to be the birthplace of TV broadcasting as we know it today. It was a dual-system service, alternating between Marconi-EMI's 405-line standard and Baird's improved 240-line standard, from Alexandra Palace in London. The BBC Television Service continues to this day. The government, on advice from a special advisory committee, decided that Marconi-EMI's electronic system gave the superior picture, and the Baird system was dropped in February 1937. TV broadcasts in London were on the air an average of four hours daily from 1936 to 1939. There were 12,000 to 15,000 receivers.
Some sets in restaurants or bars might have 100 viewers for sports events (Dunlap, p56). The outbreak of the Second World War caused the BBC service to be abruptly suspended on September 1, 1939, at 12:35 pm, after a Mickey Mouse cartoon and test signals were broadcast, so that transmissions could not be used as a beacon to guide enemy aircraft to London. It resumed, again from Alexandra Palace, on June 7, 1946, after the end of the war, beginning with a live programme that opened with the line "Good afternoon everybody. How are you? Do you remember me, Jasmine Bligh?" and was followed by the same Mickey Mouse cartoon broadcast on the last day before the war. At the end of 1947 there were 54,000 licensed television receivers, compared with 44,000 television sets in the United States at that time. The first transatlantic television signal was sent in 1928 from London to New York by the Baird Television Development Company/Cinema Television, although this signal was not broadcast to the public. The first live satellite signal to Britain from the United States was broadcast via the Telstar satellite on July 23, 1962. The first live broadcast from the European continent was made on August 27, 1950. United States WRGB claims to be the world's oldest television station, tracing its roots to an experimental station founded on January 13, 1928, broadcasting from the General Electric factory in Schenectady, NY, under the call letters W2XB. It was popularly known as "WGY Television" after its sister radio station. Later in 1928, General Electric started a second facility, this one in New York City, which had the call letters W2XBS and which today is known as WNBC. The two stations were experimental in nature and had no regular programming, as receivers were operated by engineers within the company. The image of a Felix the Cat doll rotating on a turntable was broadcast for 2 hours every day for several years as new technology was being tested by the engineers. The first regularly scheduled television service in the United States began on July 2, 1928, fifteen months before the United Kingdom. The Federal Radio Commission authorized C. F. Jenkins to broadcast from experimental station W3XK in Wheaton, Maryland, a suburb of Washington, D.C. For at least the first eighteen months, 48-line silhouette images from motion picture film were broadcast, although beginning in the summer of 1929 he occasionally broadcast in halftones. Hugo Gernsback's New York City radio station began a regular, if limited, schedule of live television broadcasts on August 14, 1928, using 48-line images. Working with only one transmitter, the station alternated radio broadcasts with silent television images of the station's call sign, faces in motion, and wind-up toys in motion. Speaking later that month, Gernsback downplayed the broadcasts, intended for amateur experimenters: "In six months we may have television for the public, but so far we have not got it." Gernsback also published Television, the world's first magazine about the medium. General Electric's experimental station in Schenectady, New York, on the air sporadically since January 13, 1928, was able to broadcast reflected-light, 48-line images via shortwave as far as Los Angeles, and by September was making four television broadcasts weekly. It is considered to be the direct predecessor of current television station WRGB. The Queen's Messenger, a one-act play broadcast on September 11, 1928, was the world's first live drama on television.
Radio giant RCA began daily experimental television broadcasts in New York City in March 1929 over station W2XBS, the predecessor of current television station WNBC. The 60-line transmissions consisted of pictures, signs, and views of persons and objects. Experimental broadcasts continued to 1931. General Broadcasting System's WGBS radio and W2XCR television aired their regular broadcasting debut in New York City on April 26, 1931, with a special demonstration set up in Aeolian Hall at Fifth Avenue and Fifty-fourth Street. Thousands waited to catch a glimpse of the Broadway stars who appeared on the six-inch (15 cm) square image, in an evening event to publicize a weekday programming schedule offering films and live entertainers during the four-hour daily broadcasts. Appearing were boxer Primo Carnera, actors Gertrude Lawrence, Louis Calhern, Frances Upton and Lionel Atwill, WHN announcer Nils Granlund, the Forman Sisters, and a host of others. CBS's New York City station W2XAB began broadcasting their first regular seven-day-a-week television schedule on July 21, 1931, with a 60-line electromechanical system. The first broadcast included Mayor Jimmy Walker, the Boswell Sisters, Kate Smith, and George Gershwin. The service ended in February 1933. Don Lee Broadcasting's station W6XAO in Los Angeles went on the air in December 1931. Using the UHF spectrum, it broadcast a regular schedule of filmed images every day except Sundays and holidays for several years. By 1935, low-definition electromechanical television broadcasting had ceased in the United States except for a handful of stations run by public universities that continued to 1939. The Federal Communications Commission (FCC) saw television in the continual flux of development with no consistent technical standards, hence all such stations in the U.S. were granted only experimental and non-commercial licenses, hampering television's economic development. Just as importantly, Philo Farnsworth's August 1934 demonstration of an all-electronic system at the Franklin Institute in Philadelphia pointed out the direction of television's future. On June 15, 1936, Don Lee Broadcasting began a one-month-long demonstration of high definition (240+ line) television in Los Angeles on W6XAO (later KTSL, now KCBS-TV) with a 300-line image from motion picture film. By October, W6XAO was making daily television broadcasts of films. By 1934 RCA increased the definition to 343 interlaced lines and the frame rate to 30 per second. On July 7, 1936 RCA and its subsidiary NBC demonstrated in New York City a 343-line electronic television broadcast with live and film segments to its licensees, and made its first public demonstration to the press on November 6. Irregularly scheduled broadcasts continued through 1937 and 1938. Regularly scheduled electronic broadcasts began in April 1938 in New York (to the second week of June, and resuming in August) and Los Angeles. NBC officially began regularly scheduled television broadcasts in New York on April 30, 1939, with a broadcast of the opening of the 1939 New York World's Fair. In 1937 RCA raised the frame definition to 441 lines, and its executives petitioned the FCC for approval of the standard. By June 1939, regularly scheduled 441-line electronic television broadcasts were available in New York City and Los Angeles, and by November on General Electric's station in Schenectady. 
From May through December 1939, the New York City NBC station (W2XBS) of RCA broadcast twenty to fifty-eight hours of programming per month, Wednesday through Sunday of each week. The programming was 33% news, 29% drama, and 17% educational programming, with an estimated 2,000 receiving sets by the end of the year, and an estimated audience of five to eight thousand. A remote truck could cover outdoor events from up to away from the transmitter, which was located atop the Empire State Building. Coaxial cable was used to cover events at Madison Square Garden. The coverage area for reliable reception was a radius of 40 to from the Empire State Building, an area populated by more than 10,000,000 people. The FCC adopted NTSC television engineering standards on May 2, 1941, calling for 525 lines of vertical resolution, 30 frames per second with interlaced scanning, 60 fields per second, and sound carried by frequency modulation. Sets sold since 1939 that were built for slightly lower resolution could still be adjusted to receive the new standard. (Dunlap, p31). The FCC saw television ready for commercial licensing, and the first such licenses were issued to NBC- and CBS-owned stations in New York on July 1, 1941, followed by Philco's station WPTZ in Philadelphia. In the U.S., the Federal Communications Commission (FCC) allowed stations to broadcast advertisements beginning in July 1941, but required public service programming commitments as a requirement for a license. By contrast, the United Kingdom chose a different route, imposing a television license fee on owners of television reception equipment to fund the British Broadcasting Corporation (BBC), which had public service as part of its royal charter. The first official, paid advertising to appear on American commercial television occurred on the afternoon of July 1, 1941, over New York station WNBT (now WNBC) before a baseball game between the Brooklyn Dodgers and Philadelphia Phillies. The announcement for Bulova watches, for which the company paid anywhere from $4.00 to $9.00 (reports vary), displayed a WNBT test pattern modified to look like a clock with the hands showing the time. The Bulova logo, with the phrase "Bulova Watch Time", was shown in the lower right-hand quadrant of the test pattern while the second hand swept around the dial for one minute. After the U.S. entry into World War II, the FCC reduced the required minimum air time for commercial television stations from 15 hours per week to 4 hours. Most TV stations suspended broadcasting; of the ten original television stations only six continued through the war. On the few that remained, programs included entertainment such as boxing and plays, events at Madison Square Garden, and illustrated war news as well as training for air raid wardens and first aid providers. In 1942, there were 5,000 sets in operation, but production of new TVs, radios, and other broadcasting equipment for civilian purposes was suspended from April 1942 to August 1945 (Dunlap). By 1947, when there were 40 million radios in the U.S., there were about 44,000 television sets (with probably 30,000 in the New York area). Regular network television broadcasts began on NBC on a three-station network linking New York with the Capital District and Philadelphia in 1944; on the DuMont Television Network in 1946, and on CBS and ABC in 1948. Following the rapid rise of television after the war, the Federal Communications Commission was flooded with applications for television station licenses. 
With more applications than available television channels, the FCC ordered a freeze on processing station applications in 1948 that remained in effect until April 14, 1952. By 1949, the networks stretched from New York to the Mississippi River, and by 1951 to the West Coast. Commercial color television broadcasts began on CBS in 1951 with a field-sequential color system that was suspended four months later for technical and economic reasons. The television industry's National Television System Committee (NTSC) developed a color television system based on RCA technology that was compatible with existing black-and-white receivers, and commercial color broadcasts reappeared in 1953. With the widespread adoption of cable across the United States in the 1970s and 80s, terrestrial television broadcasts have been in decline; in 2013 it was estimated that about 7% of US households used an antenna ("CEA Study Says Seven Percent of TV Households Use Antennas", TVTechnology, July 30, 2013). A slight increase in use began around 2010 due to the switchover to digital terrestrial television broadcasts, which offered high image quality over very large areas and provided an alternative to cable television (CATV) for cord cutters. Cable television Cable television is a system of broadcasting television programming to paying subscribers via radio frequency (RF) signals transmitted through coaxial cables or light pulses through fiber-optic cables. This contrasts with traditional terrestrial television, in which the television signal is transmitted over the air by radio waves and received by a television antenna attached to the television. FM radio programming, high-speed Internet, telephone service, and similar non-television services may also be provided through these cables. The abbreviation CATV is often used for cable television. It originally stood for "community access television" or "community antenna television", from cable television's origins in 1948: in areas where over-the-air reception was limited by distance from transmitters or mountainous terrain, large "community antennas" were constructed, and cable was run from them to individual homes. The origins of cable broadcasting are even older as radio programming was distributed by cable in some European cities as far back as 1924. Early cable television was analog, but since the 2000s all cable operators have switched to, or are in the process of switching to, digital cable television. Satellite television Overview Satellite television is a system of supplying television programming using broadcast signals relayed from communication satellites. The signals are received via an outdoor parabolic reflector antenna usually referred to as a satellite dish and a low-noise block downconverter (LNB). A satellite receiver then decodes the desired television programme for viewing on a television set. Receivers can be external set-top boxes or built-in television tuners. Satellite television provides a wide range of channels and services, especially to geographic areas without terrestrial television or cable television. The most common method of reception is direct-broadcast satellite television (DBSTV), also known as "direct to home" (DTH). In DBSTV systems, signals are relayed from a direct broadcast satellite in the Ku band and are completely digital. Satellite TV systems formerly used systems known as television receive-only.
These systems received analog signals transmitted in the C-band spectrum from FSS type satellites, and required the use of large dishes. Consequently, these systems were nicknamed "big dish" systems, and were more expensive and less popular. The direct-broadcast satellite television signals were earlier analog signals and later digital signals, both of which require a compatible receiver. Digital signals may include high-definition television (HDTV). Some transmissions and channels are free-to-air or free-to-view, while many other channels are pay television requiring a subscription. In 1945 British science fiction writer Arthur C. Clarke proposed a worldwide communications system that would function by means of three satellites equally spaced apart in earth orbit. This was published in the October 1945 issue of the Wireless World magazine and won him the Franklin Institute's Stuart Ballantine Medal in 1963. The first satellite television signals from Europe to North America were relayed via the Telstar satellite over the Atlantic ocean on July 23, 1962. The signals were received and broadcast in North American and European countries and watched by over 100 million. Launched in 1962, the Relay 1 satellite was the first satellite to transmit television signals from the US to Japan. The first geosynchronous communication satellite, Syncom 2, was launched on July 26, 1963. The world's first commercial communications satellite, called Intelsat I and nicknamed "Early Bird", was launched into geosynchronous orbit on April 6, 1965. The first national network of television satellites, called Orbita, was created by the Soviet Union in October 1967, and was based on the principle of using the highly elliptical Molniya satellite for rebroadcasting and delivering of television signals to a network of twenty ground downlink stations each equipped with a parabolic antenna in diameter. The first commercial North American satellite to carry television transmissions was Canada's geostationary Anik 1, which was launched on 9 November 1972. ATS-6, the world's first experimental educational and Direct Broadcast Satellite (DBS), was launched on May 30, 1974. It transmitted at 860 MHz using wideband FM modulation and had two sound channels. The transmissions were focused on the Indian subcontinent but experimenters were able to receive the signal in Western Europe using home constructed equipment that drew on UHF television design techniques already in use. In the Soviet Union, the Moskva (or Moscow) system of broadcasting and delivering of TV signals via satellites was launched in 1979. Stationary and mobile downlink stations with parabolic antennas in diameter were receiving signal from Gorizont communication satellites deployed to geostationary orbits. The first in a series of Soviet geostationary satellites to carry Direct-To-Home television, Ekran 1, was launched on October 26, 1976. It used a 714 MHz UHF downlink frequency so that the transmissions could be received with existing UHF television technology rather than microwave technology. Beginning of the satellite TV industry In the United States, the satellite television industry developed from the cable television industry as communication satellites were being used to distribute television programming to remote cable television headends. Home Box Office (HBO), Turner Broadcasting System (TBS), and Christian Broadcasting Network (CBN, later The Family Channel) were among the first to use satellite television to deliver programming. 
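As an illustrative aside, the "geostationary" orbit used by satellites such as Syncom 2, Intelsat I and the later direct-broadcast satellites mentioned above can be derived from Kepler's third law. The sketch below is not drawn from any source cited in this article; it only uses standard published values for Earth's gravitational parameter, radius and sidereal day, and the variable names are arbitrary.

import math

# Standard gravitational parameter of Earth (G * M_earth), in m^3/s^2,
# and Earth's mean equatorial radius in metres.
MU_EARTH = 3.986004418e14
EARTH_RADIUS = 6.378137e6

# A geostationary satellite must complete one orbit per sidereal day
# (about 23 h 56 min 4 s) so that it stays above a fixed point on the equator.
SIDEREAL_DAY = 86164.1  # seconds

# Kepler's third law: T^2 = 4 * pi^2 * r^3 / mu  =>  r = (mu * T^2 / (4 * pi^2))^(1/3)
orbit_radius = (MU_EARTH * SIDEREAL_DAY**2 / (4 * math.pi**2)) ** (1 / 3)
altitude = orbit_radius - EARTH_RADIUS

print(f"orbital radius  ~ {orbit_radius / 1000:,.0f} km")   # about 42,164 km
print(f"altitude above the equator ~ {altitude / 1000:,.0f} km")  # about 35,786 km

The resulting altitude of roughly 35,800 km above the equator is the orbit Clarke proposed in 1945 and the one occupied by the broadcast satellites discussed in the following paragraphs.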
Taylor Howard of San Andreas, California became the first person to receive C-band satellite signals with his home-built system in 1976. PBS, a non-profit public broadcasting service, began to distribute its television programming by satellite in 1978. On October 18, 1979, the Federal Communications Commission (FCC) began allowing people to have home satellite earth stations without a federal government license. The front cover of the 1979 Neiman-Marcus Christmas catalogue featured the first home satellite TV stations on sale for $36,500. The dishes were nearly in diameter and were remote-controlled. The price went down by half soon after that, but there were only eight more channels. The Society for Private and Commercial Earth Stations (SPACE), an organisation that represented consumers and satellite TV system owners, was established in 1980. Early satellite television systems were not very popular due to their expense and large dish size. The satellite television dishes of the systems in the late 1970s and early 1980s were in diameter, made of fibreglass or solid aluminum or steel, and in the United States cost more than $5,000, sometimes as much as $10,000. Programming sent from ground stations was relayed from eighteen satellites in geostationary orbit located above the Earth. TVRO/C-band satellite era By 1980, satellite television was well established in the USA and Europe. On April 26, 1982, the first satellite channel in the UK, Satellite Television Ltd. (later Sky1), was launched. Its signals were transmitted from the ESA's Orbital Test Satellites. Between 1981 and 1985, TVRO systems' sales rates increased as prices fell. Advances in receiver technology and the use of Gallium Arsenide FET technology enabled the use of smaller dishes. 500,000 systems, some costing as little as $2,000, were sold in the US in 1984. Dishes pointing to one satellite were even cheaper. People in areas without local broadcast stations or cable television service could obtain good-quality reception with no monthly fees. The large dishes were a subject of much consternation, as many people considered them eyesores, and in the US most condominiums, neighborhoods, and other homeowner associations tightly restricted their use, except in areas where such restrictions were illegal. These restrictions were altered in 1986 when the Federal Communications Commission ruled all of them illegal. A municipality could require a property owner to relocate the dish if it violated other zoning restrictions, such as a setback requirement, but could not outlaw their use. The necessity of these restrictions would slowly decline as the dishes got smaller. Originally, all channels were broadcast in the clear (ITC) because the equipment necessary to receive the programming was too expensive for consumers. With the growing number of TVRO systems, the program providers and broadcasters had to scramble their signal and develop subscription systems. In October 1984, the U.S. Congress passed the Cable Communications Policy Act of 1984, which gave those using TVRO systems the right to receive signals for free unless they were scrambled, and required those who did scramble to make their signals available for a reasonable fee. Since cable channels could prevent reception by big dishes, other companies had an incentive to offer competition. In January 1986, HBO began using the now-obsolete VideoCipher II system to encrypt their channels. Other channels used less secure television encryption systems.
The scrambling of HBO was met with much protest from owners of big-dish systems, most of which had no other option at the time for receiving such channels, claiming that clear signals from cable channels would be difficult to receive. Eventually HBO allowed dish owners to subscribe directly to their service for $12.95 per month, a price equal to or higher than what cable subscribers were paying, and required a descrambler to be purchased for $395. This led to the attack on HBO's transponder on the Galaxy 1 satellite by John R. MacDougall in April 1986. One by one, all commercial channels followed HBO's lead and began scrambling their channels. The Satellite Broadcasting and Communications Association (SBCA) was founded on December 2, 1986, as the result of a merger between SPACE and the Direct Broadcast Satellite Association (DBSA). VideoCipher II used analog scrambling on its video signal and Data Encryption Standard based encryption on its audio signal. VideoCipher II was defeated, and there was a black market for descrambler devices, which were initially sold as "test" devices. Late 1980s and 1990s to present By 1987, nine channels were scrambled, but 99 others were available free-to-air. While HBO initially charged a monthly fee of $19.95, soon it became possible to unscramble all channels for $200 a year. Dish sales went down from 600,000 in 1985 to 350,000 in 1986, but pay television services were seeing dishes as something positive since some people would never have cable service, and the industry was starting to recover as a result. Scrambling also led to the development of pay-per-view events. On November 1, 1988, NBC began scrambling its C-band signal but left its Ku band signal unencrypted in order for affiliates to not lose viewers who could not see their advertising. Most of the two million satellite dish users in the United States still used C-band. ABC and CBS were considering scrambling, though CBS was reluctant due to the number of people unable to receive local network affiliates. The piracy on satellite television networks in the US led to the introduction of the Cable Television Consumer Protection and Competition Act of 1992. This legislation enabled anyone caught engaging in signal theft to be fined up to $50,000 and to be sentenced to a maximum of two years in prison. A repeat offender could be fined up to $100,000 and imprisoned for up to five years. Satellite television had also developed in Europe, but it initially used low-power communication satellites and required dish sizes of over . On December 11, 1988 Luxembourg launched Astra 1A, the first satellite to provide medium power satellite coverage to Western Europe. This was one of the first medium-powered satellites, transmitting signals in Ku band and allowing reception with small dishes (90 cm). The launch of Astra beat the holder of the UK's state Direct Broadcast Satellite licence, British Satellite Broadcasting, to the market. In the US in the early 1990s, four large cable companies launched PrimeStar, a direct broadcasting company using medium power satellite. The relatively strong transmissions allowed the use of smaller (90 cm) dishes. Its popularity declined with the 1994 launch of Hughes' DirecTV and the later launch of the Dish Network satellite television systems. On March 4, 1996 EchoStar introduced Digital Sky Highway (Dish Network) using the EchoStar 1 satellite. EchoStar launched a second satellite in September 1996 to increase the number of channels available on Dish Network to 170.
These systems provided better pictures and stereo sound on 150–200 video and audio channels, and allowed small dishes to be used. This greatly reduced the popularity of TVRO systems. In the mid-1990s, channels began moving their broadcasts to digital television transmission using the DigiCipher conditional access system. In addition to encryption, the widespread availability, in the US, of DBS services such as PrimeStar and DirecTV had been reducing the popularity of TVRO systems since the early 1990s. Signals from DBS satellites (operating in the more recent Ku band) are higher in both frequency and power (due to improvements in the solar panels and energy efficiency of modern satellites) and therefore require much smaller dishes than C-band, and the digital modulation methods now used require less signal strength at the receiver than analog modulation methods. Each satellite can also carry up to 32 transponders in the Ku band, but only 24 in the C band, and several digital subchannels can be multiplexed (MCPC) or carried separately (SCPC) on a single transponder. Advances in noise reduction due to improved microwave technology and semiconductor materials have also had an effect. However, one consequence of the higher frequencies used for DBS services is rain fade, where viewers lose signal during a heavy downpour. C-band satellite television signals are less prone to rain fade. Internet television Internet television (Internet TV or online television) is the digital distribution of television content via the Internet, as opposed to traditional systems like terrestrial, cable and satellite, although the internet itself is received by terrestrial, cable or satellite methods. Internet television is a general term that covers the delivery of television shows and other video content over the Internet by video streaming technology, typically by major traditional television broadcasters. Internet television should not be confused with Smart TV, IPTV or Web TV. Smart television refers to a TV set that has a built-in operating system. Internet Protocol television (IPTV) is one of the emerging Internet television technology standards for use by television broadcasters. Web television is a term used for programs created by a wide variety of companies and individuals for broadcast on Internet TV. Television sets A television set, also called a television receiver, television, TV set, TV, or telly, is a device that combines a tuner, display, and speakers for the purpose of viewing television. Introduced in the late 1920s in mechanical form, television sets became a popular consumer product after World War II in electronic form, using cathode ray tubes. The addition of color to broadcast television after 1953 further increased the popularity of television sets in the 1960s, and an outdoor antenna became a common feature of suburban homes. The ubiquitous television set became the display device for the first recorded media in the 1970s, such as VHS and later DVD, as well as for early home computers and video game consoles. In the late 2000s, flat-panel televisions incorporating liquid-crystal displays largely replaced cathode ray tubes. Modern flat-panel TVs are typically capable of high-definition display (720p, 1080p or 2160p) and can also play content from a USB device. Mechanical televisions were commercially sold from 1928 to 1934 in the United Kingdom, United States, and Soviet Union.
The earliest commercially made televisions, sold by Baird as "Televisors" in the UK beginning in 1928, were radios with the addition of a television device consisting of a neon tube behind a mechanically spinning disk (patented by German engineer Paul Nipkow in 1884) with a spiral of apertures; the Televisor was the first mass-produced television set, selling about a thousand units. The first commercially made electronic televisions with cathode ray tubes were manufactured by Telefunken in Germany in 1934 ("1934–35 Telefunken", Television History: The First 75 Years), followed by other makers in France (1936), Britain (1936), and the United States (1938) ("America's First Electronic Television Set", Television History: The First 75 Years). The cheapest model with a 12-inch (30 cm) screen was $445. An estimated 19,000 electronic televisions were manufactured in Britain, and about 1,600 in Germany, before World War II. About 7,000–8,000 electronic sets were made in the U.S. before the War Production Board halted manufacture in April 1942, production resuming in August 1945. Television usage in the western world skyrocketed after World War II with the lifting of the manufacturing freeze, war-related technological advances, the drop in television prices caused by mass production, increased leisure time, and additional disposable income. While only 0.5% of U.S. households had a television in 1946, 55.7% had one in 1954, and 90% by 1962. In Britain, there were 15,000 television households in 1947, 1.4 million in 1952, and 15.1 million by 1968. By the late 1960s and early 1970s, color television had come into wide use. In Britain, BBC1, BBC2 and ITV were regularly broadcasting in color by 1969. By the late 2000s, CRT display technology was largely supplanted worldwide by flat-panel displays such as LCD. Flat-panel television, especially LCD, has become the dominant form of television since the early 2010s. Technological innovations The first national live television broadcast in the U.S. took place on September 4, 1951 when President Harry Truman's speech at the Japanese Peace Treaty Conference in San Francisco was transmitted over AT&T's transcontinental cable and microwave radio relay system to broadcast stations in local markets ("Television Highlights", The Washington Post, September 4, 1951, p. B13). The first live coast-to-coast commercial television broadcast in the U.S. took place on November 18, 1951 during the premiere of CBS's See It Now, which showed a split-screen view of the Brooklyn Bridge in New York City and the Golden Gate Bridge in San Francisco.
It also enabled the rise of subscription television channels, such as Home Box Office (HBO) and Showtime in the U.S., and Sky Television in the U.K. Television pioneers Important people in the development and contributions of TV technology. Manfred von Ardenne John Logie Baird Alan Blumlein Walter Bruch (PAL television) Guillermo González Camarena Alan Archibald Campbell-Swinton Karl Ferdinand Braun Allen B. DuMont Philo T. Farnsworth Boris Grabovsky Charles Francis Jenkins Siegmund and David Loewe, founders of Loewe AG in 1923 Earl Muntz Paul Gottlieb Nipkow Constantin Perskyi Boris Rosing Ulises Armand Sanabria David Sarnoff Isaac Shoenberg Kenjiro Takayanagi Léon Theremin Kálmán Tihanyi Vladimir Zworykin Television museums Paley Center for Media (US) Early Television Museum (US) Museum of Broadcast Communications (US) National Science and Media Museum (UK) National Museum of Australia See also Archive of American Television BBC Archives Geographical usage of television Golden Age of Television, c. 1949–1960 in the U.S. Golden Age of Television (2000s–present) History of broadcasting History of radio History of telecommunication History of videotelephony List of experimental television stations List of years in television List of years in American television Muntzing Oldest television station Television Hall of Fame Timeline of the introduction of color television in countries Timeline of the introduction of television in countries Notes References Further reading {{cite book |last=Dunlap |first=Orrin E. |title=The Future of Television. New York and London: Harper Brothers |date=1942}} External links NAB: How It All Got Started Bairdtelevision.com Mechanical TV and Illusion Generators including a description of what mechanical TV viewing was like History of European Television – online exhibition Journal of European Television History and Culture Television history — inventors Technology Review – Who Really Invented Television? Who Invented Television – Reconciling The Historical Origins of Electronic Video Photos of early TV receivers Early television museum (extensive online presence) Ed Reitan's Color Television History Erics Vintage Television Sets Detailed timeline of communications media (including the TV) The History of Australian Television EUscreen: Discover Europe's television heritage A Visit to Our Studios: a television program exploring the studios at Johns Hopkins University in 1951 Archive of American Television (information and links to videotaped oral history interviews with TV legends and pioneers) Canadian Broadcasting Corporation Archives History of West Australian Television MZTV Museum of Television & Archive Television Early Patents and Inventions Littleton, Cynthia. "Happy 70th Birthday, TV Commercial broadcasts bow on July 1, 1941; Variety calls it 'corney'", Variety, July 1, 2011. WebCitation archive. Booknotes interview with Daniel Stashower on The Boy Genius and the Mogul: The Untold Story of Television, July 21, 2002. History of TV Infographic Experimental television stations Television
51861231
https://en.wikipedia.org/wiki/Lucy%20Sanders
Lucy Sanders
Lucinda "Lucy" Sanders (born 1954) is the current CEO and a co-founder of the National Center for Women & Information Technology. She is the recipient of many distinguished honors in the STEM fields, including induction into the US News STEM Leadership Hall of Fame in 2013. Early age and education At an early age, Sanders displayed an interest in the STEM fields. Sanders had three main influences that led her to pursue an education in computer science: her father, her high school math teacher, and her sister. Her father was an early adopter of computer science when it first began to develop as a large-scale field, her high school teacher taught Sanders skills required for computer programming, and her sister became successful after receiving one of the early degrees in computer science. Upon graduating from high school, Sanders attended Louisiana State University and received her bachelor's degree in computer science. Sanders then attended the University of Colorado Boulder, where she attained a master's degree in computer science. Professional career In her early career, Sanders worked as a Research and Development (R&D) Manager at Bell Labs. She later became an executive vice president and worked as the CTO of Lucent Customer Care Solutions until 1999. She moved on from Bell Labs to work in CRM Solutions at Avaya Labs for two years, until she founded the National Center for Women and Information Technology in 2004, where she currently works as the CEO. She also previously served on the boards of the Alliance for Technology, Learning, and Society (Atlas), the Denver Public Schools Computer Magnet Advisory Board, the MSRI, and the Engineering Advisory Council at the University of Colorado at Boulder, and is a Trustee for the Center for American Entrepreneurship and the International Computer Science Institute. NCWIT Sanders co-founded the National Center for Women & Information Technology in 2004, when she was given a grant from the National Science Foundation. Along with Telle Whitney and Robert Schnabel, Sanders hoped to use NCWIT to increase the number of women in computing fields. Sanders currently serves as the day-to-day CEO of the National Center for Women & Information Technology. Publications Improving Gender Composition in Computing, Jill Ross, Liz Litzler, Joanne Cohoon and Lucy Sanders, Communications of the ACM, April 2012 Strategy Trumps Money: Recruiting Undergraduate Women into Computing, Lecia J. Barker, J. McGrath Cohoon, and Lucy Sanders, IEEE Computer Magazine, 2010. Committee on Assessing the Impacts of Changes in the Information Technology Research and Development Ecosystem: Retaining Leadership in an Increasingly Global Environment, National Research Council of the National Academies, January 2009. IT Innovation and the Role of Diversity, Lucinda Sanders, Black IT Professional Magazine, Summer 2006. Ahuja, Sid and Sanders, Lucinda M., “Multimedia Collaboration”, AT&T Technical Journal, October 1995. Katz, Bryan and Sanders, Lucinda M., “MMCX Server Delivers Multimedia Here and Now”, AT&T Technology, Winter 1995 – 1996. Glass, Kathleen K. and Sanders, Lucinda M. (1992). “Managing Organizational Handoffs with Empowered Teams”. AT&T Technical Journal (22) Volume 71 Number 3, pp. 22 – 29. Awards and recognition Bob Newman Lifetime Achievement Award, Colorado Technology Association, 2016 US News STEM Leadership Hall of Fame, 2013 A.
Nico Habermann Award, 2012 George Norlin Distinguished Service Award, 2011 Boulder County Business Review Outstanding Women, 2010 Community Partner, Microsoft, 2009 Girl Scouts Woman of Distinction, 2008 WITI Hall of Fame, 2007 Soroptimist International of Los Angeles Women of Vision Award, 2006 Aspen Institute Executive Seminar Academic Scholarship, 2005 CU Boulder Distinguished Engineering Alumni Award for "Industry and Commerce", 2004 Silicon Valley Tribute to Women in Industry Award for business excellence and community outreach, 2000 Bell Labs Fellow Award, 1996 Interviews See also Women in Computing Information Technology Computer Science References American women computer scientists American computer scientists Living people Women nonprofit executives University of Colorado Boulder alumni Louisiana State University alumni 1954 births 21st-century American women
57314
https://en.wikipedia.org/wiki/Adi%20Shamir
Adi Shamir
Adi Shamir (; born July 6, 1952) is an Israeli cryptographer. He is a co-inventor of the Rivest–Shamir–Adleman (RSA) algorithm (along with Ron Rivest and Len Adleman), a co-inventor of the Feige–Fiat–Shamir identification scheme (along with Uriel Feige and Amos Fiat), one of the inventors of differential cryptanalysis and has made numerous contributions to the fields of cryptography and computer science. Education Born in Tel Aviv, Shamir received a Bachelor of Science (BSc) degree in mathematics from Tel Aviv University in 1973 and obtained his Master of Science (MSc) and Doctor of Philosophy (PhD) degrees in Computer Science from the Weizmann Institute in 1975 and 1977 respectively. Career and research After a year as a postdoctoral researcher at the University of Warwick, he did research at Massachusetts Institute of Technology (MIT) from 1977–1980 before returning to be a member of the faculty of Mathematics and Computer Science at the Weizmann Institute. Starting from 2006, he is also an invited professor at École Normale Supérieure in Paris. In addition to RSA, Shamir's other numerous inventions and contributions to cryptography include the Shamir secret sharing scheme, the breaking of the Merkle-Hellman knapsack cryptosystem, visual cryptography, and the TWIRL and TWINKLE factoring devices. Together with Eli Biham, he discovered differential cryptanalysis in the late 1980s, a general method for attacking block ciphers. It later emerged that differential cryptanalysis was already known — and kept a secret — by both IBM and the National Security Agency (NSA). Shamir has also made contributions to computer science outside of cryptography, such as finding the first linear time algorithm for 2-satisfiability and showing the equivalence of the complexity classes PSPACE and IP. Awards and honors Shamir has received a number of awards, including the following: the 2002 ACM Turing Award, together with Rivest and Adleman, in recognition of his contributions to cryptography the Paris Kanellakis Theory and Practice Award; the Erdős Prize of the Israel Mathematical Society, the 1986 IEEE W.R.G. Baker Award the UAP Scientific Prize The Vatican's PIUS XI Gold Medal the 2000 IEEE Koji Kobayashi Computers and Communications Award the Israel Prize, in 2008, for computer sciences. an honorary DMath (Doctor of Mathematics) degree from the University of Waterloo 2017 (33rd) Japan Prize in the field of Electronics, Information and Communication for his contribution to information security through pioneering research on cryptography he was elected a Foreign Member of the Royal Society (ForMemRS) in 2018 for substantial contribution to the improvement of natural knowledge. He was elected a Member of the American Philosophical Society in 2019. References 1952 births Living people People from Tel Aviv Alumni of the University of Warwick Tel Aviv University alumni Modern cryptographers Public-key cryptographers Israeli mathematicians Israeli Jews Israel Prize in computer sciences recipients 20th-century mathematicians 21st-century mathematicians Israeli computer scientists Turing Award laureates Israeli cryptographers Weizmann Institute of Science faculty Members of the Israel Academy of Sciences and Humanities Members of the French Academy of Sciences Foreign associates of the National Academy of Sciences Jewish scientists International Association for Cryptologic Research fellows Foreign Members of the Royal Society Members of the American Philosophical Society
8516674
https://en.wikipedia.org/wiki/Comparison%20of%20spreadsheet%20software
Comparison of spreadsheet software
A spreadsheet is a class of application software designed to analyze tabular data organized in "worksheets". A collection of worksheets is called a "workbook". Online spreadsheets do not depend on a particular operating system but instead require a standards-compliant web browser. One of the incentives for the creation of online spreadsheets was offering worksheet sharing and public sharing of workbooks as part of their features, which enables collaboration between multiple users. Some online spreadsheets provide remote data update, allowing data values to be extracted from other users' spreadsheets even though they may be inactive at the time. General Operating system support The operating systems the software can run on natively (without emulation). iOS and Android apps can be optimized for iPads and Chromebooks, which run different operating systems (iPadOS and Chrome OS respectively); optimizations may include multitasking capabilities, support for large and multiple displays, and better keyboard and mouse support. Supported file formats This table gives a comparison of which file formats each spreadsheet can import and export. "Yes" means the software can both import and export the format. See also List of spreadsheet software List of online spreadsheets Comparison of word processors References Spreadsheets
32336601
https://en.wikipedia.org/wiki/Timeline%20of%20the%20Casey%20Anthony%20case
Timeline of the Casey Anthony case
The timeline of the Casey Anthony case chronicles the events surrounding the death of Caylee Marie Anthony and the trial of her mother, Casey Anthony. 2008 June 9, 2008 – Casey Anthony and her daughter, Caylee, move out of the home of Casey's parents, Cindy and George Anthony, and in with Casey's ex-boyfriend, Ricardo Morales, and friend, Amy Huizenga. June 15, 2008 – That morning, Caylee is videotaped visiting an assisted living facility with her grandmother Cindy Anthony, who is there visiting her father. Cindy swims with Caylee in the Anthonys' pool later that day, afterwards removing the ladder and closing the gate. June 16, 2008 – Caylee is last seen alive at the Anthony family residence. According to the defense, Caylee drowned in the family's above-ground swimming pool sometime during this day and both Casey and George Anthony panicked upon finding the body and covered up her death. The timeline of that day follows: 7:00 a.m. Cindy Anthony testified that she left for work a few minutes before 7:00 a.m. while everyone in the home was still asleep. 7:52 a.m. Activity from Casey Anthony's password-protected account on MySpace and research for "shot girls" costumes for Tony Lazzaro's night club events. 7:56 a.m. AIM account was used to chat on the computer. 12:50 p.m. According to George Anthony, Caylee departed with Casey by car around 12:50 p.m. with backpacks on their shoulders. (Note: Although George testified that Casey and Caylee left the house at 12:50, there is further computer activity on the home computer associated with Casey's account, and her cell phone pings do not leave the area of the Anthony family home until 4:11 pm.) 1:39 p.m. Activity associated with Casey's AIM, MySpace, and Facebook accounts at 1:39 p.m. on the home computer. The last browser activity during that session is at 1:42 p.m. 1:44 p.m. Casey calls friend Amy Huizenga. 2:21 p.m. Call with Amy Huizenga ends. 2:30 p.m. George Anthony testified that he left the home at this time to go to work. 2:49 p.m. Casey Anthony's cellphone connects with a tower nearest to the home, and the Anthony family's desktop computer is activated by someone using a password-protected account Casey Anthony used. 2:51 p.m. A Google search is made for the term "fool-proof suffocation," misspelling the last word as "suffication". The user clicks on an article criticizing pro-suicide websites that promote "foolproof" ways to die. 2:52 p.m. Activity on MySpace. 2:52 p.m. Casey answers phone call from Jesse Grund. He describes this conversation as "abnormal", where Casey stated to him that her parents were divorcing and she had to find a new place to live. 3:04 p.m. Casey disconnected the phone call from Jesse Grund to take an incoming call from George Anthony. According to the defense, the 26-second call from her father took place as soon as he got to work to tell her "I took care of everything," telling her he disposed of the body and warning her not to tell her mother about the child's death. 3:34 p.m. Casey made a phone call to her boyfriend, Tony Lazzaro. Unanswered. Between 4:10 and 4:14 p.m. Casey made six unanswered phone calls to her mother. 4:11 p.m. Casey's cellphone pings indicate it was at or near the house until she headed toward Lazzaro's apartment at 4:11 p.m. 7:54 p.m. She and Lazzaro are seen entering and walking around casually at a Blockbuster video store. Caylee is not with them. June 17, 2008 – George and Cindy Anthony notice that the gate to the swimming pool is open and the ladder is next to the pool.
June 20, 2008 – Casey Anthony is captured in various photos partying at the Fusion nightclub and participating in a "hot body contest". June 23, 2008 – Anthony Lazzaro testified that he helped Casey break into the shed at her parents' home to take gas cans for Casey's car, which had run out of gas. Lazzaro said he watched Casey open the trunk of her car. Although he did not see the inside of the trunk, he said there was no odor that he could detect. June 24, 2008 – George Anthony called police to report the break-in and report the gas cans missing. Later this day, he saw Casey at the Anthony residence and confronted her about taking them. George said that when he went to get them out of his daughter's car, she ran past him, quickly opened the trunk and retrieved the gas cans herself, yelling, "Here's your fucking gas cans!" George testified that he smelled gasoline in the car, but did not detect any other odors. June 30, 2008 – Casey's car is towed from a parking lot after being there for several days; her purse and a child's car seat are found in the car's back seat. July 2, 2008 – Casey gets a tattoo on her back saying "Bella Vita", which means "beautiful life" in Italian. July 15, 2008 – George and Cindy Anthony pick up Casey's car from the impound yard. George Anthony observes a strong odor emanating from the vehicle. An inspection of the car trunk reveals a plastic bag containing trash. Distressed because Casey has not brought Caylee home in a month, Cindy tracks down and meets with Amy Huizenga, who takes Cindy to the apartment where Casey is staying and makes Casey come home. Casey tells her parents that she hasn't seen Caylee in a month and that a babysitter named Zenaida Fernandez Gonzalez ("Zanny") may have kidnapped her. Cindy Anthony immediately calls 911 and reports her granddaughter Caylee missing. July 16, 2008 – Police investigators discover Casey Anthony has been lying about her place of employment and where she says her nanny lives. As a result, Casey is arrested and charged with child neglect, making false official statements, and obstructing an investigation. July 17, 2008 – Casey appears in court, during which time the judge denies bail, saying she showed a "woeful disregard for the welfare of her child." Police from the Sheriff's Office search Casey's car and take several items of evidence. July 18, 2008 – Casey Anthony hires Jose Baez as her legal attorney, who writes a letter to the Orange County Sheriff's Office about Casey's willingness to cooperate with law enforcement. July 22, 2008 – Because of police testimony about allegedly incriminating evidence from the car, Circuit Court Judge Stan Strickland sets Casey Anthony's bail at $500,000. July 29, 2008 – The judge denies a defense motion to ban the release to the media of all jailhouse recordings, 911 tapes and visitor logs. Florida public records law mandates record requests by media be honored promptly. Over the next three years thousands of pages of audio, video, forensic information and legal documents detailing the criminal investigations will be released. August 5, 2008 – The State Attorney's Office files formal charges against Casey Anthony for one felony count of child neglect. August 8, 2008 – WFTV reports that investigators suspect Caylee may have drowned in the family swimming pool on June 16. August 11, 12, 13 – Meter reader Roy Kronk reports a suspicious bag to police. A police officer meets Kronk at the scene and Kronk tells him he had seen a skull and bones in a bag.
However, the officer was rude and conducted only a cursory search. August 21, 2008 – After bail bondsman Leonard Padilla pays Casey Anthony's $500,000 bail she is fitted with an electronic monitoring device and released. August 29, 2008 – Casey Anthony is arrested again on charges of writing four checks worth nearly $650 on Amy Huizenga's checking account without permission. Orange County police said the charges are "unrelated to the investigation." Prosecutors offer Casey a limited immunity deal related to "the false statements given to law enforcement about locating her child." She refuses it soon after. (The offer is renewed on August 25 and again refused.) September 5, 2008 – Casey Anthony's parents post a $500,000 bond and she is released from county jail into their custody after being fitted with an electronic tracking device. September 6, 2008 – Deputies seize a handgun from the trunk of George Anthony's car because having a gun on the property violates Casey's bail. George says he planned to use it to force Casey's friends to tell him what happened to Caylee. September 10, 2008 – The whole family allegedly refuses to take a lie detector test offered by both the FBI and local authorities. September 15–16, 2008 – Casey Anthony turns herself in on new check fraud charges, fraudulent use of identification, and petty theft. She is released the next day on $1,250 bond, and again fitted with an electronic tracking device. September 25, 2008 – Zenaida Fernandez-Gonzalez, the woman Casey Anthony reportedly named as an alleged baby sitter and suspect in the case, files a defamation lawsuit against her. October 14, 2008 – Casey Anthony is indicted by a grand jury on charges of first degree murder, aggravated child abuse, aggravated manslaughter of a child, and four counts of providing false information to police. She is arrested later that day. Judge John Jordan orders that she be held without bond. Because it is a capital crime, Casey Anthony faces possible death penalty. October 21, 2008 – Charges of child neglect are dropped against Casey Anthony on assumption the child is dead. On October 28 Casey Anthony is arraigned and pleads not guilty to all charges. November 8–9, 2008 – Texas EquuSearch leads hundreds of volunteers in a search of a grid for Caylee, but when nothing is found they suspend their search. November 15, 2008 – The Anthony family's private investigator, Dominic Casey, searches the area where Caylee's remains later are found. The search is videotaped. The family's attorney denies asking Dominic Casey to search there. The defense questioned who sent him to the area; he said that a psychic gave him the tip. According to the prosecution, the area was under several inches of water at the time. December 5, 2008 – The state initially says it will not seek the death penalty against Casey Anthony. December 11, 2008 – After yet a fourth tip from Roy Kronk, skeletal remains of what appeared to be a small child are found one-quarter mile from the Anthony home. Orange County Sheriff's Office obtains warrant and searches Anthony residence. December 19, 2008 – Police announce DNA testing confirms that the remains belong to Caylee Anthony. 2009 January 23, 2009 – Police discover George Anthony, who had been text messaging family members, despondent and possibly under the influence of alcohol and medication in a hotel room. Police also discovered a lengthy suicide note. January 29, 2009 – Judge Stan Strickland orders Casey Anthony to appear at all court hearings in her case. 
April 14, 2009 – The State of Florida reverses itself and will seek the imposition of the death penalty. September 17, 2009 – Casey Anthony's defense team files a motion to dismiss the murder charges against her because the state allegedly failed to preserve evidence in the case. The motion is denied. November 24, 2009 – Defense attorneys accuse Texas EquuSearch's Tim Miller of lying to the court in his claim that only 32 people searched the area where Caylee's remains were eventually found, asserting that the number was much higher. December 18, 2009 – Judge Stan Strickland denies a request to take the death penalty off the table in the prosecution of Casey Anthony. 2010 January 26, 2010 – Casey pleads guilty to 13 fraudulent check charges, takes responsibility for her actions, and makes full restitution. The judge sentences her to time served. April 19, 2010 – Judge Stan Strickland steps down after Casey Anthony's defense team files a motion accusing him of having inappropriate conversations with a writer, Dave Knechel, who blogged about the case. Strickland grants the motion because the accusation would "generate renewed allegations of bias". Judge Belvin Perry, Jr. is appointed to take over the case. May 11, 2010 – Judge Perry rules that he will allow the state to seek the death penalty. August 14, 2010 – Cindy Anthony appears as a guest on the "Today" Show, where she calls Casey a victim and also claims she's not involved with what happened to Caylee's remains. August 16, 2010 – George and Cindy Anthony's attorney, Brad Conway, steps down after disputing a Jose Baez motion claiming that Conway was given unrestricted access to documents belonging to Texas EquuSearch while the defense was not given the same access.
May 25, 2011 – The prosecution calls various friends of Casey Anthony who testify about her fabricated stories during June and July 2008 of having a job and employing a nanny for Caylee. A neighbor testifies that in mid-June 2008 Casey and a boyfriend borrowed a shovel from him to dig up a bamboo root. May 26, 2011 – Former boyfriend testifies Casey told him her brother, Lee Anthony, sexually groped her. George Anthony is called back to the witness stand where he says that he did not smell decomposition in Casey's car on June 24, 2008 and states he put duct tape over a hole in one of the plastic gas cans she had returned to him. May 27, 2011 – A tow truck company manager and George Anthony testify that from their experience the smell from Casey's car resembled human decomposition. During cross-examination, George Anthony tells Jose Baez that he did not sexually abuse Casey. May 28, 2011 – Former boyfriend testifies about Casey's normal behavior on June 16, 2008. Cindy Anthony testifies that they swam that day and that Caylee could get up the ladder into the pool. She also believed Casey worked at Universal Studios Orlando Resort and had a babysitter named Zanny. May 31, 2011 – Cindy Anthony says her description of Casey's car smelling "like someone died" was just a "figure of speech." She tried to get rid of the smell by spraying Febreze household odor eliminator. She says she found the pool ladder in the pool the evening of June 16. Casey's friend Amy Huizenga talks of Casey's frustration about getting help with Caylee and reveals that on June 27, Casey texted her about a dead animal on the frame of the car. June 1, 2011 – The first officers to arrive at the Anthony home on July 16, 2008 testify that they did not smell human decomposition in Casey's car and admit they did not search the other two family cars. They also testify about going to Universal Orlando Resort with Casey that day, where she confessed she no longer worked there and did not have a nanny named Zenaida Fernandez-Gonzalez. June 2, 2011 – Videotapes are shown of Casey lying to her parents in jail and denying to an officer on July 16, 2008 that Caylee had drowned in the pool, as he suggested. June 3, 2011 – Investigators describe evidence collection from Casey's car and obtaining from the towing yard the plastic garbage bag that had been in it. One investigator states he smelled human decomposition. June 4, 2011 – An FBI forensic scientist testifies the single hair removed from the car trunk was similar to a hair from Caylee's hair brush and had "root-banding" consistent with that from a decomposing body. June 6, 2011 – Dr. Arpad Vass of the Oak Ridge National Laboratory describes using a gas chromatograph mass spectrometer to find signs of human decomposition and a high level of chloroform in the trunk of Casey's car. The defense challenges Vass' financial motivation and the chain of evidence. June 7, 2011 – An FBI forensic chemist confirms chloroform residue in the trunk of Casey's car, but also states that household cleaning products leave traces of chloroform. A dog handler describes his dog alerting to human decomposition in the trunk, as well as at Caylee's playhouse. June 8, 2011 – A second dog handler says his dog smelled decomposition in the back yard. Computer analysts confirm a search for "chloroform" on Casey's computer on March 17, 2008 and "how to make chloroform" on March 21, 2008.
June 9, 2011 – Software analyst John Bradley states someone used the Anthony computer to search the website Sci-spot.com for "chloroform" 84 times on March 21, 2008. During cross-examination, he admits that automatic page reloading could account for that number and there was no way of knowing who performed the searches. Investigators show photographs of the remains, including of duct tape that appears to be over the mouth area. One admits that duct tape might not originally have been on the mouth and could have shifted position as he collected remains. Casey Anthony becomes ill looking at the photographs and the jury is dismissed for the day. June 10, 2011 – Medical examiner states that the death is ruled a homicide because of the delay in reporting the disappearance, the fact the body was hidden, and the existence of duct tape, but states under cross-examination she did not know how the child died. Crime scene investigators describe similar maggots found in the car trunk and at the crime scene. June 11, 2011 – An expert in forensic entomology states he found flies related to decomposition in the trunk of Casey's car. Orange County, Florida crime scene investigators identify a piece of Henkel brand duct tape found at the scene and testify it is the same brand as George Anthony put on the red gas can. One states that no Henkel brand tape was found elsewhere in the Anthonys' home. June 13, 2011 – FBI examiner states a hair from the child's skull is consistent with but not identical to the single hair found in the trunk. FBI agent could not find any fingerprints on duct tape found near the remains but initially did find adhesive in the shape of a heart on a corner of a piece of duct tape; later she could not find it again. June 14, 2011 – FBI quality assurance specialist says the hair found in trunk could have come from any member of the Anthony family. A crime scene investigator says heart shaped stickers were found in Casey's room but did not link them to the one alleged to be on the duct tape. Testimony about and photo of Casey's "Bella Vita" tattoo made on July 2, 2008. June 15, 2011 – Prosecution rests its case. Defense makes a motion to acquit based on insufficient evidence a murder was committed, which the judge denies based on previous case rulings. June 16, 2011 – Defense begins its case, often recalling state witnesses for further testimony. Crime scene investigator says no blood was found in Casey's car or incriminating stains on her clothes. FBI analyst states no DNA evidence was found in the car or at the crime scene. She states FBI did a paternity test that showed Lee Anthony was not Caylee's father. Crime scene investigator and forensics supervisor state a heart-shaped sticker was found far from the body. An FBI forensic document examiner found no evidence of a heart shaped sticker on the duct tape found near the remains. June 17, 2011 – Forensic entomologist called by the defense states if there was a body in the trunk, there should have been hundreds or even thousands of blow flies trapped in the trunk as well. June 18, 2011 – Defense calls a new witness, Dr. Werner Spitz, who questions the medical examiner's autopsy, including the failure to open the skull, and says there was no indication the death was a homicide. He believes the duct tape was placed on the skull after decomposition and that the crime scene photos of the position of the hair on the skull were staged, possibly by the medical examiner. 
June 21, 2011 – A defense-called forensic botanist challenges the prosecution's theory of when the body was placed at the crime scene. An expert in analytical chemistry who works with Dr. Vass challenges the process of testing for the presence of chloroform. June 22, 2011 – An FBI forensic examiner says no dirt from the crime scene was found on shoes at the Anthony home or a neighbor's borrowed shovel. FBI forensic toxicologist found no toxins in the hair from Caylee Anthony's skull. A scientist who worked with Dr. Vass who testifies tests did not conclusively prove there was a body in the trunk. The FBI's forensic chemist examiner could not find traces of chloroform in the car. The FBI forensics expert found no hair in the trunk liner showing signs of decomposition. She also testified the duct tape at the crime scene was dissimilar to that in the Anthony home. June 23, 2011 – A FBI hair and fiber expert says only one hair from the car trunk had a sign of decomposition. There is a long debate among prosecutors and defense over the reliability of "root banding." An expert in forensic toxicology testifies Dr. Vass's test "lacked organization and planning" and had "minimal standards of quality control." He also mentions that chloroform is a byproduct of chlorinated swimming pool water. June 24, 2011 – Cindy Anthony is recalled to the stand where the defense shows her a photograph of Caylee on the pool ladder and she again mentions the ladder was in the pool on June 16 when she returned home from work that evening, adding that she called George to ask about it since she took out the ladder from the pool on the previous day after swimming there with Caylee. The defense also showed the jury a picture of Caylee appearing to open a sliding-glass door at her home. Cindy says Caylee was capable of opening the sliding door to the back yard and the pool. Lee Anthony states he was not told Casey was pregnant until days before Caylee's birth. Search volunteer testifies about duct tape being used at search headquarters. June 25, 2011 – Judge Belvin Perry, Jr. temporarily halts proceedings after defense motion to determine if Anthony was competent to proceed with trial, based on a privileged communication from Casey Anthony. June 27, 2011 – Casey Anthony is found competent to continue after psychological evaluation. June 27 also is the date the prosecution states it discussed with defense attorney Jose Baez software analyst John Bradley's post-testimony admission to prosecutors that there was only one search for chloroform, not 84. In testimony, the lead detective admits cadaver dogs had not searched inside the Anthonys' home, or in two other Anthony cars. A professor of chemistry called by the defense says there is no scientifically valid instrument that can identify decomposition, that there is no consensus on what chemicals are typical of human decomposition and that chemical compounds identified by Dr. Vass in air samples can be found in household products and garbage. Three witnesses discuss the November 2008 videotaped search by Anthony family private investigators in the woods where Caylee's body later was found. June 28, 2011 – A Texas EquuSearch team letter discusses their November search for Caylee of the site where the body later was found. George Anthony denies he had an affair with Krystal Holloway, borrowed money from her, or told her Caylee's death was "an accident that snowballed out of control." He admits going to her home and sending her a text message. 
He testifies he bought a gun to threaten Casey's friends into telling him where Caylee was, even though he knew having one violated Casey's bail. Cindy Anthony denies she sent private investigators to search the site where Caylee's body later was found; her son Lee Anthony and the case's lead detective then testify she did so, after talking to a psychic. Roy Kronk testifies about his calls to police and finding the body. He denies he told his son finding the body would make him rich and famous, but admits he did receive $5,000 after Caylee's remains were identified. Judge Perry does not allow the jury to hear Casey's ex-fiancé say that Casey told him Lee had once tried to grope her while she was sleeping. June 29, 2011 – Cindy Anthony says Casey's response to the media theory that Caylee drowned was "Surprise. Surprise." Baez asks George Anthony about his suicide attempt in January 2009 and the next day the judge allows the jury to see the suicide note. Roy Kronk's son states that Kronk did say that finding Caylee Anthony would make him rich and famous. Kronk testifies about why he changed his story about lifting the skull. An expert on grief and trauma testifies that pretending nothing had happened and partying was one of many different ways people, especially young people, express grief. June 30, 2011 – Casey Anthony tells Judge Perry she does not want to testify. Perry will not allow the jury to sniff air samples from the car trunk. The defense calls search volunteer Krystal Holloway, who states that she had an affair with George Anthony. She states that George Anthony told her that Caylee's death was "an accident that snowballed out of control." Under cross-examination, she also agrees with her earlier statement to police in which she said George Anthony did not say he knew it was an accident. After Holloway steps down, Judge Perry tells jurors that her testimony could be used to impeach George Anthony's credibility, but that it is not proof of how Caylee died. George, Cindy and Lee Anthony all testify that their pets had been buried in the back yard. Cindy calls it a "tradition" to wrap them in blankets and a plastic bag; duct tape was used to keep the plastic bags from opening. After this final witness, the defense rests. The prosecution's rebuttal begins with showing the jury photographs of Caylee's clothes and George's suicide note. July 1, 2011 – The prosecution continues rebuttal with two representatives of Cindy Anthony's former employer explaining why their computer login system shows Cindy was at work the afternoon she said she went home early and searched her computer for information about chloroform. A police computer analyst says someone had purposely searched online for "neck + breaking." Another analyst testifies she did not find evidence that Cindy Anthony had searched certain terms she claimed to have searched. An anthropology professor is recalled to rebut a defense witness on the need to open a skull during an autopsy. The lead detective states that there were no phone calls between Cindy and George Anthony during the week of June 16, 2008, but admits he did not know that George had a second cell phone. July 3, 2011 – Judge Perry rules that during closing arguments the defense may argue there was a drowning involved in the death of Caylee, because there was sufficient evidence of that, but may not argue George had sexually abused Casey.
The prosecution delivers an hour and a half of closing arguments, offering a timeline of events and asserting that Casey intentionally suffocated Caylee to death by placing three pieces of duct tape over her face. The alleged motive was that the child interfered with her partying lifestyle and spending time with her boyfriend. The prosecution states the defense's story that Caylee drowned and George encouraged Casey to cover up the accident made no sense. The defense counters with four hours of arguments insisting there was no proof of how Caylee died, challenging the prosecutors' most important evidence as "fantasy," and emphasizing the reasonable doubt that Casey killed Caylee. It again insists that after the child drowned, Casey panicked and George Anthony made the death look like a murder and that he was the one who put the body in the nearby woods. July 4, 2011 – Prosecutors Jeff Ashton and Linda Drane Burdick present a rebuttal to the defense closing, telling jurors their forensic evidence had proved their case, while the defense made claims they did not prove. The case then goes to the jury. Judge Belvin Perry, Jr. issues final instructions to the jury. July 5, 2011 – After about ten hours of deliberation, the jury acquits Casey Anthony of all felony charges (i.e., of first-degree murder, aggravated manslaughter, and aggravated child abuse), but convicts her of all four misdemeanor charges of giving false information to a law enforcement officer. July 7, 2011 – Judge Perry sentences Casey Anthony to one year in county jail and $1,000 in fines for each of the four misdemeanor counts of providing false information to a law enforcement officer. The judge orders all sentences to run consecutive to each other, with credit for time served. Based on three years' credit for time served plus additional credit for good behavior, her release date is set for July 17, 2011. Judge Perry announces he will not release the jurors' names for seven days, saying some people "disagree with their verdict" and "would like to take something out on them." July 13, 2011 – Texas EquuSearch, which assisted in the search for Caylee, sues Casey Anthony for the costs of the search. July 15, 2011 – Casey Anthony appeals convictions of providing false information to a law enforcement officer. July 17, 2011 – Casey Anthony is released from jail at 12:10 AM, with $537.68 in cash. July 19, 2011 – Prosecutors write a letter responding to a New York Times article about alleged withholding of exculpatory evidence about the chloroform searches and say they were about to give the jury a Notice of Supplemental Discovery but did not do so because jurors had reached a verdict. July 26, 2011 – Judge Belvin Perry, Jr. rules juror names will remain secret until October 2011, citing public "outrage and distress" over the not guilty verdict. He also appeals to Florida legislators to bar the release of jurors' names in some cases "in order to protect the safety and well-being of those citizens willing to serve." August 1, 2011 – Orange County, Florida Circuit Judge Stan Strickland signs amended court documents that order Casey Anthony to return to Orlando within 72 hours to serve one year of supervised probation for the check fraud charge that Anthony pleaded guilty to in January 2010. Jose Baez accuses Strickland of bias in the ruling. Strickland recuses himself from the case.
August 5, 2011 – Baez obtains an emergency hearing with Judge Perry, arguing that Anthony had already served her probation and that Strickland no longer had jurisdiction over her. Perry postpones a decision, calling the situation "a maze." August 10, 2011 – The Florida Department of Children and Families releases a report concluding that Casey Anthony failed to protect Caylee, and that Casey's actions or lack of actions resulted in the death of the child. The finding has little legal relevance. August 12, 2011 – Judge Belvin Perry upholds Judge Strickland's order, ruling that Casey Anthony must return to Orlando to serve one year's probation for check fraud, reporting no later than noon on August 26. The judge declares that her residential information during the probation period may be kept confidential because of threats made against her life. August 23, 2011 – After defense attorneys file a motion to appeal Judge Perry's probation ruling, the Florida Fifth District Court of Appeals upholds it. Casey Anthony reports for probation at a secret location on August 24. September 15, 2011 – Judge Belvin Perry rules Casey Anthony must pay $97,000 of the $517,000 the state of Florida wanted her to pay for investigative and prosecution costs to the state under a provision of Florida sentencing law. He rules she only has to pay those costs directly related to lying to law enforcement about the death of Caylee, i.e., search costs up to September 30, 2008, when the Sheriff's Office stopped investigating a missing-child case. In earlier arguments, attorney Cheney Mason had called the prosecutors' attempts to exact the larger sum "sour grapes" because the prosecution lost its case. He told reporters that Anthony is indigent. September 23, 2011 – Judge Belvin Perry rules Casey Anthony must pay an additional $119,000 for the recalculated costs of the sheriff's search for Caylee Anthony, for a total of $217,000. October 8, 2011 – Casey Anthony answers a few questions and takes the Fifth Amendment repeatedly in a video deposition regarding the Zenaida Fernandez-Gonzales lawsuit. October 25, 2011 – Judge Perry releases the names of the jurors in the Casey Anthony trial. 2012 February 15, 2012 – Casey's first monthly court payment of $20 is due. June 11, 2012 – Casey moves for a new trial to have her convictions on counts of lying to police overturned. November 20, 2012 – WKMG-TV television in Orlando reports that police never investigated Firefox browser information on Casey Anthony's computer the day of Caylee's death; they only looked at Internet Explorer evidence. The station learns of this from Casey Anthony's attorney Jose Baez, who mentioned it in his book on the case. References External links Casey Anthony's Attorney is running for Broward County Judge in Florida Attorney Johnathan Kasen defended Casey Anthony 2000s in Florida 2010s in Florida Contemporary history timelines Personal timelines Women in Florida
27144703
https://en.wikipedia.org/wiki/Aestiva%20Software
Aestiva Software
Aestiva Software is a computer technology corporation that develops, manufactures, and licenses business process automation software products. Headquartered in Torrance, California, USA, the company is the originator of HTML/OS. History The company was founded on January 11, 1996, to develop and sell an HTML-centric operating platform for the emerging World Wide Web. The platform was originally called Aestiva Overlays and was later renamed HTML/OS. HTML/OS includes a web programming language and 4GL database engine in a single integrated, multi-platform environment. The price for the engine has fluctuated among $799, $1,499, and $4,995. In 2010 a free version of the engine was released to the web development community. The engine is supported by Aestiva and the independent development community. As of the first quarter of 2010, the company marketed Aestiva Array, the development environment for the HTML/OS engine, and office automation products. The company fields business products in six distinct areas: Purchasing and Procurement, Expense Management, HR and Operations, Inventory Management, RFQ and Quotation, and Time Tracking. Terminology HTML/OS - Acronym for HyperText Markup Language (HTML) Operating System (OS). The programming language and kernel used to perform database and programming operations. Not to be confused with hardware-centric operating systems such as Linux, Microsoft Windows, and Mac OS X. Aestiva Array - The HTML/OS virtual machine supported by the HTML/OS kernel, which includes a web-based desktop, web-based file manager, and HTML/OS programming tools. Products The company's business process automation products are delivered as software and as hosted software as a service (SaaS). Products include: Aestiva Array (aka HTML/OS) Aestiva Expense Report Aestiva Help Desk Aestiva Inventory Aestiva Leave Request Aestiva Purchase Order Aestiva Quotation Aestiva Sourcing RFQ Aestiva Timesheet See also Database management system 4GL Business process automation Purchase order Timesheet Inventory Expense management Cloud computing Web application Request for quotation Help desk References Software companies based in California Software companies of the United States
2322898
https://en.wikipedia.org/wiki/Getaddrinfo
Getaddrinfo
The functions getaddrinfo() and getnameinfo() convert domain names, hostnames, and IP addresses between human-readable text representations and structured binary formats for the operating system's networking API. Both functions are contained in the POSIX standard application programming interface (API). getaddrinfo and getnameinfo are inverse functions of each other. They are network protocol agnostic, and support both IPv4 and IPv6. They are the recommended interface for name resolution in building protocol-independent applications and for transitioning legacy IPv4 code to the IPv6 Internet. Internally, the functions perform resolutions using the Domain Name System (DNS) by calling other, lower-level functions, such as gethostbyname(). On February 16, 2016, a security bug was announced in the glibc implementation of getaddrinfo(): a buffer overflow that may allow execution of arbitrary code by an attacker. struct addrinfo The C data structure used to represent addresses and hostnames within the networking API is the following:

struct addrinfo {
    int ai_flags;
    int ai_family;
    int ai_socktype;
    int ai_protocol;
    socklen_t ai_addrlen;
    struct sockaddr* ai_addr;
    char* ai_canonname;       /* canonical name */
    struct addrinfo* ai_next; /* this struct can form a linked list */
};

In some older systems the type of ai_addrlen is size_t instead of socklen_t. Most socket functions, such as accept() and getpeername(), require the parameter to have type socklen_t *, and programmers often pass the address of the ai_addrlen element of the addrinfo structure. If the types are incompatible, e.g., on a 64-bit Solaris 9 system where size_t is 8 bytes and socklen_t is 4 bytes, then run-time errors may result. The structure contains the ai_family field, and the sockaddr structure pointed to by ai_addr has its own sa_family field. In some implementations these are set to the same value when the structure is created with the function getaddrinfo. getaddrinfo() getaddrinfo() converts human-readable text strings representing hostnames or IP addresses into a dynamically allocated linked list of struct addrinfo structures. The function prototype for this function is specified as follows:

int getaddrinfo(const char* hostname,
                const char* service,
                const struct addrinfo* hints,
                struct addrinfo** res);

hostname can be either a domain name, such as "example.com", an address string, such as "127.0.0.1", or NULL, in which case the address 0.0.0.0 or 127.0.0.1 is assigned depending on the hints flags. service can be a port number passed as a string, such as "80", or a service name, e.g. "echo". In the latter case a typical implementation uses getservbyname() to query the file /etc/services to resolve the service to a port number. hints can be either NULL or an addrinfo structure with the type of service requested. res is a pointer that points to a new addrinfo structure with the information requested after successful completion of the function. The function returns 0 upon success and a non-zero error value if it fails. Although implementations vary among platforms, the function first attempts to obtain a port number, usually by branching on service. If the string value is a number, it converts it to an integer and calls htons(). If it is a service name, such as www, the service is looked up with getservbyname(), using the protocol derived from hints->ai_socktype as the second parameter to that function.
Then, if hostname is given (not NULL), a call to gethostbyname() resolves it; otherwise the address 0.0.0.0 is used if hints->ai_flags is set to AI_PASSIVE, and 127.0.0.1 otherwise. In either case it allocates a new addrinfo structure filled with the appropriate sockaddr_in and also adds to it the port retrieved at the beginning. Finally, the **res parameter is dereferenced to make it point to a newly allocated addrinfo structure. In some implementations, such as the Unix version for Mac OS, the hints->ai_protocol value overrides the hints->ai_socktype value, while in others it is the opposite, so both need to be defined with equivalent values for the code to work across multiple platforms. freeaddrinfo() This function frees the memory allocated by the function getaddrinfo(). As the result of the latter is a linked list of addrinfo structures starting at the address ai, freeaddrinfo() loops through the list and frees each one in turn.

void freeaddrinfo(struct addrinfo *ai);

getnameinfo() The function getnameinfo() converts the internal binary representation of an IP address in the form of a pointer to a struct sockaddr into text strings consisting of the hostname or, if the address cannot be resolved into a name, a textual IP address representation, as well as the service port name or number. The function prototype is specified as follows:

int getnameinfo(const struct sockaddr* sa, socklen_t salen,
                char* host, size_t hostlen,
                char* serv, size_t servlen,
                int flags);

Example The following example uses getaddrinfo() to resolve the domain name www.example.com into its list of addresses and then calls getnameinfo() on each result to return the canonical name for the address. In general, this produces the original hostname, unless the particular address has multiple names, in which case the canonical name is returned. In this example, the domain name is printed three times, once for each of the three results obtained.

#include <stdio.h>
#include <stdlib.h>
#include <netdb.h>
#include <netinet/in.h>
#include <sys/socket.h>

#ifndef NI_MAXHOST
#define NI_MAXHOST 1025
#endif

int main(void)
{
    struct addrinfo* result;
    struct addrinfo* res;
    int error;

    /* resolve the domain name into a list of addresses */
    error = getaddrinfo("www.example.com", NULL, NULL, &result);
    if (error != 0) {
        if (error == EAI_SYSTEM) {
            perror("getaddrinfo");
        } else {
            fprintf(stderr, "error in getaddrinfo: %s\n", gai_strerror(error));
        }
        exit(EXIT_FAILURE);
    }

    /* loop over all returned results and do inverse lookup */
    for (res = result; res != NULL; res = res->ai_next) {
        char hostname[NI_MAXHOST];
        error = getnameinfo(res->ai_addr, res->ai_addrlen, hostname, NI_MAXHOST, NULL, 0, 0);
        if (error != 0) {
            fprintf(stderr, "error in getnameinfo: %s\n", gai_strerror(error));
            continue;
        }
        if (*hostname != '\0')
            printf("hostname: %s\n", hostname);
    }

    freeaddrinfo(result);
    return 0;
}

See also Network address References External links RFC 3493, Basic Socket Interface Extensions for IPv6 Internet Protocol Network addressing Domain Name System C POSIX library Articles with example C code
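As a complement to the reverse-lookup example above, the following is a minimal sketch (not part of the original article) of the common client-side pattern implied by the description of getaddrinfo(): the returned linked list is walked, each address is tried with socket() and connect() until one succeeds, and the list is then released with freeaddrinfo(). The host "www.example.com" and service "80" are placeholder values chosen only for illustration.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <netdb.h>
#include <sys/types.h>
#include <sys/socket.h>

int main(void)
{
    struct addrinfo hints;
    struct addrinfo* result;
    struct addrinfo* res;
    int sockfd = -1;
    int error;

    /* request a TCP socket and allow either IPv4 or IPv6 results */
    memset(&hints, 0, sizeof(hints));
    hints.ai_family = AF_UNSPEC;
    hints.ai_socktype = SOCK_STREAM;

    /* placeholder host and service, for illustration only */
    error = getaddrinfo("www.example.com", "80", &hints, &result);
    if (error != 0) {
        fprintf(stderr, "error in getaddrinfo: %s\n", gai_strerror(error));
        exit(EXIT_FAILURE);
    }

    /* walk the linked list and stop at the first address that connects */
    for (res = result; res != NULL; res = res->ai_next) {
        sockfd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
        if (sockfd == -1)
            continue;
        if (connect(sockfd, res->ai_addr, res->ai_addrlen) == 0)
            break; /* connected */
        close(sockfd);
        sockfd = -1;
    }

    freeaddrinfo(result); /* the list is no longer needed */

    if (sockfd == -1) {
        fprintf(stderr, "could not connect\n");
        exit(EXIT_FAILURE);
    }

    printf("connected\n");
    close(sockfd);
    return 0;
}

For a listening socket, the same loop is typically used with hints.ai_flags set to AI_PASSIVE, a NULL hostname, and bind() in place of connect().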