IP address
https://en.wikipedia.org/wiki/IP%20address

An Internet Protocol address (IP address) is a numerical label such as 192.0.2.1 that is connected to a computer network that uses the Internet Protocol for communication. An IP address serves two main functions: network interface identification and location addressing.
Internet Protocol version 4 (IPv4) defines an IP address as a 32-bit number. However, because of the growth of the Internet and the depletion of available IPv4 addresses, a new version of IP (IPv6), using 128 bits for the IP address, was standardized in 1998. IPv6 deployment has been ongoing since the mid-2000s.
IP addresses are written and displayed in human-readable notations, such as 192.0.2.1 in IPv4, and 2001:db8:0:1234:0:567:8:1 in IPv6. The size of the routing prefix of the address is designated in CIDR notation by suffixing the address with the number of significant bits, e.g., 192.0.2.1/24, which is equivalent to the historically used subnet mask 255.255.255.0.
The IP address space is managed globally by the Internet Assigned Numbers Authority (IANA), and by five regional Internet registries (RIRs) responsible in their designated territories for assignment to local Internet registries, such as Internet service providers (ISPs), and other end users. IPv4 addresses were distributed by IANA to the RIRs in blocks of approximately 16.8 million addresses each, but have been exhausted at the IANA level since 2011. Only the RIR serving Africa still has a supply available for local assignments. Some IPv4 addresses are reserved for private networks and are not globally unique.
Network administrators assign an IP address to each device connected to a network. Such assignments may be on a static (fixed or permanent) or dynamic basis, depending on network practices and software features.
Function
An IP address serves two principal functions: it identifies the host, or more specifically its network interface, and it provides the location of the host in the network, and thus the capability of establishing a path to that host. Its role has been characterized as follows: "A name indicates what we seek. An address indicates where it is. A route indicates how to get there."
The header of each IP packet contains the IP address of the sending host and that of the destination host.
IP versions
Two versions of the Internet Protocol are in common use on the Internet today. The original version of the Internet Protocol that was first deployed in 1983 in the ARPANET, the predecessor of the Internet, is Internet Protocol version 4 (IPv4).
The rapid exhaustion of IPv4 address space available for assignment to Internet service providers and end-user organizations by the early 1990s prompted the Internet Engineering Task Force (IETF) to explore new technologies to expand the addressing capability on the Internet. The result was a redesign of the Internet Protocol which eventually became known as Internet Protocol Version 6 (IPv6) in 1995.
IPv6 technology was in various testing stages until the mid-2000s when commercial production deployment commenced.
Today, these two versions of the Internet Protocol are in simultaneous use. Among other technical changes, each version defines the format of addresses differently. Because of the historical prevalence of IPv4, the generic term IP address typically still refers to the addresses defined by IPv4. The gap in version sequence between IPv4 and IPv6 resulted from the assignment of version 5 to the experimental Internet Stream Protocol in 1979, which however was never referred to as IPv5.
Other versions v1 to v9 were defined, but only v4 and v6 ever gained widespread use. v1 and v2 were names for TCP protocols in 1974 and 1977, as there was no separate IP specification at the time. v3 was defined in 1978, and v3.1 is the first version in which TCP was separated from IP. v6 is a synthesis of several suggested versions: v6 Simple Internet Protocol, v7 TP/IX: The Next Internet, v8 PIP — The P Internet Protocol, and v9 TUBA — TCP and UDP with Bigger Addresses.
Subnetworks
IP networks may be divided into subnetworks in both IPv4 and IPv6. For this purpose, an IP address is recognized as consisting of two parts: the network prefix in the high-order bits and the remaining bits called the rest field, host identifier, or interface identifier (IPv6), used for host numbering within a network. The subnet mask or CIDR notation determines how the IP address is divided into network and host parts.
The term subnet mask is only used within IPv4. Both IP versions however use the CIDR concept and notation. In this, the IP address is followed by a slash and the number (in decimal) of bits used for the network part, also called the routing prefix. For example, an IPv4 address and its subnet mask may be 192.0.2.1 and 255.255.255.0, respectively. The CIDR notation for the same IP address and subnet is 192.0.2.1/24, because the first 24 bits of the IP address indicate the network and subnet.
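To make the relationship between CIDR notation and subnet masks concrete, here is a minimal Python sketch using the standard-library ipaddress module. The address 192.0.2.1/24 is a documentation example chosen for illustration, not a value prescribed by this article.

```python
# Minimal sketch: how CIDR notation splits an address into network and host parts.
import ipaddress

iface = ipaddress.ip_interface("192.0.2.1/24")

print(iface.network)            # 192.0.2.0/24  -> the routing prefix
print(iface.network.netmask)    # 255.255.255.0 -> equivalent subnet mask
print(iface.network.prefixlen)  # 24            -> number of network bits
print(iface.ip)                 # 192.0.2.1     -> the host's full address
# Hosts usable inside this /24 (network and broadcast addresses excluded):
print(iface.network.num_addresses - 2)  # 254
```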
IPv4 addresses
An IPv4 address has a size of 32 bits, which limits the address space to 4,294,967,296 (2^32) addresses. Of this number, some addresses are reserved for special purposes such as private networks (~18 million addresses) and multicast addressing (~270 million addresses).
IPv4 addresses are usually represented in dot-decimal notation, consisting of four decimal numbers, each ranging from 0 to 255, separated by dots, e.g., 192.0.2.1. Each part represents a group of 8 bits (an octet) of the address. In some cases of technical writing, IPv4 addresses may be presented in various hexadecimal, octal, or binary representations.
Subnetting history
In the early stages of development of the Internet Protocol, the network number was always the highest order octet (most significant eight bits). Because this method allowed for only 256 networks, it soon proved inadequate as additional networks developed that were independent of the existing networks already designated by a network number. In 1981, the addressing specification was revised with the introduction of classful network architecture.
Classful network design allowed for a larger number of individual network assignments and fine-grained subnetwork design. The first three bits of the most significant octet of an IP address were defined as the class of the address. Three classes (A, B, and C) were defined for universal unicast addressing. Depending on the class derived, the network identification was based on octet boundary segments of the entire address. Each class used successively additional octets in the network identifier, thus reducing the possible number of hosts in the higher order classes (B and C). The following table gives an overview of this now-obsolete system.
Classful network design served its purpose in the startup stage of the Internet, but it lacked scalability in the face of the rapid expansion of networking in the 1990s. The class system of the address space was replaced with Classless Inter-Domain Routing (CIDR) in 1993. CIDR is based on variable-length subnet masking (VLSM) to allow allocation and routing based on arbitrary-length prefixes. Today, remnants of classful network concepts function only in a limited scope as the default configuration parameters of some network software and hardware components (e.g. netmask), and in the technical jargon used in network administrators' discussions.
Private addresses
Early network design, when global end-to-end connectivity was envisioned for communications with all Internet hosts, intended that IP addresses be globally unique. However, it was found that this was not always necessary as private networks developed and public address space needed to be conserved.
Computers not connected to the Internet, such as factory machines that communicate only with each other via TCP/IP, need not have globally unique IP addresses. Today, such private networks are widely used and typically connect to the Internet with network address translation (NAT), when needed.
Three non-overlapping ranges of IPv4 addresses for private networks are reserved. These addresses are not routed on the Internet and thus their use need not be coordinated with an IP address registry. Any user may use any of the reserved blocks. Typically, a network administrator will divide a block into subnets; for example, many home routers automatically use a default address range of 192.168.0.0 through 192.168.0.255 (192.168.0.0/24).
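As an illustration of the reserved ranges just described, the following hedged Python sketch checks whether an address falls inside one of the three private blocks. The blocks listed in the code are the well-known RFC 1918 ranges, stated here from general knowledge rather than taken from the text, and the tested addresses are arbitrary examples.

```python
import ipaddress

# The three reserved private IPv4 blocks (RFC 1918).
private_blocks = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def is_rfc1918(addr: str) -> bool:
    """Return True if addr lies in one of the reserved private IPv4 blocks."""
    ip = ipaddress.ip_address(addr)
    return any(ip in block for block in private_blocks)

for addr in ["10.1.2.3", "172.20.0.5", "192.168.0.10", "8.8.8.8"]:
    print(addr, "private" if is_rfc1918(addr) else "globally routable")
```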
IPv6 addresses
In IPv6, the address size was increased from 32 bits in IPv4 to 128 bits, thus providing up to 2^128 (approximately 3.4×10^38) addresses. This is deemed sufficient for the foreseeable future.
The intent of the new design was not to provide just a sufficient quantity of addresses, but also to redesign routing in the Internet by allowing more efficient aggregation of subnetwork routing prefixes. This resulted in slower growth of routing tables in routers. The smallest possible individual allocation is a subnet for 2^64 hosts, which is the square of the size of the entire IPv4 Internet. At these levels, actual address utilization ratios will be small on any IPv6 network segment. The new design also provides the opportunity to separate the addressing infrastructure of a network segment, i.e. the local administration of the segment's available space, from the addressing prefix used to route traffic to and from external networks. IPv6 has facilities that automatically change the routing prefix of entire networks, should the global connectivity or the routing policy change, without requiring internal redesign or manual renumbering.
The large number of IPv6 addresses allows large blocks to be assigned for specific purposes and, where appropriate, to be aggregated for efficient routing. With a large address space, there is no need to have complex address conservation methods as used in CIDR.
All modern desktop and enterprise server operating systems include native support for IPv6, but it is not yet widely deployed in other devices, such as residential networking routers, voice over IP (VoIP) and multimedia equipment, and some networking hardware.
Private addresses
Just as IPv4 reserves addresses for private networks, blocks of addresses are set aside in IPv6. In IPv6, these are referred to as unique local addresses (ULAs). The routing prefix fc00::/7 is reserved for this block, which is divided into two blocks with different implied policies. The addresses include a 40-bit pseudorandom number that minimizes the risk of address collisions if sites merge or packets are misrouted.
Early practices used a different block for this purpose (fec0::/10), dubbed site-local addresses. However, the definition of what constituted a site remained unclear and the poorly defined addressing policy created ambiguities for routing. This address type was abandoned and must not be used in new systems.
Addresses starting with fe80::, called link-local addresses, are assigned to interfaces for communication on the attached link. The addresses are automatically generated by the operating system for each network interface. This provides instant and automatic communication between all IPv6 hosts on a link. This feature is used in the lower layers of IPv6 network administration, such as for the Neighbor Discovery Protocol.
Private and link-local address prefixes may not be routed on the public Internet.
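A short Python sketch of how software can distinguish the IPv6 address scopes discussed above. The prefixes fc00::/7 and fe80::/10 correspond to the unique local and link-local blocks mentioned in the text, while the specific example addresses are arbitrary.

```python
import ipaddress

ula_block        = ipaddress.ip_network("fc00::/7")   # unique local addresses
link_local_block = ipaddress.ip_network("fe80::/10")  # link-local addresses

for addr in ["fd12:3456:789a::1", "fe80::1", "2001:db8::1"]:
    ip = ipaddress.ip_address(addr)
    print(addr,
          "unique local (ULA)" if ip in ula_block else
          "link-local" if ip in link_local_block else
          "other scope")
```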
IP address assignment
IP addresses are assigned to a host either dynamically as they join the network, or persistently by configuration of the host hardware or software. Persistent configuration is also known as using a static IP address. In contrast, when a computer's IP address is assigned each time it restarts, this is known as using a dynamic IP address.
Dynamic IP addresses are assigned by the network using the Dynamic Host Configuration Protocol (DHCP). DHCP is the most frequently used technology for assigning addresses. It avoids the administrative burden of assigning specific static addresses to each device on a network. It also allows devices to share the limited address space on a network if only some of them are online at a particular time. Typically, dynamic IP configuration is enabled by default in modern desktop operating systems.
The address assigned with DHCP is associated with a lease and usually has an expiration period. If the lease is not renewed by the host before expiry, the address may be assigned to another device. Some DHCP implementations attempt to reassign the same IP address to a host, based on its MAC address, each time it joins the network. A network administrator may configure DHCP by allocating specific IP addresses based on MAC address.
DHCP is not the only technology used to assign IP addresses dynamically. Bootstrap Protocol is a similar protocol and predecessor to DHCP. Dialup and some broadband networks use dynamic address features of the Point-to-Point Protocol.
Computers and equipment used for the network infrastructure, such as routers and mail servers, are typically configured with static addressing.
In the absence or failure of static or dynamic address configurations, an operating system may assign a link-local address to a host using stateless address autoconfiguration.
Sticky dynamic IP address
Address autoconfiguration
The address block 169.254.0.0/16 is defined for the special use of link-local addressing for IPv4 networks. In IPv6, every interface, whether using static or dynamic addresses, also receives a link-local address automatically in the block fe80::/10. These addresses are only valid on the link, such as a local network segment or point-to-point connection, to which a host is connected. These addresses are not routable and, like private addresses, cannot be the source or destination of packets traversing the Internet.
When the link-local IPv4 address block was reserved, no standards existed for mechanisms of address autoconfiguration. Filling the void, Microsoft developed a protocol called Automatic Private IP Addressing (APIPA), whose first public implementation appeared in Windows 98. APIPA has been deployed on millions of machines and became a de facto standard in the industry. In May 2005, the IETF defined a formal standard for it.
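A hedged sketch of the address-selection step of IPv4 link-local autoconfiguration. The rule of skipping the first and last /24 of 169.254.0.0/16 reflects the commonly described standardized behaviour and is an assumption of this example; a real implementation must also probe the link (for example, with ARP) before claiming the address.

```python
import random
import ipaddress

def random_ipv4_link_local() -> ipaddress.IPv4Address:
    """Pick a candidate address in 169.254.0.0/16, avoiding the first and last /24
    (assumed reserved; see the note above). A real implementation would then
    probe the link to confirm the address is unused before claiming it."""
    third = random.randint(1, 254)    # skip 169.254.0.x and 169.254.255.x
    fourth = random.randint(0, 255)
    return ipaddress.IPv4Address(f"169.254.{third}.{fourth}")

print(random_ipv4_link_local())
```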
Addressing conflicts
An IP address conflict occurs when two devices on the same local physical or wireless network claim to have the same IP address. A second assignment of an address generally stops the IP functionality of one or both of the devices. Many modern operating systems notify the administrator of IP address conflicts. When IP addresses are assigned by multiple people and systems with differing methods, any of them may be at fault. If one of the devices involved in the conflict is the default gateway, which provides access beyond the LAN for all devices on the LAN, all devices may be impaired.
Routing
IP addresses are classified into several classes of operational characteristics: unicast, multicast, anycast and broadcast addressing.
Unicast addressing
The most common concept of an IP address is in unicast addressing, available in both IPv4 and IPv6. It normally refers to a single sender or a single receiver, and can be used for both sending and receiving. Usually, a unicast address is associated with a single device or host, but a device or host may have more than one unicast address. Sending the same data to multiple unicast addresses requires the sender to send all the data many times over, once for each recipient.
Broadcast addressing
Broadcasting is an addressing technique available in IPv4 to address data to all possible destinations on a network in one transmission operation as an all-hosts broadcast. All receivers capture the network packet. The address 255.255.255.255 is used for network broadcast. In addition, a more limited directed broadcast uses the all-ones host address with the network prefix. For example, the destination address used for directed broadcast to devices on the network 192.0.2.0/24 is 192.0.2.255.
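The directed-broadcast rule described above (the network prefix with all host bits set to one) can be reproduced with the standard ipaddress module; the network 192.0.2.0/24 is a documentation example used only for illustration.

```python
import ipaddress

net = ipaddress.ip_network("192.0.2.0/24")        # example network
print(net.broadcast_address)                      # 192.0.2.255 -> directed broadcast
print(ipaddress.ip_address("255.255.255.255"))    # limited (all-hosts) broadcast

# Same result derived by hand: network address with the host bits all set.
host_bits = 32 - net.prefixlen
print(ipaddress.IPv4Address(int(net.network_address) | ((1 << host_bits) - 1)))
```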
IPv6 does not implement broadcast addressing and replaces it with multicast to the specially defined all-nodes multicast address.
Multicast addressing
A multicast address is associated with a group of interested receivers. In IPv4, addresses 224.0.0.0 through 239.255.255.255 (the former Class D addresses) are designated as multicast addresses. IPv6 uses the address block with the prefix ff00::/8 for multicast. In either case, the sender sends a single datagram from its unicast address to the multicast group address and the intermediary routers take care of making copies and sending them to all interested receivers (those that have joined the corresponding multicast group).
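For readers interested in how a host joins a multicast group in practice, here is a minimal, illustrative Python UDP receiver. The group address 239.1.2.3 and port 5007 are arbitrary choices; the socket options shown are the standard IP_ADD_MEMBERSHIP mechanism and are not specific to this article.

```python
import socket
import struct

GROUP = "239.1.2.3"   # example address inside the IPv4 multicast range
PORT = 5007           # arbitrary example port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Ask the local IP stack (and, via IGMP, the attached routers) to deliver
# datagrams addressed to the group to this socket.
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

data, sender = sock.recvfrom(4096)   # blocks until a group datagram arrives
print(sender, data)
```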
Anycast addressing
Like broadcast and multicast, anycast is a one-to-many routing topology. However, the data stream is not transmitted to all receivers, just the one which the router decides is closest in the network. Anycast addressing is a built-in feature of IPv6. In IPv4, anycast addressing is implemented with Border Gateway Protocol using the shortest-path metric to choose destinations. Anycast methods are useful for global load balancing and are commonly used in distributed DNS systems.
Geolocation
A host may use geolocation software to deduce the geographic position of its communicating peer.
A public IP address is a globally routable unicast IP address, meaning that the address is not an address reserved for use in private networks, such as those reserved by RFC 1918, or the various IPv6 address formats of local scope or site-local scope, for example fe80::/10 for link-local addressing. Public IP addresses may be used for communication between hosts on the global Internet.
In a home situation, a public IP address is the IP address assigned to the home's network by the ISP. In this case, it is also locally visible by logging into the router configuration.
Most public IP addresses change, and relatively often. Any type of IP address that changes is called a dynamic IP address. In home networks, the ISP usually assigns a dynamic IP. If an ISP assigned a home network an unchanging address, it would be more likely to be abused by customers who host websites from home, or by hackers who could retry the same IP address repeatedly until they breach the network.
Firewalling
For security and privacy considerations, network administrators often desire to restrict public Internet traffic within their private networks. The source and destination IP addresses contained in the headers of each IP packet are a convenient means to discriminate traffic by IP address blocking or by selectively tailoring responses to external requests to internal servers. This is achieved with firewall software running on the network's gateway router. A database of IP addresses of restricted and permissible traffic may be maintained in blacklists and whitelists, respectively.
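A toy sketch of the address-based filtering described above. The deny and allow networks are invented documentation ranges, and a production firewall would of course operate on packets in the kernel or on dedicated hardware rather than on address strings.

```python
import ipaddress

# Illustrative deny/allow lists; the blocks and addresses are made up for this example.
denylist  = [ipaddress.ip_network("203.0.113.0/24")]
allowlist = [ipaddress.ip_network("198.51.100.0/24")]

def decide(src: str) -> str:
    """Return the action for a packet from the given source address."""
    ip = ipaddress.ip_address(src)
    if any(ip in net for net in denylist):
        return "drop"
    if any(ip in net for net in allowlist):
        return "accept"
    return "default policy"

for src in ["203.0.113.9", "198.51.100.25", "192.0.2.77"]:
    print(src, "->", decide(src))
```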
Address translation
Multiple client devices can appear to share an IP address, either because they are part of a shared web hosting service environment or because an IPv4 network address translator (NAT) or proxy server acts as an intermediary agent on behalf of the client, in which case the real originating IP address is masked from the server receiving a request. A common practice is to have a NAT mask many devices in a private network. Only the public interface(s) of the NAT needs to have an Internet-routable address.
The NAT device maps different IP addresses on the private network to different TCP or UDP port numbers on the public network. In residential networks, NAT functions are usually implemented in a residential gateway. In this scenario, the computers connected to the router have private IP addresses and the router has a public address on its external interface to communicate on the Internet. The internal computers appear to share one public IP address.
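The port-mapping behaviour just described can be sketched with a toy translation table. The public address, port pool, and flows below are made up for illustration, and the sketch omits timeouts, protocol distinctions, and checksum rewriting that a real NAT must handle.

```python
import itertools

# Toy NAT table: (private IP, private port) <-> public port on one public address.
PUBLIC_IP = "198.51.100.1"           # made-up public address for the example
_next_port = itertools.count(40000)  # pool of public ports to hand out
_out, _in = {}, {}

def outbound(private_ip: str, private_port: int) -> tuple[str, int]:
    """Map an internal flow to (public IP, public port), reusing existing mappings."""
    key = (private_ip, private_port)
    if key not in _out:
        public_port = next(_next_port)
        _out[key] = public_port
        _in[public_port] = key
    return PUBLIC_IP, _out[key]

def inbound(public_port: int) -> tuple[str, int]:
    """Translate a reply arriving at the public address back to the internal host."""
    return _in[public_port]

print(outbound("192.168.0.10", 51515))   # e.g. ('198.51.100.1', 40000)
print(inbound(40000))                    # ('192.168.0.10', 51515)
```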
Diagnostic tools
Computer operating systems provide various diagnostic tools to examine network interfaces and address configuration. Microsoft Windows provides the command-line interface tools ipconfig and netsh and users of Unix-like systems may use ifconfig, netstat, route, lanstat, fstat, and iproute2 utilities to accomplish the task.
See also
Hostname
IP address spoofing
IP aliasing
IP multicast
List of assigned /8 IPv4 address blocks
Reverse DNS lookup
Virtual IP address
WHOIS
References
IP address
IPv6

Wheel (computing)
https://en.wikipedia.org/wiki/Wheel%20%28computing%29

In Unix operating systems, the term wheel refers to a user account with a wheel bit, a system setting that provides additional special system privileges that empower a user to execute restricted commands that ordinary user accounts cannot access.
Origins
The term wheel was first applied to computer user privilege levels after the introduction of the TENEX operating system, later distributed under the name TOPS-20 in the 1960s and early 1970s. The term was derived from the slang phrase big wheel, referring to a person with great power or influence.
In the 1980s, the term was imported into Unix culture due to the migration of operating system developers and users from TENEX/TOPS-20 to Unix.
Wheel group
Modern Unix systems generally use user groups as a security protocol to control access privileges. The wheel group is a special user group used on some Unix systems, mostly BSD systems, to control access to the su or sudo command, which allows a user to masquerade as another user (usually the super user). Debian-like operating systems create a group called sudo with purpose similar to that of a wheel group.
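As a small, hedged illustration of how group membership is consulted in practice, the following Python snippet (Unix-only, using the standard grp and pwd modules) checks whether a user belongs to a named group. The default group name "wheel" follows the convention described above; "sudo" could be substituted on Debian-like systems.

```python
import grp
import pwd

def in_wheel(username: str, group_name: str = "wheel") -> bool:
    """Return True if the user is in the named group, either as a supplementary
    member or because it is the user's primary group."""
    try:
        group = grp.getgrnam(group_name)   # raises KeyError if the group is absent
        user = pwd.getpwnam(username)      # raises KeyError if the user is absent
    except KeyError:
        return False
    return username in group.gr_mem or user.pw_gid == group.gr_gid

print(in_wheel("root"))
```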
Wheel war
The phrase wheel war, which originated at Stanford University, is a term used in computer culture, first documented in the 1983 version of The Jargon File. A 'wheel war' was a user conflict in a multi-user (see also: multiseat) computer system, in which students with administrative privileges would attempt to lock each other out of a university's computer system, sometimes causing unintentional harm to other users.
References
Unix
Computer jargon

Athanasios Tsakalidis
https://en.wikipedia.org/wiki/Athanasios%20Tsakalidis

Prof. Athanasios K. Tsakalidis (born 1950) is a Greek computer scientist, a professor at the Graphics, Multimedia and GIS Laboratory, Computer Engineering and Informatics Department (CEID), University of Patras, Greece.
His scientific contributions span diverse fields of computer science, including data structures, computational geometry, graph algorithms, GIS, bioinformatics, medical informatics, expert systems, databases, multimedia, information retrieval and more. Especially significant contributions include co-authoring Chapter 6: "Data Structures" in the Handbook of Theoretical Computer Science with his advisor prof. Kurt Mehlhorn, as well as numerous other elementary theoretical results that are cataloged in the article Some Results for Elementary Operations, published in Efficient Algorithms in celebration of prof. K. Mehlhorn's 60th birthday.
Scientific Research
His research interests include: Data Structures, Graph Algorithms, Computational Geometry, GIS, Medical Informatics, Expert Systems, Databases, Multimedia, Information Retrieval, and Bioinformatics.
He has participated in many EU research programs, such as ESPRIT, RACE, AIM, STRIDE, Basic Research Actions in ESPRIT, ESPRIT Special Actions, TELEMATICS Applications, ADAPT, HORIZON, ΕΠΕΤ ΙΙ, ΥΠΕΡ, ΤΕΝ – TELECOM, IST, LEONARDO DA VINCI, MARIE CURIE, SOCRATES.
He is one of the 48 writers (6 of whom have received the ACM Turing Award) of the ground-laying computer science book, Handbook of Theoretical Computer Science, Vol A Elsevier Science publishers, co-published by MIT Press, his work being, along with professor Kurt Mehlhorn, in Chapter 6: Data Structures (his favourite field).
His pioneering results on the list manipulation and localized search problems in the 1980s laid the foundation for the now-ubiquitous theory of persistence for data structures, developed by prof. Robert E. Tarjan.
Other significant results on the design and analysis of data structures were contributed on the problems of interpolation search, negative cycle and nearest common ancestor, the latter being referenced as "Tsakalidis' Algorithm" in the optimal results of prof. Mikkel Thorup.
His extensive work on algorithms, data structures, computational geometry and graph algorithms has been cited and acknowledged by prominent computer scientists like Robert E. Tarjan, Ian J. Munro, Dan Willard, Jon Bentley, Jan van Leeuwen, Timothy M. Chan, Lars Arge, Mihai Patrascu, Erik Demaine, Mikkel Thorup, Prosenjit Bose, Gerth S. Brodal, Haim Kaplan, Peter Widmayer, Giuseppe F. Italiano, Peyman Afshani, Kasper Larsen and more.
Academic career
Athanasios Tsakalidis obtained his Ph.D. degree in informatics in 1983 at the Computer Science department of Saarland University, Germany. His thesis is entitled "Some Results for the Dictionary Problem" and was completed under the supervision of Professor Kurt Mehlhorn, director of the Max Planck Institute for Informatics. Prior to that he had earned a master's degree (thesis: "Sorting Presorted Files", 1980) and an undergraduate degree in informatics (1977) from the same university. In fact, the latter was his second undergraduate degree, as he had previously graduated from the Mathematics Department of the Aristotle University of Thessaloniki, Greece (1973).
Since 1983, he participated in research for the DFG (Deutsche Forschungsgemeinschaft, the German community of research) and professional teaching at the University of Saarland related to Data Structures, Graph Algorithms, Computational Geometry and programming, until 1989, when he returned to Greece to become an associate professor (and later in 1992 a full professor) at the Computer Engineering and Informatics Department (CEID), University of Patras, where he remains professionally active until today. He was also a visiting professor at King's College London (2003–2006).
Besides significant scientific work, Athanasios Tsakalidis has supervised 26 Ph.D. fellows, 13 of whom have pursued successful academic careers themselves. Furthermore, he has supervised 63 master's theses in computer science and advised 630 undergraduate majors.
Short Biography
Athanasios Tsakalidis was born in 1950 in Katerini, Pieria, northern Greece, and studied mathematics at the Aristotle University of Thessaloniki. In 1973 he embarked on a journey around Europe which led him to Saarbrücken, Germany, where he was introduced by prof. Günter Hotz to the then-novel field of computer science, which was coming to be known as informatics. After 28 months of national service, he enrolled in 1976 in the Computer Science department of Saarland University, becoming the oldest undergraduate student (a 26-year-old freshman) to be advised by the youngest professor at the time (27 years old), prof. Kurt Mehlhorn.
After a 13-year academic career in Germany, he returned to Patras, Greece, in 1989, where he effectively introduced theoretical computer science to Greek academia and the public. He remains an influential academic figure, promoting computer science in Greece both by serving CEID (including as chairman in different periods) and by supporting the establishment and development of computer science departments in many universities across the country.
Arts
Beyond computer science, Athanasios Tsakalidis has also created hundreds of paintings. A sample is found on his homepage.
References
External links
Homepage of Athanasios Tsakalidis
List of Publications
Mathematical Genealogy Tree entry
1950 births
Greek computer scientists
Academics of King's College London
Living people
People from Katerini
Aristotle University of Thessaloniki alumni
Saarland University alumni
Saarland University faculty
University of Patras faculty

Silicon photonics
https://en.wikipedia.org/wiki/Silicon%20photonics

Silicon photonics is the study and application of photonic systems which use silicon as an optical medium. The silicon is usually patterned with sub-micrometre precision, into microphotonic components. These operate in the infrared, most commonly at the 1.55 micrometre wavelength used by most fiber optic telecommunication systems. The silicon typically lies on top of a layer of silica in what (by analogy with a similar construction in microelectronics) is known as silicon on insulator (SOI).
Silicon photonic devices can be made using existing semiconductor fabrication techniques, and because silicon is already used as the substrate for most integrated circuits, it is possible to create hybrid devices in which the optical and electronic components are integrated onto a single microchip. Consequently, silicon photonics is being actively researched by many electronics manufacturers including IBM and Intel, as well as by academic research groups, as a means for keeping on track with Moore's Law, by using optical interconnects to provide faster data transfer both between and within microchips.
The propagation of light through silicon devices is governed by a range of nonlinear optical phenomena including the Kerr effect, the Raman effect, two-photon absorption and interactions between photons and free charge carriers. The presence of nonlinearity is of fundamental importance, as it enables light to interact with light, thus permitting applications such as wavelength conversion and all-optical signal routing, in addition to the passive transmission of light.
Silicon waveguides are also of great academic interest due to their unique guiding properties: they can be used for communications, interconnects, and biosensors, and they offer the possibility of supporting exotic nonlinear optical phenomena such as soliton propagation.
Applications
Optical communications
In a typical optical link, data is first transferred from the electrical to the optical domain using an electro-optic modulator or a directly modulated laser. An electro-optic modulator can vary the intensity and/or the phase of the optical carrier. In silicon photonics, a common technique to achieve modulation is to vary the density of free charge carriers. Variations of electron and hole densities change the real and the imaginary part of the refractive index of silicon, as described by the empirical equations of Soref and Bennett. Modulators can be based on forward-biased PIN diodes, which generally generate large phase shifts but suffer from lower speeds, as well as on reverse-biased PN junctions. A prototype optical interconnect with microring modulators integrated with germanium detectors has been demonstrated.
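To give a sense of the magnitudes involved in carrier-based modulation, here is a hedged numeric sketch of the plasma dispersion effect. The coefficients are the values commonly quoted for the Soref-Bennett empirical fits at a wavelength of 1550 nm; they are an assumption of this example rather than figures given in the text, and should be checked against the original reference before serious use.

```python
# Hedged sketch of free-carrier plasma dispersion in silicon at ~1550 nm.
# Coefficients below are commonly quoted approximations (assumption, not from the text).

def delta_n(dNe_cm3: float, dNh_cm3: float) -> float:
    """Approximate refractive-index change for added electron/hole densities (cm^-3)."""
    return -(8.8e-22 * dNe_cm3 + 8.5e-18 * dNh_cm3 ** 0.8)

def delta_alpha(dNe_cm3: float, dNh_cm3: float) -> float:
    """Approximate added optical absorption in cm^-1."""
    return 8.5e-18 * dNe_cm3 + 6.0e-18 * dNh_cm3

dN = 1e18  # an injected carrier density of the order used in forward-biased PIN modulators
print(f"index change    dn     ~ {delta_n(dN, dN):.2e}")       # negative: the index drops
print(f"added absorption dalpha ~ {delta_alpha(dN, dN):.1f} cm^-1")
```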
Non-resonant modulators, such as Mach-Zehnder interferometers, have typical dimensions in the millimeter range and are usually used in telecom or datacom applications. Resonant devices, such as ring resonators, can have dimensions of only a few tens of micrometers, therefore occupying much smaller areas. In 2013, researchers demonstrated a resonant depletion modulator that can be fabricated using standard silicon-on-insulator complementary metal-oxide-semiconductor (SOI CMOS) manufacturing processes. A similar device has also been demonstrated in bulk CMOS rather than in SOI.
On the receiver side, the optical signal is typically converted back to the electrical domain using a semiconductor photodetector. The semiconductor used for carrier generation usually has a band gap smaller than the photon energy, and the most common choice is pure germanium. Most detectors utilize a PN junction for carrier extraction; however, detectors based on metal-semiconductor junctions (with germanium as the semiconductor) have been integrated into silicon waveguides as well. More recently, silicon-germanium avalanche photodiodes capable of operating at 40 Gbit/s have been fabricated.
Complete transceivers have been commercialized in the form of active optical cables.
Optical communications are conveniently classified by the reach, or length, of their links. The majority of silicon photonic communications have so far been limited to telecom and datacom applications, where the reach is several kilometers or several meters, respectively.
Silicon photonics, however, is expected to play a significant role in computercom as well, where optical links have a reach in the centimeter-to-meter range. In fact, progress in computer technology (and the continuation of Moore's Law) is becoming increasingly dependent on faster data transfer between and within microchips. Optical interconnects may provide a way forward, and silicon photonics may prove particularly useful, once integrated on standard silicon chips. In 2006, former Intel senior vice president Pat Gelsinger stated that, "Today, optics is a niche technology. Tomorrow, it's the mainstream of every chip that we build."
The first microprocessor with optical input/output (I/O) was demonstrated in December 2015 using an approach known as "zero-change" CMOS photonics.
This first demonstration was based on a 45 nm SOI node, and the bi-directional chip-to-chip link was operated at a rate of 2×2.5 Gbit/s. The total energy consumption of the link was calculated to be 16 pJ/b and was dominated by the contribution of the off-chip laser.
Some researchers believe an on-chip laser source is required. Others think that it should remain off-chip because of thermal problems (the quantum efficiency decreases with temperature, and computer chips are generally hot) and because of CMOS-compatibility issues. One such device is the hybrid silicon laser, in which the silicon is bonded to a different semiconductor (such as indium phosphide) as the lasing medium. Other devices include the all-silicon Raman laser and all-silicon Brillouin lasers, wherein silicon serves as the lasing medium.
In 2012, IBM announced that it had achieved optical components at the 90 nanometer scale that can be manufactured using standard techniques and incorporated into conventional chips. In September 2013, Intel announced technology to transmit data at speeds of 100 gigabits per second along a cable approximately five millimeters in diameter for connecting servers inside data centers. Conventional PCI-E data cables carry data at up to eight gigabits per second, while networking cables reach 40 Gbit/s. The latest version of the USB standard tops out at ten Gbit/s. The technology does not directly replace existing cables in that it requires a separate circuit board to interconvert electrical and optical signals. Its advanced speed offers the potential of reducing the number of cables that connect blades on a rack and even of separating processor, storage and memory into separate blades to allow more efficient cooling and dynamic configuration.
Graphene photodetectors have the potential to surpass germanium devices in several important aspects, although they remain about one order of magnitude behind current generation capacity, despite rapid improvement. Graphene devices can work at very high frequencies, and could in principle reach higher bandwidths. Graphene can absorb a broader range of wavelengths than germanium. That property could be exploited to transmit more data streams simultaneously in the same beam of light. Unlike germanium detectors, graphene photodetectors do not require applied voltage, which could reduce energy needs. Finally, graphene detectors in principle permit a simpler and less expensive on-chip integration. However, graphene does not strongly absorb light. Pairing a silicon waveguide with a graphene sheet better routes light and maximizes interaction. The first such device was demonstrated in 2011. Manufacturing such devices using conventional manufacturing techniques has not been demonstrated.
Optical routers and signal processors
Another application of silicon photonics is in signal routers for optical communication. Construction can be greatly simplified by fabricating the optical and electronic parts on the same chip, rather than having them spread across multiple components. A wider aim is all-optical signal processing, whereby tasks which are conventionally performed by manipulating signals in electronic form are done directly in optical form. An important example is all-optical switching, whereby the routing of optical signals is directly controlled by other optical signals. Another example is all-optical wavelength conversion.
In 2013, a startup company named "Compass-EOS", based in California and in Israel, was the first to present a commercial silicon-to-photonics router.
Long range telecommunications using silicon photonics
Silicon microphotonics can potentially increase the Internet's bandwidth capacity by providing micro-scale, ultra low power devices. Furthermore, the power consumption of datacenters may be significantly reduced if this is successfully achieved. Researchers at Sandia, Kotura, NTT, Fujitsu and various academic institutes have been attempting to prove this functionality. A 2010 paper reported on a prototype 80 km, 12.5 Gbit/s transmission using microring silicon devices.
Light-field displays
As of 2015, US startup company Magic Leap is working on a light-field chip using silicon photonics for the purpose of an augmented reality display.
Physical properties
Optical guiding and dispersion tailoring
Silicon is transparent to infrared light with wavelengths above about 1.1 micrometres. Silicon also has a very high refractive index, of about 3.5. The tight optical confinement provided by this high index allows for microscopic optical waveguides, which may have cross-sectional dimensions of only a few hundred nanometers. Single mode propagation can be achieved, thus (like single-mode optical fiber) eliminating the problem of modal dispersion.
The strong dielectric boundary effects that result from this tight confinement substantially alter the optical dispersion relation. By selecting the waveguide geometry, it is possible to tailor the dispersion to have desired properties, which is of crucial importance to applications requiring ultrashort pulses. In particular, the group velocity dispersion (that is, the extent to which group velocity varies with wavelength) can be closely controlled. In bulk silicon at 1.55 micrometres, the group velocity dispersion (GVD) is normal in that pulses with longer wavelengths travel with higher group velocity than those with shorter wavelength. By selecting a suitable waveguide geometry, however, it is possible to reverse this, and achieve anomalous GVD, in which pulses with shorter wavelengths travel faster. Anomalous dispersion is significant, as it is a prerequisite for soliton propagation, and modulational instability.
In order for the silicon photonic components to remain optically independent from the bulk silicon of the wafer on which they are fabricated, it is necessary to have a layer of intervening material. This is usually silica, which has a much lower refractive index (of about 1.44 in the wavelength region of interest), and thus light at the silicon-silica interface will (like light at the silicon-air interface) undergo total internal reflection, and remain in the silicon. This construct is known as silicon on insulator. It is named after the technology of silicon on insulator in electronics, whereby components are built upon a layer of insulator in order to reduce parasitic capacitance and so improve performance.
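Using the two refractive indices quoted above (about 3.5 for silicon and 1.44 for silica), a one-line calculation gives the critical angle beyond which total internal reflection confines the light at the silicon-silica interface; the short Python sketch below is purely illustrative.

```python
import math

n_si, n_sio2 = 3.5, 1.44   # approximate refractive indices quoted in the text

# Total internal reflection occurs for incidence angles (from the interface normal)
# larger than the critical angle theta_c = arcsin(n2 / n1).
theta_c = math.degrees(math.asin(n_sio2 / n_si))
print(f"critical angle at the Si/SiO2 interface: {theta_c:.1f} degrees")  # ~24 degrees
```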
Kerr nonlinearity
Silicon has a focusing Kerr nonlinearity, in that the refractive index increases with optical intensity. This effect is not especially strong in bulk silicon, but it can be greatly enhanced by using a silicon waveguide to concentrate light into a very small cross-sectional area. This allows nonlinear optical effects to be seen at low powers. The nonlinearity can be enhanced further by using a slot waveguide, in which the high refractive index of the silicon is used to confine light into a central region filled with a strongly nonlinear polymer.
Kerr nonlinearity underlies a wide variety of optical phenomena. One example is four-wave mixing, which has been applied in silicon to realise optical parametric amplification, parametric wavelength conversion, and frequency comb generation.
Kerr nonlinearity can also cause modulational instability, in which it reinforces deviations from an optical waveform, leading to the generation of spectral-sidebands and the eventual breakup of the waveform into a train of pulses. Another example (as described below) is soliton propagation.
Two-photon absorption
Silicon exhibits two-photon absorption (TPA), in which a pair of photons can act to excite an electron-hole pair. This process is related to the Kerr effect, and by analogy with complex refractive index, can be thought of as the imaginary part of a complex Kerr nonlinearity. At the 1.55 micrometre telecommunication wavelength, this imaginary part is approximately 10% of the real part.
The influence of TPA is highly disruptive, as it both wastes light, and generates unwanted heat. It can be mitigated, however, either by switching to longer wavelengths (at which the TPA to Kerr ratio drops), or by using slot waveguides (in which the internal nonlinear material has a lower TPA to Kerr ratio). Alternatively, the energy lost through TPA can be partially recovered (as is described below) by extracting it from the generated charge carriers.
Free charge carrier interactions
The free charge carriers within silicon can both absorb photons and change its refractive index. This is particularly significant at high intensities and for long durations, due to the carrier concentration being built up by TPA. The influence of free charge carriers is often (but not always) unwanted, and various means have been proposed to remove them. One such scheme is to implant the silicon with helium in order to enhance carrier recombination. A suitable choice of geometry can also be used to reduce the carrier lifetime. Rib waveguides (in which the waveguides consist of thicker regions in a wider layer of silicon) enhance both the carrier recombination at the silica-silicon interface and the diffusion of carriers from the waveguide core.
A more advanced scheme for carrier removal is to integrate the waveguide into the intrinsic region of a PIN diode, which is reverse biased so that the carriers are attracted away from the waveguide core. A more sophisticated scheme still, is to use the diode as part of a circuit in which voltage and current are out of phase, thus allowing power to be extracted from the waveguide. The source of this power is the light lost to two photon absorption, and so by recovering some of it, the net loss (and the rate at which heat is generated) can be reduced.
As is mentioned above, free charge carrier effects can also be used constructively, in order to modulate the light.
Second-order nonlinearity
Second-order nonlinearities cannot exist in bulk silicon because of the centrosymmetry of its crystalline structure. By applying strain, however, the inversion symmetry of silicon can be broken. This can be obtained, for example, by depositing a silicon nitride layer on a thin silicon film.
Second-order nonlinear phenomena can be exploited for optical modulation, spontaneous parametric down-conversion, parametric amplification, ultra-fast optical signal processing and mid-infrared generation. Efficient nonlinear conversion however requires phase matching between the optical waves involved. Second-order nonlinear waveguides based on strained silicon can achieve phase matching by dispersion-engineering.
So far, however, experimental demonstrations are based only on designs which are not phase matched.
It has been shown that phase matching can also be obtained in silicon double slot waveguides coated with a highly nonlinear organic cladding and in periodically strained silicon waveguides.
The Raman effect
Silicon exhibits the Raman effect, in which a photon is exchanged for a photon with a slightly different energy, corresponding to an excitation or a relaxation of the material. Silicon's Raman transition is dominated by a single, very narrow frequency peak, which is problematic for broadband phenomena such as Raman amplification, but is beneficial for narrowband devices such as Raman lasers. Early studies of Raman amplification and Raman lasers started at UCLA, leading to the demonstration of net-gain silicon Raman amplifiers and a silicon pulsed Raman laser with a fiber resonator (Optics Express, 2004). Subsequently, all-silicon Raman lasers were fabricated in 2005.
The Brillouin effect
In the Raman effect, photons are red- or blue-shifted by optical phonons with a frequency of about 15 THz. However, silicon waveguides also support acoustic phonon excitations. The interaction of these acoustic phonons with light is called Brillouin scattering. The frequencies and mode shapes of these acoustic phonons are dependent on the geometry and size of the silicon waveguides, making it possible to produce strong Brillouin scattering at frequencies ranging from a few MHz to tens of GHz. Stimulated Brillouin scattering has been used to make narrowband optical amplifiers as well as all-silicon Brillouin lasers. The interaction between photons and acoustic phonons is also studied in the field of cavity optomechanics, although 3D optical cavities are not necessary to observe the interaction. For instance, besides silicon waveguides, optomechanical coupling has also been demonstrated in fibers and in chalcogenide waveguides.
Solitons
The evolution of light through silicon waveguides can be approximated with a cubic Nonlinear Schrödinger equation, which is notable for admitting sech-like soliton solutions. These optical solitons (which are also known in optical fiber) result from a balance between self phase modulation (which causes the leading edge of the pulse to be redshifted and the trailing edge blueshifted) and anomalous group velocity dispersion. Such solitons have been observed in silicon waveguides, by groups at the universities of Columbia, Rochester, and Bath.
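As a hedged illustration of the soliton balance described above, the following Python sketch integrates the normalized cubic Nonlinear Schrödinger equation with a standard split-step Fourier method and launches a fundamental sech pulse. The grid sizes and step counts are arbitrary choices, and the dimensionless form deliberately ignores the real waveguide effects (loss, two-photon absorption, free carriers) discussed elsewhere in this article.

```python
import numpy as np

# Normalized NLSE: i u_z + (1/2) u_tt + |u|^2 u = 0 (anomalous dispersion).
n_t, t_max = 1024, 20.0
t = np.linspace(-t_max, t_max, n_t, endpoint=False)
dt = t[1] - t[0]
omega = 2 * np.pi * np.fft.fftfreq(n_t, d=dt)   # angular frequency grid

dz, n_steps = 0.01, 2000        # total propagation distance z = 20 in soliton units
u = 1 / np.cosh(t)              # fundamental sech soliton as the input pulse

for _ in range(n_steps):
    # Linear sub-step: anomalous group-velocity dispersion, applied in the frequency domain.
    u = np.fft.ifft(np.exp(-0.5j * omega**2 * dz) * np.fft.fft(u))
    # Nonlinear sub-step: self-phase modulation from the Kerr effect.
    u = u * np.exp(1j * np.abs(u)**2 * dz)

# For a fundamental soliton the intensity profile should be essentially unchanged.
print("peak power in/out:", 1.0, round(float(np.max(np.abs(u))**2), 3))
```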
References
Nonlinear optics
Photonics
Silicon

Name resolution (semantics and text extraction)
https://en.wikipedia.org/wiki/Name%20resolution%20%28semantics%20and%20text%20extraction%29

In semantics and text extraction, name resolution refers to the ability of text mining software to determine which actual person, actor, or object a particular use of a name refers to. It can also be referred to as entity resolution.
Name resolution in simple text
For example, in the text mining field, software frequently needs to interpret the following text:
John gave Edward the book. He then stood up and called to John to come back into the room.
In these sentences, the software must determine whether the pronoun "he" refers to "John", or "Edward" from the first sentence. The software must also determine whether the "John" referred to in the second sentence is the same as the "John" in the first sentence, or a third person whose name also happens to be "John". Such examples apply to almost all languages, and not only English.
Name resolution across documents
Frequently, this type of name resolution is also used across documents, for example to determine whether the "George Bush" referenced in an old newspaper article as President of the United States (George H. W. Bush) is the same person as the "George Bush" mentioned in a separate news article years later about a man who is running for President (George W. Bush). Because many people may have the same name, analysts and software must take into account substantially more information than only a name to determine whether two identical references ("George Bush") actually refer to the same specific entity or person.
Name/entity resolution in text extraction and semantics is a notoriously difficult problem, in part because in many cases there is not sufficient information to make an accurate determination. Numerous partial solutions exist that rely on specific contextual clues found in the data, but there is no currently known general solution.
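To make the idea of contextual clues concrete, here is a toy, hedged Python sketch that links two mentions of the same surface name only when their surrounding context overlaps sufficiently. The feature set, threshold, and example mentions are all invented for illustration and do not represent any particular system listed below.

```python
# Toy entity resolution: same surface name + enough shared context -> same entity.

def context_overlap(mention_a: dict, mention_b: dict) -> float:
    """Jaccard overlap of context keywords surrounding two name mentions."""
    a, b = set(mention_a["context"]), set(mention_b["context"])
    return len(a & b) / len(a | b) if a | b else 0.0

def same_entity(mention_a: dict, mention_b: dict, threshold: float = 0.3) -> bool:
    return (mention_a["name"] == mention_b["name"]
            and context_overlap(mention_a, mention_b) >= threshold)

m1 = {"name": "George Bush", "context": ["president", "1991", "gulf", "war"]}
m2 = {"name": "George Bush", "context": ["president", "2001", "texas", "governor"]}
m3 = {"name": "George Bush", "context": ["governor", "texas", "2001", "campaign"]}

print(same_entity(m1, m2))   # False -> likely different people despite the same name
print(same_entity(m2, m3))   # True  -> contexts agree, likely the same person
```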
The problem is sometimes referred to as name disambiguation and, for digital libraries, author disambiguation.
For examples of software that might provide name resolution benefits, see also:
AeroText
AlchemyAPI
Attensity
Autonomy
Basis Technology
Dandelion API, providing a customizable approach for name resolution using an internal knowledge graph (built on Wikipedia, DBpedia and other sources)
DBpedia Spotlight, providing a simple approach for name resolution using DBpedia and Wikipedia
NetOwl
See also
Identity resolution
Named entity recognition
Naming collision
Anaphor resolution
References
Computational linguistics
Tasks of natural language processing

Center for Democracy and Technology
https://en.wikipedia.org/wiki/Center%20for%20Democracy%20and%20Technology

Center for Democracy & Technology (CDT) is a Washington, D.C.-based 501(c)(3) nonprofit organization that focuses on topics such as the rights of individual users in relation to technology policy with the potential to affect the architecture of the Internet. The CDT has established a set of five key objectives around which the organization is centered. As described on the organization's website, its first objective is to promote legislation that enables individuals to use technology for well-intentioned purposes, while preventing its use for harmful purposes. CDT's second objective is to advocate for transparency, accountability, and a regard for human rights on online platforms; this objective places particular emphasis on setting a precedent for limiting the collection of the personal information of the average user. CDT's third objective is to mitigate online media censorship by governments and to enable individuals to access information freely without retaliation or punishment. Its fourth objective is to limit the ability of governments to conduct surveillance of citizens. Lastly, its fifth objective is to highlight the importance of maintaining and supporting the globalized nature of the internet. These objectives are further defined within the CDT's six areas of focus: cybersecurity and standards, equity in civic technology, free expression, government surveillance, open internet, and privacy and data.
As an organization with expertise in law, technology, and policy, the CDT works to preserve the unique nature of the Internet, enhance freedom of expression globally, protect fundamental rights of privacy, and protect stronger legal controls on government surveillance by finding practical and innovative solutions to public policy challenges while protecting civil liberties. CDT is dedicated to building consensus among all parties interested in the future of the Internet and other new communications media. In addition to its office in Washington, D.C., CDT has a full-time presence in Brussels.
Today, CDT has expanded its scope to include tech policy issues across disciplines and borders, continuing to work to protect privacy, promote security, and draw attention to all the ways in which technology changes the landscape of democracy. CDT relies on an expertise-based advocacy model and acts as a non-partisan body, drawing together perspectives and voices from varying backgrounds to emphasize the importance of technology's role in the freedom, expression, security, privacy, and integrity of the individual. CDT advises government officials, agencies, corporations, and civil society on technology and technology-related policy.
History
Founding
CDT was founded in 1994 by Jerry Berman, the former executive director and former policy director of the Electronic Frontier Foundation. Specifically, passage of Communications Assistance for Law Enforcement Act (CALEA), which expanded law enforcement wiretapping capabilities by requiring telephone companies to design their networks to ensure a certain basic level of government access, spurred Berman to found CDT. Recognizing a threat to privacy and innovation in CALEA's design mandates, CDT fought the passage of the CALEA and later worked to ensure that its implementation would not extend to the Internet. In the end, CALEA did not contain wiretapping design mandates for the Internet and required transparency surrounding design standards. CDT's launch was assisted by seed donations from AT&T Corporation, Bell Atlantic, Nynex, Apple, and Microsoft.
1994–1999
In its early years, CDT fought the Communications Decency Act (CDA) in its attempt to restrict free expression online for the sake of child safety. CDT founded the Citizens Internet Empowerment Coalition (CIEC), a coalition of free speech groups and tech companies for the advancement of free speech. Against the proposed government censorship of the CDA, the CIEC maintained that both child safety and free speech could be protected by giving users the right to control their own content access. To provide further context for the case, CDT wired the courtroom so that the judges of Philadelphia's District Court could see the Internet. After combining forces with the ACLU, the CIEC's counsel argued the case before the Supreme Court. The CDA was struck down unanimously in 1997.
In the following year, CDT helped to craft the Children's Online Privacy Protection Act. Testifying before Congress, CDT argued that the Federal Trade Commission (FTC) should be able to develop rules to protect both adults' and children's privacy online. Forming a coalition of free expression and youth rights groups, CDT and its coalition secured an amendment to limit parental consent to children 12 and under, allowing teenagers to enjoy more freedom online.
In a 1999 report, CDT made clear that the Federal Election Commission's (FEC) attempts to regulate online political speech according to campaign finance laws were impractical and to the detriment of civic political engagement. CDT worked against the FEC's proposal with an organized group of online activists and bloggers. In collaboration with the Institute for Politics, Democracy, and the Internet, CDT created guidelines to help the FEC and Congress consider their treatment of citizens' political speech online. In support, hundreds of concerned parties signed onto the listing of principles, urging the FEC to drop its proposed rules and Congress to end the rule-making. CDT's grassroots advocacy reversed the tide. The FEC abandoned its proposal and issued a new rule that applied campaign finance regulations only to paid online advertising, protecting the online political speech of citizens.
2000–2007
CDT launched the Global Internet Policy Initiative in 2000, partnering with Internews to survey 11 developing countries to assess their telecom and Internet policies. CDT staff worked with Frank LaRue to shape a report on Internet human rights and the U.S. Ambassador to the UN Human Rights Council to educate members of the Council on Internet freedom in advance of the successful Resolution on Internet Freedom.
Following an influx of spyware in 2003, CDT filed complaints against egregious actors with the FTC, resulting in historic settlements against spyware companies. CDT pulled together anti-spyware and anti-virus companies, leading security product distributors, and public interest groups to create the Anti-Spyware Coalition (ASC). The ASC developed a self-regulatory model for companies based on shared definitions of spyware, a comprehensive risk model, best practices for software companies, and a concise vendor conflict resolution process. Using the ASC outputs, anti-spyware companies could label malicious software and protect consumers without fear of being sued by the companies they were targeting, and advertisers could keep better track of where their advertisements were displayed.
In 2006, CDT united with the Business for Social Responsibility to assemble human rights advocates, companies, researchers, and investors to deal with government calls for censorship and restriction of information access. The pairing has successfully worked together to create an accountability framework and principles for the Global Network Initiative (GNI), a human rights organization that promotes the privacy of individual users while preventing online censorship by authoritarian governments.
In 2007, CDT was among the first advocacy organizations to formally call for a Do Not Track (DNT) list from the Federal Trade Commission (FTC). In addition, CDT has played an integral role in pushing for a standardized DNT header at the World Wide Web Consortium (W3C). In 2010, the FTC requested a system that would allow consumers to control whether they were tracked online. In response, all five major browsers put DNT features into place, granting users the ability to surf the web incognito. The W3C formed a Tracking Protection Working Group in order to standardize the DNT compliance, which CDT leadership had a prominent role in.
2008–2012
In 2008 CDT was given a grant of $125,000 over two years from the MacArthur Foundation to support the Internet, Human Rights and Corporate Responsibility Initiative. CDT has also voiced privacy concerns about "deep packet inspection" (DPI), a technology that allows companies to collect data from Internet service providers (ISPs) and categorize individual Internet traffic streams to service ads based on that information without user consent. CDT conducted legal analysis to show how DPI advertising practices could violate the Electronic Communications Privacy Act (ECPA) and testified before Congress. In 2009, major ISPs affirmed that they would not use DPI-based behavioral advertising without robust opt-in provisions. In the same year, CDT started the Health Privacy Project to bring expertise to complex privacy issues surrounding technology use in health care. A year later, CDT recommended new guidelines for reporting data breaches and for protecting health data used in marketing. These guidelines were incorporated into the American Recovery and Reinvestment Act.
Later in 2009, CDT commented on aspects of smart home appliances and the Smart Grid, and on the potential dangers of the then-growing technology, specifically the risks of the information that could be collected, noting "device identifiers that uniquely identify a smart device and the manufacturer, control signals that reveal the function of smart devices, energy consumption at frequent time intervals at both the household and device level, temperature inside customers’ home, status of smart devices such as IP address and firmware version, and customers’ geographic region". CDT raised concerns over how the collected data might be used, acknowledging the potential benefits of smart appliances but warning of the dangers of "invading the traditionally protected zone of the home and home life. Without planning, such adverse impacts could drive opposition to the Smart Grid and prompt a backlash against data collection that could be socially beneficial when limited to the narrow purposes of improving efficiency".
Later in 2010, CDT launched the Digital Due Process Coalition, establishing four principles for Electronic Communications Privacy Act (ECPA) reform. The coalition now has over one hundred members, ranging from some of the biggest Internet companies to advocacy groups across the political spectrum. The campaign for ECPA reform has brought the need for extending full constitutional protections to the Internet to the forefront of the national debate and resulted in coalition-supported bipartisan bills in both houses of Congress in 2013.[7]
Two copyright enforcement bills, the Stop Online Piracy Act (SOPA) and the Protect IP Act (PIPA), were introduced in the US Congress in 2010 and 2011. Both bills posed serious threats to the technical foundations of the Internet, as well as to freedom of expression online, by increasing the role of ISPs and Internet intermediaries in combating online copyright infringement. In opposition to SOPA and PIPA, CDT gathered organizations from technical and civil society backgrounds, and its legal analysis laid the foundation for the 2012 surge of grassroots resistance against the bills.
CDT was one of the few civil society organizations involved in the founding of the Internet Corporation for Assigned Names and Numbers (ICANN), encouraging a bottom-up style of governance and working to ensure that the voices of Internet users were included at the table. In the ICANN deliberations, CDT argued for public representation and for the placement of a civil society representative on its board. In helping to form the Organization for Economic Cooperation and Development's (OECD) Principles for Internet Policy Making in 2011, CDT also pushed for a multi-stakeholder approach to Internet governance. By accepting the principles, the OECD's 34 member states committed to respecting human rights, open governance, the rule of law, and the consideration of diverse viewpoints.
2012–present
At the International Telecommunication Union's (ITU) World Conference on International Telecommunications (WCIT) in 2012, CDT brought inclusive Internet governance into focus. Though many governments came bearing proposals to escalate government and ITU control over Internet governance, CDT worked to defeat those proposals through organized civil society advocacy, and fortified relationships between organizations for future advocacy efforts. Inclusive Internet governance still faces serious obstacles as nations scramble to respond to revelations of National Security Agency (NSA) surveillance. CDT continues to work with an expanding group of partners motivated by civil society concerns to preserve the free and open nature of the Internet.
CDT asked Congress to make Congressional Research Service (CRS) reports publicly available and easily accessible. When Congress failed to do so, CDT started a website, OpenCRS.com, that made CRS reports freely available online and became one of the leading sources of such reports. By collecting CRS reports acquired by organizations and citizens through direct appeals to their representatives, the OpenCRS website served as a valuable repository of information. Though no longer in operation, OpenCRS inspired other CRS websites and open resources.
CDT has long been an active supporter of Internet neutrality. In a brief filed in 2012, CDT supported the Federal Communications Commission's (FCC) Open Internet Rules, which defined a limited role for the agency in blocking discrimination by broadband providers and were intended to protect online free expression and innovation. In 2014, the Open Internet Rules were struck down, drawing CDT back into the fight for Internet neutrality on a global scale. By offering extensive expertise, CDT has worked to ensure that any EU regulation on Internet neutrality takes into account the central tenet of nondiscrimination.
In the early 1990s, the NSA developed and promoted the "Clipper chip", an encryption device for telephone calls. The NSA argued that government access to cryptographic keys was essential to national security; CDT and its allies claimed that the Clipper chip would introduce greater vulnerabilities into the country's communications networks. In 2013, on behalf of a coalition of Internet companies such as Apple, Google, Facebook, and Twitter and advocates for free speech and privacy rights such as the ACLU, EFF, and Mozilla, CDT delivered a "We Need To Know" letter to U.S. government officials, demanding greater transparency in matters of national security-related surveillance of Internet and telephone communications.[10] Advocating for reform, CDT's firm stance is that the NSA's surveillance programs and its interference with Internet security infringe on privacy, chill free speech and association, and threaten the free flow of information that is the foundation of the open Internet. As an advocacy organization, CDT has outlined key reforms to NSA surveillance. On March 17, 2016, CDT released a statement about the Senate's unwillingness to vote for or against Merrick Garland's nomination to the Supreme Court, opting instead to wait for the election cycle. CDT applauded Garland's record on the federal appeals court of recognizing the merits of Freedom of Information Act (FOIA) requests and ordering the CIA to release information about its drone strikes, but did not outright endorse him, urging only that the Senate make a decision after the drawn-out delay. In February 2017, CDT stressed the importance of privacy on smart televisions as they became more prevalent, following the FTC's fine against Vizio for tracking what users were viewing, matching it with other identifying information collected about the consumer, and selling the data to interested third parties. In 2018, CDT played a role in opposing the Stop Enabling Sex Traffickers Act (SESTA) and the Allow States and Victims to Fight Online Sex Trafficking Act (FOSTA), controversial legislation that it described as "a bill that will lead to censorship of a broad range of speech and speakers while failing to help law enforcement prosecute criminal traffickers". The law nevertheless passed and was enacted into public law on April 11, 2018.
In May 2020, CDT filed a lawsuit against the Trump administration over Executive Order 13925, "Preventing Online Censorship", arguing that the order violated the First Amendment. The court dismissed the case for lack of jurisdiction. In May 2021, President Joe Biden revoked former President Trump's executive order; CDT praised the move, describing the order as "an attempt to use threats of retaliation to coerce social media companies into allowing disinformation and hateful speech to go unchecked." In September 2020, CDT organized a coalition opposed to the then recently proposed EARN IT Act, stating that the bill could result in "online censorship disproportionately impacting marginalized communities, jeopardize access to encrypted services, and risk undermining child abuse prosecutions". In July 2021, CDT CEO Alexandra Givens commented on a story about artificial intelligence hiring tools, expressing concern that they could adversely affect candidates with disabilities, as such systems often rely on data from gamified tests like Pymetrics to make hiring decisions without taking into account an applicant's strengths, weaknesses, or experience in the field. In September 2021, CDT released research showing that children from low-income families are more likely to use school devices both on and off campus and are therefore more likely to be observed by monitoring software. CDT then wrote a letter to multiple senators and representatives urging them to update the Children's Internet Protection Act (CIPA) to protect the privacy of students using school computers by disallowing invasive monitoring software, including on devices taken home by students. In April 2019, Michelle Richardson, then director of CDT's Privacy and Data Project, commented in an article that federal regulation of social media and the Internet was inevitable.
On November 9, 2021, a Center for Democracy and Technology gerrymandering expert commented on the North Carolina legislature's multiple redrawings of its congressional map, noting that redraws had been ordered twice because of unfair gerrymandering. The Center for Democracy and Technology comments on gerrymandering because most district maps are initially drafted with digital software by political parties. On November 29, 2021, a spokesperson for the Center for Democracy and Technology commented on FBI documents obtained by Rolling Stone revealing that WhatsApp messages and iMessages could be seen in near real time by FBI agents with a warrant or subpoena. Judging by the document, "the most popular encrypted messaging apps iMessage and WhatsApp are also the most permissive," according to Mallory Knodel, chief technology officer at the Center for Democracy and Technology. "You're handing someone else the key to hold onto on your behalf," Knodel said. "Apple has encrypted iCloud but they still have the keys, and as long as they have the key, the FBI can ask for it."
Project teams
Privacy and data
Formerly directed by Michelle Richardson, CDT's Privacy and Data Project was created to examine the evolving role of technology in daily life, considering its influence on individuals, communities, and the law. By identifying emerging issues and collaborating with companies and public officials, CDT's privacy experts develop progress-oriented technical and policy solutions. Among the topics covered by CDT's Privacy and Data team are AI and Machine Learning, European Privacy Law, U.S. Privacy Legislation, Disability Rights and AI, Health Privacy, the Internet of Things, Broadband Privacy, Drones, Student Privacy, and Digital Decisions. Additionally, CDT heads the State Privacy Resource Center, which serves as a repository of information to help policymakers at the state and local level craft privacy legislation.
AI and Machine Learning
CDT's Privacy and Data team focuses on how automated processes make decisions that affect people's data and privacy, since automated programs can make discriminatory decisions that affect many people's lives. The team works to ensure that innovation in the field of AI and machine learning continues, while the rights of individuals are preserved and AI is used in a responsible and ethical way.
European Privacy Law
CDT pushes for privacy to be upheld in Europe through various laws. Given the many emerging possibilities in the field of data collection, CDT works to ensure that the privacy of individuals' data is maintained, focusing on both the technology behind data collection and the laws in place in Europe. The legislation it focuses on includes the General Data Protection Regulation, the Right to Be Forgotten, pseudonymous data, and the EU-U.S. Privacy Shield agreement.
Health Privacy
Because of the importance of data about individuals' health, CDT pushes for privacy and security in the data collected in the health industry and advocates for individuals' ability to know who is using data related to their health. Its recent featured content in this field includes work on vaccination data gathering and the CDT and eHI proposed Consumer Privacy Framework for Health Data.
Internet of Things
Many recent innovations have made technology more a part of individuals' daily lives, including smartwatches, smart refrigerators, and smart cars. These technologies allow a large amount of data to be collected about individuals that was previously unavailable, while often lacking important privacy and security protections. CDT pushes for government officials and technology makers to work together to protect individuals' privacy when it comes to these devices. On August 23, 2021, CDT and EPIC filed an amicus brief arguing for protections for e-scooter location information.
State Privacy Resources
CDT has created a resource center to help state and local policymakers push for privacy regulations at the local and state level, as CDT believes that such regulations can lead the way for changes at the federal level.
Student Privacy
CDT advocates for student data privacy because school districts across the country collect massive amounts of data, often without proper security measures in place to protect it. CDT helps school districts achieve higher security standards through policy advocacy and guidance. On December 2, 2021, CDT published a research brief on student privacy titled Unmet Demand for Community Engagement on School Data and Technology Use, which found that parents and students want to play a role in decisions around technology and data.
U.S Privacy Legislation
CDT's privacy team has worked on possible legislation to build support for limits on the collection and sharing of personal data, so that individuals can remain in control of their own data. On November 8, 2021, CDT joined EPIC in urging Congress to strengthen the FTC's power to halt data abuse.
Disability Rights and AI
CDT's privacy team advocates for algorithms that make fair decisions about people with disabilities by analyzing how artificial intelligence is used to make employment decisions that can affect individuals' lives. CDT shares this research with policymakers, researchers, advocates, and members of the disabled community in order to drive change and keep these algorithms fair. CDT recently joined AJL, the ACLU, Color of Change, and other organizations in urging the Secretary of Commerce to fill artificial intelligence advisory committees with experienced civil rights advocates.
Workers’ Rights
CDT focuses on educating the public about the impacts that certain technologies have on the workplace, as well as researching technologies that could help workers gain fairer compensation. On September 16, 2021, CDT posted an article to inform people about the threat of bossware, titled "Strategies to Tackle Bossware's Threats to the Health & Safety of Workers".
Free expression
CDT's Free Expression Project emphasizes the free flow of information online without restriction. The main focus of this project is to advocate for the protection of online free speech, while providing criticism of and insight into concepts such as online content censorship and gatekeeping. In particular, CDT observes the application and boundaries of the First Amendment online in many circumstances, given the constantly changing nature of the Internet and other technologies that allow the spread of information at scale. CDT's Free Expression team currently focuses on subjects such as Digital Copyright, Intermediary Liability, Children's Privacy, and Net Neutrality. The project is currently directed by Emma Llansó, a graduate of Yale Law School and the University of Delaware.
Content Moderation
CDT's Free Expression team researches how platforms apply moderation to certain content and the mechanics behind content moderation. The team also meets with companies, academic institutions, and government officials to promote change and ensure consumer transparency around platform moderation practices that could affect users' rights. On November 16, 2021, CDT filed a brief urging the 11th Circuit to affirm a preliminary injunction enjoining enforcement of Florida's social media law.
Security and surveillance
The Security and Surveillance Project of the CDT was created to limit the extent to which governments are allowed to access and store personal information about their citizens, and to limit the surveillance of foreign citizens. The project calls for such practices to be restricted through legislation in the interest of preserving individual privacy rights as well as freedom of expression. Its formation was inspired by the rapid evolution of mass data storage technologies and the growth of the big data industry following the spread of the Internet and other digital services, which can carry a large amount of identifying and personal information about individual users. While bearing in mind the importance of national security, the project also considers issues of ECPA reform, cybersecurity, U.S. government surveillance, drones, and encryption and government hacking.
Internet architecture
The Internet Architecture team of the CDT focuses on online anonymity and encryption, the standards that govern the technical operation of the Internet, net neutrality, government surveillance, Internet governance policies across the globe, cybersecurity research, and election security and privacy. The Internet Architecture project as a whole focuses on the value of individual security, privacy, and free expression, along with each aspect's relationship to human rights. Like other CDT projects, the Internet Architecture team emphasizes the importance of setting modern precedents in the drafting and implementation of well-informed legislation that affects digital spaces.
European Union
The CDT has considered the importance of the global influence and impacts of international technology policy and has actively made efforts to communicate and coordinate with the European Union. Utilizing their established presence in Brussels, CDT has made efforts to promote their agenda overseas by advocating for the establishment of an open and inclusive Internet in collaboration with the EU's Member States, civil society, public institutions, and technology sector. CDT's EU Office focuses on the policy areas of digital copyright, intermediary liability and free expression, surveillance and government access to personal data, net neutrality, internet governance, and data protection and privacy. The GDPR (General Data Protection Regulation), EU net neutrality policy, European Commission's cybersecurity strategy, and the EU intellectual property enforcement directive are among the issues that CDT's Brussels Office has actively engaged with.
Funding
One-third of CDT's funding comes from foundations and associated grants such as the MacArthur Foundation, while another third of the organization's annual budget comes from industry sources including various companies such as Amazon, Facebook, Microsoft, and Apple among other high-profile tech oriented businesses. The remainder is split among an annual fundraising dinner known in DC circles as the "tech prom", Cy Pres awards, and other sources. In 2020, of the $5,689,369 generated through donations from outlets such as foundations and large corporations, most of the funding was used for programs relating to education and research in the organization ($4,655,316). The remaining $1,034,053 would be used for other expenses, with $224,012 going to public events, $280,000 going toward further fundraising for future projects, and $530,041 going to administration within the organization's staff.
See also
Electronic Frontier Foundation
Information freedom
Internet Censorship
References
External links
Organizations established in 1994
Digital rights organizations
Computer law organizations
Political and economic research foundations in the United States
Privacy organizations
Privacy in the United States
Politics and technology
Charities based in Washington, D.C. |
1216649 | https://en.wikipedia.org/wiki/Eos%20%28disambiguation%29 | Eos (disambiguation) | Eos is the goddess of the dawn in Greek mythology.
Eos or EOS may also refer to:
Astronomy
221 Eos, an asteroid
Eos Chasma, a depression on Mars
Eos family, main-belt asteroids
Earth Observing System, a NASA program
Literature
Eos (imprint), an imprint of HarperCollins Publishing
Eos (magazine), a weekly publication of the American Geophysical Union
Eos Press, an American card games and role-playing game publisher
Music
Eos (album), a 1984 album by Terje Rypdal and David Darling
"E.O.S.", a 2004 trance music piece by Ronski Speed
"EOS", a 2017 song by Rostam from Half-Light
Places
Eos Glacier, a glacier in Antarctica
Mount Eos, a mountain in Antarctica
Science
Eos (bird), a genus of lories (parrots)
Eos (protein), a photoactivatable fluorescent protein
Esterified omega-hydroxyacyl-sphingosine, a human lipoxygenase
Ethanolamine-O-sulfate, a chemical compound used in biochemical research
European Optical Society
IKZF4 or zinc finger protein Eos, a protein in humans
Technology
EOS (medical imaging), a medical projection radiography system
EOS.IO, a cryptocurrency
Canon EOS, a series of film and digital cameras
Computing
Cisco Eos, a software platform
EOS (operating system), a supercomputer operating system in the 1980s
EOS (8-bit operating system), a home computer operating system in the 1980s
EOS memory, ECC on SIMMs, used in server-class computers
Emulator Operating System, an operating system used in E-mu Emulator musical instruments
Extensible Operating System, Arista Networks's single-network operating system
Ethernet over SDH, a set of protocols for carrying Ethernet traffic
/e/ (operating system), a free and open-source Android-based mobile operating system
Transportation
Eos (yacht), a yacht owned by Barry Diller
Eos Airlines, a defunct airline
Grif Eos, an Italian hang glider design
Volkswagen Eos, a coupé convertible vehicle made by Volkswagen
Video games
E.O.S. or Earth Orbit Stations, a space station simulation by Electronic Arts
Eos, a fictional world in Final Fantasy XV
Eos, a fictional planet in Mass Effect: Andromeda
Eos, a character in Red Faction
Eos, a fictional planet in Star Wars: Starfighter
Other uses
EOS International, an American developmental charity
EOS (company), an American skin care company
End-of-sale
Liwathon E.O.S.
Norwegian Parliamentary Intelligence Oversight Committee or EOS Committee
People with the given name
Eos Counsell (born 1976), Welsh violinist
Gaynor Rowlands or Eos Gwalia (1883–1906), English actress
Eos Morlais (1841–1892), Welsh tenor
See also
Aeos (disambiguation)
AOS (disambiguation)
EO (disambiguation)
OS (disambiguation) |
2419323 | https://en.wikipedia.org/wiki/TclX | TclX | TclX, an abbreviation for extended Tcl, was one of the first freely available Tcl extensions to the Tcl programming language, providing new operating system interface commands, extended file controls, time and date manipulation, scanning and status commands and many others. While many features of TclX have been incorporated into Tcl, TclX continued to be updated, providing Tcl interfaces to many Unix/Linux system calls and library routines, expanded list functions, and so forth. No new releases have been issued since November 2012, with 8.4.1 being the latest; however, version 8.6 is in preparation.
TclX is shipped by Debian and as part of Mac OS X. It is also available as an RPM for Red Hat Enterprise Linux, Fedora, openSUSE, and the Mandriva versions of Linux, and as a port for FreeBSD, among others.
TclX was developed by Karl Lehenbauer and Mark Diekhans.
References
External links
Extended Tcl (TclX) GitHub site
Extended Tcl (TclX) SourceForge site (inactive)
TclX Manual
Tcl programming language family |
64170193 | https://en.wikipedia.org/wiki/Hack%20Club | Hack Club | Hack Club is a global nonprofit network of high school computer hackers, makers and coders. Founded in 2014 by Zach Latta and Jonathan Leung, it now includes 400 high school clubs and 14,000 students. It has been featured on the TODAY Show, and profiled in the Wall Street Journal and many other publications.
Programs
Hack Club's primary focus is its clubs program, in which it supports high school coding clubs through learning resources and mentorship. It also runs, or has run, a series of other programs and events.
A few notable programs and events are:
Hack Club Bank - a fiscal sponsorship program originally targeted at high school hacker events
Flagship 2019 - a meetup of high school hackathon organizers and coding club leaders
AMAs - video calls with industry experts such as Elon Musk
Summer of Making - a collaboration with GitHub, Adafruit & Arduino to create an online summer program for teenagers during the COVID-19 pandemic that included $50k in hardware donations to teen hackers around the world
The Hacker Zephyr - a cross-country hackathon on a train across America
Funding
Hack Club is funded by grants from philanthropic organizations and donations from individual supporters. In 2019, GitHub Education provided cash grants of up to $500 to every Hack Club "hackathon" event. In May 2020, GitHub, alongside Arduino and Adafruit, committed to a $50K global hardware fund to deliver hardware tools directly to students' homes through a program named Hack Club Summer of Making. In 2020, Elon Musk and the Musk Foundation donated $500,000 to help expand Hack Club, and donated another $1,000,000 in 2021.
References
Hacker culture
Clubs and societies
Computer programming |
882947 | https://en.wikipedia.org/wiki/Attachmate | Attachmate | Attachmate Corporation is a 1982-founded software company which focused on secure terminal emulation, legacy integration, and managed file transfer software. Citrix-compatibility and Attachment Reflection were enhanced/added offerings.
History and products
Attachmate Corporation
Attachmate was founded in 1982 by Frank W. Pritt and Tom Borkowski. It focused initially on the IBM terminal emulation market, and became a major technology employer in the Seattle area.
KeaTerm
KEAsystems' KEAterm products were PC software packages that emulated some of Digital Equipment Corporation's VT terminals and facilitated integrating Windows-based PCs with multiple host applications. These included the KEAterm VT340 and VT420 terminal emulators, and KEA X (X terminal software).
KEA was acquired by Attachmate.
DCA IRMA
Another acquisition was DCA (Digital Communications Associates), maker of the IRMA line of emulators, INFOconnect, Crosstalk communications software, and OpenMind collaborative software.
DCA was also known for its 3270 IRMA hardware product (used for SDLC) and its ISCA SDLC hardware adapters, and supported driver downloads for them.
Extra!
The Attachmate Extra! family of terminal emulator packages was built to include 3270, 5250 and VT100 emulation.
Acquisition
After buying both WRQ, Inc. and Attachmate, which had been long-time competitors in the host emulation business, a group of private equity firms announced in 2005 that the companies would be merged under the new ownership.
It was announced that Attachmate founder and CEO Frank Pritt would retire at the same time.
IBM, Red Hat, Microsoft, Attachmate, Apache, Cisco, NEC, SAP, Software AG, Adobe Systems, Fujitsu, Oracle, CA Technologies, and BonitaSoft are some of the key players operating in the global application server market.
References
External links
Micro Focus Attachmate page
Attachmate File Library, Archive.org
1982 establishments in Washington (state)
2005 mergers and acquisitions
History of computing hardware
Software companies based in Seattle
Software companies established in 1982
Software companies of the United States |
21005399 | https://en.wikipedia.org/wiki/Education%20in%20Uttar%20Pradesh | Education in Uttar Pradesh | The state of Uttar Pradesh had a long tradition of learning, although it had remained mostly confined to the elite class and the religious establishment.
History
Sanskrit-based education comprising the learning of Vedic to Gupta periods, coupled with the later Pali corpus of knowledge and a vast store of ancient to medieval learning in Persian/Arabic languages, had formed the edifice of Hindu-Buddhist-Muslim education, till the rise of British power. But, the system became decadent as it missed the advancements that were taking place in Europe during and after the Renaissance, resulting in large educational gaps. Measures were initiated by the British administration for making liberal, universal education available in this area through a network of schools to university system on the European pattern. However, a real turning point came due to the efforts of educationalists like Pandit Madan Mohan Malviya and Sir Syed Ahmad Khan, who championed the cause of learning and supported British efforts to spread it.
Post-independence
After independence, the state of U.P. has continued to make investment over the years in all sectors of education and has achieved significant success in overcoming general educational backwardness and illiteracy. The increase in overall literacy rate is due to persistent multi-pronged efforts made by the state government: to enrol and retain children, specially of weaker sections, in schools; to effectively implement the adult education programmes; and to establish centres of higher education. As a result, U.P. is ranked amongst the first few States to have successfully implemented the Education For All policy. The following is indicative of the gradual progress:
In 1981, the literacy rate in U.P. was 28% and it increased to 42% in 1991. In 1991, the adult literacy rate (the percentage of literates among those aged 15 and above) was 38%, and it increased to 49% in 1998, a rise of 11 percentage points over the seven-year period. But the differential between female and male literacy remained high: while in 1991 male literacy was 56% and female literacy 25%, eight years later in 1999, per survey estimates, male literacy had reached 73% and female literacy 43% (NFHS II).
Another notable feature in the state has been the persistence of higher levels of illiteracy in the younger age group, more so among females, especially in rural areas. In the late 1980s, the incidence of illiteracy in the 10–14 age group was as high as 32% for rural males and 61% for rural females, and more than two-thirds of all rural girls in the 12–14 age group had never gone to school. Only 25% of girls in the 7+ age group were able to read and write in 1991, and this figure fell to 19% for rural areas: it was 11% for the scheduled castes, 8% for scheduled castes in rural areas, and 8% for the entire rural population in the most educationally backward districts. In terms of completion of basic or essential educational attainment (primary or secondary education), in 1992–1993 only 50% of literate males and 40% of literate females completed the cycle of eight years of schooling (the primary and middle stages). Possibly, Bihar is the only state in India which lags behind U.P. in education.
The problems of the state's education system are complex. Due to public apathy, the public schools are run inefficiently. Privately run schools (including those run by Christian missionaries) are functional, but expensive and thus beyond the reach of ordinary people.
In order to make the population totally literate, steps are being taken by the government to involve public participation, including the help of NGOs and other organisations. There are also special programmes, like the World Bank aided DPEP. As a result, progress in adult education has been made and the census of 2001 indicates a male literacy rate of 70.23% and a female literacy rate of 42.98%.
Presently, there are 866,361 primary schools, 8,459 higher secondary schools, 758 degree colleges and 26 universities in the state. Some of the oldest educational institutions – founded by the British, the pioneer educationalists and other social/religious reformers – are still functional. In addition, a number of highly competitive ivy league centres of higher or technical education have been established since Independence.
Higher education
Considering the size of Uttar Pradesh, it is not surprising that it has a large number of academic and research institutes. These institutes are either under the jurisdiction of the state government or the central government, or are privately run. The state has two IITs – at Kanpur and Varanasi – an IIM at Lucknow, the University of Lucknow, and an NIT and an IIIT at Allahabad. A good number of state and central government universities have been founded in Uttar Pradesh to provide higher education in various fields. The state is also home to one of Asia's oldest agricultural institutes, the Sam Higginbottom University of Agriculture, Technology and Sciences (SHUATS), formerly the Allahabad Agricultural Institute, established in 1910.
The Rajiv Gandhi Institute of Petroleum Technology: The Ministry of Petroleum and Natural Gas (MOP&NG), Government of India set up the institute at Jais, Rae Bareli district, Uttar Pradesh through an Act of Parliament. RGIPT has been accorded Institute of National Importance. With the status of a deemed university, the institute awards degrees in its own right.
RGIPT is co-promoted as an energy domain specific institute by six oil public sector units (ONGC, IOCL, OIL, GAIL, BPCL and HPCL) in association with the Oil Industry Development Board (OIDB). The institute is associated with leading International Universities/Institutions specializing in the domain of Petroleum Technology.
Alongside the above-mentioned institutes of higher learning, a range of government degree colleges has been set up by the Government of Uttar Pradesh to provide higher education to scholars interested in different coursework (undergraduate, postgraduate and research) and programmes (humanities, science and commerce). At present, 137 government degree colleges have been established in Uttar Pradesh to fulfil these aims. The U.P. government administers and controls these colleges through the Department of Higher Education, Uttar Pradesh; however, the syllabus and affiliation to the universities concerned depend upon the locality of each government degree college. Besides government instructions, the government degree colleges also follow the norms and regulations of the University Grants Commission, New Delhi. A few private colleges, such as the IIMT Group of Institutions (Institute of Integrated Management and Technology) in Varanasi, have likewise been established. The Uttar Pradesh Board of Technical Education is the body responsible for pre-degree vocational and technical education.
Education and social welfare
Banaras Hindu University (BHU) is a Central University in Varanasi. It evolved from the Central Hindu College of Varanasi, envisioned as a Hindu university in April 1911 by Annie Wood Besant and Pandit Madan Mohan Malaviya. BHU began on 1 October 1917, with the Central Hindu College as its first constituent college. Most of the money for the university came from Hindu princes, and its present campus was built on land donated by the Kashi Naresh. Regarded as one of the largest residential universities in Asia, it has more than 128 independent teaching departments; several of its colleges—including science, linguistics, law, engineering (IIT (BHU) Varanasi) and medicine (IMS-BHU)—are ranked amongst the best in India. The university's total enrolment stands at just over 15,000 (including international students). It is the only university in India hosting one of the IITs on its premises (IIT BHU).
The Indian Institute of Technology Kanpur (established in 1959 in the industrial city of Kanpur, and now known as IIT-Kanpur or IITK) is an Indian Institutes of Technology; it is primarily focused on undergraduate education in engineering and related science and technology, and research in these fields. It is among the few institutions which enjoys the status of an Institute of National Importance. IITK was the first college in India to offer education in computer science.
The Indian Institute of Technology (BHU) Varanasi traces its origins to three engineering and technological institutions established by Pandit Madan Mohan Malaviya in 1919–1923 in BHU. In 1971 these three colleges, viz. BENCO, MINMET and TECHNO, were merged to form the Institute of Technology (IT-BHU) and admissions were instituted jointly with the IIT's through the Joint Entrance Examination. In 2012, IT-BHU was officially rechristened as IIT (BHU) Varanasi. The institute has 13 departments and three inter disciplinary schools. It enjoys the status of an Institute of National Importance.
The Indian Institute of Management Lucknow was established in 1984 by the government of India. It was the fourth Indian Institute of Management to be established in India, after IIM Calcutta, IIM Ahmedabad and IIM Bangalore. IIM Lucknow's main campus is in Prabandh Nagar, about 21 kilometres (13 mi) from Lucknow railway station and 31 kilometres (19 mi) from Lucknow Airport. A second campus, focusing on executive programs, was established in Noida. According to the institute's website, IIM Lucknow is the first IIM in the country to establish a second campus.
The Motilal Nehru National Institute of Technology, Allahabad (MNNIT) was formerly Motilal Nehru Regional Engineering College, Allahabad. It is one of the leading institutes in the country, established in 1961 as a joint enterprise of the governments of India and Uttar Pradesh in accordance with the plan to establish regional engineering colleges. On 26 June 2002 the college became a deemed university and is now known as an Institute of National Importance. MNNIT was the first college in India to grant a Bachelor of Technology degree in computer science and engineering, and among the very few colleges in India to have a PARAM supercomputer.
The Rajiv Gandhi Institute of Petroleum Technology (RGIPT) in Jais, Raebareli was established by the Ministry of Petroleum and Natural Gas (MOP&NG) of the Government of India through an act of Parliament. RGIPT has been designated an Institute of National Importance, along with the Indian Institute of Technology (IIT) and Indian Institute of Management (IIM). With deemed university status, the institute awards degrees in its own right.
RGIPT is co-sponsored as an energy-domain-specific institute by six oil public-sector units (ONGC, IOCL, OIL, GAIL, BPCL and HPCL), in association with the Oil Industry Development Board (OIDB). The institute is associated with international universities and institutions specialising in petroleum technology.
Rajiv Gandhi National Aviation University (RGNAU) is an autonomous public central university located in the Fursatganj Airfield, Rae Bareli, Uttar Pradesh.
Allahabad University is a Central University located in Allahabad. Its origins lie in Muir Central College, named after Lt. Governor of North-Western Provinces Sir William Muir in 1876; Muir suggested a Central University at Allahabad, which later evolved into the present institution. At one point it was called the "Oxford of the East", and on 24 June 2005 its Central University status was restored through the University Allahabad Act, 2005 of the Parliament of India. It is the fourth-oldest university in the country.
The Aligarh Muslim University is a residential academic institution. This university is spread over an area of . Modelled on the University of Cambridge, it was established by Sir Syed Ahmed Khan in 1875 as Mohammedan Anglo-Oriental College and was granted the status of a Central University by an Act of Parliament in 1920. Located in the city of Aligarh, it was among the first institutions of higher learning established during the British Raj.
The Gautam Buddha University was established in 2002 by the Uttar Pradesh government. The university commenced its first academic session in 2008. It focuses primarily on research and offers integrated dual-degree courses in engineering, biotechnology, BSc, BBA+MBA, BBA+LLB, humanities and Buddhist studies. Its campus is spread over 511 acres and is located in Greater Noida in close proximity to many industrial units. The university has eight schools: Gautam Buddha University School of Engineering, Gautam Buddha University School of Information and Communication Technology, Gautam Buddha University School of Biotechnology, Gautam Buddha University School of Vocational Studies and Applied Sciences, Gautam Buddha University School of Management, Gautam Buddha University School of Law, Justice and Governance, Gautam Buddha University School of Buddhist Studies and Civilization and Gautam Buddha University School of Humanities and Social Sciences.
The Indian Institute of Information Technology, Allahabad was established in 1999 by the government of India. The institute was conferred deemed university status in 2000, empowering it to award degrees following the setting of its own examinations. The new campus has been developed on 100 acres (0.40 km2) of land at Deoghat, Jhalwa, on the outskirts of Allahabad. The campus and other buildings have been styled on patterns developed by mathematics professor Roger Penrose. IIITA offers a BTech degree in both information technology and electronics and communications engineering. Admission is through the All India Engineering Entrance Examination (AIEEE). Foreign students are accepted based on SAT II scores. IIITA has an extension campus at Amethi, Sultanpur District (the Rajiv Gandhi Institute of Information Technology).
The Dr. A.P.J. Abdul Kalam Technical University is a well-known technical university, formerly known as Uttar Pradesh Technical University. It provides technical education, research and training in such programs as engineering, technology, architecture, town planning, pharmacy and applied arts and crafts which the central government decrees in consultation with All India Council for Technical Education (AICTE). There are five government engineering colleges of GBTU:
Harcourt Butler Technological Institute, Kanpur
Kamla Nehru Institute of Technology, Sultanpur
Madan Mohan Malaviya Engineering College, Gorakhpur
Institute of Engineering and Technology, Lucknow
Bundelkhand Institute of Engineering & Technology, Jhansi
Other schools in the state capital, Lucknow, include Colvin Taluqdars' College, St. Francis' College, Lucknow and La Martinière College. Secondary schools include the Loreto Convent, St Agnes' Loreto High School and City Montessori School. The Babasaheb Bhimrao Ambedkar University, Lucknow is one of the youngest central universities in the country. The jurisdiction of this residential university is over the entire state of Uttar Pradesh. The campus Vidya Vihar is located off Rae Bareli Road, about 10 km south of the Charbagh railway station in Lucknow. All courses offered by the university are postgraduate, innovative and non-traditional.
M.J.P. Rohilkhand University, established in 1975, has produced a large number of scholars and technocrats in various fields of the arts, science and technology; it has departments of management, engineering, the arts, science, law, education and technology. The university's Institute of Engineering and Technology was established in 1995, and it has a successful job-placement bureau throughout India for graduating students.
Harish-Chandra Research Institute, Allahabad : Harish-Chandra Research Institute (HRI) is a research institution, named after the mathematician Harish-Chandra, and located in Allahabad, Uttar Pradesh. It is an autonomous institute, funded by the Department of Atomic Energy (DAE), Government of India.
The Govind Ballabh Pant Social Science Institute, Allahabad:
Established in 1980, it is one of the network of social science research institutes that the Indian Council of Social Science Research (ICSSR) set up in association with state governments – in this case, the Government of Uttar Pradesh. The Institute undertakes interdisciplinary research in the field of the social sciences.
The Govind Ballabh Pant Social Science Institute became a constituent institute of the University of Allahabad on 14 July 2005, when the University of Allahabad Act, 2005 came into force.
The main areas of research at the Institute include development planning and policy, environment, health and population, human development, rural development and management, culture, power and change, democracy and institutions.
The Institute offers a doctoral programme in the social sciences and an MBA in rural development. In both programmes the degree is awarded by the University of Allahabad.
Primary and secondary education
Most schools in the state are affiliated to Uttar Pradesh Madhyamik Shiksha Parishad (commonly referred to as U.P. board) with English or Hindi as the medium of instruction, while schools affiliated to Central Board of Secondary Education (CBSE) and Council for the Indian School Certificate Examinations (CISCE) with English as medium of instruction are also present.
See also
Education in India
List of educational institutions of Uttar Pradesh
References
External links |
3292213 | https://en.wikipedia.org/wiki/Hazard%20analysis | Hazard analysis | A hazard analysis is used as the first step in a process used to assess risk. The result of a hazard analysis is the identification of different type of hazards. A hazard is a potential condition and exists or not (probability is 1 or 0). It may in single existence or in combination with other hazards (sometimes called events) and conditions become an actual Functional Failure or Accident (Mishap). The way this exactly happens in one particular sequence is called a scenario. This scenario has a probability (between 1 and 0) of occurrence. Often a system has many potential failure scenarios. It also is assigned a classification, based on the worst case severity of the end condition. Risk is the combination of probability and severity. Preliminary risk levels can be provided in the hazard analysis. The validation, more precise prediction (verification) and acceptance of risk is determined in the Risk assessment (analysis). The main goal of both is to provide the best selection of means of controlling or eliminating the risk. The term is used in several engineering specialties, including avionics, chemical process safety, safety engineering, reliability engineering and food safety.
Hazards and risk
A hazard is defined as a "Condition, event, or circumstance that could lead to or contribute to an unplanned or undesirable event." Seldom does a single hazard cause an accident or a functional failure. More often an accident or operational failure occurs as the result of a sequence of causes. A hazard analysis will consider system state, for example operating environment, as well as failures or malfunctions.
While in some cases, safety or reliability risk can be eliminated, in most cases a certain degree of risk must be accepted. In order to quantify expected costs before the fact, the potential consequences and the probability of occurrence must be considered. Assessment of risk is made by combining the severity of consequence with the likelihood of occurrence in a matrix. Risks that fall into the "unacceptable" category (e.g., high severity and high probability) must be mitigated by some means to reduce the level of safety risk.
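As a rough, hedged sketch of how such a severity–likelihood matrix can be evaluated (the category names, scoring scheme and acceptability thresholds below are invented for illustration and are not taken from any particular standard):

# Illustrative risk matrix: combine a severity level with a likelihood level
# to classify the risk. Categories and thresholds are assumptions for this
# sketch, not values from the FAA, MIL-STD-882 or any other standard.
SEVERITY = ["no effect", "minor", "major", "hazardous", "catastrophic"]      # low -> high
LIKELIHOOD = ["extremely improbable", "remote", "occasional", "probable", "frequent"]

def risk_class(severity: str, likelihood: str) -> str:
    score = SEVERITY.index(severity) + LIKELIHOOD.index(likelihood)
    if score >= 6:
        return "unacceptable - must be mitigated"
    if score >= 4:
        return "undesirable - requires explicit acceptance"
    return "acceptable"

print(risk_class("hazardous", "remote"))   # -> "undesirable - requires explicit acceptance"

In a real program, such a table-driven classifier would simply encode whatever matrix the applicable standard or safety plan prescribes.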
IEEE STD-1228-1994 Software Safety Plans prescribes industry best practices for conducting software safety hazard analyses to help ensure safety requirements and attributes are defined and specified for inclusion in software that commands, controls or monitors critical functions. When software is involved in a system, the development and design assurance of that software is often governed by DO-178C. The severity of consequence identified by the hazard analysis establishes the criticality level of the software. Software criticality levels range from A to E, corresponding to severities from Catastrophic to No Safety Effect. Higher levels of rigor are required for level A and B software, and the corresponding functional tasks and work products in the system safety domain are used as objective evidence of meeting safety criteria and requirements.
In 2009 a leading edge commercial standard was promulgated based on decades of proven system safety processes in DoD and NASA. ANSI/GEIA-STD-0010-2009 (Standard Best Practices for System Safety Program Development and Execution) is a demilitarized commercial best practice that uses proven holistic, comprehensive and tailored approaches for hazard prevention, elimination and control. It is centered around the hazard analysis and functional based safety process.
Severity definitions - Safety Related examples
(aviation)
(medical devices)
Likelihood of occurrence examples
(aviation)
(medical devices)
See also
Environmental hazard
DO-178C (Software Considerations in Airborne Systems and Equipment Certification)
DO-254 (similar to DO-178B, but for hardware)
ARP4761 (System safety assessment process)
ARP4754 (System development process)
MIL-STD-882 (Standard practice for system safety)
ANSI/GEIA-STD-0010 (Standard Best Practices for System Safety Program Development and Execution)
Further reading
References
External links
CFR, Title 29-Labor, Part 1910--Occupational Safety and Health Standards, § 1910.119 U.S. OSHA regulations regarding "Process safety management of highly hazardous chemicals" (especially Appendix C).
FAA Order 8040.4 establishes FAA safety risk management policy.
The FAA publishes a System Safety Handbook that provides a good overview of the system safety process used by the agency.
IEEE 1584-2002 Standard which provides guidelines for doing arc flash hazard assessment.
Avionics
Process safety
Safety engineering
Software quality
Occupational safety and health
Reliability engineering |
966244 | https://en.wikipedia.org/wiki/Burroughs%20MCP | Burroughs MCP | The MCP (Master Control Program) is the operating system of the Burroughs small, medium and large systems, including the Unisys Clearpath/MCP systems.
MCP was originally written in 1961 in ESPOL (Executive Systems Problem Oriented Language). In the 1970s, MCP was converted to NEWP which was a better structured, more robust, and more secure form of ESPOL.
The MCP was a leader in many areas, including: the first operating system to manage multiple processors, the first commercial implementation of virtual memory, and the first OS written exclusively in a high-level language.
History
In 1961, the MCP was the first OS written exclusively in a high-level language (HLL). The Burroughs large systems (B5000 and successors) were unique in that they were designed with the expectation that all software, including system software, would be written in an HLL rather than in assembly language – an innovative approach in 1961.
Unlike IBM, which faced hardware competition after the departure of Gene Amdahl, Burroughs software was designed to run only on proprietary hardware. For this reason, Burroughs was free to distribute the source code of all software it sold, including the MCP, which was designed with this openness in mind. For example, upgrading required the user to recompile the system software and apply any needed local patches. At the time, this was common practice, and was necessary as it was not unusual for customers (especially large ones, such as the Federal Reserve) to modify the program to fit their specific needs. As a result, a Burroughs Users Group was formed, which held annual meetings and allowed users to exchange their own extensions to the OS and other parts of the system software suite. Many such extensions have found their way into the base OS code over the years, and are now available to all customers. As such, the MCP could be considered one of the earliest open-source projects.
Burroughs was not the first manufacturer to distribute source code and was a late entry to electronic computing (compared to its traditional rivals NCR, IBM, and Univac). Now that MCP runs on commodity hardware, some elements of the MCP based software suite are no longer made available in source form by Unisys.
The MCP was the first commercial OS to provide virtual memory, which has been supported by the Burroughs large systems architecture since its inception. This scheme is unique in the industry, as it stores and retrieves compiler-defined objects rather than fixed-size memory pages, as a consequence of its overall non-von Neumann and uniformly stack-based architecture.
File system
The MCP provides a file system with hierarchical directory structures. In early MCP implementations, directory nodes were represented by separate files with directory entries, as other systems did. However, since about 1970, MCP internally uses a 'FLAT' directory listing all file paths on a volume. This is because opening files by visiting and opening each directory in a file path was inefficient and for a production environment it was found to be better to keep all files in a single directory, even though they retain the hierarchical naming scheme. Programmatically, this makes no difference. The only difference visible to users is that an entity file can have the same name as a directory. For example, "A/B" and "A/B/C" can both exist; "B" can be both a node in a file and a directory.
Files are stored on named volumes, for example 'this/is/a/filename on myvol', 'myvol' being the volume name. This is device independent, since the disk containing 'myvol' can be moved or copied to different physical disk drives. Disks can also be concatenated so that a single volume can be installed across several drives, as well as mirrored for recoverability of sensitive data. For added flexibility, each program can make volume substitutions, a volume name may be substituted with a primary and secondary alternate name. This is referred to as the process’ FAMILY. For instance, the assignment “FAMILY DISK = USERPACK OTHERWISE SYSPACK” stores files logically designated on volume DISK onto the volume USERPACK and will seek files first on volume USERPACK. If that search has no success, another search for the file is done on volume SYSPACK. DISK is the default volume name if none is specified.
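The following Python fragment is a rough sketch, not MCP code, of how a FAMILY substitution such as “FAMILY DISK = USERPACK OTHERWISE SYSPACK” could resolve a logical file title; the catalog contents and helper names are invented for the example.

# Sketch of FAMILY substitution: files logically on DISK are sought first on
# the primary volume (USERPACK), then on the alternate (SYSPACK).
catalog = {
    ("USERPACK", "REPORTS/JULY"): "data on USERPACK",
    ("SYSPACK", "SYSTEM/LOG"): "data on SYSPACK",
}

def resolve(file_title, volume="DISK", family=("USERPACK", "SYSPACK")):
    # Only the default volume DISK is subject to the family substitution;
    # an explicitly named volume is searched as-is.
    volumes = family if volume == "DISK" else (volume,)
    for vol in volumes:
        if (vol, file_title) in catalog:
            return vol, catalog[(vol, file_title)]
    raise FileNotFoundError(file_title)

print(resolve("REPORTS/JULY"))   # found on the primary volume USERPACK
print(resolve("SYSTEM/LOG"))     # falls back to the alternate volume SYSPACK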
Each file in the system has a set of file attributes. These attributes record all sorts of meta data about a file, most importantly its name and its type (which tells the system how to handle a file, like the more limited four-character file type code on the Macintosh). Other attributes have the file's record size (if fixed for commercial applications), the block size (in multiples of records that tells the MCP how many records to read and write in a single physical IO) and an area size in multiples of blocks, which gives the size of disk areas to be allocated as the file expands.
The file type indicates if the file is character data, or source code written in particular languages, binary data, or code files.
Files are protected by the usual security access mechanisms such as public or private, or a file may have a guard file where the owner can specify complex security rules.
Another security mechanism is that code files can only be created by trusted compilers. Malicious programmers cannot create a program and call it a compiler – a program could only be converted to be a compiler by an operator with sufficient privileges with the 'mc' make compiler operator command.
The MCP implements a journaling file system, providing fault tolerance in case of disk failure, loss of power, etc. It is not possible to corrupt the file system (except by the operating system or other trusted system software with direct access to its lower layers).
The file system is case-insensitive and not case-preserving unless quotes are added around the name, in which case it is case-sensitive and case-preserving.
Process management
MCP processes are called "Jobs" and "Tasks." A Job contains one or more tasks. Tasks within a job can run sequentially or in parallel. Logic can be implemented at the Job level, typically in the MCP's job control language WFL, to control the flow of a job. Once all tasks in a job are complete, the job itself is completed.
An MCP process goes through a life cycle from the time it enters the system until it leaves. The initial state for a Job is 'Queued'; the Job resides for a period of time in one of several user-defined job queues. The next state is 'Scheduled' as the Job moves from a queue into memory. Tasks within a job do not wait in a queue; instead, they go directly to the 'Scheduled' state when initiated. Once a Job or Task is started, it can transition between 'Active', 'Waiting' and 'Scheduled' as it progresses. Once a Job or Task completes, it moves to the 'Completed' state.
Running processes are those that are currently using a processor resource and are marked as 'running'. Processes that are ready to run but are waiting for a free processor are placed in the ready queue. Processes may be assigned a 'Declared' or 'Visible' priority, 50 by default, which for user processes may range from 0 to 99; system processes may be assigned higher values. This numerical priority is secondary to an overall priority, which is based on the task type. Processes that are directly part of the operating system, called Independent Runners, have the highest priority regardless of numeric priority value. Next come processes using an MCP lock, then Message Control Systems such as CANDE, then Discontinued processes, then Work Flow Language jobs, and finally user processes. At a lower level, a Fine priority is intended to elevate the priority of tasks that do not use their full processor slice, allowing an I/O-bound task to get processor time ahead of a processor-bound task of the same declared priority.
Processes that are waiting on other resources, such as a file read, wait on the EVENT data structure. Thus all processes waiting on a single resource wait on a single event. When the resource becomes available, the event is caused, which wakes up all the processes waiting on it. Processes may wait on multiple events for any one of them to happen, including a time out. Events are fully user programmable – that is, users can write systems that use the generalized event system provided by the MCP.
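The waiting pattern described here is loosely analogous to a condition-variable broadcast in POSIX threads. The sketch below is only an illustration of the wait/cause idea in C, not MCP code and not the MCP EVENT interface.

#include <pthread.h>
#include <stdbool.h>

/* A crude stand-in for an MCP EVENT: every process waiting on the resource
   blocks on the same condition variable, and "causing" the event wakes
   all of them at once. */
static pthread_mutex_t lock  = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  event = PTHREAD_COND_INITIALIZER;
static bool resource_ready   = false;

void wait_on_event(void)                 /* called by each waiting process */
{
    pthread_mutex_lock(&lock);
    while (!resource_ready)
        pthread_cond_wait(&event, &lock);
    pthread_mutex_unlock(&lock);
}

void cause_event(void)                   /* called when the resource arrives */
{
    pthread_mutex_lock(&lock);
    resource_ready = true;
    pthread_cond_broadcast(&event);      /* wake every waiter */
    pthread_mutex_unlock(&lock);
}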
Processes that have terminated are marked as completed.
Operationally, the status of all tasks in the system is displayed to the operator. All running and ready processes are displayed as 'Active' tasks (since the system implements preemptive multitasking, the change from ready to running and back is so quick that distinguishing ready and running tasks is pointless because they will all get a slice of the processor within a second). All active tasks can be displayed with the 'A' command.
Terminated tasks are displayed as completed tasks with the reason for termination, EOT for normal 'end of task', and DSed with a reason for a process failure. All processes are assigned a mix number, and operators can use this number to identify a process to control. One such command is the DS command (which stands for either Delete from Schedule, DiScontinue, or Deep Six, after the influence of Navy personnel on early computer projects, depending on who you talk to). Tasks terminated by the operator are listed in the complete entries as O-DS.
Tasks can also terminate due to program faults, marked as F-DS or P-DS, for faults such as invalid index, numeric overflow, etc. Completed entries can be listed by the operator with the 'C' command.
Tasks waiting on a resource are listed under the waiting entries and the reason for waiting. All waiting tasks may be listed with the 'W' command. The reason for waiting is also listed and more information about a task may be seen with the 'Y' command. It may be that a task is waiting for operator input, which is sent to a task via the accept 'AX' command (note that operator input is very different from user input, which would be input from a network device with a GUI interface).
Tasks waiting on user input or file reads would not normally be listed as waiting entries for operator attention. Another reason for a task to be waiting is waiting on a file. When a process opens a file, and the file is not present, the task is placed in the waiting entries, noting that it is waiting on a certain file. An operator (or the user that owns the process) has the opportunity either to copy the file to the expected place, or to redirect the task to read the file from another place, or the file might even be created by an independent process that hasn't yet completed.
If the resource cannot be provided by the operator, the operator can DS the task as a last resort. This is different from other systems, which automatically terminate a task when a resource such as a file is not available. The MCP provides this level of operator recoverability of tasks. Other systems force programmers to add code to check for the presence of files before accessing them, and thus extra code must be written in every case to provide recoverability, or process synchronization. Such code may be written in an MCP program when it is not desirable to have a task wait, but because of the operator-level recoverability, this is not forced and therefore makes programming much simpler.
In addition to the ability to dynamically remap file (or database) requests to other files (or databases) before or during program execution, several mechanisms are available to allow programmers to detect and recover from errors. One of them, the 'ON' statement, has been around for many years. Specific faults (e.g., divide by zero) can be listed, or the catch-all 'anyfault' can be used. The statement or block following the 'ON' statement is recognized by the compiler as fault-handling code. During execution, if a recoverable fault occurs in the scope of the 'ON' statement, the stack is cut back and control is transferred to the statement following it.
One problem with the handling logic behind the ON statement was that it would only be invoked for program faults, not for program terminations having other causes. Over time, the need for guaranteed handling of abnormal terminations grew. In particular, a mechanism was needed to allow programs to invoke plug-ins written by customers or third parties without any risk should the plug-in behave badly. In addition to general plug-in mechanisms, the new form of dynamic library linkage (Connection Libraries) allows programs to import and export functions and data, and hence one program runs code supplied by another.
To accomplish such enhanced protection, a newer mechanism was introduced in the mid-1990s: the 'try' statement. In a misguided attempt at compatibility, it was named after the then-proposed C++ construct of the same name. Because the syntax and behavior of the two differ to such a large extent, choosing the same name has only led to confusion and misunderstanding.
Syntactically, 'try' statements look like 'if' statements: 'try', followed by a statement or block, followed by 'else' and another statement or block. Additional 'else' clauses may follow the first. During execution, if any recoverable termination occurs in the code following the 'try' clause, the stack is cut back if required, and control branches to the code following the first 'else'. In addition, attributes are set to allow the program to determine what happened and where (including the specific line number).
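For readers more familiar with C, the control flow (cut the stack back, then branch to the recovery code) is loosely reminiscent of setjmp/longjmp. The sketch below is only that loose C analogy, not MCP syntax; in the MCP the 'try' construct is handled by the compiler and operating system rather than by a library call.

#include <setjmp.h>
#include <stdio.h>

static jmp_buf recovery;   /* records where to resume after a "termination" */

static void risky_work(void)
{
    /* Something goes irrecoverably wrong here; simulate the system cutting
       the stack back to the enclosing 'try' and branching to its 'else'. */
    longjmp(recovery, 1);
}

int main(void)
{
    if (setjmp(recovery) == 0) {
        risky_work();                                   /* body after 'try' */
        puts("completed normally");
    } else {
        puts("recovered after abnormal termination");   /* the 'else' body */
    }
    return 0;
}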
Most events that would result in task termination are recoverable. This includes stack overflow, array access out-of-bounds, integer over/under flow, etc. Operator (or user) DS is not recoverable except by privileged tasks using an UNSAFE form of try.
MCP thus provides a very fault-tolerant environment, not the crash-and-burn core-dump of other systems.
As with file attributes, tasks also have attributes, such as the task priority (which is assigned at compile time or execution time, or can be changed while the task is running), processor time, wait time, status, etc. These task attributes can be accessed programmatically, just as file attributes can. The parent task is available programmatically as a task attribute of type task. For example, 'myself.initiator.name' gives the name of the process that initiated the current process.
GETSPACE and FORGETSPACE are the two main procedures handling memory allocation and deallocation. Memory needs to be allocated at process initiation and whenever a block is entered that uses arrays, files, etc. GETSPACE and FORGETSPACE not only handle memory space; they also allocate or deallocate the disk space where non-memory-resident data may be overlaid. Memory may be SAVE (i.e., memory resident), OVERLAYABLE (i.e., virtual memory) or STICKY (memory resident, but movable). They are called upon, for example, by HARDWAREINTERRUPT when a process addresses an uninitialized array, or by FILEOPEN.
HARDWAREINTERRUPT handles hardware interrupts and may call upon GETSPACE, IO_FINISH or the like.
BLOCKEXIT is called upon by a task exiting a block. BLOCKEXIT may in turn call FILECLOSE, FORGETSPACE or the like while cleaning up and releasing resources declared and used within that block.
J_EDGAR_HOOVER is the main security guardian of the system, called upon at process start, file open, user log on, etc.
GEORGE is the procedure that decides which process is the next one to receive CPU resources and is thus one of the few processes that uses the MoveStack instruction.
A task goes through various states starting with NASCENT. At DELIVERY the event BIRTH is caused and the task's state changes to ALIVE. When PROCESSKILL is called, the state changes to DISEASED. When DEATH is caused, the task is put into the queue structure called the MORGUE, after which all remaining resources are freed to the system by a process called PROCESSKILL.
While the task is ALIVE, MCP functions are run on top of that particular process, thus CPU resources are automatically charged to the task causing the MCP overhead. Also, much of the MCP work is being performed with that particular stack's security rights. Only before BIRTH and after DEATH does the MCP need to be operating out of some other stack. If none is available, the system maintains an idle stack.
Software components and libraries
MCP libraries provide a way of sharing data and code between processes. The article on Burroughs large systems looks at the way dependent processes could be asynchronously run so that many processes could share common data (with the mechanisms to provide synchronized update). Such a family of related processes had to be written as a single program unit, processing procedures at higher lex levels as the asynchronous processes, which could still access global variables and other variables at lower lex levels.
Libraries completely inverted this scenario with the following advantages:
Libraries and independent processes are written as independent programming units
Libraries completely controlled access to shared resources (data encapsulation and hiding)
Libraries and clients could be written in different languages
Process switching was not required to safely access data
So clean and radical was the library mechanism that much system software underwent major rewrites, resulting in better-structured systems and performance boosts.
Libraries were introduced to MCP systems in the early 1980s, having been developed by Roy Guck and others at Burroughs. They are very much like C. A. R. Hoare's monitors and provide the opportunity for controlled mutual exclusion and synchronization between client processes, using MCP EVENTs and the Dahm locking technique. Libraries offer procedural entry-points to the client, which are checked for a compatible interface (all parameters and return types of imported procedures checked) before the client is linked to the library. The library and its client may be written in different languages. The advantage is that all synchronization is provided in the library and client code does not need to worry about this level of programming at all. This results in robust code since clients can't undermine the synchronization code in the library. (Some would call this a 'Trusted Computing Initiative'.)
MCP libraries are more sophisticated forms of the libraries found on other systems, such as DLLs. MCP libraries can be 'shared by all', 'shared by rununit' or 'private'. The private case is closest to libraries on other systems – for each client a separate copy of the library is invoked and there is no data sharing between processes.
Shared by all is more interesting. When a client starts up, it can run for a while until it requires the services in the library. Upon first reference of a library entry-point, the linkage is initiated. If an instance of the library is already running, the client is then linked to that instance of the library. All clients share the same instance.
Shared by rununit is a sharing mechanism in between these two sharing schemes. It was designed specifically for COBOL, where a rununit is defined as the original initiating client program and all the libraries it has linked to. Each rununit gets one instance of the library and different rununits get a different instance. This is the only dynamic implementation of COBOL rununits.
If this was the first invocation of the library, the library would run its main program (outer block in an ALGOL program) to initialize its global environment. Once initialization was complete, it would execute a freeze, at which point all exported entry points would be made available to clients. At this point, the library's stack was said to be frozen since nothing more would be run on this stack until the library became unfrozen, in which case clean-up and termination code would be run. When a client calls a routine in a library, that routine runs on top of the client stack, storing its locals and temporary variables there. This allows many clients to be running the same routine at the same time, being synchronized by the library routine, which accesses the data in the global environment of the library stack.
Freeze could also be in three forms – temporary, permanent and controlled. Temporary meant that once the client count dropped to zero, the library would be unfrozen and terminated. Permanent meant that the library remained available for further clients even if the client count dropped to zero – permanent libraries could be unfrozen by an operator with a THAW command. A controlled freeze meant that the library actually kept running, so that it could execute monitoring functions and perform data initialization and cleanup functions for each linking client.
Libraries can also be accessed 'by title' and 'by function'. In 'by title' the client specifies the file name of the library. 'By function' is an indirect method where a client just specifies the function name of the library, for example 'system_support', and the actual location of the library is found in a table previously set up by an operator with 'SL' (system library) commands, for example 'SL system_support = *system/library/support'. The MCP's fault-tolerant attitude also works here – if a client tries to access a library that is not present, the client is put in the waiting entries and the library can be made present, or the request redirected.
Libraries can also be updated on the fly, all that needs to be done is to 'SL' the new version. Running clients will continue to use the old version until they terminate and new clients will be directed to the new version.
Function libraries also implement a very important security feature, linkage classes. All normal libraries have a linkage class of zero. Libraries used by the MCP or other privileged system modules may not be usable from normal programs. They are accessed by function and forced into linkage class one. A client in linkage class zero cannot link to linkage-class-one entry points. A library with linkage class one that needs to offer entry points to normal programs can do so if it is designated as 'trusted'. It can offer selected entry points in linkage class zero.
The entire database system is implemented with libraries providing very efficient and tailored access to databases shared between many clients. The same goes for all networking functionality and system intrinsics.
In the mid-1990s a new type of library was made available: Connection Libraries. These are programs in their own right that can execute independently as well as import and export data and functions to other programs in arrays of structure blocks. For example, the networking component of the operating system is available as a connection library, allowing other programs to use its services by exporting and importing functions. Upon linkage, each client gets a dedicated structure block to keep state information in. A program that uses the network might import a network-write function and export a network-read function. Thus, if a program opens a network connection (e.g., using TCP), the networking component can directly call the program's function to consume data as it arrives, without first copying the data to a buffer and performing a context switch. Likewise, the program can write data to the network by directly calling the network-write function.
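The import/export arrangement can be sketched in C with plain function pointers. This is a schematic illustration only; the names are hypothetical and the real mechanism is the MCP's connection-library linkage, not a C interface.

#include <stddef.h>

/* Hypothetical sketch of a connection-library style linkage: each side hands
   the other a function to call directly, so incoming data is consumed
   without an intermediate buffer copy or context switch. */
typedef void (*read_fn)(const char *data, size_t len);   /* exported by client */
typedef int  (*write_fn)(const char *data, size_t len);  /* exported by network */

struct linkage {
    read_fn  deliver_to_client;   /* network calls this when data arrives */
    write_fn send_to_network;     /* client calls this to transmit */
};

/* Inside the networking component, on arrival of data: */
void on_data_arrival(struct linkage *l, const char *data, size_t len)
{
    l->deliver_to_client(data, len);      /* call straight into the client */
}

/* Inside the client, to transmit: */
int client_send(struct linkage *l, const char *data, size_t len)
{
    return l->send_to_network(data, len);
}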
Connection Libraries allow a significant degree of control over linkages. Each side of a linkage can optionally approve a linkage and can sever the linkage as desired. State can be easily maintained per linkage as well as globally.
Port files
Another technique for inter-process communication (IPC) is port files. They are like Unix pipes, except that they are generalized to be multiway and bidirectional. Since these are an order of magnitude slower than other IPC techniques such as libraries, it is better to use other techniques where the IPC is between different processes on the same machine.
The most advantageous use of port files is therefore for distributed IPC. Port files were introduced with BNA (Burroughs Network Architecture), but with the advent of standard networking technologies such as OSI and TCP/IP, port files can be used with these networks as well.
A server listening for incoming connections declares a port file (a file with the KIND attribute equal to PORT). Each connection that is made from a client creates a subfile with an index, so each port file represents multiple connections to different clients around the network.
A server process receives client requests from anywhere on the network by issuing a read on the port file (subfile = 0 to read from any subfile). It issues a response to the client that issued the request by writing to the particular subfile from which the request was read.
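The request/response pattern over a port file can be sketched as follows. The function names are hypothetical placeholders used only to keep the sketch self-contained; an actual MCP program would use the port file through ordinary file declarations and attributes in a language such as ALGOL or COBOL.

#include <stdio.h>
#include <string.h>

struct request { int subfile; char data[64]; };

/* Stand-in for reading subfile 0 ("any client"); here it fakes a single
   request arriving on subfile 3 so the sketch runs on its own. */
static int port_read_any(struct request *req)
{
    req->subfile = 3;
    strcpy(req->data, "balance?");
    return 0;
}

/* Stand-in for writing to one particular subfile (one client connection). */
static void port_write(int subfile, const char *reply)
{
    printf("to subfile %d: %s\n", subfile, reply);
}

int main(void)
{
    struct request req;
    if (port_read_any(&req) == 0)          /* accept a request from any client */
        port_write(req.subfile, "OK");     /* reply to that same client */
    return 0;
}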
Operating environment
The MCP also provides a sophisticated yet simple operator environment. For large installations, many operators might be required to make physical resources, such as printers, available (loading paper, toner cartridges, etc.). Low-end environments for small offices or single users may require an operator-free environment (especially the laptop implementation).
Large systems have dedicated operations terminals called ODTs (Operator Display Terminals), usually kept in a secure environment. For small systems, machines can be controlled from any terminal (provided the terminal and user have sufficient privileges) using the MARC program (Menu Assisted Resource Control). Operator commands can also be used by users familiar with them.
Operator commands are mostly two letters (as with Unix), and some are just one letter. This means that the operator interface must be learned, but it is very efficient for experienced operators who run a large mainframe system from day to day. Commands are case insensitive.
Tasks are entered in the program 'mix' and identified by mix numbers, as are libraries. To execute a program, operators can use the 'EX' or 'RUN' command followed by the file name of the program. ODTs are run typically with ADM (Automatic Display Mode), which is a tailorable display of system status usually set up to display the active, waiting, and completed mix entries, as well as system messages to the operator for notifications or situations requiring operator action.
Complete listings of these displays are given by the 'A' (active), 'W' (waiting), 'C' (completed), and 'MSG' (message) commands.
If a task becomes waiting on some operator action, the operator can find out what the task needs by entering its mix number followed by the 'Y' command. (Note the object-oriented style of commands, selecting the object first, followed by the command.) For example, '3456Y'.
An operator can force a task into the waiting entries with the stop command '3456ST' and make it active again with OK: '3456OK'. The OK command can also be used when an operator has made a resource available for a task, although more often than not the MCP will detect that resources have become available and CAUSE the EVENT that processes have been waiting on, without further operator intervention. To pass textual information from an operator to a program, the accept command '3456AX MORE INFO' can be used. Programs can pass information to operators using the DISPLAY mechanism, which causes DISPLAY messages to be added to the MSG display.
As well as tasks and processes, operators also have control over files. Files can be listed using the FILE command, copied using COPY, removed using REMOVE, and renamed.
The operating environment of the MCP is powerful, yet simple and usually only requires a fraction of the number of operators of other systems.
An important part of the operations environment is the high-level Work Flow Language.
Logging
All actions in the system are logged, for example all messages displayed to the operator and all operator actions. All significant program actions are optionally logged in a system log and a program log, for example BOJ for the beginning of a WFL job, BOT for the beginning of a task within a WFL job, and EOT and EOJ for the end of tasks and jobs. In addition, all file and database opens and closes can be logged. Logging many events contributes to an apparent slowness of the MCP operating environment compared to systems like Unix, because everything is logged with a forced physical write to the program log after every record; systems like Unix avoid this, even though they too keep many things in their system logs.
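In C terms, a forced physical write per record amounts to something like the following. This is only an illustration of the cost being described, not MCP code.

#include <string.h>
#include <unistd.h>

/* Append one log record and force it onto the physical disk before
   returning; paying this synchronous cost on every record is what makes
   per-record logging noticeably slower than buffered logging. */
void log_record(int log_fd, const char *record)
{
    (void)write(log_fd, record, strlen(record));
    fsync(log_fd);                 /* wait until the record is on disk */
}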
The logs can be used for forensics to find out why programs or systems may have failed, or for detecting attempts to compromise system security. System logs are automatically closed after a system-settable period and a new one opened. System logs contain a huge amount of information, which can be filtered and analyzed with programs such as LOGANALYZER.
The DUMPANALYZER analyzes memory dumps that were originally written to tape. As all compilers add LINEINFO to the code files, the DUMPANALYZER is able to pinpoint exactly which source statement was being executed at the time of error.
A normal program dump, where just one program was dumped, also contains information on source-code sequence numbers and variable names.
The two analyzers are major diagnostic tools for all kinds of purposes.
Innovations
Beyond the many technical innovations in the MCP design, the Burroughs Large Systems had many management innovations now being used by the internet community at large. The system software was shipped to customers inclusive of source code and all the editing and compilation tools needed to generate new versions of the MCP. Many customers developed niche expertise on the inner workings of the MCP, and customers often sent in 'patches' (fragments of source code with sequence numbers) as suggestions for new enhanced features or fault corrections (FTRs – field trouble reports). Many of the suggested patches were adopted by the systems developers and integrated into the next version of the MCP release. Including a community of voluntary, self-professed experts in mainstream technical work is now widely practised and is the essence of Open Innovation. This management innovation of community development dated back to the 1970s.
Compilers
Unisys MCP has had several generations of compilers in its history supporting a wide variety of programming languages, including:
ALGOL, including DCALGOL, BDMSALGOL and DMALGOL.
C
COBOL, including COBOL74 and COBOL85
Fortran77
NEWP
Pascal
RPG
Other products include:
Binder
Network Definition Language
Work Flow Language
Compilers previously existed for ESPOL, COBOL(68), Fortran(66), APL, and PL/I.
Assembler
There is no assembler on the Unisys MCP operating system, with the exception of the medium-systems family.
Summary
The MCP was the first OS developed exclusively in a high-level language. Over its 50-year history, it has had many firsts in a commercial implementation, including virtual memory, symmetric multiprocessing, and a high-level job control language (WFL). It has long had many facilities that are only now appearing in other widespread operating systems, and together with the Burroughs large systems architecture, the MCP provides a very secure, high performance, multitasking and transaction processing environment.
References
External links
MCP 19.0 Documentation – Free access but may require copyright acknowledgement
Burroughs mainframe computers
Proprietary operating systems
Time-sharing operating systems
Unisys operating systems
1961 software
Computer-related introductions in 1961
OSEK
OSEK (Offene Systeme und deren Schnittstellen für die Elektronik in Kraftfahrzeugen; English: "Open Systems and their Interfaces for the Electronics in Motor Vehicles") is a standards body that has produced specifications for an embedded operating system, a communications stack, and a network management protocol for automotive embedded systems. It has produced related specifications, namely AUTOSAR. OSEK was designed to provide a reliable standard software architecture for the various electronic control units (ECUs) throughout a car.
OSEK was founded in 1993 by a consortium of German automotive companies (BMW, Robert Bosch GmbH, DaimlerChrysler, Opel, Siemens, and Volkswagen Group) and the University of Karlsruhe. In 1994, the French car manufacturers Renault and PSA Peugeot Citroën, which had a similar project called VDX (Vehicle Distributed eXecutive), joined the consortium. The official name therefore became OSEK/VDX, and OSEK is a registered trademark of Continental Automotive GmbH (until 2007: Siemens AG).
Standards
OSEK is an open standard, published by a consortium founded by the automobile industry. Some parts of OSEK are standardized in ISO 17356.
ISO 17356-1:2005 Road vehicles—Open interface for embedded automotive applications—Part 1: General structure and terms, definitions and abbreviated terms
ISO 17356-2:2005 Road vehicles—Open interface for embedded automotive applications—Part 2: OSEK/VDX specifications for binding OS, COM and NM
ISO 17356-3:2005 Road vehicles—Open interface for embedded automotive applications—Part 3: OSEK/VDX Operating System (OS)
ISO 17356-4:2005 Road vehicles—Open interface for embedded automotive applications—Part 4: OSEK/VDX Communication (COM)
ISO 17356-5:2006 Road vehicles—Open interface for embedded automotive applications—Part 5: OSEK/VDX Network Management (NM)
ISO 17356-6:2006 Road vehicles—Open interface for embedded automotive applications—Part 6: OSEK/VDX Implementation Language (OIL)
Before ISO
OSEK VDX Portal
OSEK/VDX Operating System (OS): "Event-triggered" real-time kernel
OSEK/VDX Communication (COM): Application-level communication protocol
OSEK/VDX Network Management (NM): Network management
OSEK/VDX OSEK Implementation Language (OIL): Offline application description and configuration language
OSEK/VDX OSEK Run Time Interface (ORTI): Debugging interface
OSEK/VDX Binding Specification: Binding document
MODISTARC
OSEK/VDX Conformance Testing Methodology
OSEK/VDX Operating System Test Plan
OSEK/VDX Operating System Test Procedure
OSEK/VDX Communication Test Plan
OSEK/VDX Communication Test Procedure
OSEK/VDX Communication Test Suites
OSEK/VDX Network Management Test Plan
OSEK/VDX Network Management Test Procedure
OSEK/VDX direct Network Management Test Suites
OSEK/VDX indirect Network Management Test Suites
OSEK Functioning
The OSEK standard specifies interfaces to multitasking functions—generic I/O and peripheral access—and thus remains architecture dependent. OSEK is expected to run on microcontrollers without a memory management unit (MMU), which is favored for safety-critical systems such as cars; therefore, the features of an OSEK implementation are usually configured at compile time. The number of application tasks, stacks, mutexes, etc. is statically configured; it is not possible to create more at run time. OSEK recognizes two types of tasks/threads/compliance levels: basic tasks and enhanced (extended) tasks. Basic tasks never block; they "run to completion" (coroutine). Enhanced tasks can sleep and block on event objects. The events can be triggered by other tasks (basic and enhanced) or by interrupt routines. Only static priorities are allowed for tasks. First-in, first-out (FIFO) scheduling is used for tasks with equal priority. Deadlocks and priority inversion are prevented by the priority ceiling protocol (i.e., there is no priority inheritance).
The specification uses ISO/ANSI-C-like syntax; however, the implementation language of the system services is not specified. An Application Binary Interface (ABI) is also not specified.
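A minimal sketch of how these primitives are typically used is shown below. It assumes that the two tasks, the event mask, and the resource have already been declared statically in the implementation's OIL configuration, and that the vendor-generated header is named 'os.h'; those names are placeholders, not part of the standard.

#include "os.h"   /* vendor/generator-supplied OSEK header; name varies */

DeclareTask(Sensor);
DeclareTask(Control);
DeclareEvent(NewSample);
DeclareResource(SharedBuf);

/* Basic task: never blocks, runs to completion, then terminates. */
TASK(Sensor)
{
    GetResource(SharedBuf);           /* priority-ceiling protected section */
    /* ... store a new sample in the shared buffer ... */
    ReleaseResource(SharedBuf);
    SetEvent(Control, NewSample);     /* signal the extended task */
    TerminateTask();
}

/* Extended task: may block waiting for events. */
TASK(Control)
{
    for (;;) {
        WaitEvent(NewSample);         /* block until the event is set */
        ClearEvent(NewSample);
        GetResource(SharedBuf);
        /* ... read the sample and compute an actuator command ... */
        ReleaseResource(SharedBuf);
    }
}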
OSEK-OS scheduling can be configured as:
Preemptive, a task can always be preempted by means of a higher priority task
Non-preemptive, a task can only be preempted at predefined compile-time points (cooperative scheduling)
Mixed mode scheduling
Groups of tasks (cooperative)
State of the art
AUTOSAR
Currently the AUTOSAR consortium reuses the OSEK specifications as part of the Classic Platform.
The operating system is a backwards-compatible superset of OSEK OS which also covers the functionality of OSEKtime, and the communication module is derived from OSEK COM. OSEKtime specifies a standard for optional time-triggered real-time operating systems. If used, OSEKtime-triggered callbacks run with a higher priority than OSEK tasks.
Research
There is also a limited amount of active research, e.g. in the area of systems engineering and OSEK/VDX RTOS, or in relation to the compatibility between OSEK and AUTOSAR.
Quality
In a 48-page report from 2003 by the Software Engineering Institute (SEI) at Carnegie Mellon University (CMU), the specifications were examined and possible weaknesses in the areas of alarm and event mechanisms were identified with possible solutions. The potential of OSEK was also mentioned.
Implementations
Note: A limited number of implementations and vendors exist. Most products are sold and licensed commercially; others are freely available under an open-source license for a limited number of controllers. See also: Comparison of real-time operating systems.
Open-source derivatives
Note: Open-source developments are often very limited in scope (targets, conformance classes, characteristics) and are not verified against the specifications unless stated otherwise.
ArcCore AUTOSAR OS by Arctic Core (now part of Vector Informatik)
License: Dual GPL/commercial
Firmware de la CIAA (former FreeOSEK), specifically Firmware v1 (hosted on GitHub)
OSEK by Chalandi Amine hosted on GitHub
Lego Mindstorms implementations:
ev3OSEK (last release hosted on GitHub: May 2016)
nxtOSEK (last release hosted on SourceForge (nxtOSEK/JSP): January 2013)
TOPPERS Project (Toyohashi OPen Platform for Embedded Real-time Systems)
Release: ATK1 (2008)
Release: ATK2 (2013)
Targets: m68k, sh1, sh2, sh3, h8, arm 4, m32r, MicroBlaze, tms320c54x, xstormy16, mips3, Nios II, v850, rh850
License: MIT or TOPPERS License
Trampoline by IRCCyN (Research Institute in Communications and Cybernetics of Nantes)
Targets: Infineon C166, PowerPC
License: LGPL
Defunct, not active, unknown status
mKernel for Microchip PIC18F4550 (Former https://sourceforge.net/projects/mkernel/ - not accessible or available as of October 2021)
openOSEK (no files, hosted on SourceForge, last update: 2013)
PicOS18 etc. - formerly available and hosted at picos18.com
Trioztech OSEK was a commercial implementation
Further reading
Berkeley EE249 on OSEK (presentation in PDF format)
Christian Michel Sendis. OSEK/RTOS & OSEKturbo Introduction (PDF, March 2009, NXP Semiconductors)
Joseph Lemieux: Programming in the OSEK/VDX Environment. McGraw-Hill Professional, 2001, ISBN 1578200814
See also
AUTOSAR
COMASSO association (AUTOSAR BSW consortium)
Comparison of real-time operating systems
Controller Area Network (CAN)
Embedded system
IEC 61508 is a standard for programmable electronic safety-related systems.
ISO 26262 Road vehicle safety norm
Safety standards
References
External links
AUTOSAR Homepage
Original OSEK-VDX website - not accessible anymore
Operating system APIs
Embedded operating systems
Automotive software
Standards of Germany
Open Vulnerability and Assessment Language
Open Vulnerability and Assessment Language (OVAL) is an international, information security, community standard to promote open and publicly available security content, and to standardize the transfer of this information across the entire spectrum of security tools and services. OVAL includes a language used to encode system details, and an assortment of content repositories held throughout the community. The language standardizes the three main steps of the assessment process:
representing configuration information of systems for testing;
analyzing the system for the presence of the specified machine state (vulnerability, configuration, patch state, etc.); and
reporting the results of this assessment.
The repositories are collections of publicly available and open content that utilize the language.
The OVAL community has developed three schemas written in Extensible Markup Language (XML) to serve as the framework and vocabulary of the OVAL Language. These schemas correspond to the three steps of the assessment process: an OVAL System Characteristics schema for representing system information, an OVAL Definition schema for expressing a specific machine state, and an OVAL Results schema for reporting the results of an assessment.
Content written in the OVAL Language is located in one of the many repositories found within the community. One such repository, known as the OVAL Repository, is hosted by The MITRE Corporation. It is the central meeting place for the OVAL Community to discuss, analyze, store, and disseminate OVAL Definitions. Each definition in the OVAL Repository determines whether a specified software vulnerability, configuration issue, program, or patch is present on a system.
The information security community contributes to the development of OVAL by participating in the creation of the OVAL Language on the OVAL Developers Forum and by writing definitions for the OVAL Repository through the OVAL Community Forum. An OVAL Board consisting of representatives from a broad spectrum of industry, academia, and government organizations from around the world oversees and approves the OVAL Language and monitors the posting of the definitions hosted on the OVAL Web site. This means that the OVAL, which is funded by US-CERT at the U.S. Department of Homeland Security for the benefit of the community, reflects the insights and combined expertise of the broadest possible collection of security and system administration professionals worldwide.
OVAL is used by the Security Content Automation Protocol (SCAP).
OVAL Language
The OVAL Language standardizes the three main steps of the assessment process: representing configuration information of systems for testing; analyzing the system for the presence of the specified machine state (vulnerability, configuration, patch state, etc.); and reporting the results of this assessment.
OVAL Interpreter
The OVAL Interpreter is a freely available reference implementation created to show how data can be collected from a computer for testing based on a set of OVAL Definitions and then evaluated to determine the results of each definition.
The OVAL Interpreter demonstrates the usability of OVAL Definitions, and can be used by definition writers to ensure correct syntax and adherence to the OVAL Language during the development of draft definitions. It is not a fully functional scanning tool and has a simplistic user interface, but running the OVAL Interpreter provides a list of result values for each evaluated definition.
OVAL Repository
The OVAL Repository is the central meeting place for the OVAL Community to discuss, analyze, store, and disseminate OVAL Definitions. Other repositories in the community also host OVAL content, which can include OVAL System Characteristics files and OVAL Results files as well as definitions. The OVAL Repository contains all community-developed OVAL Vulnerability, Compliance, Inventory, and Patch Definitions for supported operating systems. Definitions are free to use and implement in information security products and services.
The OVAL Repository Top Contributor Award Program grants awards on a quarterly basis to the top contributors to the OVAL Repository. The Repository is a community effort, and contributions of new content and modifications are instrumental in its success. The awards serve as public recognition of an organization’s support of the OVAL Repository and as an incentive to others to contribute.
Organizations receiving the award will also receive an OVAL Repository Top Contributor logo indicating the quarter of the award (e.g., 1st Quarter 2007) that may be used as they see fit. Awards are granted to organizations that have made a significant contribution of new or modified content each quarter.
OVAL Board
The OVAL Board is an advisory body, which provides valuable input on OVAL to the Moderator (currently MITRE). While it is important to have organizational support for OVAL, it is the individuals who sit on the OVAL Board and their input and activity that truly make a difference. The Board’s primary responsibilities are to work with the Moderator and the Community to define OVAL, to provide input into OVAL’s strategic direction, and to advocate OVAL in the Community.
See also
MITRE The MITRE Corporation
Common Vulnerabilities and Exposures (index of standardized names for vulnerabilities and other security issues)
XCCDF - eXtensible Configuration Checklist Description Format
Security Content Automation Protocol uses OVAL
External links
OVAL web site
Gideon Technologies (OVAL Board Member) Corporate Web Site
www.itsecdb.com Portal for OVAL definitions from several sources
oval.secpod.com SecPod OVAL Definitions Professional Feed
Computer security procedures
Mitre Corporation
Archos Jukebox series
The Archos Jukebox is a series of Archos portable audio players from 2000 through 2002.
Portable Audio
Jukebox 5000 and 6000
The Archos Jukebox 6000 was one of Archos' very first players. Containing a 6 GB 2.5" hard drive, this was one of the first of its kind. This player is only MP3 compatible, and was bundled with Musicmatch Jukebox to allow users to rip their music collection onto the jukebox. Users could also copy files straight onto the device without any additional software, which allows the Jukebox 6000 to work on any operating system.
Another model, the Archos Jukebox 5000, was also released. The only difference was the 5 GB 2.5" hard drive; hence the "5000" moniker.
This was one of the first hard-disk-based portable audio players, and at the time it was relatively expensive. The robust and chunky design somewhat hindered its portability, but due to the large disk capacity the Jukebox proved to be popular. It is also possible to upgrade the hard drive to a larger-capacity, higher-RPM drive using a standard 2.5" IDE drive. It has also been reported that using a CompactFlash-to-IDE adapter and a CompactFlash card allows the use of solid-state storage, which has no moving parts and is less susceptible to damage from drops or sudden movements.
The player came in metallic silver and metallic blue, and was known for the large blue bumpers on its corners. The device has a 1-bit charcell LCD screen with two lights above it showing power and HDD activity. Like Archos' other products, it can also be connected to a hi-fi via its line-out source, which was ideal for portable DJs.
The Jukebox 6000, and its successor the Jukebox Studio (see below), used standard USB 1.1 technology, transferring data at a maximum rate of 1 MB per second. These models transfer data at a comparatively slow rate compared with succeeding Archos devices, which use the USB 2.0 standard.
This device was released Saturday, December 9, 2000, and discontinued as of Friday, May 16, 2003. It weighs 350 g.
The Jukebox is historically notable for shipping with a user interface and operating system so unfriendly and bug-ridden as to inspire Björn Stenberg and other programmers to develop a superior, free and open-source replacement operating system. This project became Rockbox.
Jukebox Studio
The Archos Jukebox Studio succeeded the Jukebox 6000, the main difference between the two models being the larger hard drive sizes offered. Internally, the two models were the same. The Jukebox Studio was available as a 10 GB, 15 GB, or 20 GB model. (The 15 GB version was short-lived.)
The Jukebox Studio was released Thursday, October 4, 2001, and discontinued in 2003. It weighs 350 g.
Jukebox Recorder
Archos' Jukebox Recorder was similar to the Player/Studio models, but featured a 112x64 bitmap LCD and recording capabilities. This model is sometimes referred to as the Recorder v1 to differentiate it from the later v2 version, which looks quite different. Some confusion exists regarding the speed of the Recorder's USB port: an earlier version of the Recorder contained a USB 1.1 port, while a later version provided a USB 2.0 interface (source: rockbox.org). The two can be somewhat differentiated because earlier Recorder models came with the smaller hard drive capacities of 6, 10, or 15 GB, while the later version came with 15 or 20 GB hard drives. Owners of a Jukebox Recorder 15 thus may or may not have USB 2.0, a significant concern for prospective buyers; one can tell whether it uses USB 2.0 by looking for the line "USB 2.0 Hard Disk" along the bottom of the device. Although discontinued, the Jukebox Recorder with USB 2.0 interface remains in some demand because of the enhanced speed of the USB 2.0 connection (in contrast to USB 1.1), the capability of the device to be flashed with the free and open-source Rockbox firmware, the device's recording feature, its easy-to-replace AA-sized NiMH batteries, and its use of easily upgradeable 2.5" standard laptop hard drives.
The Jukebox Recorder 20 was released around January 2002. It weighs 350 g.
Portable Video
Jukebox Multimedia
The Jukebox Multimedia is Archos's first multimedia player and is considered the first ever portable media player (PMP). It enabled users to record straight from a camera attachment, and also featured an audio player, an image viewer and a video player, with the necessary cables supplied in the box.
The player also has the ability to record audio from a line-in source (cables supplied) straight into MP3 format. The player features a 10 GB Hard Disk Drive (Jukebox 10) or 20 GB (Jukebox 20) and uses DivX MPEG4 format for video recording and playback.
The player uses USB 1.0 technology, though add-ons for USB 2.0 and FireWire are available to give quicker transfers of files and data, and it is recognized as a USB mass storage device.
This player was released on Friday July 5, 2002, and weighs 290 g.
See also
Archos Gmini series
Portable media player
Rockbox
References
External links
Open Source Firmware for Jukebox Multimedia
Step-by-step directions for upgrading the harddrive in a Jukebox 5000
Digital audio players
Portable media players
Jukebox-style media players
Odra (computer)
Odra was a line of computers manufactured in Wrocław, Poland. The name comes from the Odra river that flows through the city of Wrocław.
Overview
Production started in 1959–1960. Models 1001, 1002, 1003, 1013, 1103, and 1204 were of original Polish construction. Models 1304 and 1305 were functional counterparts of the ICL 1905 and 1906 due to a software agreement. The last model was the 1325, based on two ICL models.
The computers were built at the Elwro manufacturing plant, which was closed in 1989.
Odra 1002 was capable of only 100–400 operations per second.
In 1962, Witold Podgórski, an employee of Elwro, managed to create a computer game on a prototype of Odra 1003; it was an adaptation of a variant of Nim, as depicted in the film Last Year at Marienbad. The computer could play a perfect game and was guaranteed to win. The game was never distributed outside of the Elwro company, but its versions appeared elsewhere. It was probably the first Polish computer game in history.
The operating system used by the Odra 1204 is called SODA. It was designed to work on a small computer without magnetic storage and supports simultaneous loading and execution of programs.
An Odra 1204 computer was used by a team in Leningrad developing an ALGOL 68 compiler in 1976. The Odra 1204 ran the syntax analysis, while code generation ran on an IBM System/360.
Up until 30 April 2010 there was still one Odra 1305 working at the railway station in Wrocław Brochów. The system was shut down at 22:00 CEST and replaced with a contemporary computer system.
The Museum of the History of Computers and Information Technology (Muzeum Historii Komputerów i Informatyki) in Katowice, Poland started a project to recommission an Odra 1305 in 2017.
Literature
See also
History of computing in Poland
History of computer hardware in Soviet Bloc countries
References
Early computers
Science and technology in Poland
IEEE 1355
IEEE Standard 1355-1995, IEC 14575, or ISO 14575 is a data communications standard for Heterogeneous Interconnect (HIC).
IEC 14575 is a low-cost, low latency, scalable serial interconnection system, originally intended for communication between large numbers of inexpensive computers.
IEC 14575 lacks many of the complexities of other data networks. The standard defined several different types of transmission media (including wires and optic fiber), to address different applications.
Since the high-level network logic is compatible, inexpensive electronic adapters are possible. IEEE 1355 is often used in scientific laboratories. Promoters include large laboratories, such as CERN, and scientific agencies.
For example, the ESA advocates a derivative standard called SpaceWire.
Goals
The protocol was designed for a simple, low cost switched network made of point-to-point links. This network sends variable length data packets reliably at high speed. It routes the packets using wormhole routing. Unlike Token Ring or other types of local area networks (LANs) with comparable specifications, IEEE 1355 scales beyond a thousand nodes without requiring higher transmission speeds. The network is designed to carry traffic from other types of networks, notably Internet Protocol and Asynchronous Transfer Mode (ATM), but does not depend on other protocols for data transfers or switching. In this, it resembles Multiprotocol Label Switching (MPLS).
IEEE 1355 had goals like Futurebus and its derivatives Scalable Coherent Interface (SCI), and InfiniBand. The packet routing system of IEEE 1355 is also similar to VPLS, and uses a packet labeling scheme similar to MPLS.
IEEE 1355 achieves its design goals with relatively simple digital electronics and very little software. This simplicity is valued by many engineers and scientists.
Paul Walker (see links) said that when implemented in an FPGA, the standard takes about a third of the hardware resources of a UART (a standard serial port) and gives one hundred times the data transmission capacity, while implementing a full switching network and being easier to program.
Historically, IEEE 1355 derived from the asynchronous serial networks developed for the on-chip serial data interfaces of the Transputer model T9000. The Transputer was a microprocessor developed to inexpensively implement parallel computation. IEEE 1355 resulted from an attempt to preserve the Transputer's unusually simple data network. The data-strobe encoding scheme used by these links makes them self-clocking, able to adapt automatically to different speeds. It was patented by Inmos under U.K. patent number 9011700.3, claim 16 (DS-Link bit-level encoding), and in 1991 under US patent 5341371, claim 16. The patent expired in 2011.
Use
IEEE 1355 inspired SpaceWire. It is sometimes used for digital data connections between scientific instruments, controllers and recording systems. IEEE 1355 is used in scientific instrumentation because it is easy to program and it manages most events by itself without complex real-time software.
IEEE 1355 includes a definition for cheap, fast, short-distance network media, intended as the internal protocols for electronics, including network switching and routing equipment. It also includes medium, and long-distance network protocols, intended for local area networks and wide area networks.
IEEE 1355 is designed for point-to-point use. It could therefore take the place of the most common use of Ethernet, if it used equivalent signaling technologies (such as Low voltage differential signaling).
IEEE 1355 could work well for consumer digital appliances. The protocol is simpler than Universal Serial Bus (USB), FireWire, Peripheral Component Interconnect (PCI) and other consumer protocols. This simplicity can reduce equipment expense and enhance reliability. IEEE 1355 does not define any message-level transactions, so these would have to be defined in auxiliary standards.
A 1024-node testbed called Macramé was constructed in Europe in 1997. Researchers measuring the performance and reliability of the Macramé testbed provided useful input to the working group which established the standard.
What it is
The work of the Institute of Electrical and Electronics Engineers was sponsored by the Bus Architecture Standards Committee as part of the Open Microprocessor Systems Initiative.
The chair of the group was Colin Whitby-Strevens, co-chair was Roland Marbot, and editor was Andrew Cofler. The standard was approved 21 September 1995 as IEEE Standard for Heterogeneous InterConnect (HIC) (Low-Cost, Low-Latency Scalable Serial Interconnect for Parallel System Construction) and published as IEEE Std 1355-1995.
A trade association was formed in October 1999 and maintained a web site until 2004.
The family of standards use similar logic and behavior, but operate at a wide range of speeds over several types of media.
The authors of the standard say that no single standard addresses all price and performance points for a network. Therefore, the standard includes slices (their words) for single-ended (cheap), differential (reliable) and high speed (fast) electrical interfaces, as well as fiber optic interfaces. Long-distance or fast interfaces are designed so that there is no net power transfer through the cable.
Transmission speeds range from 10 megabits per second to 1 gigabit per second.
The network's normal data consists of 8-bit bytes sent with flow control. This makes it compatible with other common transmission media, including standard telecommunications links.
The maximum length of the different data transmission media range from one meter to 3 kilometers. The 3 km standard is the fastest. The others are cheaper.
The connectors are defined so that if a plug fits a jack, the connection is supposed to work. Cables have the same type of plug at both ends, so that each standard has only one type of cable. "Extenders" are defined as two-ended jacks that connect two standard cables.
Interface electronics perform most of the packet-handling, routing, housekeeping and protocol management. Software is not needed for these tasks. When there is an error, the two ends of a link exchange an interval of silence or a reset, and then restart the protocol as if from power-up.
A switching node reads the first few bytes of a packet as an address, and then forwards the rest of the packet to the next link without reading or changing it. This is called "wormhole switching" in an annex to the standard. Wormhole switching requires no software to implement a switching fabric. Simple hardware logic can arrange fail-overs to redundant links.
Each link defines a full-duplex (continuous bidirectional transmission and reception) point-to-point connection between two communicating pieces of electronics. Every transmission path has a flow control protocol, so that when a receiver begins to get too much data, it can turn down the flow. Every transmission path's electronics can send link control data separately from normal data. When a link is idle, it transmits NULL characters. This maintains synchronization, finishes any remaining transmission quickly, and tests the link.
Some Spacewire users are experimenting with half-duplex versions.
The general scheme is that half-duplex uses one transmission channel rather than two. In space, this is useful because the weight of wires is half as much. Controllers would reverse the link after sending an end-of-packet character. The scheme is most effective in the self-clocking electrical systems, such as Spacewire. In the high speed optical slices, half-duplex throughput would be limited by the synchronization time of the phase locked loops used to recover the bit clock.
Definition
This description is a brief outline. The standard defines more details, such as the connector dimensions, noise margins, and attenuation budgets.
IEEE 1355 is defined in layers and slices.
The layers are network features that are similar in different media and signal codings. Slices identify a vertical slice of compatible layers.
The lowest layer defines signals. The highest defines packets. Combinations of packets, the application or transaction layer, are outside the standard.
A slice, an interoperable implementation, is defined by a convenient descriptive code, SC-TM-dd, where:
SC is the signal coding system. Valid values are DS (data strobe encoding), TS (three of six), and HS (high speed).
TM is the transmission medium. Valid values are SE (single-ended electrical), DE (differential electrical), and FO (fiber optic)
dd is the speed in hundreds of megabaud (MBd). A baud corresponds to one change of the signal; a transmission coding may carry several bits per baud, or may need several baud per bit.
Defined slices include:
DS-SE-02, cheap, useful inside electronic equipment (200 Mbit/s, <1 meter maximum length).
DS-DE-02, noise-resistant electrical connections between equipment (200 Mbit/s, <10 meters).
TS-FO-02, good, useful for long-distance connections (200 Mbit/s, <300 meters).
HS-SE-10, short very fast connections between equipment (1 Gbit/s, <8 meters).
HS-FO-10, long very fast connections (1 Gbit/s, <3000 meters).
Spacewire is very similar to DS-DE-02, except it uses a microminiature 9-pin "D" connector (lower-weight), and low voltage differential signaling. It also defines some higher-level standard message formats, routing methods, and connector and wire materials that work reliably in vacuum and severe vibration.
Layer 0: The signal layer
In all slices, each link can continuously transmit in both directions ("full duplex"). Each link has two transmission channels, one for each direction.
In a link's cable, the channels have a "half twist" so that input and output always go to the same pins of the connector on both ends of the cable. This makes the cables "promiscuous", that is, each end of any cable will plug into any jack on a piece of equipment.
Each end of a link's cable must be clearly marked with the type of link: for example "IEEE 1355 DS-DE Link Cable".
Layer 1: The Character Layer
Every slice defines 256 data characters. This is enough to represent 8 bits per character. These are called "normal data" or "N-chars."
Every slice defines a number of special link control characters, sometimes called "L-chars." The coding guarantees that they cannot be confused with N-chars.
Each slice includes a flow control link-control character, or FCC, as well as L-chars for NULL (no data), ESCAPE, end of packet, and exceptional end of packet. Some slices add a few more to start-up the link, diagnose problems, etc.
Every slice has error detection defined at the character layer, usually using parity. The parity is usually distributed over several characters.
A flow control character gives a node permission to transmit a small number of normal data characters. The number depends on the slice, with faster slices sending more characters per FCC. Building flow control in at a low level makes the link far more reliable and removes much of the need to retransmit packets.
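As an illustration only (not part of the standard), the following Python sketch models this credit-based flow control; the credit of eight characters per FCC matches the DS slices described below, and all names in the sketch are invented:
# Illustrative model of credit-based flow control: a receiver grants credit by
# sending an FCC, and the transmitter may then send that many N-chars.
CREDIT_PER_FCC = 8  # DS slices grant 8 normal data characters per FCC

class Transmitter:
    def __init__(self):
        self.credit = 0   # N-chars we are currently allowed to send
        self.queue = []   # N-chars waiting to be sent

    def receive_fcc(self):
        self.credit += CREDIT_PER_FCC

    def send_ready(self):
        """Return the characters that may be sent right now."""
        n = min(self.credit, len(self.queue))
        out, self.queue = self.queue[:n], self.queue[n:]
        self.credit -= n
        return out

tx = Transmitter()
tx.queue = list(b"HELLO, WORMHOLE")   # 15 data characters queued
tx.receive_fcc()                      # receiver grants 8 characters
print(bytes(tx.send_ready()))         # b'HELLO, W'
tx.receive_fcc()                      # another FCC arrives
print(bytes(tx.send_ready()))         # b'ORMHOLE'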
Layer 2: The Exchange layer
Once a link starts, it continuously exchanges characters. These are NULLs if there is no data to exchange. This tests the link, and assures that the parity bits are sent quickly to finish messages.
Each slice has its own start-up sequence. For example, DS-SE and DS-DE are silent, then start sending as soon as they are commanded to start. A received character is a command to start.
On error detection, the two ends of the link normally exchange a very brief silence (e.g. a few microseconds for DS-SE) or a reset command, and then try to reset and restore the link as if from power-up.
Layer 3: The common packet layer
A packet is a sequence of normal data with a specific order and format, ended by an "end of packet" character. Links do not interleave data from several packets. The first few characters of a packet describe its destination. Hardware can read those bytes to route the packet. Hardware does not need to store the packet, or perform any other calculations on it in order to copy it and route it.
One standard way to route packets is wormhole source routing, sometimes called "subtractive path routing", in which the first data byte always tells the router which of its outputs should carry the packet. The router then strips off the first byte, exposing the next byte for use by the next router.
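A minimal sketch of one such routing hop (illustrative; the packet layout and function name are assumptions, not definitions from the standard):
# Wormhole source routing ("subtractive path routing"): the first byte selects
# the output port, and the router strips it before forwarding the remainder.
def route(packet: bytes, num_ports: int):
    """Return (output_port, forwarded_packet) for one routing hop."""
    if not packet:
        raise ValueError("empty packet")
    port = packet[0]
    if port >= num_ports:
        raise ValueError("no such output port")
    return port, packet[1:]

# A packet addressed through port 2, then port 0, followed by its payload.
packet = bytes([2, 0]) + b"payload"
port, packet = route(packet, num_ports=4)   # first router forwards on port 2
port, packet = route(packet, num_ports=4)   # second router forwards on port 0
print(port, packet)                         # 0 b'payload'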
Layer 4: The Transaction Layer
IEEE 1355 acknowledges that there must be sequences of packets to perform useful work. It does not define any of these sequences.
Slice: DS-SE-02
DS-SE stands for "Data and Strobe, Single-ended Electrical." This is the least expensive electrical standard. It sends data at up to 200 megabits per second over up to 1 meter, which makes it useful inside an instrument for reliable low-pin-count communications.
A connection has two channels, one per direction. Each channel consists of two wires carrying strobe and data. The strobe line changes state whenever the data line starts a new bit with the same value as the previous bit. This scheme makes the links self-clocking, able to adapt automatically to different speeds.
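The encoding rule can be sketched as follows (an illustrative model that assumes both lines start low; the receiver can recover a clock because exactly one of the two lines changes per bit):
def ds_encode(bits):
    """Encode a bit sequence as (data, strobe) line states, one pair per bit."""
    data_line, strobe_line, out = 0, 0, []
    for b in bits:
        if b == data_line:
            strobe_line ^= 1   # data line does not change, so the strobe toggles
        data_line = b          # the data line simply carries the bit value
        out.append((data_line, strobe_line))
    return out

def ds_decode(pairs):
    """Recover the bit sequence; each (data, strobe) pair carries one bit."""
    return [d for d, _ in pairs]

bits = [1, 0, 0, 1, 1, 1, 0]
pairs = ds_encode(bits)
assert ds_decode(pairs) == bits
prev = (0, 0)
for d, s in pairs:                              # exactly one line changes per bit,
    assert (d ^ prev[0]) ^ (s ^ prev[1]) == 1   # so data XOR strobe yields a clock
    prev = (d, s)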
Data characters start with an odd parity bit, followed by a zero bit that marks the character as a normal data character, followed by eight data bits.
Link control characters start with an odd parity bit, followed by a one bit that marks the character as a link control character, followed by two bits: 00 is the flow control character FCC, 01 is a normal end of packet EOP, 10 is an exceptional end of packet EEOP, and 11 is an escape character ESC. A NULL is the sequence "ESC FCC".
An FCC gives permission to send eight (8) normal data characters.
Each line can have two states: above 2.0 V and below 0.8 V (single-ended CMOS or TTL logic level signals).
The nominal impedance is either 50 or 100 ohms, for 3.3 V and 5 V systems respectively. Rise and fall times should be <100 ns. Capacitance should be <300 pF for 100 MBd, and <4 pF for 200 MBd.
No connectors are defined because DS-SE is designed for use within electronic equipment.
Slice: DS-DE-02
DS-DE stands for "Data and Strobe, Differential Electrical." This is the electrical standard that resists electrical noise the best. It sends data at up to 200 megabits per second, for up to 10 meters, which is useful for connecting instruments. The cable is thick, and the standard connectors are both heavy and expensive.
Each cable has eight wires carrying data. These eight wires are divided into two channels, one for each direction. Each channel consists of four wires, two twisted pairs. One twisted pair carries differential strobe, and the other carries differential data. The encoding for the character layer and above is otherwise like the DS-SE definition.
Since the cable has ten wires, and eight are used for data, a twisted pair is left over. The black/white pair optionally carries 5 V power and return.
The driver rise time should be between 0.5 and 2 ns. The differential voltage may range from 0.8 V to 1.4 V, with 1.0 V typical (differential PECL logic level signals).
The differential impedance is 95 ± 10 ohms. The common mode output voltage is 2.5–4 V. The receiver's input impedance should be 100 ohms, within 10%. The receiver input's common mode voltage must be between -1 and 7 V. The receiver's sensitivity should be at least 200 mV.
The standard cable has ten wires. The connectors are IEC-61076-4-107. Plug A (pin 1 listed first, pin 2 second): a: brown/blue, b: red/green, c: white/black, d: orange/yellow, e: violet/gray. Plug B (pin 2 listed first, pin 1 second): e: brown/blue, d: red/green, c: black/white, b: orange/yellow, a: violet/gray. Note the implementation of the "half twist", routing inputs and outputs to the same pins on each plug.
Pin 1C (black) may carry 5 volts, while pin 2C (white) may carry the return. If the power supply is present, it must have a self-healing fuse, and may have ground fault protection. If it is absent, the pins should include a 1 MΩ resistor to ground to leak away static voltages.
Slice: TS-FO-02
TS-FO stands for "Three of Six, Fiber Optical." This is a fiber optic standard designed for affordable plastic fibers operating in the near infrared. It sends 200 megabits/second about 300 meters.
The wavelength should be between 760 and 900 nanometers, which is in the near infrared. The operating speed should be at most 250 MBd with at most 100 parts per million variation. The dynamic range should be about 12 decibels.
The cable for this link uses two 62.5 micrometer-diameter multimode optic fibers. The fiber's maximum attenuation should be 4 decibels per kilometer at an infrared wavelength of 850 nanometers. The standard connector on each end is an MU connector-duplex. Ferrule 2 is always "in", while ferrule 1 is "out". The centerlines should be on 14 mm centers, and the connector should be 13.9 mm maximum. The cable has a "half twist" to make it promiscuous.
The coding is designed so single-bit errors in reception do not generate double-bit errors after encoding, and to avoid the use of CRC, which can double the size of small packets.
The line code "3/6" sends a stream of six bits, of which three bits are always set. There are twenty possible characters. Sixteen are used to send four (4) bits, two are unused, and two are used to construct link control characters. These are shown with the first bit sent starting on the left.
If the previous symbol ends with a 0, Control is 010101 and Control* is 101010. If the previous symbol ends with a 1, Control is 101010, and Control* is 010101. NULL is Control Control*. FCC is Control Control. EOP_1 is Control Checksum (see below for def.). EOP_2 is Checksum Control. INIT is Control Control* Control* Control*.
Data characters are made of two 4-bit symbols. Bits 0..3 are transmitted in the first symbol, 4..7 in the second.
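A short sketch illustrates the symbol space (the enumeration below confirms the twenty three-of-six symbols; the actual assignment of symbols to 4-bit values is defined by a table in the standard, so the mapping used here is an arbitrary placeholder):
from itertools import combinations

# Enumerate the 3-of-6 symbols: six-bit words with exactly three bits set.
symbols = sorted(sum(1 << i for i in bits) for bits in combinations(range(6), 3))
assert len(symbols) == 20   # 16 data values + 2 control symbols + 2 unused

# Placeholder mapping of the sixteen data nibbles to symbols (the real table
# in the standard differs); 010101 and 101010 are reserved as Control symbols.
data_symbols = [s for s in symbols if s not in (0b010101, 0b101010)][:16]
encode = {nibble: sym for nibble, sym in enumerate(data_symbols)}

def encode_byte(b):
    """A data character is two 4-bit symbols: bits 0..3 first, then bits 4..7."""
    return [encode[b & 0xF], encode[b >> 4]]

assert all(bin(s).count("1") == 3 for s in encode.values())
print([format(s, "06b") for s in encode_byte(0x5A)])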
This link transmits NULLs when idle. It starts by sending INIT characters. After receiving them for 125 µs, it switches to sending NULLs. After it has sent NULLs for 125 µs, it sends a single INIT. When a link has both sent and received a single INIT, it may send an FCC and start receiving data.
A flow control character (FCC) authorizes sending sixteen (16) normal data characters.
Receiving two consecutive INITs, or a long run of zeros or ones, indicates disconnection.
Data errors are detected by a longitudinal parity: all the unencoded 4-bit words are exclusive-ored and then the result is sent as a 4-bit checksum nibble translated into three-of-six. This is the "checksum" discussed above.
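A sketch of this longitudinal parity (illustrative; the function name and nibble ordering are assumptions):
def ts_fo_checksum(nibbles):
    """Exclusive-or all unencoded 4-bit words to form the 4-bit checksum nibble."""
    check = 0
    for n in nibbles:
        check ^= n & 0xF
    return check

# The checksum nibble is then translated into three-of-six and carried by the
# EOP_1 or EOP_2 end-of-packet sequence.
print(hex(ts_fo_checksum([0x3, 0xA, 0x7, 0xF, 0x0])))   # 0x1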
Slice: HS-SE-10
HS-SE stands for "High speed, Single-ended Electrical." This is the fastest electrical slice. It sends a gigabit per second, but the 8 meter range limits its usage to instrument clusters. However, the modulation and link control features of this standard are also used by the wide-area fiber optic protocols.
A link cable consists of two 2.85 mm diameter 50 Ω coaxial cables. The impedance of the whole transmission line shall be 50 ohms ±10%. The connectors shall follow IEC 1076-4-107. The coaxial cables do a "half twist" so that pin B is always "in" and pin A is always "out".
The electrical link is single-ended. For 3.3 V operation, low is 1.25 V and high is 2 V. For 5 V operation, low is 2.1 V and high is 2.9 V. The signaling speed is 100 MBd to 1 GBd. The maximum rise time is 300 picoseconds, and the minimum is 100 picoseconds.
The HS link's 8B/12B code is a balanced paired disparity code, so there is no net power transfer. It arranges this by keeping a running disparity, a count of the average number of ones and zeros. It uses the running disparity to selectively invert characters. An inverted character is marked with a set invert bit. 8B/12B also guarantees a clock transition on each character.
8B/12B first sends an odd parity bit, followed by 8 bits (least-significant bit first), followed by an inversion bit, followed by a 1 (which is the start bit), and a 0 which is the stop bit.
When the disparity of a character is zero (that is, it has the same number of ones and zeroes, and therefore will not transfer power), it can be transmitted either inverted or noninverted with no effect on the running disparity. Link control characters have a disparity of zero, and are inverted. This defines 126 possible link characters. Every other character is a normal data character.
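The selective-inversion idea can be sketched as follows (a simplified illustration of the running-disparity bookkeeping only; the parity, start, stop, and invert bits of the full 12-bit character described above are not modeled):
def disparity(bits):
    """Number of ones minus number of zeros in a character."""
    return 2 * sum(bits) - len(bits)

def send_character(bits, running_disparity):
    """Invert the character when that keeps the running disparity closer to zero."""
    d = disparity(bits)
    invert = (d > 0 and running_disparity > 0) or (d < 0 and running_disparity < 0)
    if invert:
        bits = [b ^ 1 for b in bits]
        d = -d
    return bits, invert, running_disparity + d

rd = 0
for byte in (0xFF, 0xFE, 0x80):
    bits = [(byte >> i) & 1 for i in range(8)]   # least-significant bit first
    bits, inverted, rd = send_character(bits, rd)
    print(hex(byte), inverted, rd)   # inversion is flagged by the invert bit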
The link characters are:
0:IDLE
5:START_REQ (start request)
1:START_ACK (start acknowledge)
2:STOP_REQ (stop request)
3:STOP_ACK (stop acknowledge)
4:STOP_NACK (stop negative acknowledge)
125:FCC (flow control character)
6:RESET
When a link starts, each side has a bit "CAL" that is zero before the receiver is calibrated to the link. When CAL is zero, the receiver throws away any data it receives.
During a unidirectional start up, side A sends IDLE. When side B is calibrated, it begins to send IDLE to A. When A is calibrated, it sends START_REQ. B responds with START_ACK back to A. A then sends START_REQ to B, B responds with START_ACK, and at that point, either A or B can send a flow control character and start to get data.
In a bidirectional start-up, both sides start sending IDLE. When side A is calibrated, it sends START_REQ to side B. Side B sends START_ACK, and then A can send an FCC to start getting data. Side B does exactly the same.
If the other side is not ready, it does not respond with a START_ACK. After 5 ms, side A tries again. After 50 ms, side A gives up, turns off the power, stops, and reports an error. This behavior is to prevent eye injuries from a high-powered disconnected optical fiber end.
A flow control character (FCC) authorizes the receiver to send thirty-two (32) data characters.
A reset character is echoed, and then causes a unidirectional start-up.
If a receiver loses calibration, it can either send a reset command, or simply hold its transmitter low, causing a calibration failure in the other link.
The link is only shut down if both nodes request a shutdown. Side A sends STOP_REQ, side B responds with STOP_ACK if it is ready to shut down, or STOP_NACK if it is not ready. Side B must perform the same sequence.
Slice: HS-FO-10
"HS-FO" stands for "High Speed Fiber Optical." This is the fastest slice, and has the longest range, as well. It sends a gigabit/second up to 3000 meters.
The character and higher levels are just like HS-SE-10.
The cable is very similar to the other optical cable, TS-FO-02, except for the mandatory label and the connector, which should be IEC-1754-6. However, in older cables, it is often exactly the same as TS-FO-02, except for the label. HS-FO-10 and TS-FO-02 will not interoperate.
This cable can have 62.5 micrometer multimode cable, 50 micrometer multimode cable, or 9 micrometer single-mode cable. These vary in expense and the distances they permit: 100 meters, 1000 meters, and 3000 meters respectively.
For multimode fiber, the transmitter launch power is generally -12 decibels. The wavelength is 760-900 nanometers (near infrared). On the receiver, the dynamic range is 10 decibels, and the sensitivity is -21 decibels with a bit error rate of one bit in 10^12 bits.
For single-mode fiber, the transmitter launch power is generally -12 decibels. The wavelength is 1250-1340 nanometers (farther into the infrared). On the receiver, the dynamic range is 12 decibels, and the sensitivity is -20 decibels with a bit error rate of one bit in 10^12 bits.
References
Further reading
External links
official specification; requires payment
CERN's public copy of official IEEE standard 1355-1995
The European Space Agency's site for SpaceWire, a derived standard.
Computer buses
Networking standards
Serial buses |
954189 | https://en.wikipedia.org/wiki/NeoOffice | NeoOffice | NeoOffice is an office suite for the macOS operating system developed by Planamesa Inc. It is a commercial fork of the free and open source OpenOffice.org that implements most of the features of OpenOffice.org, including a word processor, spreadsheet, presentation program and graphics program, and adds some features not present in the macOS versions of LibreOffice and Apache OpenOffice. Current versions are based on LibreOffice 4.4.
History
Versions of OpenOffice.org for Mac prior to 3.0 did not have a native Mac OS X interface; they required that either X11.app or XDarwin be installed.
NeoOffice was the first OpenOffice.org fork to offer a native Mac OS X experience, with easier installation, better integration into the Mac OS X interface (pull-down menus at the top of the screen and familiar keyboard shortcuts, for example), use of Mac OS X's fonts and printing services without additional configuration, and integration with the Mac OS X clipboard and drag-and-drop functions. Subsequently, both LibreOffice and Apache OpenOffice followed NeoOffice's lead and implemented similarly native Mac OS X interfaces.
NeoOffice began as a project to investigate methods of creating a native port of OpenOffice.org to Mac OS X. The project now called NeoOffice was originally dubbed "NeoOffice/J", reflecting its use of Mac OS X's Java integration to enable a native application. A related project was NeoOffice/C, which was a simultaneous effort to develop a version using Apple's Cocoa APIs. But NeoOffice/C proved very difficult to implement and the application was highly unstable, so the project was set aside in favor of the more promising NeoOffice/J. The "/J" suffix was dropped with version 1.2, since there was no longer another variety of NeoOffice from which to distinguish it. Many of these releases were preceded by a version that only Early Access Members could download; these versions were released about a month before the official release date.
All versions from NeoOffice 3.1.1 to NeoOffice 2015 were based on OpenOffice.org 3.1.1, though latter versions included stability fixes from LibreOffice and Apache OpenOffice. NeoOffice 2017 and later versions are fully based on LibreOffice.
In 2013, NeoOffice moved to a commercial distribution model via the Mac App Store. As of 2016, the source code is still available for free, but the software package is only available with the purchase of a commercial license.
Supported file formats
Listed here, in the order of appearance in the Save As dialogue box, are the file formats supported for saving documents in NeoOffice 3.1.2. In cases where NeoOffice is used to edit a document originally in a Microsoft format, NeoOffice can save to that format without loss of formatting.
Word processor application
OpenDocument Text (.odt) *
OpenDocument Text Template (.ott)
NeoOffice 1.0 Text Document (.sxw)
NeoOffice 1.0 Text Document Template (.stw)
Microsoft Word 97/2000/XP (.doc)
Microsoft Word 95 (.doc)
Microsoft Word 6.0 (.doc)
Rich Text Format (.rtf)
StarWriter 5.0 (.sdw)
StarWriter 5.0 Template (.vor)
StarWriter 4.0 (.sdw)
StarWriter 4.0 Template (.vor)
StarWriter 3.0 (.sdw)
StarWriter 3.0 Template (.vor)
Text (.txt)
Text Encoded (.txt)
HTML Document (NeoOffice Writer) (.html)
AportisDoc (Palm) (.pdb)
DocBook (.xml)
Microsoft Word 2007 XML (.docx)
Microsoft Word 2003 XML (.xml)
OpenDocument Text (Flat XML) (.fodt)
Pocket Word (.psw)
Unified Office Format text (.uot)
Spreadsheet application
OpenDocument Spreadsheet (.ods) *
OpenDocument Spreadsheet Template (.ots)
NeoOffice 1.0 Spreadsheet (.sxc)
NeoOffice 1.0 Spreadsheet Template (.stc)
Data Interchange Format (.dif)
dBase (.dbf)
Microsoft Excel 97/2000/XP (.xls)
Microsoft Excel 97/2000/XP Template (.xlt)
Microsoft Excel 95 (.xls)
Microsoft Excel 95 Template (.xlt)
Microsoft Excel 5.0 (.xls)
Microsoft Excel 5.0 Template (.xlt)
StarCalc 5.0 (.sdc)
StarCalc 5.0 Template (.vor)
StarCalc 4.0 (.sdc)
StarCalc 4.0 Template (.vor)
StarCalc 3.0 (.sdc)
StarCalc 3.0 Template (.vor)
SYLK (.slk)
Text CSV (.csv)
HTML Document (NeoOffice Calc) (.html)
Microsoft Excel 2007 XML (.xlsx)
Microsoft Excel 2003 XML (.xml)
OpenDocument Spreadsheet (Flat XML) (.fods)
Pocket Excel (.pxl)
Unified Office Format spreadsheet (.uos)
Presentation application
OpenDocument Presentation (.odp) *
OpenDocument Presentation Template (.otp)
NeoOffice 1.0 Presentation (.sxi)
NeoOffice 1.0 Presentation Template (.sti)
Microsoft PowerPoint 97/2000/XP (.ppt)
Microsoft PowerPoint 97/2000/XP Template (.pot)
NeoOffice 1.0 Drawing (NeoOffice Impress) (.sxd)
StarDraw 5.0 (NeoOffice Impress) (.sda)
StarDraw 5.0 (NeoOffice Impress) (.sdd)
StarImpress 5.0 (.sdd)
StarImpress 5.0 Template (.vor)
StarImpress 4.0 (.sdd)
StarImpress 4.0 Template (.vor)
Microsoft PowerPoint 2007 XML (.pptx)
OpenDocument Presentation (Flat XML) (.fodp)
Unified Office Format presentation (.uop)
OpenDocument Drawing (Impress) (.odg)
Graphics application
OpenDocument Drawing (.odg) *
OpenDocument Drawing Template (.otg)
NeoOffice 1.0 Drawing (.sxd)
NeoOffice 1.0 Drawing Template (.std)
StarDraw 5.0 (.sda)
StarDraw 5.0 Template (.vor)
StarDraw 3.0 (.sdd)
StarDraw 3.0 Template (.vor)
OpenDocument Drawing (Flat XML) (.fodg)
Database application
OpenDocument Database (.odb) *
Formula application
OpenDocument Formula (.odf) *
NeoOffice 1.0 Formula (.sxm)
StarMath 5.0 (.smf)
MathML 1.01 (.mml)
(* Default format for saving.)
By default, NeoOffice loads and saves files in the OpenDocument file format, although this can be changed by the user. The OpenDocument file format is an XML file format standardized by OASIS (Organisation for the Advancement of Structured Information Standards).
Licensing
Sun first released OpenOffice.org under both the LGPL and SISSL, later under only the LGPL, with a requirement for copyright assignment for any contributions to the main code base, which allowed Sun to create proprietary versions of the software (notably StarOffice). NeoOffice chose not to assign its code to Sun, which prevented Sun from using NeoOffice code in official OpenOffice.org versions.
There were initially some attempts to resolve the licensing differences and foster more direct cooperation and code-sharing between the NeoOffice and OpenOffice.org developers. However, the NeoOffice developers said that they preferred to work separately from OpenOffice.org because "coordination requires a significant amount of time." The OpenOffice.org developers said that "a proposal to work together has been made, and NeoOffice developers refused". The NeoOffice developers subsequently expressed support for LibreOffice and the launch of The Document Foundation.
Though NeoOffice is sold commercially, the source code is still available for free under the terms of the GPL.
See also
List of word processors
Comparison of word processors
References
External links
NeoWiki
MacOS-only free software
Office suites for macOS
Open-source office suites
OpenOffice
Software forks
Office suites |
14865118 | https://en.wikipedia.org/wiki/Packet%20delay%20variation | Packet delay variation | In computer networking, packet delay variation (PDV) is the difference in end-to-end one-way delay between selected packets in a flow with any lost packets being ignored. The effect is sometimes referred to as packet jitter, although the definition is an imprecise fit.
Terminology
The term PDV is defined in ITU-T Recommendation Y.1540, Internet protocol data communication service - IP packet transfer and availability performance parameters, section 6.2.
In computer networking, although not in electronics, usage of the term jitter may cause confusion. From RFC 3393 (section 1.1):
The variation in packet delay is sometimes called "jitter". This term, however, causes confusion because it is used in different ways by different groups of people. ... In this document we will avoid the term "jitter" whenever possible and stick to delay variation which is more precise.
Measurement of packet delay variation
The means of packet selection for measurement is not specified in RFC 3393, but could, for example, be the packets which had the largest variation in delay in a selected time period.
The delay is specified from the start of the packet being transmitted at the source to the start of the packet being received at the destination. A component of the delay which does not vary from packet to packet can be ignored, hence if the packet sizes are the same and packets always take the same time to be processed at the destination then the packet arrival time at the destination could be used instead of the time the end of the packet is received.
Instantaneous packet delay variation is the difference between successive packets—here RFC 3393 does specify the selection criteria—and this is usually what is loosely termed "jitter", although jitter is also sometimes the term used for the variance of the packet delay. As an example, say packets are transmitted every 20 ms. If the 2nd packet is received 30 ms after the 1st packet, IPDV = −10 ms. This is referred to as dispersion. If the 2nd packet is received 10 ms after the 1st packet, IPDV = +10 ms. This is referred to as clumping.
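A short sketch of this computation (illustrative; it follows the sign convention of the example above by comparing the spacing of departures with the spacing of arrivals):
def ipdv(send_times_ms, recv_times_ms):
    """Instantaneous packet delay variation between successive packets.
    Positive values indicate clumping, negative values dispersion."""
    deltas = []
    for i in range(1, len(send_times_ms)):
        inter_departure = send_times_ms[i] - send_times_ms[i - 1]
        inter_arrival = recv_times_ms[i] - recv_times_ms[i - 1]
        deltas.append(inter_departure - inter_arrival)
    return deltas

# Packets sent every 20 ms; the 2nd arrives 30 ms after the 1st (dispersion),
# and the 3rd arrives 10 ms after the 2nd (clumping).
print(ipdv([0, 20, 40], [100, 130, 140]))   # [-10, 10]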
Limiting PDV or its effects
For interactive real-time applications, e.g., voice over IP (VoIP), PDV can be a serious issue and hence VoIP transmissions may need quality of service-enabled networks to provide a high quality channel.
The effects of PDV in multimedia streams can be mitigated by a properly sized buffer at the receiver. As long as the bandwidth can support the stream, and the buffer size is sufficient, buffering only causes a detectable delay before the start of media playback.
See also
Latency (engineering)
References
Network architecture |
50797595 | https://en.wikipedia.org/wiki/Form%20factor%20%28design%29 | Form factor (design) | Form factor is a hardware design aspect that defines and prescribes the size, shape, and other physical specifications of components, particularly in electronics. A form factor may represent a broad class of similarly sized components, or it may prescribe a specific standard. It may also define an entire system, as in a computer form factor.
Evolution and standardization
As electronic hardware has become smaller following Moore's law and related patterns, ever-smaller form factors have become feasible. Specific technological advances, such as PCI Express, have had a significant design impact, though form factors have historically been slower to evolve than individual components. Standardization of form factors is vital for compatibility of hardware from different manufacturers.
Trade-offs
Smaller form factors may offer more efficient use of limited space, greater flexibility in the placement of components within a larger assembly, reduced use of material, and greater ease of transportation and use. However, smaller form factors typically incur greater costs in the design, manufacturing, and maintenance phases of the engineering lifecycle, and do not allow the same expansion options as larger form factors. In particular, the design of smaller form-factor computers and network equipment must entail careful consideration of cooling. End-user maintenance and repair of small form-factor electronic devices such as mobile phones is often not possible, and may be discouraged by warranty voiding clauses; such devices require professional servicing—or simply replacement—when they fail.
Examples
Computer form factors comprise a number of specific industry standards for motherboards, specifying dimensions, power supplies, placement of mounting holes and ports, and other parameters. Other types of form factors for computers include:
Small form factor (SFF), a more loosely defined set of standards which may refer to both motherboards and computer cases. SFF devices include mini-towers and home theater PCs.
Pizza box form factor, a wide, flat case form factor used for computers and network switches; often sized for installation in a 19-inch rack.
All-in-one PC
"Lunchbox" portable computer
Components
Hard disk drive form factors, the physical dimensions of a computer hard drive
Hard disk enclosure form factor, the physical dimensions of a computer hard drive enclosure
Motherboard form factor, the physical dimensions of a computer motherboard
Mobile form factors
Laptop or notebook, a form of portable computer with a "clamshell design" form factor.
Subnotebook, ultra-mobile PC, netbook, and tablet computer, various form factors for devices which are smaller and often cheaper than a typical notebook.
Mobile phone form factor, comprising the size, shape, layout, and style of mobile phones. Broad categories of form factors include bars, flip phones, and sliders, with many subtypes and variations of each.
Stick PC, a single-board computer in a small elongated casing resembling a stick
See also
Computer hardware
Electronic packaging
Packaging engineering
List of computer size categories
List of integrated circuit package dimensions
References
Design
Electronic design
Industrial design
Packaging
Broad-concept articles |
3561465 | https://en.wikipedia.org/wiki/Hardware%20architect | Hardware architect | (In the automation and engineering environments, the hardware engineer or architect encompasses the electronic engineering and electrical engineering fields, with subspecialities in analog, digital, or electromechanical systems.)
The hardware systems architect or hardware architect is responsible for:
Interfacing with a systems architect or client stakeholders. It is extraordinarily rare nowadays for sufficiently large and/or complex hardware systems that require a hardware architect not to require substantial software and a systems architect. The hardware architect will therefore normally interface with a systems architect, rather than directly with user(s), sponsor(s), or other client stakeholders. However, in the absence of a systems architect, the hardware systems architect must be prepared to interface directly with the client stakeholders in order to determine their (evolving) needs to be realized in hardware. The hardware architect may also need to interface directly with a software architect or engineer(s), or with other mechanical or electrical engineers.
Generating the highest level of hardware requirements, based on the user's needs and other constraints such as cost and schedule.
Ensuring that this set of high level requirements is consistent, complete, correct, and operationally defined.
Performing cost–benefit analyses to determine the best methods or approaches for meeting the hardware requirements; making maximum use of commercial off-the-shelf or already developed components.
Developing partitioning algorithms (and other processes) to allocate all present and foreseeable (hardware) requirements into discrete hardware partitions such that a minimum of communications is needed among partitions, and between the user and the system.
Partitioning large hardware systems into (successive layers of) subsystems and components each of which can be handled by a single hardware engineer or team of engineers.
Ensuring that maximally robust hardware architecture is developed.
Generating a set of acceptance test requirements, together with the designers, test engineers, and the user, which determine that all of the high level hardware requirements have been met, especially for the computer-human-interface.
Generating products such as sketches, models, an early user's manual, and prototypes to keep the user and the engineers constantly up to date and in agreement on the system to be provided as it is evolving.
Background
Large systems architecture was developed as a way to handle systems too large for one person to conceive of, let alone design. Systems of this size are rapidly becoming the norm, so architectural approaches and architects are increasingly needed to solve the problems of large systems.
Users and sponsors
Engineers as a group do not have a reputation for understanding and responding to human needs comfortably or for developing humanly functional and aesthetically pleasing products. Architects are expected to understand human needs and develop humanly functional and aesthetically pleasing products. A good architect is a translator between the user/sponsor and the engineers—and even among just engineers of different specialties. A good architect is also the principal keeper of the user's vision of the end product—and of the process of deriving requirements from and implementing that vision.
Determining what the users/sponsors actually want, rather than what they say they want, is not engineering—it is an art. An architect does not follow an exact procedure. S/he communicates with users/sponsors in a highly interactive way—together they extract the true requirements necessary for the engineered system. The hardware architect must remain constantly in communication with the end users (or a systems architect). Therefore, the architect must be familiar with the user's environment and problem. The engineer need only be very knowledgeable of the potential engineering solution space.
High-level requirements
The user/sponsor should view the architect as the user's representative and provide all input through the architect. Direct interaction with project engineers is generally discouraged as the chance of mutual misunderstanding is very high. The user requirements' specification should be a joint product of the user and hardware architect (or, the systems and hardware architects): the user brings his needs and wish list, the architect brings knowledge of what is likely to prove doable within cost and time constraints. When the user needs are translated into a set of high level requirements is also the best time to write the first version of the acceptance test, which should, thereafter, be religiously kept up to date with the requirements. That way, the user will be absolutely clear about what s/he is getting. It is also a safeguard against untestable requirements, misunderstandings, and requirements creep.
The development of the first level of hardware engineering requirements is not a purely analytical exercise and should also involve both the hardware architect and engineer. If any compromises are to be made—to meet constraints like cost, schedule, power, or space, the architect must ensure that the final product and overall look and feel do not stray very far from the user's intent. The engineer should focus on developing a design that optimizes the constraints but ensures a workable and reliable product. The architect is primarily concerned with the comfort and usability of the product; the engineer is primarily concerned with the producibility and utility of the product.
The provision of needed services to the user is the true function of an engineered system. However, as systems become ever larger and more complex, and as their emphases move away from simple hardware components, the narrow application of traditional hardware development principles is found to be insufficient—the application of the more general principles of hardware architecture to the design of (sub) systems is seen to be needed. A Hardware architecture is also a simplified model of the finished end product—its primary function is to define the hardware components and their relationships to each other so that the whole can be seen to be a consistent, complete, and correct representation of what the user had in mind—especially for the computer–human interface. It is also used to ensure that the components fit together and relate in the desired way.
It is necessary to distinguish between the architecture of the user's world and the engineered hardware architecture. The former represents and addresses problems and solutions in the user's world. It is principally captured in the computer–human interfaces (CHI) of the engineered system. The engineered system represents the engineering solutions—how the engineer proposes to develop and/or select and combine the components of the technical infrastructure to support the CHI. In the absence of an architect, there is an unfortunate tendency to confuse the two architectures, since the engineer thinks in terms of hardware, but the user may be thinking in terms of solving a problem of getting people from point A to point B in a reasonable amount of time and with a reasonable expenditure of energy, or of getting needed information to customers and staff. A hardware architect is expected to combine knowledge of both the architecture of the user's world and of (all potentially useful) hardware engineering architectures. The former is a joint activity with the user; the latter is a joint activity with the engineers. The product is a set of high level requirements reflecting the user's requirements which can be used by the engineers to develop hardware systems design requirements.
Because requirements evolve over the course of a project, especially a long one, an architect is needed until the hardware system is accepted by the user: the architect is the best insurance that no changes and interpretations made during the course of development compromise the user's viewpoint.
Cost–benefit analyses
Most hardware engineers are specialists. They know the applications of hardware design and development intimately, apply their knowledge to practical situations—that is, solve real world problems, evaluate the cost–benefits of various solutions within their hardware specialty, and ensure the correct operation of whatever they design. Hardware architects are generalists. They are not expected to be experts in any one hardware technology or approach, but are expected to be knowledgeable of many, and able to judge their applicability to specific situations. They also apply their knowledge to practical situations, but evaluate the cost/benefits of various solutions using different hardware technologies, for example, specially developed versus commercially available hardware components, and assure that the system as a whole performs according to the user's expectations.
Many commercial-off-the-shelf or already developed hardware components may be selected independently according to constraints such as cost, response, throughput, etc. In some cases, the architect can already assemble the end system unaided. Or, s/he may still need the help of a hardware engineer to select components and to design and build any special purpose function. The architects (or engineers) may also enlist the aid of specialists—in safety, security, communications, special purpose hardware, graphics, human factors, test and evaluation, quality control, RMA, interface management, etc. An effective hardware architectural team must have immediate access to specialists in critical specialties.
Partitioning and layering
An architect planning a building works on the overall design, making sure it will be pleasing and useful to its inhabitants. While a single architect by himself may be enough to build a single-family house, many engineers may be needed, in addition, to solve the detailed problems that arise when a novel high-rise building is designed. If the job is large and complex enough, parts of the architecture may be designed as components. That is, if we are building a housing complex, we may have one architect for the complex, and one for each type of building, as part of an architectural team.
Large hardware systems also require an architect and much engineering talent. If the engineered system is large and complex enough, the chief hardware systems architect may defer to subordinate architects for parts of the job, although they all may be members of a joint architectural team. But the architect must never be viewed as an engineering supervisor.
The architect should sub-allocate the hardware requirements to major components or subsystems that are within the scope of a single hardware engineer, or engineering manager or subordinate architect. Ideally, each such hardware component/subsystem is a sufficiently stand-alone object that it can be tested as a complete component, separate from the whole, using only a simple testbed to supply simulated inputs and record outputs. That is, it is not necessary to know how an air traffic control system works in order to design and build a data management subsystem for it. It is only necessary to know the constraints under which the subsystem will be expected to operate.
A good architect ensures that the system, however complex, is built upon relatively simple and "clean" concepts for each (sub) system or layer—easily understandable by everyone, especially the user, without special training. The architect will use a minimum of rules to ensure that each partition is well-defined and clean of kludges, work-arounds, short-cuts, or confusing detail and exceptions. As user needs evolve, (once the system is fielded and in use), it is a lot easier subsequently to evolve a simple concept than one laden with exceptions, special cases, and much "fine print."
Layering the hardware architecture is important for keeping it sufficiently simple at each layer so that it remains comprehensible to a single mind. As layers are ascended, whole systems at lower layers become simple components at the higher layers, and may disappear altogether at the highest layers.
Acceptance test
The acceptance test always remains the principal responsibility of the architect(s). It is the chief means by which the architect will prove to the user that the hardware is as originally planned and that all subordinate architects and engineers have met their objectives. Large projects tend to be dynamic, with changes along the way needed by the user (e.g., as his problems change), or expected of the user (e.g., for cost or schedule reasons). But acceptance tests must be kept current at all times. They are the principal means by which the user is kept informed as to how the final product will perform. And they act as the principal goal towards which all subordinate personnel must design, build and test for.
Good communications with users and engineers
A building architect uses sketches, models, drawings. A hardware systems architect should use sketches, models, and prototypes to discuss different solutions and results with the user or system architect, engineers, and subordinate architects. An early, draft version of the user's manual is invaluable, especially in conjunction with a prototype. A set of (engineering) requirements as a means of communicating with the users is explicitly to be avoided. A well written set of requirements, or specification, is intelligible only to the engineering fraternity, much as a legal contract is for lawyers.
People
Herb Sutter
See also
Systems architecture / Systems architect
Software architecture / Software architect
Hardware architecture
Systems engineering / Systems engineer
Software engineering / Software engineer
Requirements analysis
Systems design
Electrical engineering
Electronics engineering
References
Computer architecture |
19745125 | https://en.wikipedia.org/wiki/KC%20College%20of%20Engineering%2C%20Thane | KC College of Engineering, Thane | K.C. College of Engineering (KCCOE) is a private engineering college located in Thane, Mumbai, Maharashtra, India. The college is affiliated to the University of Mumbai and approved by Directorate of Technical Education (DTE), Maharashtra State and All India Council of Technical Education (AICTE), New Delhi.
History
K.C. College of Engineering and Management Studies & Research was established in 2001 by the Excelssior Education Society, offering three branches namely Electronics and Telecommunication Engineering, Computer Engineering, and Information Technology Engineering.
Academics
KCCOE offers undergraduate courses of study in engineering. The four-year undergraduate program leads to the degree of Bachelor of Engineering (BE). The courses offered are:
Electronic and Telecommunication Engineering
Computer Engineering
Information Technology
Intakes
Electronics and Telecommunication - 90
Computer Engineering - 120
Information Technology - 60
References
Engineering colleges in Mumbai
Affiliates of the University of Mumbai
Education in Thane
Educational institutions established in 2001
2001 establishments in Maharashtra |
7022979 | https://en.wikipedia.org/wiki/Bayesian%20inference%20in%20phylogeny | Bayesian inference in phylogeny | Bayesian inference of phylogeny combines the information in the prior and in the data likelihood to create the so-called posterior probability of trees, which is the probability that the tree is correct given the data, the prior and the likelihood model. Bayesian inference was introduced into molecular phylogenetics in the 1990s by three independent groups: Bruce Rannala and Ziheng Yang in Berkeley, Bob Mau in Madison, and Shuying Li in University of Iowa, the last two being PhD students at the time. The approach has become very popular since the release of the MrBayes software in 2001, and is now one of the most popular methods in molecular phylogenetics.
Bayesian inference of phylogeny background and bases
Bayesian inference refers to a probabilistic method developed by Reverend Thomas Bayes based on Bayes' theorem. Published posthumously in 1763 it was the first expression of inverse probability and the basis of Bayesian inference. Independently, unaware of Bayes' work, Pierre-Simon Laplace developed Bayes' theorem in 1774.
Bayesian inference or the inverse probability method was the standard approach in statistical thinking until the early 1900s before RA Fisher developed what's now known as the classical/frequentist/Fisherian inference. Computational difficulties and philosophical objections had prevented the widespread adoption of the Bayesian approach until the 1990s, when Markov Chain Monte Carlo (MCMC) algorithms revolutionized Bayesian computation.
The Bayesian approach to phylogenetic reconstruction combines the prior probability of a tree P(A) with the likelihood of the data (B) to produce a posterior probability distribution on trees P(A|B). The posterior probability of a tree will be the probability that the tree is correct, given the prior, the data, and the correctness of the likelihood model.
MCMC methods can be described in three steps: first, using a stochastic mechanism, a new state for the Markov chain is proposed. Second, the probability of this new state being correct is calculated. Third, a random number uniformly distributed on (0,1) is drawn. If this value is less than the acceptance probability, the new state is accepted and the state of the chain is updated. This process is run thousands or millions of times. The number of times a single tree is visited during the course of the chain is an approximation of its posterior probability. Some of the most common algorithms used in MCMC methods include the Metropolis–Hastings algorithm, the Metropolis-coupled MCMC (MC³) and the LOCAL algorithm of Larget and Simon.
Metropolis–Hastings algorithm
One of the most common MCMC methods used is the Metropolis–Hastings algorithm, a modified version of the original Metropolis algorithm. It is a widely used method to sample randomly from complicated and multi-dimensional distribution probabilities. The Metropolis algorithm is described in the following steps:
An initial tree, Ti, is randomly selected.
A neighbour tree, Tj, is selected from the collection of trees.
The ratio, R, of the probabilities (or probability density functions) of Tj and Ti is computed as follows: R = f(Tj)/f(Ti)
If R ≥ 1, Tj is accepted as the current tree.
If R < 1, Tj is accepted as the current tree with probability R, otherwise Ti is kept.
At this point the process is repeated from Step 2 N times.
The algorithm keeps running until it reaches an equilibrium distribution. It also assumes that the probability of proposing a new tree Tj when we are at the old tree state Ti is the same as the probability of proposing Ti when we are at Tj. When this is not the case, Hastings corrections are applied.
The aim of the Metropolis-Hastings algorithm is to produce a collection of states with a determined distribution until the Markov process reaches a stationary distribution. The algorithm has two components:
A potential transition from one state to another (i → j) using a transition probability function qi,j
Movement of the chain to state j with probability αi,j, or remaining in state i with probability 1 – αi,j.
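As an illustration of these steps, the following generic sketch is not specific to trees; the target density and the symmetric proposal are placeholders:
import math, random

def metropolis(log_target, propose, x0, n_steps):
    """Generic Metropolis sampler: propose, compute R, accept with probability min(1, R)."""
    x, samples = x0, []
    for _ in range(n_steps):
        y = propose(x)                          # 1. propose a new state
        log_r = log_target(y) - log_target(x)   # 2. ratio R = f(y)/f(x), in log form
        if log_r >= 0 or random.random() < math.exp(log_r):   # 3. accept with prob. min(1, R)
            x = y
        samples.append(x)
    return samples

# Placeholder target (standard normal) and symmetric random-walk proposal,
# so no Hastings correction is needed.
log_target = lambda x: -0.5 * x * x
propose = lambda x: x + random.uniform(-1.0, 1.0)
samples = metropolis(log_target, propose, x0=0.0, n_steps=10000)
print(sum(samples) / len(samples))   # close to 0, the mean of the target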
Metropolis-coupled MCMC
Metropolis-coupled MCMC algorithm (MC³) has been proposed to solve a practical concern of the Markov chain failing to move across peaks when the target distribution has multiple local peaks, separated by low valleys, as is known to occur in tree space. This is the case during heuristic tree search under maximum parsimony (MP), maximum likelihood (ML), and minimum evolution (ME) criteria, and the same can be expected for stochastic tree search using MCMC. This problem will result in samples not approximating the posterior density correctly. MC³ improves the mixing of Markov chains in the presence of multiple local peaks in the posterior density. It runs multiple (m) chains in parallel, each for n iterations and with different stationary distributions π_j(θ), j = 1, 2, ..., m, where the first one, π_1 = π, is the target density, while π_j, j = 2, 3, ..., m, are chosen to improve mixing. For example, one can choose incremental heating of the form
π_j(θ) ∝ π(θ)^(1/(1 + λ(j - 1))), with λ > 0,
so that the first chain is the cold chain with the correct target density, while chains 2, 3, ..., m are heated chains. Note that raising the density π to a power 1/T, with T > 1, has the effect of flattening out the distribution, similar to heating a metal. In such a distribution, it is easier to traverse between peaks (separated by valleys) than in the original distribution. After each iteration, a swap of states between two randomly chosen chains is proposed through a Metropolis-type step. Let θ^(j) be the current state in chain j, j = 1, 2, ..., m. A swap between the states of chains i and j is accepted with probability
α = min(1, [π_i(θ^(j)) π_j(θ^(i))] / [π_i(θ^(i)) π_j(θ^(j))])
At the end of the run, output from only the cold chain is used, while the output from the hot chains is discarded. Heuristically, the hot chains will visit the local peaks rather easily, and swapping states between chains will let the cold chain occasionally jump valleys, leading to better mixing. However, if the stationary distributions of the chains are too different, proposed swaps will seldom be accepted. This is the reason for using several chains which differ only incrementally.
An obvious disadvantage of the algorithm is that m chains are run and only one chain is used for inference. For this reason, MC³ is ideally suited for implementation on parallel machines, since each chain will in general require the same amount of computation per iteration.
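The swap step can be sketched as follows (an illustrative fragment using the incremental heating described above; the target density and all names are placeholders):
import math, random

def heated(log_target, j, lam=0.2):
    """Heated log density for chain j (j = 0 is the cold chain)."""
    beta = 1.0 / (1.0 + lam * j)
    return lambda x: beta * log_target(x)

def propose_swap(states, log_target, lam=0.2):
    """Metropolis-type proposal to exchange the states of two randomly chosen chains."""
    i, j = random.sample(range(len(states)), 2)
    li, lj = heated(log_target, i, lam), heated(log_target, j, lam)
    log_alpha = li(states[j]) + lj(states[i]) - li(states[i]) - lj(states[j])
    if log_alpha >= 0 or random.random() < math.exp(log_alpha):
        states[i], states[j] = states[j], states[i]
    return states

log_target = lambda x: -0.5 * x * x   # placeholder target density
states = [0.0, 1.0, 2.0, 3.0]         # current states of four chains
states = propose_swap(states, log_target)
print(states)                         # only states[0], the cold chain, is used for inference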
LOCAL algorithm of Larget and Simon
The LOCAL algorithm offers a computational advantage over previous methods and makes it computationally practical for a Bayesian approach to assess uncertainty in larger trees. The LOCAL algorithm is an improvement of the GLOBAL algorithm presented in Mau, Newton and Larget (1999), in which all branch lengths are changed in every cycle. The LOCAL algorithm modifies the tree by selecting an internal branch of the tree at random. The nodes at the ends of this branch are each connected to two other branches. One of each pair is chosen at random. Imagine taking these three selected edges and stringing them like a clothesline from left to right, where the direction (left/right) is also selected at random. The two endpoints of the first branch selected will have a sub-tree hanging like a piece of clothing strung to the line. The algorithm proceeds by multiplying the three selected branches by a common random amount, akin to stretching or shrinking the clothesline. Finally the leftmost of the two hanging sub-trees is disconnected and reattached to the clothesline at a location selected uniformly at random. This would be the candidate tree.
Suppose we begin by selecting the internal branch, of length t0, that separates taxa A and B from the rest of the tree. Suppose also that we have (randomly) selected branches, with lengths t1 and t2, from each side, and that we have oriented these branches. Let m = t0 + t1 + t2 be the current length of the clothesline. We select the new length to be m* = m exp(λ(U - 0.5)), where U is a uniform random variable on (0, 1). Then for the LOCAL algorithm, the acceptance probability can be computed to be the likelihood ratio of the candidate and current trees multiplied by (m*/m)^3.
Assessing convergence
To estimate the branch length t of a 2-taxon tree under JC, from data in which n1 sites are unvaried and n2 are variable, assume an exponential prior distribution with rate λ. The density is f(t) = λ e^(-λt). The probabilities of the possible site patterns are:
1/4 + (3/4) e^(-4t/3)
for unvaried sites, and
3/4 - (3/4) e^(-4t/3)
for variable sites. Thus the unnormalized posterior distribution is:
h(t) = λ e^(-λt) (1/4 + (3/4) e^(-4t/3))^n1 (3/4 - (3/4) e^(-4t/3))^n2
or, alternately, dropping constant factors,
h(t) ∝ e^(-λt) (1 + 3 e^(-4t/3))^n1 (1 - e^(-4t/3))^n2.
Update the branch length by choosing a new value t* uniformly at random from a window of half-width w centered at the current value:
t* = t + U,
where U is uniformly distributed between -w and w. The acceptance probability is:
min(1, h(t*)/h(t)).
Example: for given site counts n1 and n2, the results can be compared for two values of the window half-width w; in each case, the chain begins with some initial branch length and the length is updated a large number of times.
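A compact sketch of this sampler (illustrative; the prior rate, site counts, window half-width, starting value, and number of updates below are arbitrary choices, not values from the text):
import math, random

def log_posterior(t, n1, n2, prior_rate):
    """Unnormalized log posterior for the JC branch length t, with n1 unvaried and n2 variable sites."""
    if t <= 0:
        return float("-inf")
    p_same = 0.25 + 0.75 * math.exp(-4.0 * t / 3.0)
    p_diff = -0.75 * math.expm1(-4.0 * t / 3.0)   # 3/4 (1 - e^(-4t/3)), computed stably
    return -prior_rate * t + n1 * math.log(p_same) + n2 * math.log(p_diff)

def sample_branch_length(n1, n2, prior_rate, w, t0, n_updates):
    """Sliding-window Metropolis updates of the branch length."""
    t, samples = t0, []
    for _ in range(n_updates):
        t_new = t + random.uniform(-w, w)   # window of half-width w around the current value
        log_r = log_posterior(t_new, n1, n2, prior_rate) - log_posterior(t, n1, n2, prior_rate)
        if log_r >= 0 or random.random() < math.exp(log_r):
            t = t_new
        samples.append(t)
    return samples

# Arbitrary example: 90 unvaried and 10 variable sites, exponential(10) prior.
samples = sample_branch_length(90, 10, prior_rate=10.0, w=0.1, t0=0.5, n_updates=20000)
print(sum(samples[5000:]) / len(samples[5000:]))   # posterior mean branch length estimate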
Maximum parsimony and maximum likelihood
There are many approaches to reconstructing phylogenetic trees, each with advantages and disadvantages, and there is no straightforward answer to “what is the best method?”. Maximum parsimony (MP) and maximum likelihood (ML) are traditional methods widely used for the estimation of phylogenies and both use character information directly, as Bayesian methods do.
Maximum Parsimony recovers one or more optimal trees based on a matrix of discrete characters for a certain group of taxa and it does not require a model of evolutionary change. MP gives the most simple explanation for a given set of data, reconstructing a phylogenetic tree that includes as few changes across the sequences as possible. The support of the tree branches is represented by bootstrap percentage. For the same reason that it has been widely used, its simplicity, MP has also received criticism and has been pushed into the background by ML and Bayesian methods. MP presents several problems and limitations. As shown by Felsenstein (1978), MP might be statistically inconsistent, meaning that as more and more data (e.g. sequence length) is accumulated, results can converge on an incorrect tree and lead to long branch attraction, a phylogenetic phenomenon where taxa with long branches (numerous character state changes) tend to appear more closely related in the phylogeny than they really are. For morphological data, recent simulation studies suggest that parsimony may be less accurate than trees built using Bayesian approaches, potentially due to overprecision, although this has been disputed. Studies using novel simulation methods have demonstrated that differences between inference methods result from the search strategy and consensus method employed, rather than the optimization used.
As in maximum parsimony, maximum likelihood will evaluate alternative trees. However it considers the probability of each tree explaining the given data based on a model of evolution. In this case, the tree with the highest probability of explaining the data is chosen over the other ones. In other words, it compares how different trees predict the observed data. The introduction of a model of evolution in ML analyses presents an advantage over MP as the probability of nucleotide substitutions and rates of these substitutions are taken into account, explaining the phylogenetic relationships of taxa in a more realistic way. An important consideration of this method is the branch length, which parsimony ignores, with changes being more likely to happen along long branches than short ones. This approach might eliminate long branch attraction and explain the greater consistency of ML over MP. Although considered by many to be the best approach to inferring phylogenies from a theoretical point of view, ML is computationally intensive and it is almost impossible to explore all trees as there are too many. Bayesian inference also incorporates a model of evolution and the main advantages over MP and ML are that it is computationally more efficient than traditional methods, it quantifies and addresses the source of uncertainty and is able to incorporate complex models of evolution.
Pitfalls and controversies
Bootstrap values vs posterior probabilities. It has been observed that bootstrap support values, calculated under parsimony or maximum likelihood, tend to be lower than the posterior probabilities obtained by Bayesian inference. This leads to a number of questions such as: Do posterior probabilities lead to overconfidence in the results? Are bootstrap values more robust than posterior probabilities?
Controversy of using prior probabilities. Using prior probabilities for Bayesian analysis has been seen by many as an advantage as it provides a way of incorporating information from sources other than the data being analyzed. However when such external information is lacking, one is forced to use a prior even if it is impossible to use a statistical distribution to represent total ignorance. It is also a concern that the Bayesian posterior probabilities may reflect subjective opinions when the prior is arbitrary and subjective.
Model choice. The results of the Bayesian analysis of a phylogeny are directly correlated to the model of evolution chosen so it is important to choose a model that fits the observed data, otherwise inferences in the phylogeny will be erroneous. Many scientists have raised questions about the interpretation of Bayesian inference when the model is unknown or incorrect. For example, an oversimplified model might give higher posterior probabilities.
MRBAYES software
MrBayes is a free software tool that performs Bayesian inference of phylogeny. It was originally written by John P. Huelsenbeck and Frederik Ronquist in 2001. As Bayesian methods increased in popularity, MrBayes became one of the software packages of choice for many molecular phylogeneticists. It is offered for Macintosh, Windows, and UNIX operating systems and has a command-line interface. The program uses the standard MCMC algorithm as well as the Metropolis-coupled MCMC variant. MrBayes reads aligned matrices of sequences (DNA or amino acids) in the standard NEXUS format.
MrBayes uses MCMC to approximate the posterior probabilities of trees. The user can change assumptions of the substitution model, priors and the details of the MC³ analysis. It also allows the user to remove and add taxa and characters to the analysis. The program uses the most standard model of DNA substitution, the 4x4 also called JC69, which assumes that changes across nucleotides occur with equal probability. It also implements a number of 20x20 models of amino acid substitution, and codon models of DNA substitution. It offers different methods for relaxing the assumption of equal substitutions rates across nucleotide sites. MrBayes is also able to infer ancestral states accommodating uncertainty to the phylogenetic tree and model parameters.
MrBayes 3 was a completely reorganized and restructured version of the original MrBayes. The main novelty was the ability of the software to accommodate heterogeneity of data sets. This new framework allows the user to mix models and take advantages of the efficiency of Bayesian MCMC analysis when dealing with different type of data (e.g. protein, nucleotide, and morphological). It uses the Metropolis-Coupling MCMC by default.
MrBayes 3.2 was released in 2012. The new version allows users to run multiple analyses in parallel. It also provides faster likelihood calculations and allows these calculations to be delegated to graphics processing units (GPUs). Version 3.2 provides wider output options compatible with FigTree and other tree viewers.
List of phylogenetics software
This table includes some of the most common phylogenetic software used for inferring phylogenies under a Bayesian framework. Some of them do not use exclusively Bayesian methods.
Applications
Bayesian inference has been used extensively by molecular phylogeneticists for a wide range of applications. Some of these include:
Inference of phylogenies.
Inference and evaluation of uncertainty of phylogenies.
Inference of ancestral character state evolution.
Inference of ancestral areas.
Molecular dating analysis.
Modeling the dynamics of species diversification and extinction.
Elucidation of patterns of pathogen dispersal.
References
External links
MrBayes official website
BEAST official website
Bioinformatics
Computational phylogenetics
Phylogeny |
7939840 | https://en.wikipedia.org/wiki/AuthIP | AuthIP | AuthIP is a Microsoft proprietary extension of the IKE cryptographic protocol. AuthIP is supported in Windows Vista and later on the client and Windows Server 2008 and later on the server. AuthIP adds a second authentication to the standard IKE authentication which, according to Microsoft, increases security and deployability of IPsec VPNs. AuthIP adds support for user-based authentication by using Kerberos v5 or SSL certificates.
AuthIP is not compatible with IKEv2, an IETF standard with similar characteristics; however, Windows 7 and Windows Server 2008 R2 also support IKEv2.
See also
SSTP
External links
AuthIP in Windows Vista - The Cable Guy column at the Microsoft website
The Authenticated Internet Protocol - The Cable Guy column at the Microsoft website
Cryptographic protocols
Windows Server |
2466905 | https://en.wikipedia.org/wiki/The%20Pandora%20Directive | The Pandora Directive | The Pandora Directive is the fourth installment in the Tex Murphy series of graphic adventure games produced by Access Software. After its creators reacquired the rights to the series, it was re-released on Good Old Games on July 2, 2009.
It was re-released in 2009 on GOG.com for Windows and in 2012 for macOS, and then released on Steam in 2014 with support for Windows, macOS, and Linux. In 2021 Big Finish Games announced that a remaster of The Pandora Directive is in development.
Plot
Like all Tex Murphy games, The Pandora Directive takes place in post-World War III San Francisco in April 2043. After the devastating events of WWIII, many major cities have been rebuilt (as is the case with New San Francisco), though certain areas still remain as they were before the war (as in Old San Francisco). WWIII also left another mark on the world: the formation of two classes of citizens, the Mutants and the Norms. After the events of Under a Killing Moon, tensions between the two groups have begun to diminish. The end of the Crusade for Genetic Purity was a turning point in relations between Mutants and "Norms". Tex still lives on Chandler Ave., which recently underwent a city-funded cleanup. The events of WWIII also left the planet with no ozone layer, and to protect their citizens many countries adopted a reversed schedule: instead of sleeping at night and being awake in the day, humans have become nocturnal, in a manner of speaking. Though Tex lives in what is considered a Mutant area of town, he himself is a "Norm".
In The Pandora Directive, after accidentally offending his love interest Chelsee Bando, Tex (Chris Jones) is hired by Gordon Fitzpatrick (Kevin McCarthy) to find his friend, Thomas Malloy (John Agar). He learns that Malloy stayed at the Ritz, and decides to follow up the lead after reconciling with Chelsee and agreeing to go for dinner with her at her apartment. Upon investigating Malloy's room, Tex is knocked out by a mysterious masked figure dressed in black. Tex is out through the night, inadvertently missing his date with Chelsee. After finding out that a female acquaintance of Malloy works at the Fuchsia Flamingo club, Tex offers to take Chelsee there, both to apologise and to check out the lead. Regardless of whether Chelsee comes out with Tex or not, she decides to take a vacation to Phoenix for a few days. At the club, Tex meets the girl, Emily, who agrees to trade information on Malloy if Tex can find out who is stalking her. She gives Tex a note she received from her stalker, and upon showing it to his connection at the police station, Mac Malden, Tex finds out that Emily is being observed by the Black Arrow Killer.
Tex is able to discover that the NSA is involved and looking for Malloy, and breaks into one of their headquarters, Autotech. He finds out that the NSA are using video surveillance to monitor the goings-on in Emily's room at the Fuchsia Flamingo. Tex finds the source of this and sees a figure dressed similarly to the person who attacked him in Malloy's room waiting to confront Emily. Tex hurries over to the club and is able to get there just in time to see the Killer jump out of the window carrying a small box. (Whether Emily survives or not will depend on the storyline path.) Tex chases him down and in the ensuing fight accidentally causes the Killer to fall off the roof and die. Tex removes the Killer's mask and sees that it is NSA agent Dag Horton, who had an office in Autotech. Tex is pulled in for questioning by the police, but is allowed to leave when an unknown woman enters the station and speaks to Mac.
Tex retrieves the box that Horton stole from Emily's room, but is seized by the NSA and taken to Jackson Cross's office at Autotech. He is warned to stay out of their affairs and is forced to hand over the box. Returning to his office, Tex is met by the woman who talked the police into letting him go. She reveals that her name is Regan Madsen, that she is Thomas Malloy's daughter, and that Malloy sent out several boxes like the one Tex found.
Tex goes to the Fuchsia Flamingo and finds out that Emily is Malloy's wife, hence her being sent a box. Using the return address on the packaging, Tex is finally able to track down Malloy in a run-down warehouse in the industrial district. After establishing that Tex is not with the NSA, Malloy reveals that he used to work at Roswell, the military base where a spacecraft allegedly crashed in 1947. Malloy asserts that the crash was legitimate and that the government covered the story up. The military began investigating the wreckage to look for weapons, and in the 1980s Malloy joined the project to attempt to decipher the hieroglyphics on the craft. After World War III, Malloy left the project but was able to continue his research in secret. Before Malloy can continue his story, two NSA agents arrive and kill him. Tex is able to escape by blowing the building up.
Tex fills a disheartened Fitzpatrick in on the events, but insists on following up on the details he has uncovered. Fitzpatrick tells him that he worked with Malloy in Roswell, and that after becoming close friends, Malloy confided in him that he had been deciphering the alien hieroglyphics and had discovered that a second spacecraft had crashed somewhere on Earth. He then reveals that he received one of Malloy's boxes and there are probably about 6 in circulation. Tex meets with Regan to tell her about her father, and she agrees to give him her box despite reservations that Tex will open it and sell off the information for himself. After stealing Horton's personal effects from the morgue where his body is being held, Tex is able to get into Autotech's evidence room to recover Emily's box. Tex travels to the Cosmic Connection shop and speaks to Archie Ellis, an eccentric comic book nerd and ufologist who recently interviewed Malloy. Archie tells him that the famous author Elijah Witt set up the interview between them, during which Malloy made several cryptic references to something called 'The Pandora Device'. He also reveals that during their research into the alien crafts at Roswell, the scientists accidentally released something into the facility that proceeded to kill off practically everyone in the complex before the military moved in and quarantined the entire base. Archie tells Tex that Malloy sent him one of the boxes but it was stolen, and that the alien power cell in a picture from one of the other boxes is still stored in the Roswell complex.
Tex travels to Roswell and enters the deserted site, but whilst moving around the facility becomes increasingly aware that he is being stalked. It is revealed that the alien entity released by the researchers many years before is still lurking in the complex, but Tex is able to seal it off in a containment pod before he suffers that same fate and is then able to secure the power cell from the security room.
Tex is able to break the code on a disc Malloy sent to Elijah Witt on which Malloy reveals that each of the boxes sent out contains a piece of the Pandora Device and that assembling the parts will reveal the information Malloy had discovered. After obtaining all the relevant pieces, Tex summons Fitzpatrick, Regan and Witt to his office where he assembles the Pandora Device. A hologram of Malloy appears and tells the group that there was indeed another spacecraft that landed on Earth and that Malloy discovered its location. He hypothesises that the ship contains large amounts of anti-hydrogen on board, and that if this gets into the wrong hands it could result in the destruction of life on Earth. Witt immediately decides that the ship must be destroyed, but Regan is adamant that they could sell the technology off for big money. Regardless, the four decide that they must find the craft so they each take a separate route to the location Malloy specified.
Tex arrives and manages to navigate his way through a dense jungle and an ancient Mayan labyrinth, in which he comes across Regan, who set off earlier in the hope she might get there first. Tex and Regan find the ship, but Jackson Cross arrives and it is revealed that Regan and Cross had been working together all along. Before Cross is able to kill Tex, Fitzpatrick emerges from the ship and invites the three on board. Fitzpatrick shows them around and offers to show them the central power core before locking Regan and Cross inside, but not before Cross fires his gun and hits Fitzpatrick. As he is dying, Fitzpatrick reveals that he knows how to work the controls of the ship, as his father was one of the aliens from the ship; his mother was a human woman from Nebraska, hence Fitzpatrick's human appearance. After urging Tex to type in the correct controls, he dies from his wound, and Tex quickly exits the ship just in time for it to ascend into space and self-destruct. Tex is picked up by a late-arriving Elijah Witt and taken home.
Endings
From this point, several endings are possible depending on how the player made Tex behave throughout the game:
Mission Street:
1. Chelsee returns from Phoenix and invites Tex round for dinner, during which he recounts his tale though she remains skeptical. Afterwards she reveals she is dressed in a square dance outfit and rewards Tex with a striptease.
2. Chelsee and Tex go for a drink at the Brew 'n' Stew. Chelsee reveals that she feels she isn't ready to commit to a relationship, so she and Tex should just remain good friends. Having signed up to the new 'holodate' service, Chelsee is collected for a date by a hologram of Clark Gable. A deflated Tex returns to his office and calls the holodate service himself. He speaks to the manager (a hologram of Humphrey Bogart) and requests a two-for-one special date with Jayne Mansfield and Anna Nicole Smith.
Lombard Street:
1. Same as Mission Street Ending 2.
2. On the spacecraft, when Cross shoots, he hits Tex instead of Fitzpatrick. He is able to limp off the ship and sees it explode. Having ruined his chances with Chelsee, he decides to give up his career as a P.I. and join the circus as a clown. We see him backstage putting on his makeup before going on, glancing briefly at a photograph of Chelsee before sadly leaving the room to perform.
Boulevard of Broken Dreams:
1. Same as Lombard Street Ending 2.
2. On the spacecraft, Tex is shot in the leg and is unable to get off before it self-destructs, killing him.
3. Before boarding the ship, Cross will give his gun to Tex and ask him to shoot Fitzpatrick. If the player opts to shoot Cross instead, the gun will be empty and Cross will pull out a loaded gun and shoot Tex dead.
4. If the player chooses to shoot Fitzpatrick, the gun will be empty. Before Cross can shoot Fitzpatrick himself, Tex suggests they go to look on the ship. Fitzpatrick will lock all three of them in the ship's core. Tex is able to unlock the door, but Fitzpatrick will have already begun flying the ship into space. The ship self-destructs and all four characters die.
Gameplay
The Pandora Directive was the second game in the series to make use of Under a Killing Moon's engine and feature real-time 3D graphics. Players explore environments from a first-person perspective and can click to examine objects or interact using a variety of verbs. In addition to verb interaction, players can gather, use, and combine items to solve a variety of puzzles, and must also solve self-contained logic puzzles. Character interaction consists of two primary modes: asking characters about a universal list of topics available to the player, and branching dialog trees. These dialog trees were unusual at the time in that they did not display Tex's full response, but rather a short and sometimes humorous description, a convention later popularized by BioWare.
The Pandora Directive was one of the first adventure games to feature branching narratives and multiple endings. The player could take Tex down "Mission Street", where he takes the high road and wins the love of his longtime crush, Chelsee Bando. Mission Street has three possible endings. Down "Boulevard of Broken Dreams", Tex is a selfish and cynical jerk worrying only about the big payoff. Boulevard of Broken Dreams leads to four possible endings. If the player chooses neither path, Tex will go down "Lombard Street". On this path, he's not really a nice guy, but he's not mean either. Lombard Street leads to two possible endings, both of which are common to Mission Street. The "best" Mission Street ending is achieved when the player has taken the high road every time he was given the choice, and by exactly following two conversation paths earlier in the game.
The Pandora Directive provided two difficulty settings, Entertainment and Game Players mode. On Entertainment, hints were available and the player could bypass certain puzzles if the player so chose. Some minor objects and video scenes were available on this setting that were not available on Game Players mode. A total of 1500 points were available on Entertainment mode. On Game Players mode, no hints were available and puzzles could not be bypassed. Bonus points were available to those who solved certain puzzles in an allotted time or within a certain number of moves. In addition, extra in-game locations and puzzles were available on Game Players mode that were not available on Entertainment mode, making for a more challenging game-playing experience. A total of 4000 points were available on Game Players mode.
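The dialog-tree convention described above, in which a short label stands in for the full response, can be modeled with a small data structure. The sketch below is purely hypothetical and is not code or dialogue from the game itself:

from dataclasses import dataclass, field

@dataclass
class DialogChoice:
    label: str      # short, sometimes humorous description shown on screen
    full_line: str  # the full response Tex actually delivers

@dataclass
class DialogNode:
    prompt: str
    choices: list = field(default_factory=list)

# Hypothetical example node (not dialogue from the actual game)
node = DialogNode(
    prompt="Chelsee asks why Tex missed dinner.",
    choices=[
        DialogChoice("Grovel.", "I'm sorry, Chelsee. Somebody knocked me out cold."),
        DialogChoice("Blame work.", "A case came up. You know how it is in this business."),
    ],
)

for number, choice in enumerate(node.choices, start=1):
    print(f"{number}. {choice.label}")   # the player sees only the short label
picked = node.choices[0]
print(picked.full_line)                  # the full line is then spoken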
The game has a large cast of characters ranging from the deranged to deadly. Several well-known actors starred, including Barry Corbin and Tanya Roberts.
Reception
According to Aaron Conners of Access Software, The Pandora Directive's sales totaled "about 120,000 world-wide".
A reviewer for Next Generation praised The Pandora Directive's all-star cast, three-dimensional interface, storyline, and use of both sight gags and more subtle humor. He criticized some of the puzzles, which "are just too difficult, requiring unbelievable stretches of imagination and leaps in logic", but concluded that the game overall was as good as fans of adventure games could hope for. He scored it four out of five stars.
In PC Zone, Chris Anderson called The Pandora Directive "without question the best adventure game of its type". Scorpia of Computer Gaming World likewise praised it as a "superior entry in the adventure field."
The Pandora Directive was named the best adventure game of 1996 by Computer Gaming World and Computer Game Entertainment, and was a runner-up for PC Gamer US's and CNET Gamecenter's 1996 "Best Adventure Game" awards, which went respectively to The Beast Within: A Gabriel Knight Mystery and The Neverhood. The editors of Computer Gaming World remarked, "Because your choices really affect how the game proceeds, this is, for once, an interactive movie that truly is interactive." Those of PC Gamer US wrote that The Pandora Directive "may have the best cast ever featured in a PC Game", and that it "tops its predecessor in every way". Entertainment Weekly gave the game a B+ and wrote that "Because you never know exactly what Murphy is about to say, the game's entertainment value, at least the first time around, is exceptionally high."
In 2011, Adventure Gamers placed it 9th on their list of all-time best adventure games.
Novelization
A novelization of the game was written by Aaron Conners in 1995. It differs slightly in details from the game, but the overall story is the same.
References
Related links
1996 video games
First-person adventure games
DOS games
Full motion video based games
Linux games
MacOS games
Novels based on video games
Tex Murphy
Video games developed in the United States
Video games set in the 2040s
Video games set in San Francisco
Games commercially released with DOSBox
Detective video games
Windows games
Video games with alternate endings
Single-player video games |
946495 | https://en.wikipedia.org/wiki/EZ%20Publish | EZ Publish | eZ Publish (pronounced "easy publish") is an open-source enterprise PHP content management system that was developed by the Norwegian company Ibexa. eZ Publish is freely available under the GNU GPL version 2 license, as well as under proprietary licenses that include commercial support. In 2015, eZ Systems introduced eZ Platform to replace eZ Publish with a more modern and future-proof solution.
Areas of use
eZ Publish supports the development of customized web applications. Typical applications range from brand sites, news sites and intranets to e-commerce, collaboration portals and iOS/Android apps. eZ Publish provides role-based multi-user access, multi-site management and multi-device delivery to desktops, tablets, phones and the Internet of Things (IoT) such as Smart TVs and digital kiosks.
The software is widely used in web applications of varying type and size worldwide.
Handling
eZ Publish is managed via a Web browser, and additional local software is not necessary. It also features a rich-text editor that allows content to be formatted in a manner similar to a word processor. This enables content editing and contribution without HTML skills. Content management can also be done through the eZ Publish front-end.
Dual-licensing
The software is provided for free, and may be used and modified according to the GPL license. In addition, paid professional support is available with the eZ Publish Enterprise Edition. Furthermore, a commercial license is also available, granting the right to use eZ Publish under license conditions different from the GPL.
Functional range
The eZ Publish range of features includes professional and secure development of web applications. Functional areas include content versioning, media library, role-based rights management, mobile development, sitemaps, search and printing.
Additionally, the system includes extensions, which contain individual functions. This allows for the upgrading of components while preserving compatibility with customized parts.
Technology
eZ Publish is written in PHP. Certified webservers on *nix systems are Apache and nginx. Some alternatives, such as Lighttpd, Hiawatha, and Cherokee, may also work. On Windows, IIS is the preferred webserver. It is very common to use Varnish for caching high-performance sites that use eZ Publish.
The database abstraction layer enables the use of most common databases, such as MySQL, PostgreSQL, Microsoft SQL Server, and Oracle, without changes to the core system, by using drivers.
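The driver arrangement can be sketched in the abstract as follows. This is a hypothetical illustration of the general pattern rather than eZ Publish's actual PHP database layer: the core system codes against one interface, and each database is handled by its own driver, so swapping databases requires no changes to the core.

from abc import ABC, abstractmethod

class DatabaseDriver(ABC):
    # Interface the core system codes against; each database gets its own driver
    @abstractmethod
    def connect(self, dsn): ...
    @abstractmethod
    def query(self, sql, params=()): ...

class MySQLDriver(DatabaseDriver):
    def connect(self, dsn):
        print(f"connecting to MySQL at {dsn}")   # placeholder for a real client library
    def query(self, sql, params=()):
        return []                                # placeholder result set

class PostgreSQLDriver(DatabaseDriver):
    def connect(self, dsn):
        print(f"connecting to PostgreSQL at {dsn}")
    def query(self, sql, params=()):
        return []

DRIVERS = {"mysql": MySQLDriver, "postgresql": PostgreSQLDriver}

def open_database(kind, dsn):
    driver = DRIVERS[kind]()   # the core never references a concrete driver class directly
    driver.connect(dsn)
    return driver

db = open_database("mysql", "db.example.org/content")
rows = db.query("SELECT * FROM content_objects WHERE id = %s", (42,))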
The software is cluster-ready and enforces the separation of content and presentation via XML storage of all content.
eZ Publish features:
User defined content classes and objects
Role based permissions system
Template engine
Version control
Workflow management and task system
Image conversion and scaling
Database abstraction layer
Multi-lingual support, with Unicode
Libraries for XML, SOAP, localization and internationalization
Search engine support
eZ Components
eZ Components was a library of standardized modules for speeding up application development. It included functions for compressing binary files, optimizing performance through caching, connecting to several databases, debugging, RSS, generating graphs for analysis, converting images, supporting email and validating user input.
In an effort to transition the development from a company-driven to a community-driven model, the whole source of eZ Components was donated to the Apache Software Foundation, relicensed from the BSD license to the Apache 2.0 license, and renamed Zeta Components.
Replacement with eZ Platform
In December 2014 the last version of the eZ Publish software was released. Work on the code base continued in the form of eZ Platform. This new version dropped all the legacy code from the software and transitioned to a completely new code base built on the Symfony full-stack framework. This allows the developer team to share components and documentation with the underlying framework, while adding functionality such as content and media management. eZ Platform is one of many CMSs using Symfony PHP components.
The initial version of eZ Platform was released on December 15, 2015, and the latest stable version, v2.5, was released in March 2019. The product is a fully functional open-source CMS. Beyond the open-source version of the software, users also have the option to choose eZ Platform Enterprise Edition, a commercial digital experience platform built on the eZ Platform core.
Further reading
References
External links
Free content management systems |
17017516 | https://en.wikipedia.org/wiki/Universal%20Audio%20%28company%29 | Universal Audio (company) | Universal Audio is an American company that designs, imports, and markets audio signal processing hardware and effect pedals, audio interfaces, and digital signal processing, virtual instrument, and digital audio workstation software and plug-ins.
Founded in 1958 by Bill Putnam, Sr., with products produced under the Universal Audio brand through the mid-1970s, the company was re-established in 1999 by his sons Jim and Bill Putnam, Jr. The company produces modern versions of vintage Universal Audio, UREI, and Teletronix analog recording equipment, as well as hardware and software for digital audio production on the UAD-2 platform.
History
Original Company
Universal Audio, Inc. was founded alongside the United Recording Corporation by Bill Putnam Sr. in 1958. Putnam’s intention was for Universal Audio to serve as United’s manufacturing arm, with the company initially operating out of the United Recording premises at 6050 Sunset Boulevard in Hollywood, California. During its first few years Universal Audio produced a number of tube-based audio processors, the most famous being the 610 preamplifier. These processors also served as components in custom recording consoles built by Universal Audio for various studios.
In 1961, United Recording acquired Studio Supply Co. and rebranded it as the Studio Electronics Corporation (SEC). The focus of SEC was the creation of fully-fledged studio systems built around the equipment produced by Universal Audio. In October, 1961, all manufacturing was moved to Western Recorders, a nearby company in which United Recording had gained a majority stake.
While Universal Audio as a company was eventually absorbed by Studio Electronics in December, 1965, the brand itself continued, with individual products retaining the Universal Audio label. This merger also coincided with another relocation, this time to an 8,100 square foot premises at 11922 Valerio Street in North Hollywood.
Studio Electronics acquired two additional brands in 1967: Teletronix and Waveforms. The acquisition of Teletronix from Babcock allowed SEC to begin production of the popular LA-2A compressor. Waveforms on the other hand expanded the product catalog into the area of precision audio test instruments. In light of these acquisitions, and in the anticipation of more, SEC was rebranded as United Recording Electronics Industries (UREI). Products would continue to carry their own brand names alongside the UREI badge until the mid-1970s, at which point the Universal Audio label was removed from Revision H of the 1176 compressor.
As part of Putnam's sale of United Western Recorders, UREI was acquired by JBL in 1984. JBL released a number of products, primarily equalizers, with the UREI label.
Revival
In 1999, Universal Audio (UA) was reestablished by Bill and Jim Putnam, the sons of Bill Putnam, Sr. A software-based sister company, Kind of Loud Technologies, was also co-founded by Bill Putnam, Jr. and Jonathan Abel, who had met at Stanford University through the Center for Computer Research in Music and Acoustics. The two companies merged to offer both hardware re-issues of classic Universal Audio and Teletronix recording products, and virtual emulations of audio equipment from a range of manufacturers, including officially-branded emulations of original UA and Teletronix products.
Products
The first product introduced by the re-established Universal Audio in 1999 was a re-issue of the 1176LN. The original design was reproduced and revised thanks to the extensive design notes left by Bill Putnam. The company subsequently re-issued an updated version of the Teletronix LA-2A.
UA introduced its line of Apollo audio interfaces in 2012. These interfaces offered onboard DSP that allowed signals to be monitored in realtime through UA plugins. Subsequent models of Apollo also incorporated a technology called Unison, which improved the authenticity of preamp emulations by matching both the impedance of the original hardware as well as its gain level "sweet spots".
At Winter NAMM 2020, UA announced that it would expand its Console software into a fully-featured DAW called LUNA. The software would be freely available to all owners of Thunderbolt Apollo interfaces.
Awards
The company has won several TEC Awards and a FutureMusic Platinum award.
See also
Bill Putnam
United Recording Electronics Industries (UREI)
References
External links
Official website
NAMM Oral History Program - Bill Putnam, Jr. Interview (2015)
Manufacturers of professional audio equipment
Music equipment manufacturers
Software companies based in California
Privately held companies based in Illinois
Audio equipment manufacturers of the United States
Software companies of the United States |
20830585 | https://en.wikipedia.org/wiki/Split-single%20engine | Split-single engine | In internal combustion engines, a split-single design is a type of two-stroke where two cylinders share a single combustion chamber.
The first production split-single engine was built in 1918 and the design was used on several motorcycles and cars until the mid 1950s, although one company continued producing split-single engines for motorcycles until 1970. During this time, the design was also occasionally used for engines with four or more cylinders.
Principle of operation
The split-single uses a two-stroke cycle (i.e. where every downward stroke produces power) with the following phases:
Pistons travel upwards, compressing the fuel-air mixture in both cylinders. A spark plug ignites the mixture when the pistons are near the top of the cylinders.
Pressure from the ignited air-fuel mixture pushes both pistons downwards. Near the bottom of the travel, the exhaust port in one cylinder becomes exposed, causing the exhaust gases to exit both cylinders. At the same time, the intake port is exposed in the other cylinder, causing a fresh air-fuel mixture (which has been compressed in the crankcase by the downward movement of the pistons) to be drawn into the cylinder for the next cycle.
Characteristics
The advantage of the split-single engine compared to a conventional two-stroke engine is that the split-single can give better exhaust scavenging while minimizing the loss of fresh fuel/air charge through the exhaust port. As a consequence, a split-single engine can deliver better economy, and may run better at small throttle openings. A disadvantage of the split-single is that, for only a marginal improvement over a single-cylinder engine, a split-single engine is larger, heavier and more expensive. Since a manufacturer could produce a conventional two-cylinder engine at similar cost to a split-single engine, a two-cylinder engine is usually a more space- and cost-effective design. Most split-single engines used a single combustion chamber (i.e. two cylinders); however, some engines used two combustion chambers (i.e. four cylinders) or more.
Initial designs of split-single engines from 1905 to 1939 used a single Y-shaped or V-shaped connecting rod. Externally, these engines appeared very similar to a conventional single-cylinder two-stroke engine; they had one exhaust, one carburetor in the usual place behind the cylinders, and one spark plug.
After World War II, more sophisticated internal mechanisms improved mechanical reliability and led to the carburetor being placed in front of the barrel, tucked under and to the side of the exhaust. An example of this arrangement was used on the 1953-1969 Puch 250 SGS.
Early engines using a "side-by-side" layout (with the carburetor in the "normal" place behind the cylinder) had similar lubrication and pollution problems as conventional two-stroke engines of the era, however the revised designs after World War II addressed these problems.
Pre-World War II examples
Lucas
The first split-single engine was the Lucas, built in the UK in 1905. It used 2 separate crankshafts connected by gears to drive 2 separate pistons, so that the engine had perfect primary balance.
Garelli
Between 1911 and 1914, Italian engineer Adalberto Garelli patented a split-single engine which used a single connecting rod and a long wrist pin that passed through both pistons. Garelli Motorcycles was formed after World War I and produced a split-single motorcycle engine for road use and racing from 1918 to 1926.
Trojan
The Trojan two-stroke, as used from 1913 in the Trojan car in the UK, was independently invented but would now be described as a split-single. Photos of a 1927 "twin" model at the London Science Museum show the internals. The "fore-and-aft" layout of the cylinders means that the V-shaped connecting rod has to flex slightly with each revolution. Unlike the German/Austrian motorcycle engines, this engine was water-cooled. The tax horsepower regulations in the United Kingdom resulted in a lower road tax for the Trojan compared with a conventional engine of similar displacement.
Trojan also made another split-single engine later with the cylinders arranged in a 'V' formation. The unusual 'V6' design had two split-single sets of cylinders (4 cylinders total) on one bank of the V and two scavenge blower cylinders on the other bank of the V.
Puch
After World War I ended, Austrian industry struggled to recover. Italian engineer Giovanni Marcellino arrived at the main factory of Puch with the instruction to wind up operations. Instead of liquidating the factory, he settled in the town and designed and began production of a new split-single engine, which debuted in the 1923 Puch LM racing motorcycle. Influenced by industrial opposed-piston engines, the Puch engine had asymmetric port timing and pistons arranged one behind the other (instead of the side-to-side arrangement used by Garelli). To avoid flexing of the connecting rod, the small-end bearing of the cooler intake piston was arranged to slide slightly fore-and-aft in the piston. In 1931 Puch won the German Grand Prix with a supercharged split-single. By 1935, a four-cylinder version of the Puch split-cylinder design was produced and used in motorcycles.
Motor racing
From 1931 until 1939, DKW racing motorcycles powered by split-single engines dominated the Lightweight and Junior racing classes.
At the 1931 and 1932 Indianapolis 500, Leon Duray competed with cars powered by the 16-cylinder Duray U16 engine using a split-cylinder design.
In 1935, the Monaco-Trossi Grand Prix car was built with a 16-cylinder radial engine using a split-cylinder design.
Post-World War II examples
Puch
Puch's split-single production and racing were restarted in 1949, and a split-single engine was used in the Puch 125T model.
The 1953-1969 Puch 250 SGS (sold in the United States by Sears as the "Allstate 250" or "Twingle") used an improved system with one connecting rod hinged on the back of the other. These engines typically use the forward piston to control both intake and exhaust ports, with the interesting result that the carburettor is at the front of the engine, under and to the side of the exhaust. The rear piston controls the transfer port from the crankcase to the cylinder. Increasingly, these models were fitted with an oil mixing pump, fed from a reservoir incorporated in the petrol tank. Some also have a twin-spark plug ignition system firing an almost figure-eight shaped combustion chamber. The improvements tamed, if not virtually eliminated, the previous problem of two-stroke plug fouling. A total of 38,584 Puch 250 SGS motorcycles were produced between 1953 and 1970.
Puch ceased production of split-single engines around 1970.
EMC Motorcycles
EMC Motorcycles in the United Kingdom manufactured a split-single engine that was used in the 1947-1952 EMC 350. After 1948 the engine also was fitted with an oil pump controlled by the throttle, which dispensed two-stroke oil into the fuel at a variable rate depending on throttle opening, instead of having to pre-mix oil in the fuel.
Iso Autoveicoli
The Italian manufacturer began producing a split-single engine in 1952 for their Iso Moto motorcycle. This engine was then used in the Iso Isetta bubble car from 1953–1956.
Triumph-Werke Nürnberg
Triumph-Werke Nürnberg (TWN) in Germany began production of split-single engines for their motorcycles in 1946. The TWN BDG 250 and TWN BDG 500 models, produced from 1946 to 1957, used a Y-shaped connecting rod, so the pistons are "side-by-side", making the engine little different visually from a regular two-stroke. Other split-single models from TWN were the 1954-1957 TWN Cornet (200 cc with 12 volt electrics and no kickstart), the 1953-1957 TWN Boss (350 cc) and the 1954-1957 Contessa scooter (200 cc). The bulbous shape of the exhaust of the Cornet and Boss is a two-stroke TWN feature, not linked to the split-single engine. All TWN motorcycle production ceased in 1957.
Further development
The phase shift between the pistons also allows the use of a supercharger, which may be effective for two-stroke diesel engines.
See also
List of motorcycles by type of engine
Scuderi engine
References
Piston engine configurations
Two-stroke engine technology |
634143 | https://en.wikipedia.org/wiki/Disk%20cloning | Disk cloning | Disk cloning is the process of creating a 1-to-1 copy of a hard disk drive (HDD) or solid-state drive (SSD), not just its files. Disk cloning may be used for upgrading a disk or replacing an aging disk with a fresh one. In this case, the clone can replace the original disk in its host computer. Disk cloning may also be used for disaster recovery or forensics. In the context of backup software, disk cloning is very similar to disk imaging; in case of the latter, a 1-to-1 copy of a disk is created inside a disk image file.
Disk cloning may be done with specialized cloning software, backup software, disk imaging software that has the necessary features, or specialized hardware.
Operating environment
A disk cloning program needs to be able to read even protected operating system files on the source disk, and must guarantee that the system is in a consistent state at the time of reading. It must also overwrite any operating system already present on the destination disk. To simplify these tasks, most disk cloning programs can run under an operating system different from the native operating system of the host computer, for example, MS-DOS or an equivalent such as PC DOS or DR-DOS, or Linux. The computer is booted from this operating system, the cloning program is loaded, and it copies the computer's file system. Many programs can clone a disk, or make an image, from within the running system, with special provision for copying open files; however, an image cannot be restored onto the Windows system drive while Windows is running.
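At its simplest, the copy itself is a byte-for-byte read of the source disk as a raw block device, written unchanged to a destination disk or image file. The sketch below shows only that idea; it assumes a Unix-like system with hypothetical device paths and omits the consistency, open-file, and driver handling discussed above, so it is not safe to run against a mounted system disk.

import sys

CHUNK = 4 * 1024 * 1024   # copy in 4 MiB blocks

def clone(source, destination):
    # Byte-for-byte copy from a block device (e.g. /dev/sdb) to a disk or image file
    copied = 0
    with open(source, "rb") as src, open(destination, "wb") as dst:
        while True:
            block = src.read(CHUNK)
            if not block:
                break
            dst.write(block)
            copied += len(block)
    return copied

if __name__ == "__main__":
    # Hypothetical usage: python clone.py /dev/sdb backup.img
    total = clone(sys.argv[1], sys.argv[2])
    print(f"copied {total} bytes")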
A disk cloning program must have device drivers or equivalent for all devices used. The manufacturers of some devices do not provide suitable drivers, so the manufacturers of disk cloning software must write their own drivers, or include device access functionality in some other way. This applies to tape drives, CD and DVD readers and writers, and USB and FireWire drives. Cloning software contains its own TCP/IP stack for multicast transfer of data where required.
Image transfer
The simplest method of cloning a disk is to have both the source and destination disks present in the same machine, but this is often not possible. Disk cloning programs can link two computers by a parallel cable, or save and load images to an external USB drive or network drive. As disk images tend to be very large (usually a minimum of several hundred MB), performing several clones at a time puts excessive stress on a network. The solution is to use multicast technology. This allows a single image to be sent simultaneously to many machines without putting greater stress on the network than sending an image to a single machine.
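The multicast idea can be sketched as follows, using a hypothetical group address and port and ignoring the chunking, ordering, and retransmission that real cloning tools add on top: the sender transmits each block once to a multicast group, and every machine that has joined the group receives it.

import socket
import struct

GROUP, PORT = "239.1.2.3", 5007   # hypothetical multicast group and port

def send_block(data):
    # Transmit one block to every receiver that has joined the group
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 2)
    sock.sendto(data, (GROUP, PORT))
    sock.close()

def receive_blocks():
    # Join the multicast group and print the size of each block received
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    membership = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, membership)
    while True:
        block, _addr = sock.recvfrom(65535)
        print(f"received {len(block)} bytes")

# send_block(b"disk image chunk 0001")   # run on the sending machine
# receive_blocks()                       # run on each receiving machine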
Image manipulation
Although disk cloning programs are not primarily backup programs, they are sometimes used as such. A key feature of a backup program is to allow the retrieval of individual files without needing to restore the entire backup. Disk cloning programs either provide a Windows Explorer-like program to browse image files and extract individual files from them, or allow an image file to be mounted as a read-only filesystem within Windows Explorer.
Some such programs allow deletion of files from images, and addition of new files.
Cloning software
This table highlights the common capabilities of disk cloning software.
See also
List of backup software
List of data recovery software
List of disk partitioning software
Disk mirroring
Disk image
Live USB
Recovery disc
Security Identifier
Notes
References
Storage software
Backup
Utility software types |
41471518 | https://en.wikipedia.org/wiki/Kill%20chain | Kill chain | The term kill chain is a military concept which identifies the structure of an attack. It consists of:
identification of target
dispatching of forces to target
initiation of attack on target
destruction of target
Conversely, the idea of "breaking" an opponent's kill chain is a method of defense or preemptive action.
Military
F2T2EA
One military kill chain model is the "F2T2EA", which includes the following phases:
Find: Identify a target. Find a target within surveillance or reconnaissance data or via intelligence means.
Fix: Fix the target's location. Obtain specific coordinates for the target either from existing data or by collecting additional data.
Track: Monitor the target's movement. Keep track of the target until either a decision is made not to engage the target or the target is successfully engaged.
Target: Select an appropriate weapon or asset to use on the target to create desired effects. Apply command and control capabilities to assess the value of the target and the availability of appropriate weapons to engage it.
Engage: Apply the weapon to the target.
Assess: Evaluate effects of the attack, including any intelligence gathered at the location.
This is an integrated, end-to-end process described as a "chain" because an interruption at any stage can interrupt the entire process.
Previous terminology
The "Four Fs" is a military term used in the United States military, especially during World War II.
Designed to be easy to remember, the "Four Fs" are as follows:
Find the enemy – Locate the enemy.
Fix the enemy – Pin them down with suppressing fire.
Fight the enemy – Engage the enemy in combat or flank the enemy – Send soldiers to the enemy's sides or rear.
Finish the enemy – Eliminate all enemy combatants.
Proposed terminology
The "Five Fs" is a military term described by Maj. Mike "Pako" Benitez, an F-15E Strike Eagle Weapons Systems Officer who served in the United States Air Force and the United States Marine Corps.
Designed to update the Kill Chain to reflect updated, autonomous and semi-autonomous weapon systems, the "Five Fs" are described in "It's About Time: The Pressing Need to Evolve the Kill Chain" as follows:
Find encapsulates the unity of effort of Joint Intelligence Preparation of the Operating Environment, matching collection assets to commander's intent and targeted areas of interest. This inevitably leads to detections, which may be further classified as an emerging target if it meets the intent.
Fix is doctrinally described as "identifying an emerging target as worthy of engagement and determines its position and other data with sufficient fidelity to permit engagement."
Fire involves committing forces or resources (i.e., releasing a munition, payload, or expendable)
Finish involves employment with strike approval authorities (i.e., striking a target/firing directed energy/destructive electronic attack). This is similar to a ground element executing maneuvers to contact but then adhering to prescribed rules of engagement once arriving at the point of friction.
Feedback closes the operational OODA Loop with an evaluative step, in some circumstances referred to as "Bomb Damage Assessment".
North Korean nuclear capability
A new American military contingency plan called "Kill Chain" is reportedly the first step in a new strategy to use satellite imagery to identify North Korean launch sites, nuclear facilities and manufacturing capability and destroy them pre-emptively if a conflict seems imminent. The plan was mentioned in a joint statement by the United States and South Korea.
Cyber
Attack phases and countermeasures
More recently, Lockheed Martin adapted this concept to information security, using it as a method for modeling intrusions on a computer network. The cyber kill chain model has seen some adoption in the information security community. However, acceptance is not universal, with critics pointing to what they believe are fundamental flaws in the model.
Computer scientists at Lockheed Martin described a new "intrusion kill chain" framework or model to defend computer networks in 2011. They wrote that attacks may occur in phases and can be disrupted through controls established at each phase. Since then, the "cyber kill chain" has been adopted by data security organizations to define phases of cyberattacks.
A cyber kill chain reveals the phases of a cyberattack: from early reconnaissance to the goal of data exfiltration. The kill chain can also be used as a management tool to help continuously improve network defense. According to Lockheed Martin, threats must progress through several phases in the model, including:
Reconnaissance: Intruder selects target, researches it, and attempts to identify vulnerabilities in the target network.
Weaponization: Intruder creates remote access malware weapon, such as a virus or worm, tailored to one or more vulnerabilities.
Delivery: Intruder transmits weapon to target (e.g., via e-mail attachments, websites or USB drives)
Exploitation: Malware weapon's program code triggers, which takes action on target network to exploit vulnerability.
Installation: Malware weapon installs access point (e.g., "backdoor") usable by intruder.
Command and Control: Malware enables intruder to have "hands on the keyboard" persistent access to target network.
Actions on Objective: Intruder takes action to achieve their goals, such as data exfiltration, data destruction, or encryption for ransom.
Defensive courses of action can be taken against these phases:
Detect: Determine whether an intruder is present.
Deny: Prevent information disclosure and unauthorized access.
Disrupt: Stop or change outbound traffic (to attacker).
Degrade: Counter-attack command and control.
Deceive: Interfere with command and control.
Contain: Network segmentation changes
A U.S. Senate investigation of the 2013 Target Corporation data breach included analysis based on the Lockheed-Martin kill chain framework. It identified several stages where controls did not prevent or detect progression of the attack.
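Used as a management tool in this way, the kill chain lends itself to a simple coverage exercise: record which defensive controls address each phase and flag the phases left uncovered. The sketch below is a hypothetical illustration with invented control names, not an artifact of the Target investigation or of Lockheed Martin.

KILL_CHAIN_PHASES = [
    "Reconnaissance", "Weaponization", "Delivery", "Exploitation",
    "Installation", "Command and Control", "Actions on Objective",
]

# Hypothetical mapping of deployed controls to the phase each one addresses
deployed_controls = {
    "Delivery": ["email attachment sandboxing", "web proxy filtering"],
    "Exploitation": ["endpoint protection"],
    "Command and Control": ["egress traffic monitoring"],
}

def coverage_report(controls):
    # Print one line per phase, flagging phases with no deployed control
    for phase in KILL_CHAIN_PHASES:
        names = controls.get(phase, [])
        status = ", ".join(names) if names else "no control (potential gap)"
        print(f"{phase:<22} {status}")

coverage_report(deployed_controls)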
Alternatives
Different organizations have constructed their own kill chains to try to model different threats. FireEye proposes a linear model similar to Lockheed-Martin's. In FireEye's kill chain the persistence of threats is emphasized. This model stresses that a threat does not end after one cycle.
Reconnaissance
Initial intrusion into the network
Establish a backdoor into the network.
Obtain user credentials.
Install various utilities.
Privilege escalation/ lateral movement/ data exfiltration
Maintain persistence.
Critiques
Among the critiques of Lockheed Martin's cyber kill chain model as a threat assessment and prevention tool is that the first phases happen outside the defended network, making it difficult to identify or defend against actions in these phases. Similarly, this methodology is said to reinforce traditional perimeter-based and malware-prevention-based defensive strategies. Others have noted that the traditional cyber kill chain isn't suitable to model the insider threat. This is particularly troublesome given the likelihood of successful attacks that breach the internal network perimeter, which is why organizations "need to develop a strategy for dealing with attackers inside the firewall. They need to think of every attacker as [a] potential insider".
Unified
The Unified Kill Chain was developed in 2017 by Paul Pols in collaboration with Fox-IT and Leiden University to overcome common critiques against the traditional cyber kill chain, by uniting and extending Lockheed Martin's kill chain and MITRE's ATT&CK framework. The unified version of the kill chain is an ordered arrangement of 18 unique attack phases that may occur in an end-to-end cyberattack, covering activities that occur outside and within the defended network. As such, the unified kill chain improves over the scope limitations of the traditional kill chain and the time-agnostic nature of tactics in MITRE's ATT&CK. The unified model can be used to analyze, compare, and defend against end-to-end cyber attacks by advanced persistent threats (APTs). A subsequent whitepaper on the unified kill chain was published in 2021.
References
Crime prevention
Data security
National security
Security |
43930191 | https://en.wikipedia.org/wiki/Network%20Access%20License | Network Access License | The Network Access License (NAL) is mandatory for telecommunication equipment that is exported to or sold in China. This license applies to telecommunication equipment that is connected to the public telecommunication network.
To receive the Network Access License, an application has to be submitted to the Ministry of Industry and Information Technology (MIIT) in Beijing. Among other responsibilities, the Ministry of Industry and Information Technology is responsible for the Chinese regulation and development of the Internet, wireless, broadcasting, communications, and the production of electronic and information goods, and for the promotion of the national knowledge economy.
History
In 2001, the Chinese authorities published the first three product categories requiring an NAL. Since then, about 25 product categories, covering about 300 different kinds of telecommunication devices, have been added to the product catalogue. Furthermore, NAL products are categorized into basic and high-end equipment.
As of August 2014, the Ministry of Industry and Information Technology had issued 495 network access licenses for telecom equipment.
As of 2014, there were only 14 test laboratories authorized to test the telecommunication equipment. Most of these test laboratories specialize in particular product categories.
Application process
The application process consists of four steps:
Submission of application documents
Product tests
Factory inspection
License issue
The length of time required to obtain an NAL varies according to the product for which the license is sought. Based on current regulations, it usually takes 20 days for testing and 60 days for processing an application. Nevertheless, for some products it may take longer.
See also
Ministry of Industry and Information Technology of the People's Republic of China (MIIT)
Standardization Administration of China (SAC)
Electronic information industry in China
Ministries of China
References
Certification marks
Export and import control
Economy of China
Safety codes
Foreign trade of China
Organizations based in Beijing |
7668948 | https://en.wikipedia.org/wiki/Amiexpress | Amiexpress | AmiExpress - also known as /X - by Synthetic Technologies was a popular BBS software application for the Commodore Amiga line of computers. AmiExpress was extremely popular among the warez scene for trading (exchanging) software.
AmiExpress was created and updated between 1992 and 1995. The software was originally written by Michael Thomas of Synthetic Technologies and later sold to Joseph Hodge of Lightspeed Technologies. Mike Thomas worked on AmiExpress for about two years, modelling the software after the commercial PC BBS software PCBoard. He first ran a BBS on PCBoard on a PC himself, but he was not happy with the PC platform in general and decided to make a comparable product on the Amiga.
A Usenet post (by /X author Joseph Hodge) later stated that both programming on /X and the developer company (LightSpeed Technologies Inc.) were to be dissolved, with plans for a new bulletin board system - Millennium BBS. This never surfaced.
In 2018 AmiExpress was revived by Darren Coles. He obtained permission from Joseph Hodge to continue development of the product and to continue using the name AmiExpress. Version 5.0.0 was released publicly at the end of 2018. This version was re-written in Amiga-E by taking the publicly released source code for v3 and reverse-engineering the new functionality present in v4.20. It is highly backwards compatible with the v4.x versions, adds many new features, and its source code is available on GitHub.
See also
Warez
Bulletin Board System
References
External links
AmiExpress information and live demonstration
Lightspeed Technologies AmiExpress Professional 4.0 software & source code download
Bulletin board system software
Amiga software |
40941507 | https://en.wikipedia.org/wiki/Safefood%20360%C2%B0 | Safefood 360° | Safefood 360, Inc. is a food safety management software company founded in Dublin, Ireland, and now headquartered in Manhattan, New York, United States. The main product of Safefood 360, Inc. is an internet-based food safety management system, which is used by food manufacturing businesses for managing food safety programs.
History
Safefood 360° was founded by former food industry consultant George Howlett and Philip Gillen in 2011. The company operates globally on four continents: Europe, North America, Australia, and Africa.
Products
Food safety management software
The main product of Safefood 360° is an online food safety management platform that is accessed through a web browser. The software is used for collecting, storing, managing, and analyzing food safety-related data and documents, creating food safety audit programs, and managing a food supply chain, among other uses. According to the vendor, the software is aligned with major international food safety standards, including FSNA, GFSI, ISO 22000, and BRC.
Software modules
HACCP Planning
Calibration
Monitoring
Management
Utilities
Software uses
The three main uses of Safefood 360° software are food safety management, food supply chain management, and food safety auditing. Food safety management software is typically used to replace a paper-based system.
See also
Food safety
ISO 22000
Global Food Safety Initiative
Hazard analysis and critical control points
References
External links
Safefood 360 User Guide
Safefood 360 User Portal
Software companies based in New York City
Software companies of the United States |
31324140 | https://en.wikipedia.org/wiki/JAUS%20Tool%20Set | JAUS Tool Set | The JAUS Tool Set (JTS) is a software engineering tool for the design of software services used in a distributed computing environment. JTS provides a Graphical User Interface (GUI) and supporting tools for the rapid design, documentation, and implementation of service interfaces that adhere to the Society of Automotive Engineers' standard AS5684A, the JAUS Service Interface Design Language (JSIDL). JTS is designed to support the modeling, analysis, implementation, and testing of the protocol for an entire distributed system.
Overview
The JAUS Tool Set (JTS) is a set of open source software specification and development tools accompanied by an open source software framework to develop Joint Architecture for Unmanned Systems (JAUS) designs and compliant interface implementations for simulations and control of robotic components per SAE-AS4 standards. JTS consists of the following components:
GUI-based Service Editor: The Service Editor (referred to simply as the GUI) provides a user-friendly interface with which a system designer can specify and analyze formal specifications of Components and Services defined using the JAUS Service Interface Definition Language (JSIDL).
Validator: A syntactic and semantic validator, integrated into the GUI, provides on-the-fly validation of specifications entered (or imported) by the user with respect to JSIDL syntax and semantics.
Specification Repository: A repository (or database) integrated into the GUI that allows for the storage of existing formal specifications and encourages their reuse.
C++ Code Generator: The Code Generator automatically generates C++ code that has a 1:1 mapping to the formal specifications. The generated code includes all aspects of the service, including the implementations of marshallers and unmarshallers for messages, and implementations of finite-state machines for protocol behavior that are effectively decoupled from application behavior (a simplified marshalling sketch follows this list).
Document Generator: The Document Generator automatically generates documentation for sets of Service Definitions. Documents may be generated in several formats.
Software Framework: The software framework implements the transport layer specification AS5669A, and provides the interfaces necessary to integrate the auto-generated C++ code with the transport layer implementation. Present transport options include UDP and TCP in wired or wireless networks, as well as serial connections. The transport layer itself is modular, and allows end-users to add additional support as needed.
Wireshark Plugin: A plugin for the popular network protocol analyzer Wireshark that allows for the live capture and offline analysis of JAUS message-based traffic across the wire at runtime. A built-in repository facilitates easy reuse of service interfaces and implementations.
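As a rough illustration of what generated marshalling code does conceptually, the sketch below packs an invented message layout into its binary wire form and unpacks it again. It is hypothetical: the real JTS output is C++ generated from JSIDL service specifications, and the field layout here is not an actual JAUS message.

import struct

# Invented example layout: a little-endian header (message ID, sequence number)
# followed by a 32-bit float payload field.
_FORMAT = "<HHf"

def marshal(message_id, sequence, value):
    # Pack the fields into their binary wire representation
    return struct.pack(_FORMAT, message_id, sequence, value)

def unmarshal(data):
    # Recover the fields from the wire representation
    message_id, sequence, value = struct.unpack(_FORMAT, data)
    return {"message_id": message_id, "sequence": sequence, "value": value}

wire = marshal(0x4202, 7, 3.5)
assert unmarshal(wire)["sequence"] == 7
print(wire.hex(), unmarshal(wire))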
The JAUS Tool Set can be downloaded from www.jaustoolset.org. User documentation and a community forum are also available at the site.
Release history
Following a successful Beta test, Version 1.0 of the JAUS Tool Set was released in July 2010. The initial offering focused on the core areas of the user interface, HTML document generation, C++ code generation, and the software framework. The Version 1.1 update was released in October 2010. In addition to bug fixes and UI improvements, this version offered several important upgrades, including enhancements to the Validator, the Wireshark plug-in, and the generated code.
The JTS 2.0 release is scheduled for the second quarter of 2011 and further refines the Tool Set functionality:
Protocol Validation: Currently, JTS provides validation for message creation, to ensure users cannot create invalid message specifications. That capability does not currently exist for protocol definitions, but is being added. This will help ensure that users create all necessary elements of a service definition, and reduce user error.
C# and Java Code Generation: Currently, JTS generates cross-platform C++ code. However, other languages including Java and C# are seeing a dramatic increase in their use in distributed systems, particularly in the development of graphical clients to embedded services.
MS Word Document Generation: HTML and JSIDL output is supported, but native Office-Open-XML (OOXML) based MS Word generation has advantages in terms of output presentation, and ease of use for integration with other documents. Therefore, we plan to integrate MS Word service document generation.
In addition, the development team has several additional goals that are not yet scheduled for a particular release window:
Protocol Verification: This involves converting the JSIDL definition of a service into a PROMELA model, for validation by the SPIN model checking tool. Using PROMELA to model client and server interfaces will allow developers to formally validate JAUS services.
End User Experience: We plan to conduct formal User Interface testing. This involves defining a set of tasks and use cases, asking users with various levels of JAUS experience to accomplish those tasks, and measuring performance and collecting feedback, to look for areas where the overall user experience can be improved.
Improved Service Re-Use: JSIDL allows for inheritance of protocol descriptions, much like object-oriented programming languages allow child classes to re-use and extend behaviors defined by the parent class. At present, the generated code 'flattens' these state machines into a series of nested states which gives the correct interface behavior, but only if each single leaf (child) service is generated within its own component. This limits service re-use and can lead to a copy-and-paste of the same implementation across multiple components. The team is evaluating other inheritance solutions that would allow for multiple leaf (child) services to share access to a common parent, but at present the approach is sufficient to address the requirements of the JAUS Core Service Set.
Domains and application
The JAUS Tool Set is based on the JAUS Service Interface Definition Language (JSIDL), which was originally developed for application within the unmanned systems, or robotics, communities. As such, JTS has quickly gained acceptance as a tool for generation of services and interfaces compliant with the SAE AS-4 "JAUS" publications. Although usage statistics are not available, the Tool Set has been downloaded by representatives of US Army, Navy, Marines, and numerous defense contractors. It was also used in a commercial product called the JAUS Expansion Module sold by DeVivo AST, Inc.
Since the JSIDL schema is independent of the data being exchanged, however, the Tool Set can be used for the design and implementation of a Service Oriented Architecture for any distributed systems environment that uses binary encoded message exchange. JSIDL is built on a two-layered architecture that separates the application layer and the transport layer, effectively decoupling the data being exchanged from the details of how that data moves from component to component.
Furthermore, since the schema itself is largely generic, it is possible to define messages for any number of domains, including but not limited to industrial control systems, remote monitoring and diagnostics, and web-based applications.
Licensing
JTS is released under the open source BSD license. The JSIDL Standard is available from the SAE. The Jr Middleware on which the Software Framework (Transport Layer) is based is open source under LGPL. Other packages distributed with JTS may have different licenses.
Sponsors
Development of the JAUS Tool Set was sponsored by several United States Department of Defense organizations:
Office of Under Secretary of Defense for Acquisition, Technology & Logistics / Unmanned Warfare.
Navy Program Executive Officer Littoral and Mine
Navy Program Executive Officer Unmanned Aviation and Strike Weapons
Office of Naval Research
Air Force Research Lab
References
External links
jaustoolset.org: Homepage for the JAUS Tool Set
sae.org: Publishers of the SAE AS-4 JAUS family of standards, including JSIDL (AS-5684)
jrmiddleware.org: Homepage for the JR Middleware, the LGPL source code used by the JTS Software Framework
Vehicle design
Programming tools |
619881 | https://en.wikipedia.org/wiki/IBM%20TopView | IBM TopView | TopView is the first object-oriented, multitasking, and windowing personal computer operating environment for PC DOS developed by IBM, announced in August 1984 and shipped in March 1985. TopView provided a text-mode (although it also ran in graphics mode) operating environment that allowed users to run more than one application at the same time on a PC. IBM demonstrated an early version of the product to key customers before making it generally available, around the time they shipped their new PC AT computer.
Hopeful beginnings
When Microsoft announced Windows 1.0 in November 1983, International Business Machines (IBM), Microsoft's important partner in popularizing MS-DOS for the IBM PC, notably did not announce support for the forthcoming window environment. IBM determined that the microcomputer market needed a multitasking environment. When it released TopView in 1985, the press speculated that the software was the start of IBM's plan to increase its control over the IBM PC (even though IBM published the specifications publicly) by creating a proprietary operating system for it, similar to what IBM had offered for years on its larger computers. TopView also allowed IBM to serve customers who were surprised that the new IBM AT did not come with an operating system able to use the hardware multitasking and protected mode features of the new 80286 CPU, as DOS and most applications were still running in 8086/8088 real mode.
Even given TopView's virtual memory management capabilities, hardware limitations still held the new environment back—a base AT with 256 KB of RAM only had room for 80 KB of application code and data in RAM once DOS and TopView had loaded up. 512-640 KB was recommended to load up two typical application programs of the time. This was the maximum the earlier IBM XT could have installed. Once loaded, TopView took back much of the memory consumed by DOS, but still not enough to satisfy industry critics. TopView ran in real mode on any x86 processor and could run well behaved DOS programs (i.e. programs that did not write directly to the screen but used BIOS int 10h and DOS int 21h, such as the IBM Assistant Series of productivity programs) in an arrangement of windows. Well behaved applications would use standard DOS and BIOS function calls to access system services and hardware. Misbehaving programs (i.e. such as programs that did write directly to the screen) such as Lotus 1-2-3, WordStar and dBase III would still run in the TopView environment, but would consume the entire screen. Object-oriented applications were written using the TopView API. TopView was developed to run on the 8088 (and required what IBM referred to as a fixed disk) and later the 80286. TopView was not updated to make use of the virtual 8086 mode added in the Intel 80386 processors that allowed better virtualization.
Initially, compatibility with the extended features was limited mainly to IBM applications, along with a few third-party products like WordPerfect and VolksWriter. A chicken-and-egg situation developed where third-party developers were reluctant to add extended feature support (such as block insert and delete to allow users to do cut/copy/paste between applications) when they did not see market demand for them. Most DOS programs did, however, support these functions and did allow the user to perform the cut, copy, and paste operations by using the TopView pop-up menus.
Some believed that IBM planned to use TopView to force reliance on them to comply with the new technical specifications. As later versions of TopView were released, it was able to successfully make more challenging DOS apps run in a multitasking fashion by intercepting direct access to system services and hardware.
TopView first introduced Program Information Files (PIF files), which defined how a given DOS program should be run in a multi-tasking environment, notably to avoid giving it unnecessary resources which could remain available to other programs. TopView's PIF files were inherited and extended by Quarterdeck's DESQview and Microsoft Windows. The concept of Program Information Files was also used under Digital Research operating systems such as Concurrent DOS, Multiuser DOS, Datapac System Manager and REAL/32; however, using the PIFED command, the necessary program information got directly embedded into the .EXE or .COM executable file.
Version history
Version 1.1, introduced in June 1986, added support for the IBM PC Network and IBM 3270 terminal emulation. Importantly, support for swapping non-resident programs was added—onto the hard disk on all computers and into the high memory area on machines equipped with a 286 CPU. The initially poor support for DOS batch files was improved.
Version 1.12, introduced in April 1987, added support for the new IBM PS/2 series, their DOS 3.30 operating system, and their new PS/2 mice. It could also now use up to four serial ports.
Decline and discontinuation
TopView sold below expectations from the start, with many potential users already satisfied with cheaper, less memory-intensive TSR task switchers like Ready, Spotlight, and Borland Sidekick, which did not need a multitasking environment. TopView ran in graphics mode (TOPVIEW /G); however, this was rarely used. By mid-1987, IBM began to shift focus away from TopView and was promoting the use of OS/2 to developers and end users alike. OS/2 1.0, announced in April 1987 and made available later that December, was a pre-emptive multitasking, multithreading OS offered as a DOS alternative, which allowed one real-mode and multiple 16-bit protected-mode sessions to run at the same time on the PC/AT-based 80286. A graphical user interface (Presentation Manager) was added with OS/2 1.1 in October 1988. 1.1 could run with or without Presentation Manager, as well as in an embedded configuration with no screen, keyboard or mouse interface required. IBM officially stopped marketing the final release of TopView, version 1.12, on 3 July 1990. TopView's concept was carried forward by other DOS multitaskers, most notably Quarterdeck's DESQview, which retained TopView's user interface and many features, and added more, such as support for the special features of the 80286, 80386 and compatible processors, and, with DESQview/X (released in June 1992), a true GUI interface running on DOS. A variety of programs similar to TopView were also available, including one from Dynamical Systems called Mondrian, which Microsoft bought in 1986 with the stated intention of implementing TopView API compatibility in Windows, which never happened. Later, in April 1992, IBM introduced OS/2 2.0, which included virtual 8086 mode and full 32-bit support for the Intel 80386, superseding even DESQview and other similar environments. OS/2 2.0 was a priority-based, preemptive multitasking, multithreading OS with 32 levels of priority (from time-critical to idle time) for the 386.
TopView requires IBM PC DOS versions 2.0 to 5.0 or MS-DOS 2.0 to 6.0, and will not run with later releases.
Key contributors to TopView included David Morrill (the "father of TopView", whose GLASS project was code-named "Orion" once it was moved to Boca Raton), Dennis McKinley (tasking), Ross Cook (memory management), Bob Hobbs (TopView Toolkit) and Neal Whitten (product manager). Bill Gates, Steve Ballmer, Gordon Letwin and other key Microsoft executives accepted an invitation from IBM executive Don Estridge to IBM Boca Raton to see a special demonstration of TopView. Gates was disturbed that Windows did not have the multitasking (Windows used a cooperative method to share the CPU) and windowing capabilities (i.e. overlapping windows, etc.) that TopView had. Gates watched TopView run multiple copies of the Microsoft BASIC interpreter in windows (overlapping and side-by-side) in a multitasking fashion. Microsoft later released MS-DOS 4.0 (multitasking) based on what it learned from the meeting. Even though there was no joint development agreement with Microsoft for the development of TopView, Estridge asked and later told Whitten (against Whitten's and the TopView team's wishes) to turn over all source code and documentation of TopView to Microsoft. Within a short time after the meeting, Estridge's request was granted. Gates gave the code and documentation to a group headed by Nathan Myhrvold. Once the code had been modified according to Gates' specifications, he purchased the company. The product itself, Mondrian, was never released. Gates, however, gave members of the team key positions at Microsoft. This led to a Joint Development Agreement with Microsoft (an agreement that previously only included DOS) to co-develop OS/2 (an agreement that lasted until 1990). This was all done in order to satisfy the USA vs. IBM anti-trust court case that was filed in 1969. Even though it was dismissed in 1982, IBM was mired in antitrust troubles for more than a decade after the dismissal and did not recover from the legal morass until the early to mid-90s. In June 1990, an FTC probe was launched into possible collusion between Microsoft and IBM in the PC software market.
Reception
InfoWorld in 1985 described TopView as "bland, plain vanilla software that hogs far too much memory". BYTE also criticized TopView's memory usage, but stated that "you will find that most software written for the IBM PC is TopView-compatible". Noting the low price and "innovative multitasking features", the magazine predicted that the software "will attract a lot of takers".
In 1985, Digital Research positioned their multitasking Concurrent DOS 4.1 with GEM as an alternative to TopView.
See also
DOS Shell
MS-DOS 4.0 (multitasking)
OS/2
Visi On
VM/386
References
External links
TopView: An early multi-tasking OS for the PC - a history of TopView by its lead developer
DOS software
TopView
Operating system APIs
Process (computing) |
2273532 | https://en.wikipedia.org/wiki/Logical%20Volume%20Manager%20%28Linux%29 | Logical Volume Manager (Linux) | In Linux, Logical Volume Manager (LVM) is a device mapper framework that provides logical volume management for the Linux kernel. Most modern Linux distributions are LVM-aware to the point of being able to have their root file systems on a logical volume.
Heinz Mauelshagen wrote the original LVM code in 1998, when he was working at Sistina Software, taking its primary design guidelines from the HP-UX's volume manager.
Uses
LVM is used for the following purposes:
Creating single logical volumes of multiple physical volumes or entire hard disks (somewhat similar to RAID 0, but more similar to JBOD), allowing for dynamic volume resizing.
Managing large hard disk farms by allowing disks to be added and replaced without downtime or service disruption, in combination with hot swapping.
On small systems (like a desktop), instead of having to estimate at installation time how big a partition might need to be, LVM allows filesystems to be easily resized as needed.
Performing consistent backups by taking snapshots of the logical volumes.
Encrypting multiple physical partitions with one password.
LVM can be considered as a thin software layer on top of the hard disks and partitions, which creates an abstraction of continuity and ease-of-use for managing hard drive replacement, repartitioning and backup.
Features
Basic functionality
Volume groups (VGs) can be resized online by absorbing new physical volumes (PVs) or ejecting existing ones.
Logical volumes (LVs) can be resized online by concatenating extents onto them or truncating extents from them.
LVs can be moved between PVs.
Creation of read-only snapshots of logical volumes (LVM1), leveraging a copy on write (CoW) feature, or read/write snapshots (LVM2)
VGs can be split or merged in situ as long as no LVs span the split. This can be useful when migrating whole LVs to or from offline storage.
LVM objects can be tagged for administrative convenience.
VGs and LVs can be made active as the underlying devices become available through use of the lvmetad daemon.
Advanced functionality
Hybrid volumes can be created using the dm-cache target, which allows one or more fast storage devices, such as flash-based SSDs, to act as a cache for one or more slower hard disk drives.
Thinly provisioned LVs can be allocated from a pool.
On newer versions of device mapper, LVM is integrated with the rest of device mapper enough to ignore the individual paths that back a dm-multipath device if devices/multipath_component_detection=1 is set in lvm.conf. This prevents LVM from activating volumes on an individual path instead of the multipath device.
RAID
LVs can be created to include RAID functionality, including RAID 1, 5 and 6.
Entire LVs or their parts can be striped across multiple PVs, similarly to RAID 0.
A RAID 1 backend device (a PV) can be configured as "write-mostly", resulting in reads being avoided to such devices unless necessary.
Recovery rate can be limited using lvchange --raidmaxrecoveryrate and lvchange --raidminrecoveryrate to maintain acceptable I/O performance while rebuilding an LV that includes RAID functionality.
High availability
The LVM also works in a shared-storage cluster in which disks holding the PVs are shared between multiple host computers, but can require an additional daemon to mediate metadata access via a form of locking.
CLVM
A distributed lock manager is used to broker concurrent LVM metadata accesses. Whenever a cluster node needs to modify the LVM metadata, it must secure permission from its local clvmd, which is in constant contact with other clvmd daemons in the cluster and can communicate a desire to get a lock on a particular set of objects.
HA-LVM
Cluster-awareness is left to the application providing the high availability function. For the LVM's part, HA-LVM can use CLVM as a locking mechanism, or can continue to use the default file locking and reduce "collisions" by restricting access to only those LVM objects that have appropriate tags. Since this simpler solution avoids contention rather than mitigating it, no concurrent accesses are allowed, so HA-LVM is considered useful only in active-passive configurations.
lvmlockd
lvmlockd is a stable LVM component designed to replace clvmd by making the locking of LVM objects transparent to the rest of LVM, without relying on a distributed lock manager. It saw massive development during 2016.
The above described mechanisms only resolve the issues with LVM's access to the storage. The file system selected to be on top of such LVs must either support clustering by itself (such as GFS2 or VxFS) or it must only be mounted by a single cluster node at any time (such as in an active-passive configuration).
Volume group allocation policy
LVM VGs must contain a default allocation policy for new volumes created from them. This can later be changed for each LV using the lvconvert -A command, or on the VG itself via vgchange --alloc. To minimize fragmentation, LVM will attempt the strictest policy (contiguous) first and then progress toward the most liberal policy defined for the LVM object until allocation finally succeeds.
In RAID configurations, almost all policies are applied to each leg in isolation. For example, even if an LV has a policy of cling, expanding the file system will not result in LVM using a PV if it is already used by one of the other legs in the RAID setup. LVs with RAID functionality will put each leg on different PVs, making the other PVs unavailable to any other given leg. If this was the only option available, expansion of the LV would fail. In this sense, the logic behind cling will only apply to expanding each of the individual legs of the array.
Available allocation policies are:
Contiguous - forces all LEs in a given LV to be adjacent and ordered. This eliminates fragmentation but severely reduces an LV's expandability.
Cling - forces new LEs to be allocated only on PVs already used by an LV. This can help mitigate fragmentation as well as reduce vulnerability of particular LVs should a device go down, by reducing the likelihood that other LVs also have extents on that PV.
Normal - implies near-indiscriminate selection of PEs, but it will attempt to keep parallel legs (such as those of a RAID setup) from sharing a physical device.
Anywhere - imposes no restrictions whatsoever. Highly risky in a RAID setup as it ignores isolation requirements, undercutting most of the benefits of RAID. For linear volumes, it can result in increased fragmentation.
Implementation
Typically, the first megabyte of each physical volume contains a mostly ASCII-encoded structure referred to as an "LVM header" or "LVM head". Originally, the LVM head used to be written in the first and last megabyte of each PV for redundancy (in case of a partial hardware failure); however, this was later changed to only the first megabyte. Each PV's header is a complete copy of the entire volume group's layout, including the UUIDs of all other PVs and of LVs, and allocation map of PEs to LEs. This simplifies data recovery if a PV is lost.
In the 2.6-series of the Linux Kernel, the LVM is implemented in terms of the device mapper, a simple block-level scheme for creating virtual block devices and mapping their contents onto other block devices. This minimizes the amount of relatively hard-to-debug kernel code needed to implement the LVM. It also allows its I/O redirection services to be shared with other volume managers (such as EVMS). Any LVM-specific code is pushed out into its user-space tools, which merely manipulate these mappings and reconstruct their state from on-disk metadata upon each invocation.
To bring a volume group online, the "vgchange" tool (a simplified sketch follows the list below):
Searches for PVs in all available block devices.
Parses the metadata header in each PV found.
Computes the layouts of all visible volume groups.
Loops over each logical volume in the volume group to be brought online and:
Checks if the logical volume to be brought online has all its PVs visible.
Creates a new, empty device mapping.
Maps it (with the "linear" target) onto the data areas of the PVs the logical volume belongs to.
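The following is a much-simplified sketch of that activation logic. It is written in Python rather than the C of the real LVM tools, and the dictionary-based metadata and printed "activation" are stand-ins invented for illustration; the real vgchange parses LVM's on-disk metadata and loads tables into the kernel device mapper.

# Hypothetical, much-simplified model of the activation steps listed above.
def activate_vg(vg_name, headers):
    # Every PV header holds a copy of the whole VG layout, so any header
    # belonging to the VG describes it completely.
    copies = [h for h in headers if h["vg"] == vg_name]
    if not copies:
        raise RuntimeError("volume group not found: " + vg_name)
    layout, present = copies[0], {h["pv_uuid"] for h in copies}
    # Loop over the logical volumes in the volume group.
    for lv in layout["lvs"]:
        needed = {seg["pv"] for seg in lv["segments"]}
        if not needed <= present:          # are all of the LV's PVs visible?
            print("skipping", lv["name"], "(missing PVs)")
            continue
        # A new mapping: one "linear" target per segment, onto the PV data areas.
        table = [("linear", s["pv"], s["offset"], s["sectors"])
                 for s in lv["segments"]]
        print("activate", lv["name"], table)

headers = [  # pretend these were parsed from two scanned block devices
    {"vg": "vg0", "pv_uuid": "pv-A",
     "lvs": [{"name": "root",
              "segments": [{"pv": "pv-A", "offset": 2048, "sectors": 8192}]}]},
    {"vg": "vg0", "pv_uuid": "pv-B",
     "lvs": [{"name": "root",
              "segments": [{"pv": "pv-A", "offset": 2048, "sectors": 8192}]}]},
]
activate_vg("vg0", headers)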
To move an online logical volume between PVs on the same volume group, the "pvmove" tool (a similar sketch follows this list):
Creates a new, empty device mapping for the destination.
Applies the "mirror" target to the original and destination maps. The kernel will start the mirror in "degraded" mode and begin copying data from the original to the destination to bring it into sync.
Replaces the original mapping with the destination when the mirror comes into sync, then destroys the original.
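A schematic rendering of that sequence, again in Python with invented stand-in structures in place of real device-mapper calls:

# Hypothetical sketch of the pvmove sequence described above. Real pvmove
# programs the kernel device mapper; a plain dict stands in for its tables.
def pvmove(dm_tables, lv, destination):
    original = dm_tables[lv]                 # current mapping onto the source PV
    # 1. Create a new, empty mapping for the destination data area.
    dm_tables[lv + "_pvmove"] = destination
    # 2. Layer a "mirror" over original and destination; the kernel would now
    #    copy data in the background until the two legs are in sync.
    dm_tables[lv] = {"target": "mirror", "legs": [original, destination]}
    synced = True                            # pretend the background copy finished
    # 3. Once in sync, replace the mapping with the destination alone and drop
    #    the temporary entry; the original mapping is simply discarded.
    if synced:
        dm_tables[lv] = destination
        del dm_tables[lv + "_pvmove"]

tables = {"vg0-root": ("linear", "pv-A", 2048)}
pvmove(tables, "vg0-root", ("linear", "pv-B", 2048))
print(tables)   # vg0-root now maps onto pv-B; users of the LV never noticed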
These device mapper operations take place transparently, without applications or file systems being aware that their underlying storage is moving.
Caveats
Until Linux kernel 2.6.31, write barriers were not supported (fully supported in 2.6.33). This means that the guarantee against filesystem corruption offered by journaled file systems like ext3 and XFS was negated under some circumstances.
No online or offline defragmentation program exists for LVM. This is somewhat mitigated by fragmentation only happening if a volume is expanded and by applying the above-mentioned allocation policies. Fragmentation still occurs, however, and if it is to be reduced, non-contiguous extents must be identified and manually rearranged using the pvmove command.
On most LVM setups, only one copy of the LVM head is saved to each PV, which can make the volumes more susceptible to failed disk sectors. This behavior can be overridden using vgconvert --pvmetadatacopies. If the LVM can not read a proper header using the first copy, it will check the end of the volume for a backup header. Most Linux distributions keep a running backup in /etc/lvm/backup, which enables manual rewriting of a corrupted LVM head using the vgcfgrestore command.
See also
Btrfs (has its own "snapshots" that are different, but using LVM snapshots of btrfs leads to loss of both copies)
Device mapper
Logical Disk Manager (LDM)
Logical volume management
Snapshot (computer storage)
Storage virtualization
ZFS
References
Further reading
.
(fundamental patent).
Volume manager
Linux file system-related software
Linux kernel features
Red Hat software |
62482696 | https://en.wikipedia.org/wiki/Append-only | Append-only | Append-only is a property of computer data storage such that new data can be appended to the storage, but where existing data is immutable.
Access control
Many file systems' Access Control Lists implement an "append-only" permission:
chattr in Linux can be used to set the append-only flag on files and directories.
NTFS ACL has a control for "Create Folders / Append Data", but it does not seem to keep data immutable.
Many cloud storage providers provide the ability to limit access as append-only. This feature is especially important to mitigate the risk of data loss for backup policies in the event that the computer being backed-up becomes infected with ransomware capable of deleting or encrypting the computer's backups.
Data structures
Many data structures and databases implement immutable objects, effectively making their data structures append-only. Implementing an append-only data structure has many benefits, such as ensuring data consistency, improving performance, and permitting rollbacks.
The prototypical append-only data structure is the log file. Log-structured data structures found in Log-structured file systems and databases work in a similar way: every change (transaction) that happens to the data is logged by the program, and on retrieval the program must combine the pieces of data found in this log file. Blockchains add cryptography to the logs so that every transaction is verifiable.
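As a concrete illustration, the minimal Python sketch below (the file name and record format are arbitrary choices for the example) shows the two halves of such a design: every change is appended to the log, and the current state is reconstructed by replaying it.

import json, os

LOG = "store.log"   # arbitrary file name for this illustration

def append(key, value):
    # Existing records are never modified; each change becomes a new line.
    with open(LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps({"key": key, "value": value}) + "\n")
        f.flush()
        os.fsync(f.fileno())

def current_state():
    # Replay the log; later records for a key supersede earlier ones.
    state = {}
    if os.path.exists(LOG):
        with open(LOG, encoding="utf-8") as f:
            for line in f:
                rec = json.loads(line)
                state[rec["key"]] = rec["value"]
    return state

append("colour", "red")
append("colour", "blue")       # the first record stays in the log untouched
print(current_state())          # {'colour': 'blue'}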
Append-only data structures may also be mandated by the hardware or software environment:
All objects are immutable in purely functional programming languages, where every function is pure and global states do not exist.
Flash storage cells can only be written to once before erasing. Erasing on a flash drive works on the level of pages which cover many cells at once, so each page is treated as an append-only set of cells until it fills up.
Hard drives that use shingled magnetic recording cannot be written to randomly because writing on a track would clobber a neighboring, usually later, track. As a result, each "zone" on the drive is append-only.
Append-only data structures grow over time, with more and more space dedicated to "stale" data found only in the history and more time wasted on parsing these data. A number of append-only systems implement rewriting (copying garbage collection), so that a new structure is created containing only the current version and optionally a few older ones.
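A sketch of such a rewrite pass, continuing the toy log format used in the earlier example (again an arbitrary format, not a prescribed on-disk layout): only the latest record per key is copied into a fresh log, which then atomically replaces the old one.

import json, os

def compact(log_path):
    # Keep only the newest record for every key (toy format from the sketch above).
    latest = {}
    with open(log_path, encoding="utf-8") as f:
        for line in f:
            rec = json.loads(line)
            latest[rec["key"]] = rec["value"]
    tmp = log_path + ".compact"
    with open(tmp, "w", encoding="utf-8") as f:
        for key, value in latest.items():
            f.write(json.dumps({"key": key, "value": value}) + "\n")
    os.replace(tmp, log_path)   # atomic switch to the rewritten log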
See also
Access control list
Cloud storage
Comparison of file hosting services
Data structure
Purely-functional data structure
Log-structured merge-tree
References
Computer data storage |
77252 | https://en.wikipedia.org/wiki/Xerox%20Alto | Xerox Alto | The Xerox Alto is the first computer designed from its inception to support an operating system based on a graphical user interface (GUI), later using the desktop metaphor. The first machines were introduced on 1 March 1973, a decade before mass-market GUI machines became available.
The Alto is contained in a relatively small cabinet and uses a custom central processing unit (CPU) built from multiple SSI and MSI integrated circuits. Each machine cost tens of thousands of dollars despite its status as a personal computer. Only small numbers were built initially, but by the late 1970s, about 1,000 were in use at various Xerox laboratories, and about another 500 in several universities. Total production was about 2,000 systems.
The Alto became well known in Silicon Valley and its GUI was increasingly seen as the future of computing. In 1979, Steve Jobs arranged a visit to Xerox PARC, during which Apple Computer personnel would receive demonstrations of Xerox technology in exchange for Xerox being able to purchase stock options in Apple. After two visits to see the Alto, Apple engineers used the concepts to introduce the Apple Lisa and Macintosh systems.
Xerox eventually commercialized a heavily modified version of the Alto concepts as the Xerox Star, first introduced in 1981. A complete office system including several workstations, storage and a laser printer cost as much as $100,000, and like the Alto, the Star had little direct impact on the market.
History
The first computer with a graphical operating system, the Alto built on earlier graphical interface designs. It was conceived in 1972 in a memo written by Butler Lampson, inspired by the oN-Line System (NLS) developed by Douglas Engelbart and Dustin Lindberg at SRI International (SRI). Of further influence was the PLATO education system developed at the Computer-based Education Research Laboratory at the University of Illinois. The Alto was designed mostly by Charles P. Thacker. Industrial Design and manufacturing was sub-contracted to Xerox, whose Special Programs Group team included Doug Stewart as Program Manager, Abbey Silverstone Operations, Bob Nishimura, Industrial Designer. An initial run of 30 units was produced by Xerox El Segundo (Special Programs Group), working with John Ellenby at PARC and Doug Stewart and Abbey Silverstone at El Segundo, who were responsible for re-designing the Alto's electronics. Due to the success of the pilot run, the team went on to produce approximately 2,000 units over the next ten years.
Several Xerox Alto chassis are now on display at the Computer History Museum in Mountain View, California, one is on display at the Computer Museum of America in Roswell, Georgia, and several are in private hands. Running systems are on display at the System Source Computer Museum in Hunt Valley, Maryland. Charles P. Thacker was awarded the 2009 Turing Award of the Association for Computing Machinery on March 9, 2010, for his pioneering design and realization of the Alto. The 2004 Charles Stark Draper Prize was awarded to Thacker, Alan C. Kay, Butler Lampson, and Robert W. Taylor for their work on Alto.
On October 21, 2014, Xerox Alto's source code and other resources were released from the Computer History Museum.
Architecture
The following description is based mostly on the August 1976 Alto Hardware Manual by Xerox PARC.
Alto uses a microcoded design, but unlike many computers, the microcode engine is not hidden from the programmer in a layered design. Applications such as Pinball take advantage of this to accelerate performance. The Alto has a bit-slice arithmetic logic unit (ALU) based on the Texas Instruments 74181 chip, a ROM control store with a writable control store extension and has 128 (expandable to 512) kB of main memory organized in 16-bit words. Mass storage is provided by a hard disk drive that uses a removable 2.5 MB one-platter cartridge (Diablo Systems, a company Xerox later bought) similar to those used by the IBM 2310. The base machine and one disk drive are housed in a cabinet about the size of a small refrigerator; one more disk drive can be added via daisy-chaining.
Alto both blurred and ignored the lines between functional elements. Rather than a distinct central processing unit with a well-defined electrical interface (e.g., system bus) to storage and peripherals, the Alto ALU interacts directly with hardware interfaces to memory and peripherals, driven by microinstructions that are output from the control store. The microcode machine supports up to 16 cooperative multitasking tasks, each with fixed priority. The emulator task executes the normal instruction set to which most applications are written; that instruction set is similar to, but not the same as, that of a Data General Nova. Other tasks serve the display, memory refresh, disk, network, and other I/O functions. As an example, the bitmap display controller is little more than a 16-bit shift register; microcode moves display refresh data from main memory to the shift register, which serializes it into a display of pixels corresponding to the ones and zeros of the memory data. Ethernet is likewise supported by minimal hardware, with a shift register that acts bidirectionally to serialize output words and deserialize input words. Its speed was designed to be 3 Mbit/s because the microcode engine could not go faster and continue to support the video display, disk activity and memory refresh.
Unlike most minicomputers of the era, Alto does not support a serial terminal for user interface. Apart from an Ethernet connection, the Alto's only common output device is a bi-level (black and white) cathode ray tube (CRT) display with a tilt-and-swivel base, mounted in portrait orientation rather than the more common "landscape" orientation. Its input devices are a custom detachable keyboard, a three-button mouse, and an optional 5-key chorded keyboard (chord keyset). The last two items had been introduced by SRI's On-Line System; while the mouse was an instant success among Alto users, the chord keyset never became popular.
In the early mice, the buttons were three narrow bars, arranged top to bottom rather than side to side; they were named after their colors in the documentation. The motion was sensed by two wheels perpendicular to each other. These were soon replaced with a ball-type mouse, which was invented by Ronald E. Rider and developed by Bill English. These were photo-mechanical mice, first using white light, and then infrared (IR), to count the rotations of wheels inside the mouse.
The keyboard is interesting in that each key is represented as a separate bit in a set of memory locations. As a result, it is possible to read multiple key presses concurrently. This trait can be used to alter from where on the disk the Alto boots. The keyboard value is used as the sector address on the disk to boot from, and by holding specific keys down while pressing the boot button, different microcode and operating systems can be loaded. This gave rise to the expression "nose boot", where the keys needed to boot a test OS release required more fingers than a user could manage. Nose boots were made obsolete by the move2keys program, which shifted files on the disk so that a specified key sequence could be used.
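A toy model of that mechanism is shown below; the word width and key-to-bit assignments are invented and do not reflect the Alto's actual layout, but the sketch illustrates how per-key bits let simultaneous presses be read and combined into a boot sector address.

# Toy model: each key is one bit of a keyboard word, so several keys can be
# read as held down at once; the resulting word selects the boot sector.
KEY_BIT = {"A": 0, "S": 1, "D": 2, "F": 3}   # invented assignments

def keyboard_word(keys_held):
    word = 0
    for key in keys_held:
        word |= 1 << KEY_BIT[key]
    return word

held = {"A", "D"}                 # two keys pressed at the same time
sector = keyboard_word(held)      # keyboard value used as the sector address
print("booting from disk sector", sector)   # 0b0101 -> sector 5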
Several other I/O devices were developed for the Alto, including a TV camera, the Hy-Type daisywheel printer and a parallel port, although these were quite rare. The Alto could also control external disk drives to act as a file server. This was a common application for the machine.
Software
Early software for the Alto was written in the programming language BCPL, and later in Mesa, which was not widely used outside PARC but influenced several later languages, such as Modula. The Alto used an early version of ASCII which lacked the underscore character, instead having the left-arrow character used in ALGOL 60 and many derivatives for the assignment operator: this peculiarity may have been the source of the CamelCase style for compound identifiers. Altos were also microcode-programmable by users.
The Alto helped popularize the use of the raster graphics model for all output, including text and graphics. It also introduced the concept of the bit block transfer operation (bit blit, BitBLT) as the fundamental programming interface to the display; a simplified sketch of the operation follows the list below. Despite its small memory size, many innovative programs were written for the Alto, including:
the first WYSIWYG typesetting document preparation systems, Bravo and Gypsy;
the Laurel email tool, and its successor, Hardy
the Sil vector graphics editor, used mainly for logic circuits, printed circuit board, and other technical diagrams;
the Markup bitmap editor (an early paint program);
the Draw graphical editor using lines and splines;
the first WYSIWYG integrated circuit editor based on the work of Lynn Conway, Carver Mead, and the Mead and Conway revolution;
the first versions of the Smalltalk environment
Interlisp
one of the first network-based multi-person video games (Alto Trek by Gene Ball).
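The bit-blit operation mentioned above can be illustrated with a minimal sketch. Python lists stand in for framebuffer memory; the real Alto operation worked on packed words and supported several combination rules, whereas this sketch shows only a plain rectangular copy.

def bitblt(src, dst, sx, sy, dx, dy, w, h):
    """Copy a w-by-h block of pixels from bitmap src to bitmap dst."""
    for row in range(h):
        for col in range(w):
            dst[dy + row][dx + col] = src[sy + row][sx + col]

# 1-bit "bitmaps" as nested lists: a 4x4 glyph stamped into an 8x8 screen.
glyph  = [[1, 1, 1, 1],
          [1, 0, 0, 1],
          [1, 0, 0, 1],
          [1, 1, 1, 1]]
screen = [[0] * 8 for _ in range(8)]
bitblt(glyph, screen, 0, 0, 2, 2, 4, 4)
for line in screen:
    print("".join("#" if bit else "." for bit in line))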
There was no spreadsheet or database software. The first electronic spreadsheet program, VisiCalc, did not arise until 1979.
Diffusion and evolution
Technically, the Alto was a small minicomputer, but it could be considered a personal computer in the sense that it was used by one person sitting at a desk, in contrast with the mainframe computers and other minicomputers of the era. It was arguably "the first personal computer", although this title is disputed by others. More significantly (and perhaps less controversially), it may be considered to be one of the first workstation systems in the style of single-user machines such as the Apollo, based on the Unix operating system, and systems by Symbolics, designed to natively run Lisp as a development environment.
In 1976 to 1977, the Swiss computer pioneer Niklaus Wirth spent a sabbatical at PARC and was excited by the Alto. Unable to bring one of the Alto systems back to Europe, Wirth decided to build a new system from scratch and, with his group, designed the Lilith. The Lilith was ready to use around 1980, quite some time before the Apple Lisa and Macintosh were released. Around 1985, Wirth started a complete redesign of the Lilith under the name "Project Oberon".
In 1978 Xerox donated 50 Altos to the Massachusetts Institute of Technology, Stanford University, Carnegie Mellon University, and the University of Rochester. The National Bureau of Standards's Institute for Computer Sciences in Gaithersburg, Maryland received one Alto in late 1978 along with Xerox Interim File System (IFS) file servers and Dover laser printers. These machines were the inspiration for the ETH Zürich Lilith and Three Rivers Company PERQ workstations, and the Stanford University Network (SUN) workstation, which was eventually marketed by a spin-off company, Sun Microsystems. The Apollo/Domain workstation was heavily influenced by the Alto.
Following the acquisition of an Alto, the White House information systems department sought to lead federal computer suppliers in its direction. The Executive Office of the President of the United States (EOP) issued a request for proposal for a computer system to replace the aging Office of Management and Budget (OMB) budget system, using Alto-like workstations, connected to an IBM-compatible mainframe. The request was eventually withdrawn because no mainframe producer could supply such a configuration.
In December 1979, Apple Computer's co-founder Steve Jobs visited Xerox PARC, where he was shown the Smalltalk-80 object-oriented programming environment, networking, and most importantly the WYSIWYG, mouse-driven graphical user interface provided by the Alto. At the time, he didn't recognize the significance of the first two, but was excited by the last one, promptly integrating it into Apple's products; first into the Lisa and then in the Macintosh, attracting several key researchers to work in his company.
In 1980–1981, Xerox Altos were used by engineers at PARC and at the Xerox System Development Department to design the Xerox Star workstations.
Xerox and the Alto
Xerox was slow to realize the value of the technology that had been developed at PARC. Scientific Data Systems (SDS, later XDS), which Xerox had acquired in the late 1960s, had no interest in PARC. PARC built their own emulation of the Digital Equipment Corporation PDP-10 named the MAXC. The MAXC was PARC's gateway machine to the ARPANET. The firm was reluctant to get into the computer business again with commercially untested designs, although many of the philosophies would ship in later products.
Byte magazine stated in 1981,
After the Alto, PARC developed more powerful workstations (none intended as products) informally termed "the D-machines": Dandelion (least powerful, but the only one to be made a product in one form); Dolphin; Dorado (most powerful; an emitter-coupled logic (ECL) machine); and hybrids like the Dandel-Iris.
Before the advent of personal computers such as the Apple II in 1977 and the IBM Personal Computer (IBM PC) in 1981, the computer market was dominated by costly mainframes and minicomputers equipped with dumb terminals that time-shared the processing time of the central computer. Through the 1970s, Xerox showed no interest in the work done at PARC. When Xerox finally entered the PC market with the Xerox 820, they pointedly rejected the Alto design and opted instead for a very conventional model, a CP/M-based machine with the then-standard 80 by 24 character-only monitor and no mouse.
With the help of PARC researchers, Xerox eventually developed the Xerox Star, based on the Dandelion workstation, and later the cost reduced Star, the 6085 office system, based on the Daybreak workstation. These machines, based on the 'Wildflower' architecture described in a paper by Butler Lampson, incorporated most of the Alto innovations, including the graphical user interface with icons, windows, folders, Ethernet-based local networking, and network-based laser printer services.
Xerox only realized their mistake in the early 1980s, after Apple's Macintosh revolutionized the PC market via its bitmap display and the mouse-centered interface. Both of these were copied from the Alto. While the Xerox Star series was a relative commercial success, it came too late. The expensive Xerox workstations could not compete against the cheaper GUI-based workstations that arose in the wake of the first Macintosh, and Xerox eventually quit the workstation market for good.
See also
NLS (computer system)
Mousepad
Alan Kay
Adele Goldberg (computer scientist)
Apple Lisa
References
Notes
Alto User's Handbook, Xerox PARC, September 2013
Further reading
External links
Xerox Alto documents at bitsavers.org
At the DigiBarn museum
Xerox Alto Source Code - CHM (computerhistory.org)
Xerox Alto source code (computerhistory.org)
"Hello world" in the BCPL language on the Xerox Alto simulator (righto.com)
The Alto in 1974 video
A lecture video of Butler Lampson describing Xerox Alto in depth. (length: 2h45m)
A microcode-level Xerox Alto simulator
ContrAlto Xerox Alto emulator
brainsqueezer/salto_simulator: SALTO - Xerox Alto I/II Simulator (github.com)
SALTO-Xerox Alto emulator (direct download)
ConrAltoJS Xerox Alto Online
Computer-related introductions in 1973
Alto
Personal computers
Computer workstations
16-bit computers |
53435773 | https://en.wikipedia.org/wiki/Paul%20Spirakis | Paul Spirakis | Paul Spirakis is a Professor of Computer Science at the University of Liverpool, specialising in Algorithms, Complexity and Algorithmic Game Theory. He has been a Professor at the University of Liverpool since 2013, and he is also a Professor at Patras University. He leads the Algorithms Research section in the Department of Computer Science at the University of Liverpool. He is a Fellow of EATCS and a Member of Academia Europaea.
He is the Editor in Chief (Track A) of the journal Theoretical Computer Science.
He completed his S.M. in Applied Mathematics (Computer Science) at Harvard University in 1979, followed by a PhD in Applied Mathematics (Computer Science), also at Harvard University, in 1982 (supervised by John Reif).
References
Greek computer scientists
English computer scientists
Harvard School of Engineering and Applied Sciences alumni
Year of birth missing (living people)
Living people
National Technical University of Athens alumni
Academics of the University of Liverpool |
3634949 | https://en.wikipedia.org/wiki/Jorge%20Cortell | Jorge Cortell | Jorge Cortell is an activist and commentator known for his opposition to the concept of Intellectual Property. He was forced to resign as a visiting professor at the Polytechnic University of Valencia (UPV) after delivering a talk at the university in which he defended copyleft and P2P networks and criticized copyright and patents, defying pressure from the dean and the MPAA, who tried to censor his talk.
Work
Cortell, a serial entrepreneur, is currently founder and CEO of Kanteron Systems, a precision medicine software company whose story was picked up by Microsoft's news site. He is also a member of the European Commission Expert Group on Venture Philanthropy and Social Investments.
Cortell has founded La Resistencia Digital and has actively cooperated with the Creative Commons and the FFII. He was president of the Oxford University Society Valencia and a member of the Free Software Foundation, the Electronic Frontier Foundation, AI (Asociación de Internautas, mainly in Spanish) and Hispalinux.
Controversy
In May 2005 Spanish newspapers reported widely on an attempt by Universitat Politècnica de Valencia in Spain to block a talk by Cortell. The story was reportedly picked up by a large number of bloggers and Cortell resigned from his post at the University.
Cortell was "invited by the ETSIA Student Union and Linux Users' Group" to give a talk at the University analysing the legal use and benefits of the P2P networks, "even when dealing with copyrighted works" according to Cortell. At the time Cortell was a teaching assistant on intellectual property at the Universitat Politècnica de Valencia. On May 4, 2005 he was forced to resign, after his very critical talk on Intellectual Property. Cortell eventually gave the talk at the university cafeteria.
Some months after the incident, the dean admitted that he had been pressured by the Spanish Recording Industry Association (Promusicae) in a quote to the national newspaper El País, and also by the MPAA as appeared in another newspaper.
Speeches
In English
2006-07-23 New York. HOPE6 Selfness-Copyfight: From Censorship to New Business Models Video and Audio at archive.org slideshow PDF
2005-11-17 Oslo, Faculty of Engineering, organized by the NUUG. Free culture, P2P networks, alternative economic models, and why some people do not want freedom.
2004-09-14 Stanford university. Free culture for all: a sustainable real example.
In Spanish
2005-12-21 Madrid, Centro de Convenciones Mapfre. Suidad: Un mundo sin copyright, as part of the Hispalinux Congress.
2005-11-21 Sevilla, Facultad de Comunicación of the Universidad de Sevilla. Round table "Música y derechos de propiedad intellectual", composed of an SGAE representative, an AIE member, Pedro of Zemos98.org, the singer Kiko Veneno, the sister of María Jiménez, the producer Antonio Escobar and Cortell. In the end, neither the SGAE nor the AIE representative came to the round table, alleging personal problems with one of its members, Cortell himself.
2005-11-21 Murcia, Facultad de Informática, Universidad de Murcia. On the circle of conferences of its patronal festivities.
2005-11-16 Barcelona, Ateneu Barcelonés. Round table of the Campaña contra la Trazabilidad sin orden judicial (Campaign against traceability without a court order), with David de Ugarte (Ciberpunk) and Javier Cuchí (Asociación de Internautas).
2005-11-12 A Coruña, Coliseum de A Coruña.
2005-11-10 Vigo, Edificio Miralles, of Ciudad Universitaria, Campus de Vigo. Part of Charlas-conferencias sobre el copyleft, la música, etc., organized by the Vicerrectorado de Extensión Universitaria of the Universidad de Vigo. With David Bravo and Ignasi Labastida.
2005-10-31 Murcia, Murcia Lan Party.
2005-10-31 Tarragona, Sala Santiago Costa de la Diputació. Part of the Tinet congress, organized by OASI.
2005-10-15 Mataró, Centro Cívico Pla d'en Boet. Round Table Creative Commons vs Devolución, with Iñigo Medina, Pere Quintana and Ignasi Labastida. Part of the IV Semana de las Nuevas Tecnologías para todos, organized by the Ajuntament de Mataró and the Fundación Tecnocampus.
2005-08-10 Málaga, Fuente de Piedra. Semana del software libre en Fuente de Piedra.
2005-07-20 to 2005-07-31 Valencia, Campus Party. Round Table Modelos de Negocio alternativos, conference Lo que no quieren que sepas de tu ordenador e Internet.
2005-06-19 Alicante, Universidad de Alicante - Campus San Vicente del Raspeig. Propiedad Intellectual y Software Libre / Patentes de Software, as part of the La alternativa del software libre en la sociedad de la información program.
2005-07-15 Málaga, Málaga LAN Party. Las redes P2P y la entelequia de la Propiedad Intellectual
2005-06-16 Córdoba, La Rambla. Jornadas Software Libre e Internet
2005-06-09 Barcelona, Internet Global Congress. Globalización, propiedad intellectual y licencias abiertas.
2005-06-08 Barcelona, Centro de Cultura Contemporánea de Barcelona. Derechos de autor y Software Libre, las claves en la difusión del conocimiento para las bibliotecas, organized by the Grupo de Trabajo de Software Libre para los profesionales de la información of Colegio Oficial de Bibliotecarios y Documentalistas de Catalunya.
2005-06-07 Barcelona, Auditorio Caixa Catalunya (La Pedrera). El acceso a las TIC ¿Un derecho humano fundamental?, given together with Marcelo d'Elia Branco, as part of the II Jornadas Internet y Solidaridad.
2005-05-27 Barcelona, Universitat Pompeu Fabra. Audio torrent .
2005-05-26 Alicante, Universidad de Alicante. Round table ¿Hace falta una nueva ley/cultura de propiedad intellectual? inside the Copy ¿Right? journeys. Participants on the round table include Antonio Martínez, from SGAE, and Gabriel Marín, an attorney specialized in intellectual property and teacher at the Universidad de Alicante . Audio part one, part two, part three. Video .
2005-05-25 Madrid, Universidad Politécnica de Madrid, Facultad de Informática.
2005-05-06 Basque Country software libre en la empresa .
2005-05-04 Valencia, Universitat Politècnica de Valencia The cursed speech
2005-04-22 Granada Los peligros de las Patentes de Software. Video
2004-04-17 Barcelona.- Seminario jurídico: leyes de propiedad intellectual, licencias y patentes inside the II Jornadas Copyleft together with Andrea Cappoci (Laser, Italia) and Thomas Margoni.
See also
Anti-copyright
Copyright
Intellectual Property
References
External links
English
Cortell's personal website, english articles
Slashdot article on the UPV matter
Boing Boing article on the UPV matter
Spanish
Cortell's personal website
El País newspaper article on the UPV matter
Linux Preview Interview on Jorge Cortell
Living people
Spanish activists
Intellectual property activism
Year of birth missing (living people)
Creative Commons-licensed authors |
14075576 | https://en.wikipedia.org/wiki/Ferranti%20Argus | Ferranti Argus | Ferranti's Argus computers were a line of industrial control computers offered from the 1960s into the 1980s. Originally designed for a military role, a re-packaged Argus was the first digital computer to be used to directly control an entire factory. They were widely used in a variety of roles in Europe, particularly in the UK, where a small number continue to serve as monitoring and control systems for nuclear reactors.
Original series
Blue Envoy, hearing aid computer
The original concept for the computer was developed as part of the Blue Envoy missile project. This was a very long-range surface-to-air missile system with a range on the order of . To reach these ranges, the missile was "lofted" in a nearly vertical trajectory at launch, so that it spent more time flying through the thin high-altitude air. Once it reached high altitude, it would tip over and begin to track the target. During the initial vertical climb the missile's radar would not be able to see the target, so during this period it was command guided from the ground.
Argus began as a system to read the radar data, compute the required trajectory, and send that to the missile in-flight. The system not only had to develop the trajectory, but also directly controlled the control surfaces of the missile and thus had a complete control feedback system. Development was carried out by Maurice Gribble at Ferranti's Automation Division in Wythenshawe starting in 1956. The system used the new OC71 transistors from Mullard, originally designed for use in hearing aids. They could only be run at the low speed of 25 kHz, but this was enough for the task.
Blue Envoy was cancelled in 1957 as part of the sweeping 1957 Defence White Paper. Ferranti decided to continue the development of the computer for other uses. During a visit by Prince Philip, Duke of Edinburgh in November 1957, they set up a system with an automotive headlamp connected to a handle that could be moved by hand to shine at any point on a wall, while the computer attempted to move a second headlamp to lay on the same spot on the wall.
Prototype Argus
Ferranti continued development of the system, and during 1958 they completed a prototype of a commercial product, which they showed publicly for the first time at the Olympia in November. This machine used new circuitry that ran at the much faster rate of 500 kHz. The name "Argus" (from the Greek god of that name) was assigned the next year, in keeping with the Ferranti tradition of using Greek names for their computers. They selected Argus as this was the all-seeing god, appropriate for a machine that would be tasked with controlling complex systems.
The new system had a number of differences from the hearing aid machine. Among these was the introduction of interrupts to better handle timing of various events. The earlier machine was so slow that these sorts of issues were dealt with simply by checking every physical input in a loop, but with the much faster performance of the new design this was no longer appropriate as most of the tests would reveal no changes and thus be wasted. These sorts of tasks were now controlled by interrupts, so the device could indicate when its data was ready to be processed. The system added core memory for temporary storage, replacing the flip-flops from the earlier system, and a plugboard for programming.
The first delivery would be to Imperial Chemical Industries (ICI) to go into use as the control system for ICI's soda ash/ammonia plant at Fleetwood. An agreement was reached in March 1960 and the machine was installed in April/May 1962. This was the first large factory to be controlled directly by a digital computer ("Process-control computers make a hit with chemical manufacturers", New Scientist, 15 October 1964, p. 165). Other European sales followed.
The Argus circuitry was based on germanium transistors with 0 and -6 volts representing binary 1 and 0, respectively. The computer was based on a 12-bit word length with 24-bit instructions. The arithmetic was handled in two parallel 6-bit ALUs operating at 500 kHz. Additions in the ALU took 12 µs, but adding in the memory access time meant simple instructions took about 20 µs. Double-length (24-bit) arithmetic operations were also provided. Data memory was supplied in a 12-bit, 4096 word, core memory store, while up to 64 instruction words were stored in a separate plugboard array, using ferrite pegs dropped into holes to create a "1". Opcodes were 6 bits, registers 3 bits, index register (modifier) 2 bits and data address 13 bits.
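The instruction word layout can be made concrete with a small decoding sketch. The field widths (6-bit opcode, 3-bit register, 2-bit modifier, 13-bit address) are those given above, but the ordering of the fields within the 24-bit word and the sample value are assumptions made for illustration only.

# Split a 24-bit Argus-style instruction word into its fields.
# Field order (opcode in the most significant bits) is assumed here.
def decode(word):
    return {
        "opcode":   (word >> 18) & 0x3F,    # 6 bits
        "register": (word >> 15) & 0x07,    # 3 bits
        "modifier": (word >> 13) & 0x03,    # 2 bits
        "address":  word & 0x1FFF,          # 13 bits
    }

sample = (0b000101 << 18) | (0b011 << 15) | (0b01 << 13) | 0x0123
print(decode(sample))   # {'opcode': 5, 'register': 3, 'modifier': 1, 'address': 291}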
Bloodhound Mark II
Shortly after the cancellation of the Blue Envoy in 1957, an emergency meeting between the primary contractors, Ferranti and Bristol Aerospace, led to the idea of combining components of the Blue Envoy with the existing Bristol Bloodhound to produce a much more capable design. This produced the Bloodhound Mark II, roughly doubling the range to about and using the new radar systems from the Envoy which allowed the missile to track targets much closer to the ground whilst also much more resistant to radar jamming.
Unlike Blue Envoy, Bloodhound was expected to be able to see the target through the entire attack. Guidance was semi-active radar homing, with an illuminator radar lighting up the targets, and a receiver in the missile using the reflected signal to track. For this to work, the illuminator had to be pointed at the target using information from a separate tactical control radar, and the receiver in the nose of the missile had to be pointed at the target. The illuminator and missiles would not necessarily be close together, complicating the calculations. Further, the receiver had to filter out signals that were not of the expected Doppler shifted frequency range, so the computer also had to calculate the expected frequency shift to set the receiver's filters.
The accuracy required of the calculations was beyond the capability of the small military computers used up to that point. An experimental system built by Derek Whitehead using a digital computer was easily able to accomplish the calculations. He suggested placing the computers at the Orange Yeoman radar sites as calculation centers that would feed this information to the missile batteries.
Whitehead was a friend of Gribble's and was aware of his work on a small computer, and first raised the issue sometime in autumn 1959. Once the decision had been made to move to a digital computer, all sorts of secondary tasks were handed off to the machine. This included everything from maintenance testing to missile launch control to the calculation of Doppler "zero points" where the signal would be expected to drop to zero as the target crossed at right angles to the radar.
Argus 200 and 100
The original design was followed in 1963 by the single-ALU Argus 100, which cost around £20,000 (equivalent to approximately £430,000). Unlike the original, the Argus 100 used a flat 24-bit addressing scheme with both data and code stored in a single memory. A smaller 5-bit opcode was used in order to simplify the basic logic and gain an address bit. The single ALU and other changes resulted in a basic operation time of 72 μs. One notable use of the Argus 100 was to control the Jodrell Bank Mark II telescope in 1964. With the 100's release, the original design was retroactively renamed Argus 200.
The Argus 200 model would eventually sell 63 machines, and the Argus 100 would sell 14.
Argus 300
The design of the Argus 300 was started in 1963, with the first delivery in 1965. This was a much faster machine featuring a fully parallel-architecture arithmetic logic unit, as opposed to the earlier and much slower serial units. Its instruction set was nevertheless fully compatible with the Argus 100. The 300 was very successful and used throughout the 1960s in various industrial roles.
A variant of the 300 was the Argus 350, which allowed external access to its core store to provide direct memory access. This improved input/output performance by avoiding having to move data via code running on the processor. The 350 was used in various military simulators, including by the Royal Navy for frigate, submarine and helicopter-based anti-submarine training, and by the Royal Air Force for a Bloodhound Mk.II simulator and for the Vickers VC10 flight simulator built by Redifon and delivered to RAF Brize Norton in 1967. The model used on the VC10 simulator was a 3520B, the designation indicating 20 kwords of memory ("20") and a backing store ("B"). Redifon also used the 350 on the Air Canada DC9 flight simulator that was installed in Montreal in the spring of 1966. The 350s were delivered in the 1967 to 1969 timeframe.
Silicon replacements
The design of the Argus 400 started at the same time as the Argus 300. In logical terms the 400 was similar to the earlier 100, using serial ALUs, but it featured an entirely new electrical system. Previous machines had used germanium transistors to form the logic gates; the Argus 400 instead used silicon transistors in a NOR logic family designed by Ferranti Wythenshawe called MicroNOR II, with more "conventional" levels in which 0 and +4.5 volts represented binary 1 and 0, respectively. The rest of the industry, however, used 0 volts to represent 0 and about +2.4 to 5 volts to represent 1, an approach called NAND logic; the two are in fact the same circuitry. When Texas Instruments brought out their "74" series of integrated circuits, the specification of MicroNOR II was changed from 4.5 volts to 5 volts so the two families could work together. The machine was packaged to fit into a standard Air Transport Rack. Multilayer PCBs were not routine in 1963, and Ferranti developed processes for bonding the boards and plating through them; the drawing office had to learn how to design multilayer boards, which were first laid out on tape and then transferred to film. It took around two years for the Argus 400 to go into production, with the first delivery in 1966.
The Argus 500, designed about three years later, used parallel arithmetic and was much faster. It was designed to be plugged into a larger 19-inch rack-mounted frame, together with up to four core store (memory) units. The Argus 400 was repackaged to be the same as the Argus 500 and the two machines were plug compatible. The Argus 400 used 18 small PCBs for its CPU, each of which was wire-wrapped to the backplane using 70 miniature wire wraps, which made removing a card tedious. The Argus 500 initially used the same packages, and also wire-wrap, on larger boards, but later versions employed dual-in-line ICs which were soldered flat onto the PCB and were much easier to remove.
Like the earlier designs, the 400 and 500 used the same 14-bit address space and 24-bit instruction set and were compatible. The 500 added new instructions that used three bits of the accumulator for offset indexing as well. Both machines ran at a 4 MHz basic clock cycle, much faster than the earlier machines' 500 kHz. Both used core memory, which was available in two cycle times: the Argus 400 used a 2 μs core, whereas the Argus 500 had 2 μs stores in earlier machines and 1 μs in later ones, doubling performance. The difference between the 400 and 500 was similar to the split between the 100 and 300, in that the 500 had a parallel ALU and the 400 was serial. The Argus 400 had an add time (for two 24-bit numbers) of 12 μs, while the Argus 500 (with the 1 μs store) took 3 μs. Divide, the longest instruction, took 156 μs on the Argus 400 and 9 μs on the Argus 500. The Argus 500 was of course much more expensive.
A CORAL 66 high-level programming language compiler for the Argus 500 was developed by the Royal Signals and Radar Establishment under contract to Ferranti for use in industrial control and automation projects.
Typical Argus 500 installations were chemical plants (process control) and nuclear power stations (process monitoring). A later application was for Police Command and Control installations, one of the more famous ones being for Strathclyde Police in Glasgow. This system provided the first visual display of resource locations using maps provided by 35mm slide projectors projecting through a port-hole in the tube of the VDU screen.
An Argus 400 replaced the 100 at Jodrell Bank in 1971. There was a special version of the Argus 400 made for the Boadicea seat booking network for BOAC. This removed the multiply and divide functions, as these used a significant number of expensive JK flip-flops and it was cost-effective at the time to save these 24 flip-flops and a few other components. Overall, the 500 proved to be one of Ferranti's best-selling products, and found especially wide use on oil platforms during the opening of the North Sea oil fields in the 1970s.
Argus 600 and 700
Breaking with the past, the next series of Argus machines were completely new designs and not backward compatible. The Argus 600 was an 8-bit machine, intended for use by manufacturers of electrical and electronic equipment who required a relatively simple computer or programmable control device. It possessed a basic core memory of 1,024 words, expandable in blocks of the same size up to a maximum of 8,192 words. A simple mnemonic programming language called ASSIST, comprising 17 single-address instructions, was developed for the new machine. Costing around £1,700 when introduced in 1970, the Argus 600 was at the time the cheapest digital computer available in the United Kingdom. It could be linked directly or via telephone lines to larger computers and its hardware interface allowed modules from the Argus range of peripheral and plant connection equipment to be added as required.
The Argus 600 was followed by the Argus 700, which used a 16-bit architecture. Design of the 700 started around 1968/9 and the range was still in production in the mid-1980s, achieving international success for industrial and military applications. As of 2020 the 700 was still operational at several British nuclear power stations in control and data processing applications. It was also used as a production control platform for companies such as Kodak.
The Argus 700 could be configured in shared memory multi-processor configurations. The Argus 700E was a low-end model. The Argus 700F used 500 ns cycle time MOS memory of up to 64k 16-bit words. The Argus 700G supported a virtual address space with up to 256k words of memory. The Argus 700S had the option of faster 150 ns bipolar memory with independent access for input-output processors.
The Argus 700 also played an important historical role in the development of packet switching networks in the UK, where Ferranti used these machines as the basis for early routers during experiments at the General Post Office. In this respect they are similar to the Interface Message Processors built in the US to serve a comparable role during the development of the Internet.
Over 70 Argus 700G processors were used in the control and instrumentation systems of the Torness nuclear power station, which had a far more sophisticated control system than earlier members of the advanced gas-cooled reactor fleet, including Digital Direct Control (DDC) of the reactors. When first installed it was probably the most sophisticated and complex computerised control system for a nuclear power station worldwide; the system was implemented using the CORAL high-level programming language. Each reactor in the dual reactor station had 10 input multiplexing computers, 11 control dual-processor computers, and a supervisory triple-processor computer with a standby backup.
M700
The M700 series of computers was based on the architecture and instruction set of the Ferranti Argus 700 computer series. Both M700 computers and Argus 700 computers have a common overall instruction set. However, particular models do not necessarily implement the complete instruction set. M700 included a range of computers which were all based on the same architectural features and instruction set, ensuring a high level of compatibility and interchangeability in hardware and software terms. Within these limits there existed different implementations from more than one manufacturer to reflect specific commercial and application requirements.
Notes
References
Citations
Bibliography
External links
Argus 100-500, Micronor I and II:
Early British computers
Argus
Military computers
Avionics computers
Industrial computing
Computer-related introductions in 1958
Computer-related introductions in 1962 |
1414107 | https://en.wikipedia.org/wiki/Apple%20Advanced%20Typography | Apple Advanced Typography | Apple Advanced Typography (AAT) is Apple Inc.'s computer technology for advanced font rendering, supporting internationalization and complex features for typographers, a successor to Apple's little-used QuickDraw GX font technology of the mid-1990s. It is a set of extensions to the TrueType outline font standard, with smartfont features similar to the OpenType font format that was developed by Adobe and Microsoft, and to Graphite. It also incorporates concepts from Adobe's "multiple master" font format, allowing for axes of traits to be defined and morphing of a glyph independently along each of these axes. AAT font features do not alter the underlying typed text; they only affect the characters' representation during glyph conversion.
Features
Significant features of AAT currently include:
Several degrees of ligature control
Kashida justification and joiners
Cross-stream kerning (required for Nasta'liq Urdu, for example)
Indic vowel rearrangement
Independently controllable substitution of:
Old style figures
Small caps and drop caps
Swash variants
Alternative glyphs:
Individual alternatives on a per-glyph basis
Wholesale alternatives, such as engraved text
Anything else the font designer wants to add
Glyph variation axes
AAT font features are supported on Mac OS 8.5 and above and all versions of macOS. The cross-platform ICU library provided basic AAT support for left-to-right scripts. HarfBuzz version 2 added an open-source implementation of AAT shaping; Chrome/Chromium (since version 72) and LibreOffice (since version 6.3) use it instead of CoreText to render AAT fonts on macOS in a cross-platform way.
As of OS X Yosemite and iOS 8, AAT supports language-specific shaping—that is, changing how glyphs are processed depending on the human language they are being used to represent. This support is available through the use of language tags in Core Text. Provision was added at the same time for the relative positioning of two glyphs using anchor points, through the 'kerx' and 'ankr' tables.
AAT and OpenType in macOS
As of Mac OS X 10.5 Leopard, partial support for OpenType is available. As of 2011, support is limited to Western and Arabic scripts. If a font has AAT tables, they will be used for typography. If the font does not have AAT tables but does have OpenType tables, they will be used to the extent that the system supports them.
This means that many OpenType fonts for Western or Middle Eastern scripts can be used without modification on Mac OS X 10.5, but South Asian scripts such as Thai and Devanagari cannot. These require AAT tables for proper layout.
AAT Layout
AAT first requires the text to be turned entirely into glyphs before text layout occurs. Operations on the text take place entirely within the glyph layer.
The core table used in the AAT layout process is the "morx" table. This table is divided into a series of chains, each further divided into subtables. The chains and subtables are processed in order. When each subtable is encountered, the layout engine compares flags in the subtable against control flags, generally derived from user settings. This determines whether or not the subtable is processed.
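That gating step can be pictured with a minimal C sketch in which each subtable carries a set of feature flags and is processed only if the control flags derived from the user's settings select it. The structure and names below are hypothetical and do not reflect the actual on-disk layout of the "morx" table.

#include <stdint.h>
#include <stdbool.h>

/* Hypothetical subtable record; real "morx" subtables carry much more data. */
struct morx_subtable {
    uint32_t feature_flags;  /* which user-selectable features enable it */
    /* ... lookup data for the substitutions it performs ... */
};

/* The subtable is processed only when the user's control flags select it. */
static bool subtable_enabled(const struct morx_subtable *st,
                             uint32_t control_flags)
{
    return (control_flags & st->feature_flags) != 0;
}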
The set of available features in the font is made accessible to the user via the "feat" table. This table provides pointers to the localizable strings that can be used to describe a feature to the end user and the appropriate flags to send to the text engine if the feature is selected. Features can be made invisible to the user by the simple expedient of not including entries in the "feat" table for them. Apple uses this approach, for example, to support required ligatures.
Subtables may perform non-contextual glyph substitutions, contextual glyph substitutions, glyph rearrangements, glyph insertions, and ligature formation. Contextual actions are sensitive to the surrounding text. They can be used, for example, to automatically turn an s into a medial s anywhere in a word except at its end.
The "morx" subtables for non-contextual glyph substitutions are simple mapping tables between the glyph substituted and its substitute. The others all involve the use of finite state machines.
For the purposes of processing the finite state machine, glyphs are organized into classes. A class may be small, containing only a single glyph (for something like ligature formation), or it may include dozens of glyphs or even more. A special class is automatically defined for any glyph not included in any of the explicit classes. Special classes are also available for the end of the glyph stream and glyphs deleted from the glyph stream.
Beginning with a start-of-text state, the layout engine parses the text, glyph by glyph. Depending on its current state and the class of the glyph it encounters, it will switch to a new state and possibly perform an appropriate action. The process continues until the glyph stream is exhausted.
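That parsing loop can be sketched in C as below. The glyph classifier, transition table, and action are placeholders; in a real font they are read from the "morx" subtable, and a real engine would also feed an end-of-text class after the last glyph and support more action types.

#include <stddef.h>
#include <stdint.h>

enum { NUM_STATES = 2, NUM_CLASSES = 3 };

struct transition {
    uint8_t next_state;
    uint8_t action;   /* 0 = do nothing, 1 = substitute the current glyph */
};

/* Placeholder tables: state 1 is reached after a glyph of class 1, and a
   following glyph of class 2 triggers the substitution action. */
static const struct transition transitions[NUM_STATES][NUM_CLASSES] = {
    { {0, 0}, {1, 0}, {0, 0} },   /* state 0 (start of text) */
    { {0, 0}, {1, 0}, {0, 1} },   /* state 1                 */
};

static uint8_t class_of_glyph(uint16_t glyph)
{
    /* Stand-in classifier; real engines look the glyph up in a class table. */
    return (uint8_t)(glyph % NUM_CLASSES);
}

static void perform_action(uint8_t action, uint16_t *glyphs, size_t pos)
{
    if (action == 1)
        glyphs[pos] = 0xFFFF;  /* pretend substitution */
}

void run_state_machine(uint16_t *glyphs, size_t count)
{
    uint8_t state = 0;  /* start-of-text state */
    for (size_t i = 0; i < count; i++) {
        uint8_t cls = class_of_glyph(glyphs[i]);
        struct transition t = transitions[state][cls];
        perform_action(t.action, glyphs, i);
        state = t.next_state;
    }
}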
The use of finite state machines allows "morx" tables to be relatively small and to be processed relatively quickly. They also provide considerable flexibility. However, because Apple's font tools require "morx" tables to be generated from raw state table information, the tables can be difficult to produce and debug. The font designer is also responsible for making sure that "morx" subtables are ordered correctly for the desired effect.
Since AAT operates entirely with glyphs and never with characters, all the layout information necessary for producing the proper display resides within the font itself. This allows fonts to be added for new scripts without requiring any specific support from the OS. Third parties can produce fonts for scripts not officially supported by Apple, and they will work with macOS. On the other hand, this also means that every font for a given script requires its own copy of the script's shaping information in its own "morx" tables.
Other AAT tables (or AAT-specific extensions to standard TrueType tables) allow for context-sensitive kerning, justification, and ligature splitting. AAT also supports variation fonts, in which a font's shape can vary depending on a scaled value supplied by the user. Variation fonts are similar to Adobe's defunct Multiple master fonts, where the endpoints are defined and any medial value is valid. With this, the user can then drag sliders in the user interface to make glyphs taller or shorter, to make them fatter or thinner, to increase or decrease the size of the serifs, and the like, all independently of one another. Glyphs may even have their fundamental shapes radically altered. OpenType had nothing comparable until it introduced font variations in September 2016.
Other AAT tables can also have point-size dependent effects; for example, at 12 points, the horizontal and vertical strokes can be of similar width, but at 300 points, the stroke width variation could be quite great.
In practice, few AAT fonts use any features of the technology other than those available through the "morx" table. Zapfino, Hoefler Text, and Skia are fonts that ship with macOS that illustrate a variety of AAT's capabilities.
AAT for Indic scripts
For Indic scripts, the only features that are necessary are glyph reordering and substitution; AAT supports both of these. As noted above, OpenType fonts for Indic scripts require AAT tables to be added before they will function properly on macOS. However, this applies only to software dependent on the system support of OpenType. Programs that provide their own implementation of OpenType will render Indic properly with OpenType fonts. (They may, however, not render Indic fonts with AAT tables correctly.)
Mac OS X 10.5 shipped with fonts for Devanagari, Gurmukhi, Gujarati, Thai, Tibetan, and Tamil. Fonts for other Indic scripts were included in later versions of macOS and iOS, as well as being available from third parties.
See also
Apple typography
Graphite (SIL) technology on MS Windows and Linux
List of typographic features
XeTeX
References
External links
About Apple Advanced Typography Fonts, Apple's developer documentation
- a set of command-line tools to work with fonts
(in PDF format)
An example of an AAT table
Fontforge documentation
Discussion on AAT used in Persian fonts
Text rendering libraries
Font formats
MacOS APIs |
50760465 | https://en.wikipedia.org/wiki/Commander%20Keen%20in%20Invasion%20of%20the%20Vorticons | Commander Keen in Invasion of the Vorticons | Commander Keen in Invasion of the Vorticons is a three-part episodic side-scrolling platform video game developed by Ideas from the Deep (a precursor to id Software) and published by Apogee Software in 1990 for MS-DOS. It is the first set of episodes of the Commander Keen series. The game follows the titular Commander Keen, an eight-year-old child genius, as he retrieves the stolen parts of his spaceship from the cities of Mars, prevents a recently arrived alien mothership from destroying landmarks on Earth, and hunts down the leader of the aliens, the Grand Intellect, on the alien home planet. The three episodes feature Keen running, jumping, and shooting through various levels while opposed by aliens, robots, and other hazards.
In September 1990, John Carmack, while working at programming studio Softdisk, developed a way to implement side-scrolling video games on personal computers (PCs), which at the time was the province of dedicated home video game consoles. Carmack and his coworkers John Romero and Tom Hall, along with Jay Wilbur and Lane Roathe, developed a demo of a PC version of Super Mario Bros. 3, but failed to convince Nintendo to invest in a PC port of their game. Soon afterwards, however, they were approached by Scott Miller of Apogee Software to develop an original game to be published through the Apogee shareware model. Hall designed the three-part game, John Carmack and Romero programmed it, Wilbur managed the team, and artist Adrian Carmack helped later in development. The team worked continuously for almost three months on the game, working late into the night at the office at Softdisk and taking their work computers to John Carmack's home to continue developing it.
Released by Apogee in December 1990, the trilogy of episodes was an immediate success; Apogee, whose monthly sales had been around US$7,000, made US$30,000 on Commander Keen alone in the first two weeks and US$60,000 per month by June, while the first royalty check convinced the development team, then known as Ideas from the Deep, to quit their jobs at Softdisk. The team founded id Software shortly thereafter and went on to produce another four episodes of the Commander Keen series over the next year. The trilogy was lauded by reviewers due to the graphical achievement and humorous style, and id Software went on to develop other successful games, including Wolfenstein 3D (1992) and Doom (1993). The Vorticons trilogy has been released as part of several collections by id and Apogee since its first release, and has been sold for modern computers through Steam since 2007.
Gameplay
The three episodes of Commander Keen in Invasion of the Vorticons make up one side-scrolling platform video game: most of the game features the player-controlled Commander Keen viewed from the side while moving on a two-dimensional plane. Keen can move left and right and can jump; after finding a pogo stick in the first episode, he can also bounce continuously and jump higher than he can normally with the correct timing. The levels are composed of platforms on which Keen can stand, and some platforms allow Keen to jump up through them from below. The second episode introduces moving platforms as well as switches that extend bridges over gaps in the floor. Once entered, the only way to exit a level is to reach the end, and the player cannot save and return to the middle of a level. In between levels, Keen travels on a two-dimensional map, viewed from above; from the map the player can enter levels by moving Keen to the entrance or save their progress in the game. Some levels are optional and can be bypassed, while others are secret and can only be reached by following specific procedures.
Each of the three episodes contain a different set of enemies in their levels, which Keen must kill or avoid. The first episode includes Martians, the second largely uses robots, and the third more species of aliens. All three episodes also include Vorticons, large blue canine-like aliens. Levels can also include hazards like electricity or spikes. Touching a hazard or most enemies causes Keen to lose a life, and the game is ended if all of Keen's lives are lost. After finding a raygun in the first episode, Keen can shoot at enemies using ammunition found throughout the game; different enemies take differing numbers of shots to kill, or in some cases are immune. Some enemies can also be stunned if they are jumped on, such as the one-eyed Yorps, which block Keen's path but do not harm him. Keen can find food items throughout the levels which grant points, with an extra life awarded every 20,000 points. There are also colored keycards that grant access to locked parts of levels, and in the third episode on rare occasions an ankh, which gives Keen temporary invulnerability.
Plot
The game is broken up into three episodes: "Marooned on Mars", "The Earth Explodes", and "Keen Must Die!". In the first episode, eight-year-old child genius Billy Blaze builds a spaceship and puts on his older brother's football helmet to become Commander Keen. One night while his parents are out of the house, he flies to Mars to explore, but while away from the ship the Vorticons steal four vital components and hide them in Martian cities. Keen journeys through Martian cities and outposts to find the components, despite the efforts of Martians and robots to stop him. After securing the final component, which is guarded by a Vorticon, Keen returns to Earth—discovering a Vorticon mothership in orbit—and beats his parents home, who discover that he now has a pet Yorp.
In the second episode, the Vorticon mothership has locked its X-14 Tantalus Ray cannons on eight of Earth's landmarks, and Keen journeys to the ship to find and deactivate each of the cannons. Keen does so by fighting more varied enemies and hazards and a Vorticon at each cannon's control. At the end of the episode, he discovers that the Vorticons are being mind-controlled by the mysterious Grand Intellect, who is actually behind the attack on Earth.
In the third episode, Keen journeys to the Vorticon homeworld of Vorticon VI to find the Grand Intellect. He travels through Vorticon cities and outposts to gain access to the Grand Intellect's lair, fighting mostly against the Vorticons themselves. Upon reaching the lair, he discovers that the Grand Intellect is actually his school rival Mortimer McMire, whose IQ is "a single point higher" than Keen's. Keen defeats Mortimer and his "Mangling Machine" and frees the Vorticons from mind-control; the Vorticon king and "the other Vorticons you haven't slaughtered" then award him a medal for saving them.
Development
Genesis
In September 1990, John Carmack, a game programmer for the Gamer's Edge video game subscription service and disk magazine at Softdisk in Shreveport, Louisiana, with the aid of a copy of Michael Abrash's Power Graphics Programming, developed from scratch a way to create graphics which could smoothly scroll in any direction in a computer game. At the time, IBM-compatible general-purpose computers lacked the specialized hardware of video game consoles such as the Nintendo Entertainment System and could not redraw the entire screen fast enough for a smooth side-scrolling video game. Carmack, rejecting the "clever little shortcuts" that other programmers had attempted to solve the problem with, created adaptive tile refresh: a technique that, when the player character moved, shifted most of the visible screen horizontally or vertically as if it were unchanged and redrew only the newly visible portions. Other scrolling computer games had previously redrawn the whole screen in chunks, or like Carmack's earlier games were limited to scrolling in one direction. He discussed the idea with coworker Tom Hall, who encouraged him to demonstrate it by recreating the first level of the recently released Super Mario Bros. 3 on a computer. The pair did so in a single overnight session, with Hall recreating the graphics of the game—replacing the player character of Mario with Dangerous Dave, a character from an eponymous previous Gamer's Edge game—while Carmack optimized the code. The next morning, September 20, Carmack and Hall showed the resulting game, Dangerous Dave in Copyright Infringement, to another coworker, John Romero. Romero recognized Carmack's programming feat as a major accomplishment: Nintendo was one of the most successful companies in Japan, largely due to the success of their Mario franchise, and the ability to replicate the gameplay of the series on a computer could have large implications. The scrolling technique did not meet Softdisk's coding guidelines as it needed at least a 16-color EGA graphics processor, and the programmers in the office who did not work on games were not as impressed as Romero.
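The core idea of adaptive tile refresh, redrawing only the tiles whose contents have changed since the last frame instead of the whole screen, can be illustrated with the simplified C sketch below. It omits the EGA hardware panning that Carmack combined with the technique, and the names and dimensions are illustrative rather than taken from the actual engine.

#include <stdint.h>

#define TILES_X 20
#define TILES_Y 13

static uint16_t current_map[TILES_Y][TILES_X];   /* tiles wanted this frame  */
static uint16_t previous_map[TILES_Y][TILES_X];  /* tiles already on screen  */

/* Placeholder for the routine that draws one tile into video memory. */
static void draw_tile(int tx, int ty, uint16_t tile)
{
    (void)tx; (void)ty; (void)tile;
}

void refresh_changed_tiles(void)
{
    for (int ty = 0; ty < TILES_Y; ty++) {
        for (int tx = 0; tx < TILES_X; tx++) {
            if (current_map[ty][tx] != previous_map[ty][tx]) {
                draw_tile(tx, ty, current_map[ty][tx]);
                previous_map[ty][tx] = current_map[ty][tx];
            }
        }
    }
}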
Romero felt that the potential of Carmack's idea should not be "wasted" on Softdisk; while the other members of the Gamer's Edge team more or less agreed, he especially felt that their talents in general were wasted on the company, which needed the money their games brought in but in his opinion neither understood nor appreciated video game design as distinct from general software programming. The manager of the team, fellow programmer Jay Wilbur, recommended that they take the demo to Nintendo itself to position themselves as capable of building a PC version of Super Mario Bros. for the company. The group—composed of Carmack, Romero, Hall, Wilbur, and Gamer's Edge editor Lane Roathe—decided to build a full demo game to send to Nintendo. As they lacked the computers to build the project at home and could not work on it at Softdisk, they "borrowed" their work computers over the weekend, taking them in their cars to a house shared by Carmack, Wilbur, and Roathe. The group then spent the next 72 hours working non-stop on the demo, which copied Super Mario Bros. 3 with some shortcuts taken in the artwork, sound, and level design, and a title screen that credited the game to the programmers under the name Ideas from the Deep, a name Romero had used for some prior Softdisk projects. The response from Nintendo a few weeks later was not as hoped for: while Nintendo was impressed with their efforts, they wanted the Mario series to remain exclusive to Nintendo consoles.
Around the same time the group was rejected by Nintendo, Romero was receiving fan mail about some of the games he had developed for Gamer's Edge. Upon realizing that all the letters came from different people but shared the same address—that of Scott Miller of Apogee Software—he wrote back an angry reply, only to receive a phone call from Miller soon after. Miller explained that he was trying to get in contact with Romero unofficially, as he expected that Softdisk would screen his mail to him at the company. He wanted to convince Romero to publish more levels for his previous Pyramids of Egypt—an adventure game in which the player navigates mazes while avoiding Egyptian-themed traps and monsters—through Apogee's shareware model. Miller was pioneering a model of game publishing in which part of a game would be released for free, with the remainder of the game available for purchase from Apogee. Romero said he could not, as Pyramids of Egypt was owned by Softdisk, but that it did not matter as the game he was now working on was much better. Romero sent Miller the Mario demo, and the two agreed that Ideas from the Deep would create a new game for Apogee. The group negotiated that Miller would front them money for the development costs, which Miller later claimed was all the money Apogee had. Miller sent the group a US$2,000 advance in return for an agreement that they would create a game before Christmas of 1990, only a few months away. This advance was the team's entire budget for development. The game was planned to be split into three parts to match Apogee's shareware model of giving away the first part for free to attract interest in the whole.
Creation
Ideas from the Deep convened to come up with the design for the game, and Hall suggested a console-style platformer in the vein of Super Mario Bros., as they had the technology made for it; he further recommended a science fiction theme. John Carmack added the idea of a genius child protagonist saving the world, and Hall quickly created a short summary for the game: a dramatic introduction about eight-year-old genius Billy Blaze, defending the Earth with his spaceship. When he read out the summary in an over-dramatic voice to the group, they laughed and applauded, and the group agreed to begin work on Commander Keen in the Invasion of the Vorticons.
The Ideas from the Deep team, who referred to themselves as the "IFD guys", could not afford to leave their jobs to work on the game full-time, so they continued to work at Softdisk making Gamer's Edge games during the day while working on Commander Keen at night. They also continued to take home their work computers to Carmack's house on the weekends, putting them in their cars at night and bringing them back in the morning before anyone else arrived; they even began to request upgrades to the computers from Softdisk, nominally for their work. The group split into different roles: Hall became the game designer and creative director, John Carmack and Romero were the programmers, and Wilbur the manager. They invited artist Adrian Carmack from Softdisk to join them late in development, while Roathe was soon kicked out of the group; Romero, the self-appointed leader of the team, liked him but felt that his work ethic did not match well with the rest of the team and pushed for his removal. Ideas from the Deep spent nearly every waking moment when they were not working at Softdisk from October through December 1990 working on Commander Keen, with Wilbur forcing them to eat and take breaks. As an example of the team's commitment, several members have mentioned in interviews a night during development when a heavy storm flooded the path to the house; rather than miss the night's work, John Romero waded through the flooding river to make it there anyway.
As the principal designer, Hall's personal experiences and philosophies strongly impacted the game: Keen's red shoes and Green Bay Packers football helmet were items Hall wore as a child, dead enemies left behind corpses due to his belief that child players should be taught that death had permanent consequences, and enemies were based loosely on his reading of Sigmund Freud's psychological theories, such as that of the id. Other influences on Hall for the game were Duck Dodgers in the 24½th Century (1953) and other Chuck Jones cartoons, and "The Available Data on the Worp Reaction", a 1953 short story by Lion Miller about a child constructing a spaceship. Keen's "Bean-with-Bacon" spaceship was taken from a George Carlin skit about using bay leaves as deodorant so as to smell like soup. Keen was intended to be a reflection of Hall as he had wanted to be as a child. The team separated the game from its Super Mario Bros. roots by adding non-linear exploration and additional mechanics like the pogo stick. A suggestion from Miller that part of the popularity of Super Mario Bros. was the presence of secrets and hidden areas in the game led Hall to add several secrets, such as an entire hidden level in the first episode, and a "Galactic Alphabet" in which signs in the game were written, which if deciphered by the players revealed hidden messages, jokes, and instructions. The level maps were designed using a custom-made program called Tile Editor (TEd), which was first created for Dangerous Dave and was used for the entire Keen series as well as several other games. To maintain the mood of the game's storyline and entice players to play the next episode, the team incorporated cliffhangers between each part of the trilogy.
As the game neared completion, Miller began to market the game to players. Strongly encouraged by the updates the team was sending him, he began heavily advertising the game in all the bulletin board systems (BBS) and game magazines he had access to, as well as sending the team US$100 checks every week labelled "pizza bonus" after one of the game's food items to keep them motivated. The game was completed in early December 1990, and on the afternoon of December 14, Miller began uploading the completed first episode to BBSs, with the other two episodes listed as available for purchase as a mailed plastic bag with floppy disks for US$30.
Reception
Commander Keen was an immediate hit for Apogee: the company's previous sales levels had been around US$7,000 per month, but by Christmas Keen already had sales of almost US$30,000. Miller described the game as "a little atom bomb" to magazine editors and BBS controllers when asked about it; its success led him to recruit his mother and hire his first employee to handle sales and phone calls from interested players, and to quit his other job and move Apogee from his house into an office. By June 1991, the game was bringing in over US$60,000 per month; in 1995 the team estimated that the game had made US$300,000 to $400,000. Chris Parker of PC Magazine later in 1991 referred to the game's release as a "tremendous success". Apogee announced plans to license the game to another publisher for a Nintendo Entertainment System port in an advertising flyer that year, but no such version was ever created. Scott Miller estimated in 2009 that the trilogy eventually sold between 50,000 and 60,000 copies.
A contemporary review by Barry Simon of PC Magazine praised the game's graphical capabilities as having a "Nintendo feel", though he termed the graphics as "well drawn" but "not spectacular" in terms of resolution. He noted that the game was very much an arcade game that players would not purchase for "its scintillating plot or ground-breaking originality", but said that all three episodes were very fun to play and that the scrolling graphics set it apart from similar games. A short summary of the trilogy in 1992 in PC World termed it "one of the most spectacular games available" and praised the "superb" sound and graphics, while a similar summary in CQ Amateur Radio described it as "Nintendo comes to the PC" and the "best action/adventure game" the reviewer had ever seen. In October 1992, the Shareware Industry Awards gave the Commander Keen series the "Best Entertainment Software and Best Overall" award. A review of the entire Commander Keen series in 1993 by Sandy Petersen in the first "Eye of the Monitor" column for Dragon described the series as action games with "hilarious graphics". Acknowledging its debt to Super Mario Bros., he called it, including the Vorticon trilogy, "one of the best games of its type" and praised it for not being "mindlessly hard", though still requiring some thought to play through, and especially for the humor in the graphics and gameplay.
Legacy
Ideas from the Deep's first royalty check from Apogee in January 1991 for US$10,500 convinced them that they no longer needed their day jobs at Softdisk but could devote themselves full-time to their own ideas. Hall and Wilbur were concerned about the risk of being sued if they did not break the news gently to Softdisk, but Romero and John Carmack were dismissive of the possibility, especially as they felt they had no assets for which they could be sued. Shortly thereafter, John Carmack was confronted by their boss, Softdisk owner Al Vekovius, who had become suspicious of the group's increasingly erratic, disinterested, and surly behavior at work, as well as their multiple requests for computer upgrades. Vekovius had been told by another employee that the group were making their own games, and he felt that Carmack was generally incapable of lying. Carmack in turn bluntly admitted that they had made Keen with Softdisk computers, that they felt no remorse for their actions, and that they were all planning on leaving. Vekovius felt that the company was reliant on the Gamer's Edge subscriptions and tried to convince the group to instead form a new company in partnership with Softdisk; when Ideas from the Deep made no secret of the offer in the office, the other employees threatened to all quit if the team was "rewarded" for stealing from the company. After several weeks of negotiation, the Ideas team agreed to produce a series of games for Softdisk, one every two months, and on February 1, 1991, founded id Software.
In the 2017 History of Digital Games, author Andrew Williams claims that Vorticons "signaled a new direction for computer games in general", as well as setting a tone of gameplay mechanics for future id games, by introducing a sense of "effortless movement" as players explored through large open spaces instead of disconnected screens like prior platforming computer games. In the summer of 1991, id hosted a seminar for game developers with the intention of licensing the Commander Keen engine; they did so, forming the spiritual predecessor to both QuakeCon and id's standard of licensing their game engines. Id Software also produced several more games in the Commander Keen series; the first of these, Commander Keen in Keen Dreams, was published in 1991 through their agreement with Softdisk. Commander Keen in Goodbye, Galaxy, composed of the episodes "Secret of the Oracle" and "The Armageddon Machine", was published through Apogee in December 1991, and the final id-developed Keen game, Commander Keen in Aliens Ate My Babysitter, was published through FormGen around the same time. Another trilogy of episodes, titled The Universe Is Toast!, was planned for December 1992, but was cancelled after the success of id's Wolfenstein 3D and development focus on Doom. A final Keen game, Commander Keen, was developed for the Game Boy Color in 2001 by David A. Palmer Productions in association with id Software, and published by Activision.
The original trilogy has been released as part of several collections since its first release: the id Anthology compilation in 1996, a compilation release by Apogee in 1998 of Invasion of the Vorticons and Goodbye, Galaxy, a similar compilation in 2001 by 3D Realms titled Commander Keen Combo CD, and the 3D Realms Anthology in 2014. They have also been released for modern computers through a DOS emulator, and sold through Steam since 2007 as a part of the Commander Keen Complete Pack. According to Steam Spy, as of 2017 over 200,000 copies are registered through Steam.
References
Sources
External links
Official 3D Realms (formerly Apogee) site for Invasion of the Vorticons, including download for "Marooned on Mars"
All three episodes of Invasion of the Vorticons ("Marooned on Mars", "The Earth Explodes", and "Keen Must Die!") can be played for free in the browser at the Internet Archive
1990 video games
Apogee games
Commander Keen
DOS games
Games commercially released with DOSBox
Id Software games
Shareware
Side-scrolling platform games
Single-player video games
Video games developed in the United States
Video games set on fictional planets
Video games set on Mars
Windows games |
9800018 | https://en.wikipedia.org/wiki/Lo%C3%AFc%20Dachary | Loïc Dachary | Loïc Dachary (born 1965) is a French free software developer and activist who has been active since 1987. Dachary currently contributes to free software projects and acts as president of the Free Software Foundation in France. He is a speaker for the GNU Project and the April association. He now works as a full-time volunteer for the SecureDrop project.
Career
Dachary started as a C and Lisp developer in 1986. In 1987, he was hired to teach Unix and programming at Axis. In 1988, Lectra launched a 68k-based hardware platform, and Dachary led a Unix System V port to it using the GNU Compiler Collection. When the project was completed, he took a sabbatical in 1989 to create the GNA, a non-profit group for the development and distribution of free software. Dachary then went back to development work for Tic Tac Toon and Agence France-Presse (AFP), where he learned C++.
In 1995, he founded the Ecila search engine, which was acquired by Tiscali and discontinued in 2001. After two years dedicated to the FSF France, Dachary worked for INRIA on RFID software in 2003. Starting with Mekensleep in 2004, Dachary wrote poker-related free software, which was published as part of the PokerSource project. In 2007 he was hired by OutFlop, a startup that specialized in poker software services based on PokerSource. In 2011 he became a Free Software freelancer and worked on XiVO.
In January 2012, Dachary became involved in OpenStack and worked as a Ceph developer. Since 2017 he has been working on the SecureDrop project as a full-time volunteer.
Non-profit career
GNA
When he was employed by Axis in 1987, Dachary offered copies of free software on magnetic tapes, such as Emacs or the GNU Compiler Collection. The Internet was difficult to access at the time, and he became a distributor of software. After working on the Lectra project, Dachary created the GNA (Gna's Not Axis) non-profit organization in 1989.
Although the original goal of GNA was to develop free software, it started as a news and mail provider. The connectivity to the news and mail provider in Washington, D.C. was provided by Agence France-Presse (AFP) over a half-used satellite link. In 1990, GNA provided mail and news feeds to over 200 non-profit organizations, individuals, and companies.
GNA kept distributing free software on magnetic tapes. Compiling from the sources was difficult, and Dachary packaged pre-compiled binaries for the Motorola 68000 family, x86 and SPARC. He left tapes at his computer book store, Le Monde en "tique", which sold them to its customers.
Richard Stallman occasionally visited France and met Dachary when he started to spread free software. During his year at GNA, Dachary took advantage of Stallman's presence in France to organize conferences at École nationale supérieure d'arts et métiers, Paris 8 University and his former school École Pour l'Informatique et les Techniques Avancées.
Nineties
In 1990, Dachary took a secondary role in GNA as his friend Hugues Lafarge became president. In 1996, he met with the founders of April, a French Free Software NGO, to whom he donated their first server, hosted at Ecila's office.
Savannah
Dachary returned to a more active role in non-profit work in 2000, when he wrote code for the newly created SourceForge platform. In an attempt to understand the project dynamics, he traveled to the VA Linux offices and met with Eric Raymond and Tim Perdue, but felt the team was too focused on in-house interactions to welcome an external contributor. However, he became familiar with the published code base, and when VA Linux started to use proprietary software, he created an alternative platform using the latest free software code base published: GNU Savannah. In early 2001, Dachary secured the funds to buy hardware for the Free Software Foundation offices and moved to Boston to install them. In late 2001, when VA Linux asked him to assign copyright for his contributions, Dachary published an article explaining why it was time to move away from SourceForge. Starting in 2002, Dachary gradually delegated his responsibilities to GNU Savannah contributors and Free Software Foundation staff, and he retired from the role in 2003.
Copyleft compliance
In 2001, the Free Software Foundation established a presence in Europe, and Dachary took part in the process by creating FSF France (reusing the publication number 20010027 initially given to GNA in the French Journal Officiel) and serving as the first vice president of FSF Europe. In addition to supporting the growth of GNU Savannah, FSF France assisted French developers and companies with GPL compliance. With Bradley M. Kuhn, executive director of the Free Software Foundation at the time, Dachary worked on the first contract of the new GPL Compliance Labs, for a French governmental agency. His ongoing interest in enforcing copyleft licenses led to a few visible outcomes, such as the publication of C&CS by Orange and SFR, and the court case involving Free, but most of this work is kept confidential. In 2009, Dachary joined the Software Freedom Conservancy board of directors for a few years.
EUCD.INFO
In late 2002, Dachary became concerned by the lack of protest against the European Copyright Directive that was about to be transposed into French law. Instead of acting in the name of FSF France and because the problems were not limited to free software, he launched the EUCD.INFO initiative. After six months of full-time lobbying, he hired Christophe Espern to work for FSF France and delegated his responsibilities. Dachary has not played an active role since, although he keeps in touch through his friend Jérémie Zimmermann, Christophe Espern's successor, co-founder, and spokesperson of La Quadrature du Net.
GNA! and hosting
In late 2003, the compromise of the GNU Savannah servers created tensions in the project's governance. In early 2004, Dachary worked with Mathieu Roy to set up GNA! using the same policy and software, but with a different decision-making process. In 2010, the maintainers of Savannah and GNA! overlapped and acted as network administrators for both.
Although centralized forges such as GNA! are popular, Dachary has gradually moved towards distributed hosting. He collects machines and hosting facilities under the umbrella of FSF France and runs OpenStack-based clusters on them. Dachary helps projects such as the GNU Compiler Collection and the Software Freedom Conservancy by providing and maintaining machines on this infrastructure. In late 2008, he created an amateur datacenter (codename microdtc34) in a seven-square-meter room in the center of Paris with Laurent Guerby, who was in the process of creating tetaneutral.net, a non-profit ISP.
April.org
Dachary was appointed an honorary member of the April association. On behalf of April he participated in the commission spécialisée de terminologie et de néologie de l'informatique et des composants électroniques from 2006 to 2009, primarily to discuss the definition of "free software".
Upstream University
Shortly after the April 2012 OpenStack summit, Dachary founded Upstream University to train developers to become better Free Software contributors.
References
External links
1965 births
Living people
French computer programmers
Free software programmers
GNU people |
530534 | https://en.wikipedia.org/wiki/IXI%20Limited | IXI Limited | IXI Limited was a British software company that developed and marketed windowing products for Unix, supporting all the popular Unix platforms of the time. Founded in 1987, it was based in Cambridge. The product it was most known for was X.desktop, a desktop environment graphical user interface built on the X Window System. IXI was acquired by the Santa Cruz Operation (SCO) in February 1993.
Origins in the Cambridge hi-tech cluster
In the beginning of the 1970s, the so-called Cambridge hi-tech cluster became the site of a network of new firms in the rapidly growing computer field, many of which featured founders and employees who had studied at the University of Cambridge.
In particular, as an article in the journal Regional Studies has noted, IXI was one of many companies started by founders, employees, or others in the orbit of Cambridge-based Acorn Computers, the most noted of which is Arm Holdings.
IXI founder Ray Anderson was a graduate of the university who had become director of research and development at Torch Computers, a computer systems firm in the Cambridge area best known for making peripherals for Acorn's BBC Micro. Torch built workstations among its products, and also had a license agreement to provide NeXT with aspects of workstation technology.
In the end, Torch was not successful, but its work inspired Anderson to carry the idea on.
Independent company
IXI Limited was founded by Ray Anderson in 1987 as a private company. Anderson originally had a former colleague as a partner, but the partner decided a start-up was too uncertain and pulled out within a year or so. Anderson found funding for IXI from sources in the United Kingdom, Germany, Austria, and Japan, but avoided US investors, as his prior experiences had made him leery of them.
As one former SCO UK employee has succinctly summarised, "IXI specialised in software that ran on Unix and made Unix easier to use." In particular, a goal was to make Unix workstations as easy to use as a Macintosh, which would allow non-technical people to use such platforms.
IXI's best-known product was X.desktop, an X11-based graphical desktop environment with finder and file management capabilities for Unix systems. There was an opportunity for such a product because when the X11 version of the X Window System came out in 1987, it made a point of separation of mechanism and policy (indeed, it has been termed a canonical example of that design philosophy). Consequently, while it supported the ability to provide such things, it contained no specification for application user-interface design such as buttons, menus, or window title-bar styles, nor did it provide a standard window manager, file manager, or desktop.
The initial unreleased version of X.desktop, intended as a proof of concept, was programmed to the Xlib level; the first version that saw public release, 1.3, was based on the Xt library and Athena widgets. The X.desktop product then came to be based on the Motif toolkit from the Open Software Foundation (OSF), a switch that happened in 1989 with release 2.0.
The first customers for IXI came from the financial industry, early adopters of Unix-based workstations. These were generally American companies, with sales to the Japanese market coming soon thereafter.
Indeed, IXI has been characterized as an example of a "global start-up", in that instead of following the expected route for a start-up of establishing a domestic business first and then slowly expanding into international operations, it worked to establish an international business right away.
A crucial part of IXI's strategy was to capitalize on standards and thus defeat competitors who relied more upon proprietary solutions. This later paid off when X.desktop proved cost-effective to internationalize into other languages.
X.desktop was sold both as shrink-wrapped software for end users, at a price of $495 for any platform, and on an OEM basis to system manufacturers. Early OEM customers included Locus Computing Corporation, BiiN, Olivetti, Acorn Computers and Compaq. An OEM sale to IBM in 1989 for its RS/6000 and IBM AIX proved to be a turning point in IXI's fortunes. The SCO OpenDesktop product was another early adopter, and became another key usage of X.desktop for the company. OEM customers gained by 1990 included NCR Corporation, Dell, Uniplex, Parallel Systems, and Network Computing Devices.
As an InfoWorld article from 1990 stated, the ability to have OEM customers was a key factor in IXI Limited being successful. Part of this success was due to X.desktop coming with a customization toolkit that allowed system manufacturers to modify the appearance and functionality of the desktop environment to match their needs. The customization took the form not just of customizing icons, but also of tying icons to arbitrary series of commands.
The X.desktop product was stated as being ported to, and sold on, over a dozen different Unix variants. And as a paper published for a 1994 USENIX conference detailed, versions of X.desktop were actually built for over 30 different Unix platforms. The ability to maintain the portability of the X.desktop code base became a key factor in IXI's success with the product.
The primary competitor of X.desktop was the Looking Glass product from the American company Visix Software, Inc. Trade publications ran comparisons of the two desktop environments, and detailed cases where one beat another for an account.
Eventually over a million instances of X.desktop were in use. In 1992 IXI released Deskworks, a suite of productivity tools that included such things as a clock, a text editor, a mail client, a time management tool, and the like.
For 1992, IXI Ltd had revenues of about $6 million. By early 1993, the firm employed around 50 people, and in addition to its Cambridge headquarters, it also had offices in San Ramon, California, in the US and in Tokyo in Japan. According to a later article in MoneyWeek, by this time IXI had some 70 percent of the workstation market.
Acquisition by SCO
IXI was acquired by the Santa Cruz Operation (SCO), in an announcement made on 25 February 1993. Terms of the purchase were not publicly disclosed, but did involve an exchange of stock. As mentioned, SCO had previously licensed IXI technology in its operating system product, and there were existing ties between the engineering and marketing functions of both companies. Anderson later said of his motivation to sell, that having gotten a 70 percent share, "getting the remaining 30 percent of the market would have required heavy reinvestment. I felt ready to move on."
The IXI brand continued on for the next couple of years as a relatively independent subsidiary of SCO. The announcement in mid-1993 by several major Unix vendors of the Common Desktop Environment (CDE) project posed a competitive threat to X.desktop, but it took two years of further development until CDE actually came out.
Several new products were introduced during this time. IXI Panorama, introduced about a month after the acquisition, was a Motif-based window manager that could run with or without X.desktop; it had the ability to plot and manage a virtual space much greater than the physical space of the monitor itself. Panorama was extended in March 1994 with IXI Mosaic, reflecting the incorporation of the first popular web browser, Mosaic, into the SCO Global Access product, a modified version of SCO Open Desktop that served as an Internet gateway. In doing this, SCO and IXI put out the world's first commercial web browser based on Mosaic and, according to Anderson, the first commercial web browser of any kind.
IXI Premier Motif was a product created by taking the Motif source code released by the OSF, applying a set of bug fixes and enhancements, and porting it so that it behaved identically across platforms. IXI also offered some twenty different Motif training courses for users. The IXI Wintif product, which became available in 1994, built further upon Premier Motif to create a version of Motif with the look and feel of Microsoft Windows 3.1, enabling Windows users to operate Unix applications without confusion or the need for additional training. A later version extended this capability to Windows 95.
Then in 1995, the IXI business unit of SCO was merged with another SCO acquisition, the Leeds-based Visionware, to form IXI Visionware. (IXI had previously collaborated with Visionware, going back to 1988 when the Visionware technologies were first being developed within Systime Computers Ltd.) Later in 1995 the merged business unit was subsumed more fully into its parent and became the Client Integration Division of SCO, which released products from both former companies under the "Vision" family name. This included the creation of VisionFS, an SMB server that could perform network installs of the Windows components of the Vision family from a Unix server with minimal user configuration needed. The division then developed and released the Tarantella terminal services application in 1997, which became the core of Tarantella, Inc. in 2001. Tarantella, Inc. struggled and, following company-wide layoffs, the Cambridge development site closed in the summer of 2003.
Fates
The X.desktop code gradually went into maintenance mode as X.desktop OEM providers migrated to CDE and many end-users abandoned Unix-based workstations altogether and switched to Wintel platforms. Ray Anderson left SCO after several years there, and in 1999 founded Bango plc, a mobile commerce company based in Cambridge.
References
Defunct software companies
Defunct companies based in Cambridgeshire
Software companies established in 1987
Software companies disestablished in 1993
Software companies of England
1987 establishments in England
1993 disestablishments in England |
58340405 | https://en.wikipedia.org/wiki/Rossen%20Petkov | Rossen Petkov | Rossen Kirchev Petkov is a Bulgarian writer and teacher, one of the country's pioneers in the field of digital arts, computer graphics and multimedia. He is the author of dozens of articles about modern media in education and learning, founded a network of student information and career centers in Bulgaria, and chairs the organizational committee of the Computer Space forum, an international forum for computer art.
Biography
Rossen Petkov was born in Haskovo. In 1985 he graduated from the Todor Velev Mathematics High School, and in 1990 from the Technical University of Sofia.
In 1987, while still a student, he started working on projects developing algorithms and programs for electronic music and computer graphics. In 1991–1992 he was an editor and writer for the Graphics with Computer magazine, where he had his own column, Computer Arts. In the early 1990s Petkov founded one of the first computer arts organizations in Bulgaria, the Student Computer Art Society (SCAS), with members among students, artists and experts in the field of modern media and digital arts.
Rossen Petkov teaches Methods, Algorithms and Applications in Computer Graphics at the Technical University of Sofia, New Bulgarian University and other institutions.
In the late 1990s Petkov founded student and youth information centers that used the Internet and databases to find information of benefit to young people – on education, jobs, travel, funding for start-up projects, and so on. Under his guidance, SCAS joined efforts with the governmental Committee for Youth and Sports to establish a representative office of the international youth information network Eurodesk.
Bibliography
Petkov R. Musical Creativity and Microcomputers, "Young Constructor" Magazine, ISSN 0204-8469, Sofia, vol. 1, 1988, p. 2-3
Boyanov Y., R. Petkov, Program "Composer1", "Young constructor" magazine, ISSN 0204-8469, Sofia, vol. 4, 1988, pp. I-IV
Boyanov Y., S. Lazarov, R. Petkov, Method and Program for Various Variations, "Computer for You", ISSN 0205-1893, Sofia, vol. 1., 1989,
Petkov R., Program for Three-dimensional Graphics, Computer Graphics Magazine, ISSN 0861-4636, Sofia, issue no. 1, 1992, pp. 14–15
Petkov R., MIDI standard for communication between digital musical institutions, Computer Graphics Magazine, ISSN 0861-4636, Sofia, issue 2, 1992, pp. 14–16
Petkov R., Graphic design, PC & Mac World magazine, ISSN 0737-8939, Sofia, issue no. 7, 1994, pp. 92–94
Petkov, R., Multimedia development and computer arts in Bulgaria, Balkanmedia magazine, V3, Balkanmedia, , C: 1996, p. 25-26
Petkov, R. Trends and Problems in the Development of Multimedia Technologies in Bulgaria, Bulgarian Media-learning (collected papers), Balkanmedia, C: 1996, .- 1 (1996), pp. 354–360
Petkov R., D. Davitt, D. Donedd, Career Development Manual for Consultants, SCAS, C: 2004,
Petkov R. ed., Computer Space festival catalog (CD), SCAS, C: 2004,
Petkov R. ed., EuroNET- Youth Resources in Internet Catalog (CD), 5th edition, SCAS, C: 2005, MC 11645
Petkov R., Tsv. Ilieva, others, E-Games: successful implementation of e-games in youth work manual, SCAS, C: 2007,
Petkov R., Tsv. Ilieva, others, Quality Assurance in Youth Career Consultancy Manual, SCAS, C: 2008,
Petkov R., W. Hilzensauer, others, ePortfolio for your future manual, SCAS, C: 2009,
Petkov R., About Old Books and Computer Arts, Monograph, SCAS, C: 2010,2012 (Second revised Edition),
Petkov R., E. Licheva, others, "Validation of self-acquired learning and credit transfer in web design and computer animation" manual, Rosen Petkov, SCAS: 2013,
Elitsa Licheva, Rosen Petkov, "E-Portfolio for the Evaluation of Informal Learning in Web Design and Computer Animation", Proceedings of ePIC 2013, 11th International ePortfolio and Identity Conference, London, 8-9-10 July 2013, ISBN 978-2-9540144-3-2, p. 179-180
Petkov R., E. Licheva et al., Guide "Design of Binding and Conservation of Old Books, Albums and Documents", , S: Central Library of Bulgarian Academy of Sciences, 2014
Petkov R., E. Licheva, D. Atanasova, Electronic Learning and Computer Design (CAD) of Binding, Vocational Education Magazine, vol. 16 pcs. 6, 2014, pp. 573–588
Petkov, R., D. Krastev, E. Licheva and others, Binding Design and Paper Conservation of Antique Books, Albums and Documents, Manual, SCAS 2014,
Petkov R., E. Licheva and others, Mobile Games in Youth Work Manual, NSICC, 2015,
Petkov R., E. Licheva and Others, Self-Guidance and Modern Media Literacy Manual, 2016,
References
1967 births
Bulgarian businesspeople
Living people |
51662260 | https://en.wikipedia.org/wiki/Paul%20Douglas%20%28musician%29 | Paul Douglas (musician) | Earl “Paul” Douglas (born c. 1950) is a Jamaican Grammy Award-winning drummer and percussionist, best known for his work as the drummer, percussionist and bandleader of Toots and the Maytals. His career spans more than five decades as one of reggae's most recorded drummers. Music journalist and reggae historian David Katz wrote, “dependable drummer Paul Douglas played on countless reggae hits."
Douglas has worked with artists including Bob Marley and the Wailers, Bonnie Raitt, and Eric Gale. Douglas has also toured with artists including The Rolling Stones, Willie Nelson, Dave Matthews Band, The Who, Eagles and Sheryl Crow.
Early life
Paul Douglas was born in St. Ann, Jamaica. His career as a professional musician began in 1965 at the age of 15.
Influences
Douglas’ musical influences include Lloyd Knibb, Steve Gadd, Harvey Mason, Sonny Emory, Elvin Jones, William Kennedy, Carlos Santana, Bob Marley, John Coltrane, Sam Cooke, George Duke, Boris Gardiner, The Skatalites, Eric Gale, Leslie Butler, George Benson, Marvin Gaye, David Garibaldi, and David Sanborn.
Affiliated groups
While Douglas has maintained an active career as a studio musician for reggae, jazz, and funk artists since 1965, he has also been a member of several notable musical groups.
Toots and the Maytals
In 1969 Douglas joined Toots and the Maytals as a founding member of the band as it is known today, which up to that time had consisted of a vocal trio. Douglas has been the group's drummer, percussionist and bandleader from 1985 to the present day.
Excerpt from "The Rise of Reggae and the influence of Toots and the Maytals" by Matthew Sherman:"...Reggae was born. Toots (Toots Hibbert) heralded the new sound with the seminal, complex groove monster "Do the Reggay"...Toots could do no wrong recording for Leslie Kong. With the consistent nucleus of musicians, the Beverley's All-Stars (Jackie Jackson, Winston Wright, Hux Brown, Rad Bryan, Paul Douglas and Winston Grennan) and the Maytals’ brilliant harmonizing..."Reggae is listed in the dictionary as:reggae [reg-ey] (noun) - a style of Jamaican popular music blending blues, calypso, and rock-'n'-roll, characterized by a strong syncopated rhythm and lyrics of social protest. Origin of reggae: Jamaican English, respelling of reggay (introduced in the song “Do the Reggay” (1968) by Frederick “Toots” Hibbert).Accompanied by Paul Douglas and Radcliffe "Dougie" Bryan in studio, Jackie Jackson explained the formation of the group in a radio interview for Kool 97 FM Jamaica:“We’re all original members of Toots and the Maytals band. First it was Toots and the Maytals, three guys: Toots, Raleigh, and Jerry. …And then they were signed to Island Records, Chris Blackwell. And we were their recording band. One day we were summoned to Chris’ house. And he says, “Alright gentleman, I think it’s time. Toots and the Maytals looks like it’s going to be a big thing”. By this time he had already signed Bob (Marley). So in his camp, Island Records, there was Toots and the Maytals / Bob Marley; we were talking about reggae is going international now. We kept on meeting and he (Blackwell) decided that the backing band that back all of the songs, the recording band, should be the Maytals band. So everything came under Toots and the Maytals. So we became Maytals also. And then we hit the road in 1975...we were the opening act for the Eagles, Linda Ronstadt, and Jackson Browne. We were the opening act for The Who for about two weeks.”
Paul Douglas, Jackie Jackson and Radcliffe ‘Dougie’ Bryan are recognized as founding members who, along with frontman Toots Hibbert, continue to perform in the group to the present day.
The first Toots and the Maytals album released and distributed by Chris Blackwell's Island Records was Funky Kingston. Music critic Lester Bangs described the album in Stereo Review as “perfection, the most exciting and diversified set of reggae tunes by a single artist yet released.” As Chris Blackwell says, “The Maytals were unlike anything else...sensational, raw and dynamic.” Blackwell had a strong commitment to Toots and the Maytals, saying “I’ve known Toots longer than anybody – much longer than Bob (Marley). Toots is one of the purest human beings I’ve met in my life, pure almost to a fault.”
On 1 October 1975, Toots and the Maytals were broadcast live on KMET-FM as they performed at The Roxy Theatre in Los Angeles. This broadcast was re-mastered and released as an album entitled “Sailin’ On” via Klondike Records.
President Donald Trump was quoted as appreciating the reggae music of Toots and the Maytals when he said, “I heard the guest band, Toots & The Maytals, practising out on the set [of Saturday Night Live; Trump co-hosted an episode in April 2004]. They sounded terrific, and I went out to listen to them for a while. My daughter Ivanka had told me how great they were, and she was right. The music relaxed me, and surprisingly, I was not nervous."
In 2015, Vogue magazine listed the song “54-46 Was My Number” by Toots and the Maytals as one of their “15 Roots Reggae Songs You Should Know”; and in an interview with Patricia Chin of VP Records, Vogue listed the group as part of an abbreviated list of early “reggae royalty” that recorded at Studio 17 in Kingston, which included Bob Marley, Peter Tosh, Gregory Isaacs, Dennis Brown, Burning Spear, Toots and the Maytals, The Heptones, and Bunny Wailer.
In 2017, Toots and the Maytals became the second reggae-based group to ever perform at the Coachella festival, after Chronixx in 2016.
Bob Marley and the Wailers
Douglas contributed to several of Bob Marley's albums, including Small Axe and Soul Shakedown Party which were released on the Beverley's label, and performed live with Bob Marley and the Wailers in the early 70s. The Wailers worked with reggae producer Leslie Kong, who used his studio musicians called Beverley's All-Stars (Jackie Jackson, Paul Douglas, Gladstone Anderson, Winston Wright, Rad Bryan, Hux Brown) to record the songs that would be released as an album entitled “The Best of The Wailers”. The tracks included “Soul Shakedown Party,” “Stop That Train,” “Caution,” “Go Tell It on the Mountain,” “Soon Come,” “Can’t You See,” “Soul Captives,” “Cheer Up,” “Back Out,” and “Do It Twice”.
Excerpt from an interview of Winston Grennan by Carter Van Pelt:

"...Chris Blackwell say, 'Yeah, Yeah, Yeah. I give them the money to make this record.' But at that time they was forming the band. Bob (Marley) came to me, figure it was me, Gladdy, Winston Wright, Jackie and Hux to be the band. That was the band that Bob did really want, but those guys didn't want to get involved. You know that the situation around Bob was pretty hectic...They turned it down. So right away, I couldn't get involved, because I didn't want to leave the guys. We was doing all the sessions. Robin Kenyatta came to Jamaica, we played for him. Garland Jeffreys, Paul Simon, Peter, Paul and Mary we play for them. The Rolling Stones came down we played for them. We were the guys... we could read music. If I leave, I feel it would be a bad vibes. When Hugh Malcolm joined the group, he couldn't keep up, so they got rid of him. A little later on a drummer came along name Paul Douglas, every so often we would bring him in, because I couldn't play on a session. Paul was about the only guy, that these other guys would trust to really come and play amongst them."
"The Perfect Beat" is a song on Talib Kweli's album Eardrum that samples "Do It Twice" by Bob Marley and the Wailers, a track featuring a drum beat played by Paul Douglas.
Lee "Scratch" Perry and Leslie Kong
Excerpt from the book “People Funny Boy - The Genius Of Lee "Scratch" Perry” by David Katz:

"On the instrumental front, Perry (Lee "Scratch" Perry) began more serious experimentation, exploring diverse influences and styles with a range of musicians. ...Perry also started working with Paul Douglas, an occasional Supersonics member and mainstay of Leslie Kong's productions."
Alton Ellis
Douglas is credited as the drummer on Alton Ellis' "Girl I've Got A Date". "Girl I've Got A Date" is recognized as one of the first songs to define the rocksteady genre.
Tommy McCook & The Supersonics
Douglas was a member of Tommy McCook & The Supersonics from 1968 to 1969, during which time the group released three LPs.
The Boris Gardiner Happening
Between 1970 and 1973 Douglas was the drummer for The Boris Gardiner Happening, completing five LPs with the group. The Boris Gardiner Happening recorded a version of "Ain't No Sunshine" in 1973, with Paul Douglas singing lead and Boris Gardiner playing bass guitar, for the album Is What's Happening.
Leroy Sibbles
Douglas worked as a bandleader for the Leroy Sibbles band.
John Holt, The Pioneers, Eddy Grant
Douglas toured the UK with the singer John Holt in 1974. This was the first major reggae tour to be accompanied by an orchestra, a 15-piece ensemble from England. The members of this tour included six veteran session musicians: Hux Brown (guitar), Jackie Jackson (bass), Paul Douglas (drums), Rad Bryan (guitar), Winston Wright (organ), and Gladstone Anderson (piano). That same year in England, Douglas also joined and played with The Pioneers, whose lineup featured Eddy Grant from The Equals.
Byron Lee and the Dragonaires
In 1975 Douglas joined Byron Lee and the Dragonaires as a session musician, and later became a band member, playing drums on the Sparrow Dragon Again LP.
Touring
Douglas has toured with many artists over the course of his career, including:
Toots and the Maytals
Jackson Browne
Linda Ronstadt
Eagles
The Who
The Rolling Stones
Dave Matthews Band
The J. Geils Band
Carlos Santana
The Roots
Sheryl Crow
James Blunt
On June 24, 2017, at the Glastonbury Festival, Toots and the Maytals were slotted for 17:30, with BBC Four scheduled to show highlights from their set. When they did not appear, it was suspected that they had missed their time slot, and BBC broadcaster Mark Radcliffe apologized on their behalf, stating, "If you were expecting Toots and the Maytals – and, frankly, we all were – it seems like they were on Jamaican time or something because they didn't make it to the site on time." The group credited with coining the term "reggae" in song was subsequently rescheduled by the Glastonbury Festival organizers, who gave them the midnight slot, with all other acts shifted by one hour.
On July 29, 2017 Toots and The Maytals headlined the 35th anniversary of the WOMAD UK festival.
Studio work
Douglas’ work as a session musician crosses several genres, and his talent on the drums has earned him recognition and respect from producers.
Excerpt from an article on "Clancy Eccles":"In the U.K. Trojan Records released Clancy (Eccles)’s productions...The finest musicians available were used, with the core of his regular session crew, The Dynamites, featuring the talents of Hux Brown (guitar), Clifton "Jackie" Jackson (bass), Gladstone Anderson (piano), Winston Wright (organ) and Paul Douglas (drums)."
In addition to recordings completed as a member of affiliated acts, Douglas’ studio work includes sessions with:
Trojan Records (Chalk Farm Studios London England)
Beverley's All-Stars
Federal Allstars
Harry J Allstars
Joe Gibbs Allstars
The Upsetters
Randy's
Channel One Studios
Derrick Harriot's Chariot
Treasure Isle Records (Duke Reid)
Prince Buster Allstars
Bonnie Raitt
The MG's
Van McCoy
Eddie Floyd
Herbie Mann
Cat Stevens (Dynamic Sounds Studio)
In an interview with Mikey Thompson on November 27, 2016 for Kool 97 FM, Jackie Jackson along with Paul Douglas and Radcliffe "Dougie" Bryan were asked about the many recordings they did together as the rhythm section for Treasure Isle Records, Beverley's Records, Channel One Studios and Federal Records. In addition to work mentioned with Sonia Pottinger, Duke Reid, Lynn Taitt, Delroy Wilson, and Lee "Scratch" Perry, they were interviewed about working on the following songs:
Bob Marley and the Wailers - “Nice Time”, “Hypocrites”, “Thank You Lord”, “Bus Dam Shut”, “Can’t You See” and “Small Axe”
Phyllis Dillon - “Don’t Stay Away” and “Perfidia”
The Melodians - “Little Nut Tree”, “Swing and Dine”, “Sweet Sensation”, and “Rivers of Babylon”
U-Roy & The Melodians - “Version Galore”
Bob Andy - “Fire Burning”
Ken Boothe - “Everything I Own”, “Say You”, and “Freedom Street”
The Gaylads - “It’s Hard To Confess” and “There’s A Fire”
Hopeton Lewis - “Take It Easy”
Winston Wright - “Stealing Vol. II” from “Greater Jamaica (Moon Walk-Reggay)”
Ernie Smith - “Duppy or Gunman”
Desmond Dekker - “Israelites”
Desmond Dekker and the Aces - “Intensified”
Roy Shirley - “Hold Them”
Errol Dunkley - “You’re Gonna Need Me”
The Congos - “Fisherman”
John Holt & The Paragons - “Only A Smile”, “Wear You To The Ball”, “Ali Baba”, “I’ve Got To Get Away”, and “You Mean The World To Me”
Toots and the Maytals - “Monkey Man”, “Pomps & Pride”, “Scare Him” and “Pressure Drop”
Notable televised performances
1990 VH1 New Visions World Beat hosted by Nile Rodgers
2001 Late Night with Conan O'Brien
2004 The Tonight Show with Jay Leno featuring Bonnie Raitt & Toots and the Maytals
2004 Saturday Night Live
2004 Last Call with Carson Daly
2004 Later... with Jools Holland
2010 Late Night with Jimmy Fallon
2018 The Tonight Show Starring Jimmy Fallon
Film
In 2011, Douglas was part of the documentary released by Director George Scott and Producer Nick De Grunwald called Reggae Got Soul: The Story of Toots and the Maytals which was featured on BBC Television. Described as “The untold story of one of the most influential artists ever to come out of Jamaica”, it features appearances by Marcia Griffiths, Jimmy Cliff, Bonnie Raitt, Eric Clapton, Keith Richards, Willie Nelson, Anthony DeCurtis, Ziggy Marley, Chris Blackwell, Paolo Nutini, Sly Dunbar, and Robbie Shakespeare.
Awards and recognition
1981 Grammy Award Nomination for Toots Live!
1989 Grammy Award Nomination for Best Reggae Album of the Year: Toots in Memphis
1991 Grammy Award Nomination for Best Reggae Album of the Year: Toots & the Maytals – An Hour Live
1997 Canadian Reggae Music Awards
1998 Canadian Reggae Music Awards
1998 Grammy Award Nomination for Best Reggae Album of the Year: Toots & the Maytals – Ska Father
2004 Grammy Award Winner for Best Reggae Album of the Year: Toots & the Maytals - True Love
2008 Grammy Award Nomination for Best Reggae Album of the Year: Toots & the Maytals – Light Your Light
2013 Grammy Award Nomination for Best Reggae Album of the Year: Toots & The Maytals – Reggae Got Soul: Unplugged on Strawberry Hill
2020 Grammy Award Winner for Best Reggae Album of the Year: Got To Be Tough
2021 Named one of Drummerworld's 'Top 500 Drummers'
Interviews
In an interview with Batterie Magazine for its September/October 2017 edition, Douglas was asked about his work as the main drummer and musical director for Toots and the Maytals, in addition to being called upon by artists and producers such as Bob Marley, Lee "Scratch" Perry, Eric Gale, Ken Boothe, The Congos and Delroy Wilson. In the interview, Douglas names Lloyd Knibb of The Skatalites as one of his heroes, and cites influences such as George Benson, Carlos Santana, John Coltrane, Sam Cooke, and David Sanborn. On September 10, 2021, Paul Douglas was featured on an episode of The 212 Podcast.
Museums and expositions
From April 2017 to August 2017, Douglas was part of the exhibition Jamaica Jamaica! at the Philharmonie de Paris in France. Douglas is featured on a poster displayed at the exhibition showing the early formation of Bob Marley & The Wailers on the Tuff Gong record label, and he is additionally represented as a member of Toots and the Maytals, included for their importance in the development of reggae music.
Discography
Paul Douglas is credited on over 250 works. In 2021 he released a full-length solo album titled "Jazz Mi Reggae".
Toots & The Maytals (1965) The Sensational Maytals
Toots & The Maytals & Prince Buster's All Stars (1965) Dog War / Little Flea (Prince Buster)
Toots & The Maytals (1966) Never Grow Old, (Studio One)
Toots & The Maytals (1966) Life Could Be A Dream
Toots & The Maytals (1968) Sweet and Dandy, (Beverley's Records)
Tommy McCook & The Supersonics (1968) Mary Poppins
King Stitt (1969) Herdsman Shuffle
King Stitt (1969) Lee Van Cleef
The Maytals (1969) Sweet And Dandy / Oh - Yea (7”) (Beverley's Records)
Toots & The Maytals (1969) Monkey Man
Tommy McCook & The Supersonics (1969) Red Ash
Tommy McCook & The Supersonics (1969) Tribute to Rameses
King Stitt & The Dynamites (1969) Vigorton 2
The Melodians (1970) Everybody Bawling
Ken Boothe (1970) Freedom Street
Clancy Eccles And The Dynamites (1970) Herbsman Reggae
Boris Gardiner (1970) Reggae Happening
Delano Stewart (1970) Stay A Little Bit Longer
The Melodians (1970) Sweet Sensation
Delano Stewart (1970) That's Life
The Gaylads (1970) There's A Fire
Bob and Marcia (1970) Young Gifted and Black
Toots & The Maytals (1970) Feel Alright (7") (Beverley's Records)
Bob Marley & The Wailers (1970) Baby Baby Come Home
Bob Marley & The Wailers (1970) Sophisticated Psychedelication
Bob Marley & The Wailers (1971) Soul Shakedown Party
Bob Marley & The Wailers (1971) The Best of the Wailers
Toots & The Maytals (1971) Bam-Bam / Pomps And Pride (7") (Dynamic Sounds)
Boris Gardiner (1971) Soulful Experience
Toots & The Maytals (1971) Greatest Hits (Beverley Records)
Toots & The Maytals (1972) The Harder They Come, (Island)
Toots & The Maytals (1972) Slatyam Stoot
Toots & The Maytals (1972) Daddy / It Was Written Down (Jaguar)
Toots & The Maytals (1972) Pomps And Pride (Jaguar)
Toots & The Maytals (1972) Country Road / Louie Louie (Jaguar)
Toots & The Maytals (1972) Louie Louie / Pressure Drop '72 (Trojan Records)
Boris Gardiner (1972) For All We Know
The Boris Gardiner Happening (1973) Is What's Happening
Toots & The Maytals (1973) Sit Right Down (Dragon)
Toots & The Maytals (1973) Country Road / Funky Kingston (Dragon)
Toots & The Maytals (1973) In The Dark / Sailing On (Jaguar)
Jimmy Cliff / Toots & The Maytals (1973) You Can Get It If You Really Want / Sweet & Dandy (Mango)
Toots & The Maytals (1973) Screwface Underground (Jaguar)
Toots & The Maytals (1973) Daddy (7”, Single) (Blue Mountain)
Toots & The Maytals (1973) Country Road (Island Records)
Toots & The Maytals (1973) From the Roots, (Trojan)
Toots & The Maytals (1973) Funky Kingston, (Trojan)
Toots & The Maytals (1973) The Original Golden Oldies Vol.3
Vic Taylor (1973) Reflections
Ernie Smith (1974) Duppy Gunman
Ken Boothe (1974) Everything I Own
Toots & The Maytals (1974) In the Dark, (Dragon Records)
Toots & The Maytals (1974) Who Knows Better (Hot Shot!)
Toots & The Maytals (1974) Time Tough (Jaguar)
Toots & The Maytals (1974) I Can't Believe / 5446 Instrumental (Starapple)
Toots & The Maytals (1974) Sailing On / If You Act This Way (7") (Dragon)
Toots & The Maytals (1974) You Don't Love Me (So Bad) (7", Single) (Jaguar)
Bob Andy (1974) Fire Burning
Fr. Richard HoLung, Harrison & Friends (1974) Letters Job To John
Toots & The Maytals (1975) Reggae's Got Soul (Jaguar)
Susan Cadogan (1975) Hurts So Good
Horace Forbes (1975) Impossible
Faith D'Aguilar (1975) Jamaica
Eric Gale (1975) Negril
Pluto Shervington (1975) Pluto
Byron Lee And The Dragonaires & Mighty Sparrow (1975) Sparrow Dragon Again
Johnny Nash (1975) Tears On My Pillow
Ken Boothe (1976) Blood Brothers
Pluto Shervington (1976) Dat
R.D. Livingstone (1976) Home From Home
Errol Brown (1976) Pleasure Dub
Pluto Shervington (1976) Ram Goat Liver
Toots & The Maytals (1976) Reggae Got Soul (Island)
King Tubby & Clancy Eccles All Stars (1976) Sound System International Dub LP
Funky Brown (1976) These Songs Will Last Forever
Bob Marley & The Wailers / Toots & The Maytals (1976) Trenchtown Rock / Reggae Got Soul (7”) (Island Records)
Toots & The Maytals (1976) Image Get A Lick (7") (Warika)
The Congos & Friends (1977) Fisherman
The Congos (1977) Heart Of The Congos
The Mexicano (1977) Move Up Starsky
Musicism (1977) Swing Me Gentle
Musicism (1977) Riding In Rhythm
The Maytals (1977) Toots Presents The Maytals
Toots & The Maytals (1978) Famine / Pass The Pipe (Island Records)
Toots & The Maytals (1978) Take It From Me (7") (Island Records)
Harold Butler (1978) Gold Connection
Ernie Smith (1978) I'll Sing For Jesus
Derrick Morgan (1978) Love City
Lovindeer (1978) Sexy Reggae
The Mexicano (1978) Goddess Of Love
Jackie Edwards (1978) Starlight
Dandy Livingstone (1978) The South African Experience
Toots & The Maytals (1979) Israel Children / Turn It Up (7") (Louv)
Multiple Artists (1979) Children Of Babylon (Original Motion Picture Soundtrack)
Nana McLean (1979) Dream Of Life
Danny Adams (1979) Summer In Montego Bay
Ojiji (1979) The Shadow
Toots & The Maytals (1979) Pressure Drop: Best of Toots & The Maytals (Trojan)
Toots & The Maytals (1979) Pass the Pipe, (Island)
Toots & The Maytals (1979) Just Like That, (Island)
Toots & The Maytals (1979) The Best Of Toots And The Maytals (Trojan Records)
Toots & The Maytals (1980) Just Like That / Gone With The Wind (Island Records)
Toots & The Maytals (1980) Toots & The Maytals E.P. (Island Records)
Toots & The Maytals (1980) Chatty, Chatty (Island Records)
Toots & The Maytals (1980) Live: Monkey Man / Hallelujah (7") (Island Records)
Toots & The Maytals (1980) Chatty, Chatty (7", Single) (Island Records)
Toots & The Maytals (1980) Toots “Live,” (Island)
Hearbert Lee (1980) Love Songs Vol. 1
Bobby Stringer (1980) Reggae Love Songs
Ossie Scott (1980) Many Moods Of Ossie Scott
Toots & The Maytals (1981) I Can See Clearly Now (Island Records)
Toots & The Maytals (1981) Beautiful Woman (Island Records)
Toots & The Maytals (1981) Papa D / You Never Know (Louv)
Toots & The Maytals (1981) Beautiful Woman / Show Me The Way (12") (Island Records)
Toots & The Maytals (1981) Papa Dee Mama Dear / Dilly Dally (7", Single) (Island Records)
Toots & The Maytals (1981) His Songs Live On (7") (Louv)
Toots & the Maytals (1981) Knock Out!
Beres Hammond (1981) Let's Make A Song
Multiple Artists (1981) The King Kong Compilation: The Historic Reggae Recordings
Toots & The Maytals (1982) I Know We Can Make It / Spend A Weekend (7", Single) (Island Records)
Dennis Brown / Toots & The Maytals (1982) Sitting & Watching / Bam Bam (7", Single) (Island Records)
Toots & The Maytals (1982) Knockout, (Island)
Live at Reggae Sunsplash: Best of the Festival (1982) Day One
Toots & the Maytals (1982) Hour Live
Pluto Shervington (1982) I Man Born Ya
Pioneers (1982) Reggae For Lovers
Pluto Shervington (1982) Your Honour
Lovindeer (1983) Man Shortage
Ochi Brown (1983) Danger Date
Boyo (1983) You're My World
George Pioneer & Jackie Pioneer (1983) Reggae For Lovers Volume 2
Toots & The Maytals (1984) Live At Reggae Sunsplash
Toots & The Maytals (1984) Reggae Greats (Island)
Owen Gray (1985) Watch This Sound
Lovindeer (1987) Caribbean Christmas Cheer
Lovindeer (1988) Octapussy
Toots & The Maytals (1988) Toots in Memphis, (Island)
Toots & The Maytals (1988) Do The Reggae 1966-1970 (Attack Records)
(1990) Clancy Eccles Presents His Reggae Revue
Toots & The Maytals (1990) An Hour Live
Bob Marley & The Wailers (1992) Songs Of Freedom CD-01
Toots & The Maytals (1992) Knock Out!
The Maytals (1993) Bla. Bla. Bla.
Multiple Artists (1993) Kingston Town: 18 Reggae Hits
Multiple Artists (1993) The Story of Jamaican Music: Tougher Than Tough
Toots & The Maytals (1995) The Collection (Spectrum)
Clancy Eccles (1996) Joshua's Rod of Correction
King Stitt (1996) Reggae Fire Beat
The Dynamites (1996) The Wild Reggae Bunch
Toots & The Maytals (1996) Time Tough: The Anthology (Island)
Toots & The Maytals (1996) Monkey Man (Compilation) (House Of Reggae)
Toots & The Maytals (1997) Recoup, (Alia Son)
Multiple Artists (1997) Fire On The Mountain: Reggae Celebrates The Grateful Dead Vol. 1 & 2
Clancy Eccles & The Dynamites (1997) Nyah Reggae Rock
Bob Marley & The Wailers (1998) The Complete Wailers CD-03
From Chapter To Version (1998) 20 Reggae DJ Classics
Multiple Artists (1998) From GG's Reggae Hit Stable Volume 1 & 2
Derrick Harriott (1998) Riding The Roots Chariot
Toots & The Maytals (1998) Live in London, (Trojan)
Toots & The Maytals (1998) The Very Best of Toots & The Maytals, (Music Club)
Toots & The Maytals (1998) Ska Father, (Artists Only)
Toots & The Maytals (1998) Jamaican Monkey Man (Recall 2 cd)
The Maytals / Toots & the Maytals (1999) Monkey Man & From The Roots
Toots & the Maytals (1999) That's My Number
The Maytals (1999) The Originals (Charly)
Morgan Heritage & Denroy Morgan / Toots & The Maytals (1999) Harvest Is Plenty / Lost Your Character (7") (HMG Records)
Toots & The Maytals (1999) Bam Bam / 54 - 46 (7") (Marvellous Records)
Toots & The Maytals (1999) Prayer of David (7", Single) (Treasure Chest)
Toots & The Maytals (2000) Live At Red Rocks (PRG Records, Allah Son Records)
Toots & The Maytals (2000) The Very Best Of Toots & The Maytals (Island Records)
Toots & The Maytals (2000) 20 Massive Hits (Compilation) (Metro)
The Maytals (2001) Fever
The Maytals (2001) Dressed to Kill
Toots & The Maytals (2001) 54-46 Was My Number - Anthology 1964 To 2000 (Trojan Records)
Toots & The Maytals (2001) Best Of Toots & The Maytals / Broadway Jungle (Trojan Records)
Toots & The Maytals (2001) The Best Of Toots & The Maytals (Island Records)
Clancy Eccles (2001) Reggae Revue at the VIP Club, Vol. 3
Clancy Eccles (2001) Reggae Revue at the Ward Theatre 1969-1970
(2001) The Reggae Box
Toots & The Maytals / L.M.S.* (2002) Humble / Respect All Woman (7") (71 Records)
Toots & The Maytals (2002) Sweet And Dandy: The Best of Toots and the Maytals (Trojan Records)
Toots & the Maytals (2003) World Is Turning
Toots & The Maytals (2003) 54 - 46 / Pressure Drop (7”) (Beverley's Records)
Toots & The Maytals (2003) Funky Kingston / In The Dark (Compilation) (Island Records)
Toots & The Maytals (2003) Jungle (Single) (XIII BIS Records)
Paul Douglas (2004) Eyes Down
Toots & the Maytals (2004) True Love
Toots & The Maytals (2004) This Is Crucial Reggae (Compilation) (Sanctuary Records)
Toots & The Maytals Featuring Shaggy And Rahzel (2004) Bam Bam (V2)
Toots & The Maytals (2005) Pressure Drop: The Definitive Collection (Trojan Records)
Toots & The Maytals (2005) Roots Reggae - The Classic Jamaican Albums (Trojan Records)
Toots & The Maytals (2005) Rhythm Kings (Compilation) (Xtra)
Toots & The Maytals (2005) Deep In My Soul / Daddy (Beverley's Records)
Toots & The Maytals (2005) Border Line (Single) (XIII Bis Records)
Toots & The Maytals (2006) The Essential Collection (Compilation) (Sanctuary Records)
The Congos & Friends (2006) Fisherman Style
Toots & The Maytals (2006) I’ve Got A Woman (A Tribute To Ray Charles) (7") (D&F Productions)
Toots & The Maytals (2006) Acoustically Live at Music Millennium (CD, EP) (Junketboy)
Toots & the Maytals (2007) Light Your Light
Ben Harper & The Skatalites / Toots & The Maytals (2007) Be My Guest / I Want You To Know (Imperial)
Toots & The Maytals (2008) Sweet And Dandy: The Best of Toots & The Maytals (Compilation) (Trojan Records)
Glen Ricketts (2008) Rise Up
Tommy McCook & The Supersonics (2009) Pleasure Dub
The Dynamites / King Tubby (2009) Sound System International
Eugene Grey (2010) Diversity
Toots & the Maytals (2010) Flip and Twist
Toots And The Maytals / Roland Alphonso (2010) Hold On / On The Move (7") (Pyramid)
Toots & The Maytals (2010) Pee Pee Cluck Cluck (7”) (Pyramid)
Toots & The Maytals / Don Drummond (2010) Alidina / Dragon Weapon (7") (Pyramid)
Toots & The Maytals (2011) Pressure Drop: The Golden Tracks (Cleopatra)
Toots & The Maytals (2012) Pressure Drop: The Best of Toots and The Maytals (Compilation) (Universal UMC, Island Records)
Toots & The Maytals (2012) Live! (Island Records)
Toots & The Maytals (2012) 54 - 46 (Beverley's Records)
Delroy Wilson / Toots & The Maytals (2012) Gave You My Love / One Eye Enos (7”) (Beverley's Records)
Toots & The Maytals (2012) Unplugged On Strawberry Hill
Toots & The Maytals (2014) Sunny (7", Single) (Notable Records, Measurable Music)
Toots & The Maytals (2020) Got to Be Tough (Trojan Jamaica/BMG)
Paul Douglas (2021) Jazz Mi Reggae
Priscilla Rollins (197X) I Love You
Tito Simon (197X) The Heat Is On
Demo Cates (197X) Precious Love
Milton Douglas (198X) Can't Trust No One
George Allison (198X) Exclusive
Marie Bowie & K.C. White & Hortense Ellis (198X) More Reggae Love Songs
Bobby Davis (198X) Satisfaction
Toots & The Maytals - Reggae Live Sessions Volume 2 (Jahmin' Records)
Danny Ray - All The Best
Ossie Scott - The Great Pretender
Toots & The Maytals - Peeping Tom (7", Single) (Beverley's Records)
Toots & The Maytals - Sweet & Dandy (Single) (Beverley's Records)
Toots & The Maytals - Pain In My Belly / Treating Me Bad (7”) (Prince Buster)
Toots & The Maytals / Byron Lee - She Never Let Me Down / River To The Bank (Federal)
Toots & The Maytals / Desmond Dekker - Pressure Drop / Mother's Young Gal (7", Single) (Beverley's Records)
Toots & The Maytals - Never Go Down (7") (Warika)
Toots & The Maytals - Israel Children (7", Single) (Righteous)
Toots & The Maytals / Ansel Collins - Monkey Man / High Voltage (7") (Beverley's Records)
Tony Tribe / Eric Donaldson / The Upsetters / Toots & The Maytals - Classic Tracks (CD, EP) (Classic Tracks - CDEP4)
Toots & The Maytals - Scare Him (7") (Gorgon Records)
Toots & The Maytals - Careless Ethiopians (7") (Nyahman)
Toots & The Maytals - Do Good All The Time (7") (Nyahman)
Toots & The Maytals - Daddy (7") (Jaguar)
Desmond Dekker And The Aces, Toots And The Maytals - You Can Get It If You Really Want / Pressure Drop (7") (Beverley's Records)
Toots & The Maytals - Monkey Man / It Was Written (7") (D&F Records)
Toots & The Maytals - Prayer of David (7", Single) (Charm)
Toots & The Maytals - Happy Days (7", Single) (Righteous)
Toots & The Maytals - Happy Christmas / If You Act This Way (7", Single) (Jaguar)
Toots & The Maytals - One Family (7", Single) (Righteous)
Toots & The Maytals - Pressure Drop (7”) (Island Records)
Toots & The Maytals - Have A Talk (7") (Black Noiz Music)
Toots & The Maytals / The Dynamic Sisters - We Are No Strangers (7") (Thunder Bolt)
Toots & The Maytals - Fool For You / Version (7", Single) (Allah Son Records)
Toots & The Maytals - More And More / Version (7", Single) (Allah Son Records)
Toots & The Maytals - Hard Road / Version (7", Single) (Allah Son Records)
Bob Marley / Toots & The Maytals - Classic Tracks (CD, EP) (Classic Tracks - CDEP 3C)
Toots & The Maytals - 54-46 Was My Number (Slow Cut) (7") (Beverley's Records)
Desmond Dekker & The Aces / Toots & The Maytals - You Can Get It If You Really Want / Sweet & Dandy (7") (Beverley's Records)
Instruments and sponsorships
Paul Douglas is an official artist of Sabian, one of the "big four" manufacturers of cymbals.
Favourite Sabian Cymbal: 16'' O Zone Evolution Crash, AAH 14'' Stage Hi Hats, HHX, 18'' HHX China
References
Jamaican drummers
1950 births
Living people
People from Saint Ann Parish
Toots and the Maytals members |
1582286 | https://en.wikipedia.org/wiki/Kubuntu | Kubuntu | Kubuntu is an official flavor of the Ubuntu operating system that uses the KDE Plasma Desktop instead of the GNOME desktop environment. As part of the Ubuntu project, Kubuntu uses the same underlying systems. Every package in Kubuntu shares the same repositories as Ubuntu, and it is released regularly on the same schedule as Ubuntu.
Kubuntu was sponsored by Canonical Ltd. until 2012 and then directly by Blue Systems. Now, employees of Blue Systems contribute upstream, to KDE and Debian, and Kubuntu development is led by community contributors. During the changeover, Kubuntu retained the use of Ubuntu project servers and existing developers.
Name
"Kubuntu" is a registered trademark held by Canonical. It is derived from the name "Ubuntu", prefixing a K to represent the KDE platform that Kubuntu is built upon (following a widespread naming convention of prefixing K to the name of any software released for use on KDE platforms), as well as the KDE community.
Since ubuntu is a Bantu term translating roughly to "humanity", and since Bantu grammar forms noun classes with prefixes (the prefix ku- means "toward" in Bemba), kubuntu is also a meaningful Bemba word or phrase, translating roughly to "toward humanity".
Reportedly, the same word, by coincidence, also takes the meaning of "free" (in the sense of "without payment") in Kirundi.
Comparison with Ubuntu
Kubuntu typically differs from Ubuntu only in its graphical applications and tools.
History
Kubuntu was born on 10 December 2004 at the Ubuntu Mataro Conference in Mataró, Spain. Canonical employee Andreas Mueller, from Gnoppix, had the idea to make a KDE variant of Ubuntu and got approval from Mark Shuttleworth to start the first Ubuntu variant, called Kubuntu. On the same evening, Chris Halls from the OpenOffice project and Jonathan Riddell from KDE started volunteering on the newborn project.
Mark Shuttleworth, in an interview shortly after Ubuntu was started (Ubuntu now uses GNOME, having previously used the Unity desktop environment and, before that, GNOME), commented on the project.
The Kubuntu team released its first edition, Hoary Hedgehog, in 2005.
K Desktop Environment 3 was used as the default interface until Kubuntu 8.04. That version included the KDE Plasma Desktop as an unsupported option, which became the default in the subsequent release, 8.10.
In 2012, Canonical employee Jonathan Riddell announced the end of Canonical's Kubuntu sponsorship, and Blue Systems was subsequently announced on the Kubuntu website as the new sponsor. As a result, both developers employed by Canonical to work on Kubuntu – Jonathan Riddell and Aurélien Gâteau – transferred to Blue Systems.
Releases
Kubuntu follows the same naming/versioning system as Ubuntu, with each release having a code name and a version number (based on the year and month of release). Canonical provides support and security updates for Kubuntu components that are shared with Ubuntu for 18 months – five years in case of long-term support (LTS) versions – after release. Both a desktop version and an alternative (installation) version (for the x86 and AMD64 platforms) are available. Kubuntu CDs were also available through the ShipIt service (which was discontinued as of April 2011).
System requirements
The desktop version of Kubuntu currently supports the AMD64 architecture; Intel x86 support will be discontinued after the 18.04 release, and existing 32-bit users will be supported until 2023.
Deployments
Kubuntu rollouts include the world's largest Linux desktop deployment, which comprises more than 500,000 desktops in Brazil (in 42,000 schools across 4,000 cities).
Munich's 14,800 Linux workstations were switched to Kubuntu 12.04 LTS and KDE 4.11.
The Taipei City Government decided to replace Windows with a Kubuntu distribution on 10,000 PCs for schools.
The French Parliament announced in 2006 that they would switch over 1,000 workstations to Kubuntu by June 2007.
A Kubuntu-based distribution created by La Laguna University is used on more than 3,000 computers spread across several computer labs, laboratories and libraries, among other internal projects, in the Canary Islands. Since October 2007, Kubuntu has been used in all 1,100 state schools in the Canary Islands.
The second point release, Kubuntu 20.04.2 LTS (Focal Fossa), issued in February 2021, contains all the bug fixes added to 20.04 since its first release; users can run the normal update procedure to get these bug fixes.
See also
Lubuntu
Xubuntu
KDE neon
References
External links
2005 software
IA-32 Linux distributions
KDE
Operating system distributions bootable from read-only media
Ubuntu derivatives
X86-64 Linux distributions
Linux distributions |
18339484 | https://en.wikipedia.org/wiki/Viktor%20von%20Hacker | Viktor von Hacker | Viktor von Hacker (October 21, 1852 – May 20, 1933) was an Austrian surgeon born in Vienna.
In 1878 he received his medical doctorate at the University of Vienna, and after graduation remained in Vienna as an assistant to Theodor Billroth (1829–1894). Later he was a professor of surgery at the Universities of Innsbruck (1894–1903) and Graz (1904–1924).
Hacker is remembered for his work involving esophagoscopy, esophageal surgery and gastrointestinal surgery. With German-American surgeon Carl Beck (1856–1911), he is credited with developing a surgical technique for balanic hypospadias.
In 1885, Hacker assisted Billroth when the latter performed the first resection of the pylorus followed by posterior gastrojejunostomy. Afterwards, Hacker documented a detailed account of the operation. With surgeon Georg Lotheissen (1868–1941), he published two treatises concerning the esophagus, Angeborene Missbildungen, Verletzungen und Erkrankungen der Speiseröhre (Congenital abnormalities, injuries and diseases of the esophagus) and Chirurgie der Speiseröhre (Surgery of the esophagus).
References
1852 births
1933 deaths
19th-century Austrian people
20th-century Austrian people
Austrian surgeons
University of Graz faculty
University of Innsbruck faculty
Austrian untitled nobility
Physicians from Vienna |
2667629 | https://en.wikipedia.org/wiki/DNALinux | DNALinux | DNALinux is a Linux distribution that includes bioinformatics software. Distributed as a live CD, it is based on Slax and includes programs such as BLAST and EMBOSS.
DNALinux is made by Genes Digitales and Quilmes National University, Argentina.
See also
List of Linux distributions
External links
DNALinux homepage
SLAX-based distributions
Linux distributions |
36092794 | https://en.wikipedia.org/wiki/InterWorking%20Labs | InterWorking Labs | InterWorking Labs is a privately owned company in Scotts Valley, California, in the business of optimizing the performance of applications and embedded systems. Founded in 1993 by Chris Wellens and Marshall Rose, it was the first company formed specifically to test network protocol compliance. Its products and tests allow computer devices from many different companies to communicate over networks.
Products
InterWorking Labs' products diagnose, replicate, and remediate application performance problems.
The company's first product, SilverCreek, tests a Simple Network Management Protocol (SNMP) agent implementation (switch, server, phone) with hundreds of thousands of individual tests, including conformance, stress, robustness, and negative testing. The tests detect and diagnose implementation errors in private and standard MIBs as well as SNMPv1, v2c, and v3 stacks and implementations.
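To give a concrete sense of the kind of exchange such a test suite automates, the sketch below issues a single SNMP GET for an agent's sysDescr object and checks the response. It is not SilverCreek code; it uses the open-source pysnmp library, and the agent address and community string are placeholder assumptions.

```python
# Minimal illustration of querying an SNMP agent, roughly the kind of exchange
# a conformance test automates. Not SilverCreek code; uses the open-source
# pysnmp library. Agent address and community string are placeholders.
from pysnmp.hlapi import (
    getCmd, SnmpEngine, CommunityData, UdpTransportTarget,
    ContextData, ObjectType, ObjectIdentity,
)

error_indication, error_status, error_index, var_binds = next(
    getCmd(
        SnmpEngine(),
        CommunityData('public', mpModel=1),        # SNMPv2c community (assumed)
        UdpTransportTarget(('192.0.2.10', 161)),   # agent under test (placeholder)
        ContextData(),
        ObjectType(ObjectIdentity('SNMPv2-MIB', 'sysDescr', 0)),
    )
)

if error_indication:                               # e.g. timeout, agent unreachable
    print('No response:', error_indication)
elif error_status:                                 # agent answered with an SNMP error
    print('Agent returned error:', error_status.prettyPrint())
else:
    for name, value in var_binds:
        print(f'{name.prettyPrint()} = {value.prettyPrint()}')
```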
The Maxwell family of products emulates real-world networks, with problems such as delays, rerouting, corruption, impaired packets or protocols, Domain Name System delays, or limited bandwidth. New impairments are added to Maxwell using C, C++, or Python extensions. It is controlled via graphical, command-line, and script interfaces. It supports a set of protocol impairments for TCP/IP, DHCP, ICMP, TLS, and SIP testing.
The Maxwell products are named after Maxwell's Demon, a thought experiment by 19th century physicist James Clerk Maxwell. Maxwell’s Demon demonstrated that the Second Law of Thermodynamics—which says that entropy increases—is true only on average. In his thought experiment, Maxwell imagined a double chamber with a uniform mixture of hot and cold gas molecules. A demon (some intelligent being) sits between the two chambers operating a trap door. Every time a cold (low-energy) molecule comes by, the demon opens the door and lets the molecule through to the other side. Eventually, the cold gas molecules are all on one side of the chamber and the hot ones all on the other. Although the molecules continue to move randomly, the introduction of intelligence into the system reduces entropy instead of increasing it.
The Maxwell product sits in the middle of a network conversation and opens or closes a figurative "door" on the basis of specific criteria. Maxwell intelligently modifies the packet based on pre-selected criteria and sends the packet on its way.
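As a purely illustrative analogy, and not the Maxwell product's actual interface, the "demon at the trap door" idea can be sketched as a function that decides, packet by packet, whether to pass, delay, or drop traffic according to pre-selected criteria. The packet representation, drop rate, and protocol-based delay rule below are all invented for this sketch.

```python
import random
from typing import Optional

def impair(packet: dict, drop_rate: float = 0.05, delay_ms: int = 200) -> Optional[dict]:
    """Toy 'trap door': pass, delay, or drop a packet based on simple criteria.

    `packet` is a plain dict with 'proto' and 'payload' keys, an assumption made
    only for this sketch; a real emulator operates in-line on raw frames.
    """
    if random.random() < drop_rate:        # criterion 1: random loss
        return None                        # drop the packet entirely
    if packet.get("proto") == "DNS":       # criterion 2: protocol-specific delay
        packet["delay_ms"] = delay_ms      # tag it for delayed forwarding
    return packet                          # otherwise let it through unchanged

# Run a small simulated stream through the gate.
stream = [{"proto": proto, "payload": b"x"} for proto in ("TCP", "DNS", "TCP", "ICMP")]
forwarded = [out for out in (impair(pkt) for pkt in stream) if out is not None]
print(f"{len(forwarded)} of {len(stream)} packets forwarded")
```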
InterWorking Labs is advised by members of the Internet Engineering Task Force (IETF). Maxwell's network emulations reproduce real conditions in the lab before products are deployed.
History
InterWorking Labs was co-founded in 1993 by Chris Wellens and Marshall Rose. The two met in 1992 at the Interop Company, in Mountain View, California, where Wellens was Director of Technology and Rose was on the Interop Program Committee and also Working Group Chair of the IETF for the Simple Network Management Protocol (SNMP).
Wellens—who was overseeing the trade fair's 5000-node InteropNet as well as an array of interoperability demonstrations for network protocols— noticed that engineers from different companies often interpreted network protocols differently and ended up struggling to make their products send and receive data to one another—sometimes just minutes before showcase demonstrations. The engineers asked Interop to create an interoperability lab where these network communication issues could be worked out in a private and less stressful environment. Interop's founder and CEO Dan Lynch concluded that the industry needed an interoperability testing lab.
Lynch asked Wellens to write a business plan for a permanent Interoperability Lab. Rose proposed that the Lab's first task should be to create and try out a set of tests of the SNMP protocol, since SNMP was an area he was familiar with and one where engineers were having particular problems at Interop. Wellens volunteered to organize a group of developers for an interoperability test summit if Rose would create a set of tests and assist in developing the initial plan.
At about the same time, however, Ziff-Davis acquired Interop and chose not to proceed with the Interoperability Lab. Wellens and Lynch agreed that she should pursue the idea on her own. In 1993, Wellens established InterWorking Labs, and, in January 1994, organized the first SNMP interoperability test summit using 50 SNMP tests written by Rose. During that first test summit, a large number of implementations failed Rose's tests. The results persuaded several major corporations that interoperability testing would be a critical component of functioning networks. Participants at the second (1994) SNMP Test Summit included Cabletron Systems, Cisco Systems, Eicon Technology, Empirical Tools and Technologies, Epilogue Technology Corp., Fujitsu OSSI, IBM and IBM Research, Network General Corp., PEER Networks, SNMP Research, SynOptics Communications, TGV, Inc., and Wellfleet Communications.
In 2000, Wellens asked Karl Auerbach to join the InterWorking Labs Board of Directors. In 2002, Wellens hired Auerbach, who was part of Cisco's Advanced Internet Architecture group, to serve as chief technology officer at InterWorking Labs. An advisory board consisted of several members of the IETF who have expertise in networking protocols (Andy Bierman, Jeff Case, Dave Perkins, Randy Presuhn, and Steve Waldbusser).
Markets
Wireless networks used by hospitals, police, and military have turned computer networks into essential lifeline utilities. Computer networks that keep economies, transportation, energy, and food supplies flowing commonly belong to the critical infrastructure of a region. As such, the performance of networks under adverse conditions is a significant concern for militaries, industry, and local and regional governments.
According to Wellens, UCITA has significantly protected software publishers from liability for the failure of their products. But software publishers may become increasingly liable for the consequences of network failures—especially where comprehensive networking testing existed and was not used. Online retailers, for example, can demonstrate multimillion-dollar losses due to network problems such as security flaws or a network collapse after a denial of service attack.
References
External links
Video links:
Slow network speed on cruise ships
Introduction to SilverCreek
Implementing the IPv6 MIBs
Computer security companies
Software testing
Companies based in Santa Cruz County, California
Companies established in 1993 |
10499839 | https://en.wikipedia.org/wiki/Linux%20Technology%20Center | Linux Technology Center | The IBM Linux Technology Center (LTC) is an organization focused on development for the Linux kernel and related open-source software projects. In 1999, IBM created the LTC to combine its software developers interested in Linux and other open-source software into a single organization. Much of the LTC's early effort was focused on making "all of its server platforms Linux friendly." The LTC collaborated with the Linux community to make Linux run optimally on processor architectures such as x86, mainframe, PowerPC, and Power ISA. In recent years, the focus of the LTC has expanded to include several other open source initiatives.
The LTC began with about 185 IBM employees in 1999, a number that grew steadily to about 600 by 2006, 300 of whom worked full-time on Linux.
In December 2000, IBM claimed to have invested approximately one billion US dollars in Linux by the year 2000, and to have about 1,500 developers working on the operating system at the time. It announced that it would invest a similar amount in 2001 and also build the largest Linux-based supercomputer for Royal Dutch/Shell Oil. While most of the money went into Linux development, some of it went into other operating systems, mainly AIX. The following year, senior vice president Bill Zeitler claimed that IBM had recouped most of this spending in the first year through the sale of software and systems.
Details
Developers in the LTC contribute to various open-source projects such as:
Kernel-based Virtual Machine (KVM) on x86 and Power systems, including Kimchi
Apache Hadoop
OpenStack
OpenPOWER Foundation
GNU toolchain
Open source standards
LTC is a worldwide team with main locations in Australia, Brazil, China, Germany, India, Israel, and the United States.
References
External links
Joe Barr, 2001: Inside IBM's Linux Technology Center
IBM
Linux organizations |
142983 | https://en.wikipedia.org/wiki/IBM%20Db2%20Family | IBM Db2 Family | Db2 is a family of data management products, including database servers, developed by IBM. They initially supported the relational model, but were extended to support object–relational features and non-relational structures like JSON and XML. The brand name was originally styled as DB/2, then DB2 until 2017, when it was finally changed to its present form.
History
Historically, and unlike other database vendors, IBM produced a platform-specific Db2 product for each of its major operating systems. However, in the 1990s IBM changed tack and produced a common Db2 product, designed with a mostly shared code base for L-U-W (Linux-Unix-Windows); DB2 for System z and DB2 for IBM i are different products. As a result, they use different drivers.
DB2 traces its roots back to the beginning of the 1970s when Edgar F. Codd, a researcher working for IBM, described the theory of relational databases, and in June 1970 published the model for data manipulation.
In 1974, the IBM San Jose Research Center developed a relational DBMS, System R, to implement Codd's concepts. A key development of the System R project was the Structured Query Language (SQL). To apply the relational model, Codd needed a relational-database language, which he named DSL/Alpha. At the time, IBM didn't believe in the potential of Codd's ideas, leaving the implementation to a group of programmers not under Codd's supervision. This led to an inexact interpretation of Codd's relational model that matched only part of the theory's prescriptions; the result was the Structured English QUEry Language, or SEQUEL.
When IBM released its first relational-database product, it wanted to have a commercial-quality sublanguage as well, so it overhauled SEQUEL and renamed the revised language Structured Query Language (SQL), both to differentiate it from SEQUEL and because the acronym "SEQUEL" was a trademark of the UK-based Hawker Siddeley aircraft company.
IBM bought Metaphor Computer Systems to utilize its GUI and encapsulating SQL platform, which had already been in use since the mid-1980s.
In parallel with the development of SQL, IBM also developed Query by Example (QBE), the first graphical query language.
IBM's first commercial relational-database product, SQL/DS, was released for the DOS/VSE and VM/CMS operating systems in 1981. Earlier, in 1976, IBM had released Query by Example for the VM platform, where the table-oriented front end produced a linear-syntax language that drove transactions to its relational database. Later, the QMF feature of DB2 produced real SQL and brought the same "QBE" look and feel to DB2. The inspiration for the mainframe version of DB2's architecture came in part from IBM IMS, a hierarchical database, and its dedicated database-manipulation language, IBM DL/I.
The name DB2 (IBM Database 2) was first given to the database management system (DBMS) in 1983, when IBM released DB2 on its MVS mainframe platform.
For some years DB2, as a full-function DBMS, was exclusively available on IBM mainframes. Later, IBM brought DB2 to other platforms, including OS/2, UNIX, and MS Windows servers, and then Linux (including Linux on IBM Z) and PDAs. This process occurred through the 1990s. An implementation of DB2 is also available for z/VSE and z/VM. An earlier version of the code that would become DB2 LUW (Linux, Unix, Windows) was part of an Extended Edition component of OS/2 called Database Manager.
IBM extended the functionality of Database Manager a number of times, including the addition of distributed database functionality by means of Distributed Relational Database Architecture (DRDA) that allowed shared access to a database in a remote location on a LAN. (Note that DRDA is based on objects and protocols defined by Distributed Data Management Architecture (DDM).)
Eventually, IBM took the decision to completely rewrite the software. The new version of Database Manager was called DB2/2 and DB2/6000 respectively. Other versions of DB2, with different code bases, followed the same '/' naming convention and became DB2/400 (for the AS/400), DB2/VSE (for the DOS/VSE environment) and DB2/VM (for the VM operating system). IBM lawyers stopped this handy naming convention from being used, and decided that all products needed to be called "product FOR platform" (for example, DB2 for OS/390). The next iteration of the mainframe and the server-based products were named DB2 Universal Database (or DB2 UDB).
In the mid-1990s, IBM released a clustered DB2 implementation called DB2 Parallel Edition, which initially ran on AIX. This edition allowed scalability by providing a shared-nothing architecture, in which a single large database is partitioned across multiple DB2 servers that communicate over a high-speed interconnect. This DB2 edition was eventually ported to all Linux, UNIX, and Windows (LUW) platforms, and was renamed to DB2 Extended Enterprise Edition (EEE). IBM now refers to this product as the Database Partitioning Feature (DPF) and bundles it with their flagship DB2 Enterprise product.
When Informix Corporation acquired Illustra and made their database engine an object-SQL DBMS by introducing their Universal Server, both Oracle Corporation and IBM followed suit by changing their database engines to be capable of object–relational extensions. In 2001, IBM bought Informix Software, and in the following years incorporated Informix technology into the DB2 product suite. DB2 can technically be considered to be an object–SQL DBMS.
In mid-2006, IBM announced "Viper", the codename for DB2 9 on both distributed platforms and z/OS. DB2 9 for z/OS was announced in early 2007. IBM claimed that the new DB2 was the first relational database to store XML "natively". Other enhancements include OLTP-related improvements for distributed platforms, business intelligence/data warehousing-related improvements for z/OS, more self-tuning and self-managing features, additional 64-bit exploitation (especially for virtual storage on z/OS), stored procedure performance enhancements for z/OS, and continued convergence of the SQL vocabularies between z/OS and distributed platforms.
In October 2007, IBM announced "Viper 2", the codename for DB2 9.5 on the distributed platforms. There were three key themes for the release: simplified management, business-critical reliability and agile XML development.
In June 2009, IBM announced "Cobra", the codename for DB2 9.7 for LUW. DB2 9.7 added data compression for database indexes, temporary tables, and large objects. DB2 9.7 also supported native XML data in hash partitioning (database partitioning), range partitioning (table partitioning), and multi-dimensional clustering. These native XML features allow users to work with XML directly in data warehouse environments. DB2 9.7 also added several features that make it easier for Oracle Database users to work with DB2. These include support for the most commonly used SQL syntax, PL/SQL syntax, scripting syntax, and data types from Oracle Database. DB2 9.7 also enhanced its concurrency model to exhibit behavior that is familiar to users of Oracle Database and Microsoft SQL Server.
In October 2009, IBM introduced its second major release of the year when it announced DB2 pureScale. DB2 pureScale is a cluster database for non-mainframe platforms, suitable for Online transaction processing (OLTP) workloads. IBM based the design of DB2 pureScale on the Parallel Sysplex implementation of DB2 data sharing on the mainframe. DB2 pureScale provides a fault-tolerant architecture and shared-disk storage. A DB2 pureScale system can grow to 128 database servers, and provides continuous availability and automatic load balancing.
In 2009, it was announced that DB2 could serve as a storage engine for MySQL. This allows users on the IBM i platform and users on other platforms to access DB2 data through the MySQL interface. On IBM i and its predecessor OS/400, DB2 is tightly integrated into the operating system and comes as part of the operating system. It provides journaling, triggers and other features.
In early 2012, IBM announced the next version of DB2, DB2 10.1 (code name Galileo), for Linux, UNIX, and Windows. DB2 10.1 contained a number of new data management capabilities, including row and column access control, which enables fine-grained control of the database, and multi-temperature data management, which moves data to cost-effective storage based on how "hot" or "cold" the data is (that is, how frequently it is accessed). IBM also introduced an adaptive compression capability in DB2 10.1, a new approach to compressing data tables.
In June 2013, IBM released DB2 10.5 (code name "Kepler").
On 12 April 2016, IBM announced DB2 LUW 11.1, and in June 2016, it was released.
In mid-2017, IBM re-branded its DB2 and dashDB product offerings and amended their names to "Db2".
On June 27, 2019, IBM released Db2 11.5, the AI Database. It added AI functionality to improve query performance as well as capabilities to facilitate AI application development.
Db2 (LUW) Family
Db2 embraces a "hybrid data" strategy to unify and simplify the entire ecosystem of data management, integration and analytical engines for both on-premises and cloud environments, with the goal of gaining value from typically siloed data sources. The strategy allows accessing, sharing and analyzing all types of data - structured, semi-structured or unstructured - wherever it is stored or deployed.
Db2 Database
Db2 Database is a relational database that delivers advanced data management and analytics capabilities for transactional workloads. This operational database is designed to deliver high performance, actionable insights, data availability and reliability, and it is supported across Linux, Unix and Windows operating systems.
The Db2 database software includes advanced features such as in-memory technology (IBM BLU Acceleration), advanced management and development tools, storage optimization, workload management, actionable compression and continuous data availability (IBM pureScale).
Db2 Warehouse
"Data warehousing" was first mentioned in a 1988 IBM Systems Journal article entitled, "An Architecture for Business Information Systems." This article illustrated the first use-case for data warehousing in a business setting as well as the results of its application.
Traditional transaction processing databases were not able to provide the insight business leaders needed to make data-informed decisions. A new approach was needed to aggregate and analyze data from multiple transactional sources to deliver new insights, uncover patterns and find hidden relationships among the data. Db2 Warehouse can normalize data from multiple sources and perform sophisticated analytic and statistical modeling, providing businesses with these capabilities at speed and scale.
Increases in computational power resulted in an explosion of data inside businesses generally and data warehouses specifically. Warehouses grew from being measured in GBs to TBs and PBs. As both the volume and variety of data grew, Db2 Warehouse adapted as well. Initially purposed for star and snowflake schemas, Db2 Warehouse now includes support for the following data types and analytical models, among others:
Relational data
Non-Relational data
XML data
Geospatial data
RStudio
Apache Spark
Embedded Spark Analytics engine
Multi-Parallel Processing
In-memory analytical processing
Predictive Modeling algorithms
Db2 Warehouse uses Docker containers to run in multiple environments: on-premises, private cloud and a variety of public clouds, both managed and unmanaged. Db2 Warehouse can be deployed as software only or as an appliance, on Intel x86, Linux and mainframe platforms. Built upon IBM's Common SQL engine, Db2 Warehouse queries data from multiple sources such as Oracle, Microsoft SQL Server, Teradata, open source, Netezza and others. Users write a query once and data returns from multiple sources quickly and efficiently.
Db2 on Cloud/Db2 Hosted
Db2 on Cloud: Formerly named “dashDB for Transactions”, Db2 on Cloud is a fully managed, cloud SQL database with a high-availability option featuring a 99.99 percent uptime SLA. Db2 on Cloud offers independent scaling of storage and compute, and rolling security updates.
Db2 on Cloud is deployable on both IBM Cloud and Amazon Web Services (AWS).
Key features include:
Elasticity: Db2 on Cloud offers independent scaling of storage and compute through the user interface and API, so businesses can burst on compute during peak demand and scale down when demand falls. Storage is also scalable, so organizations can scale up as their storage needs grow.
Backups and Recovery: Db2 on Cloud provides several disaster recovery options: (1) fourteen days' worth of backups, (2) point-in-time restore options, and (3) one-click failover to the DR node at an offsite data center of the user's choice.
Encryption: Db2 on Cloud complies with data protection laws and includes at-rest database encryption and SSL connections. The Db2 on Cloud high availability plans offer rolling security updates, and all database instances include daily backups. Security patching and maintenance are managed by the database administrator.
High availability options: Db2 on Cloud provides a 99.99% uptime service level agreement on the high availability option, which allows for updates and scaling operations without downtime to applications running on Db2 on Cloud, using Db2's HADR technology.
Data federation: A single query displays a view of all your data by accessing data distributed across Db2 on-premises and/or Db2 Warehouse on-premises or in the cloud.
Private networking: Db2 on Cloud can be deployed on an isolated network that is accessible through a secure Virtual Private Network (VPN).
Db2 Hosted: Formerly named “DB2 on Cloud”, Db2 Hosted is an unmanaged, hosted version of Db2 on Cloud's transactional, SQL cloud database.
Key features:
Server control: Db2 Hosted provides custom software for direct server installation. This reduces application latency and integrates with a business's current data management setup. Db2 Hosted offers exact server configuration based on the needs of the business.
Encryption: Db2 Hosted supports SSL connections.
Elasticity: Db2 Hosted allows for independent scaling of compute and storage to meet changing business needs.
Db2 Warehouse on Cloud
Formerly named “dashDB for Analytics”, Db2 Warehouse on Cloud is a fully managed, elastic, cloud data warehouse built for high-performance analytics and machine learning workloads.
Key features include:
Autonomous cloud service: Db2 Warehouse on Cloud runs on an autonomous platform-as-a-service and is powered by Db2's autonomous self-tuning engine. Day-to-day operations, including database monitoring, uptime checks and failovers, are fully automated. Operations are supplemented by a DevOps team that is on call to handle unexpected system failures.
Optimized for analytics: Db2 Warehouse on Cloud delivers high performance on complex analytics workloads by utilizing IBM BLU Acceleration, a collection of technologies pioneered by IBM Research that features four key optimizations: (1) a columnar organized storage model, (2) in-memory processing, (3) querying of compressed data sets, and (4) data skipping.
Manage highly concurrent workloads: Db2 Warehouse on Cloud includes an Adaptive Workload Management technology that automatically manages resources between concurrent workloads, given user-defined resource targets. This technology ensures stable and reliable performance when tackling highly concurrent workloads.
Built-in machine learning and geospatial capabilities: Db2 Warehouse on Cloud comes with in-database machine learning capabilities that allow users to train and run machine learning models on Db2 Warehouse data without the need for data movement. Examples of algorithms include Association Rules, ANOVA, k-means, Regression, and Naïve Bayes. Db2 Warehouse on Cloud also supports spatial analytics with Esri compatibility, supporting Esri data types such as GML, and supports native Python drivers and native Db2 Python integration into Jupyter Notebooks.
Elasticity: Db2 Warehouse on Cloud offers independent scaling of storage and compute, so organizations can customize their data warehouses to meet the needs of their businesses. For example, customers can burst on compute during peak demand, and scale down when demand falls. Users can also expand storage capacity as their data volumes grow. Customers can scale their data warehouse through the Db2 Warehouse on Cloud web console or API.
Data security: Data is encrypted at-rest and in-motion by default. Administrators can also restrict access to sensitive data through data masking, row permissions, and role-based security, and can utilize database audit utilities to maintain audit trails for their data warehouse.
Polyglot persistence: Db2 Warehouse on Cloud is optimized for polyglot persistence of data, and supports relational (columnar and row-oriented tables), geospatial, and NoSQL document (XML, JSON, BSON) models. All data is subject to advanced data compression.
Deployable on multiple cloud providers: Db2 Warehouse on Cloud is currently deployable on IBM Cloud and Amazon Web Services (AWS).
Db2 BigSQL
In 2018, the IBM SQL product was renamed and is now known as IBM Db2 Big SQL (Big SQL). Big SQL is an enterprise-grade, hybrid, ANSI-compliant SQL-on-Hadoop engine delivering massively parallel processing (MPP) and advanced data querying. Additional benefits include low latency, high performance, security, SQL compatibility and federation capabilities.
Big SQL offers a single database connection or query for disparate sources such as HDFS, RDBMS, NoSQL databases, object stores and WebHDFS. Users can exploit Hive, HBase and Spark and, whether on the cloud, on premises or both, access data across Hadoop and relational databases.
Users (data scientists and analysts) can run smarter ad hoc and complex queries supporting more concurrent users with less hardware compared to other SQL options for Hadoop. Big SQL provides an ANSI-compliant SQL parser to run queries from unstructured streaming data using new APIs.
Through the integration with the IBM Common SQL Engine, Big SQL was designed to work with all the Db2 family of offerings, as well as with the IBM Integrated Analytics System. Big SQL is a part of the IBM Hybrid Data Management Platform, a comprehensive IBM strategy for flexibility and portability, strong data integration and flexible licensing.
Db2 Event Store
Db2 Event Store targets the needs of the Internet of things (IoT), industrial, telecommunications, financial services, online retail and other industries needing to perform real-time analytics on streamed high-volume, high-velocity data. It became publicly available in June 2017. With its high-speed data capture and analytics capabilities, it can store and analyze 250 billion events per day with just three server nodes. The need to support AI and machine learning was envisioned from the start by including IBM Watson Studio in the product and integrating Jupyter notebooks for collaborative app and model development. Typically combined with streaming tools, it provides persistent data by writing the data out to object storage in an open data format (Apache Parquet). Built on Spark, Db2 Event Store is compatible with Spark Machine Learning, Spark SQL and other open technologies, as well as the Db2 family's Common SQL Engine and all supported languages and interfaces, including Python, Go, JDBC and ODBC.
Db2 for IBM i
In 1994, IBM renamed the integrated relational database of the OS/400 to DB2/400 to indicate comparable functionality to DB2 on other platforms. Despite this name, it is not based on DB2 code, but instead it evolved from the IBM System/38 integrated database. The product is currently named IBM Db2 for i.
Other Platforms
Db2 for Linux, UNIX and Windows (informally known as Db2 LUW)
Db2 for z/OS (mainframe)
Db2 for VSE & VM
Db2 on IBM Cloud
Db2 on Amazon Web Services (AWS)
Db2 for z/OS is available in its traditional product packaging, or in the Value Unit Edition, which allows customers to instead pay a one-time charge.
Db2 also powers IBM InfoSphere Warehouse, which offers data warehouse capabilities. InfoSphere Warehouse is available for z/OS. It includes several BI features such as ETL, data mining, OLAP acceleration, and in-line analytics.
Db2 11.5 for Linux, UNIX and Windows contains all of the functionality and tools offered in the prior generation of DB2 and InfoSphere Warehouse on Linux, UNIX and Windows.
Technical information
Db2 can be administered from either the command line or a GUI. The command-line interface requires more knowledge of the product but can be more easily scripted and automated. The GUI is a multi-platform Java client that contains a variety of wizards suitable for novice users. Db2 supports both SQL and XQuery. Db2 has a native implementation of XML data storage, where XML data is stored as XML (not as relational data or CLOB data) for faster access using XQuery.
Db2 has APIs for Rexx, PL/I, COBOL, RPG, Fortran, C++, C, Delphi, .NET CLI, Java, Python, Perl, PHP, Ruby, and many other programming languages. Db2 also supports integration into the Eclipse and Visual Studio integrated development environments.
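As an illustration of these language APIs, the following minimal sketch uses the ibm_db Python driver to run a plain SQL query against the Db2 system catalog and an SQL/XML query against a natively stored XML column. The connection string and the customers table with its info XML column are hypothetical, so treat the details as an assumption rather than a canonical example.
<syntaxhighlight lang="python">
# Minimal sketch: querying Db2 from Python with the ibm_db driver.
# The connection parameters and the customers/info table and column are hypothetical.
import ibm_db

conn = ibm_db.connect(
    "DATABASE=sample;HOSTNAME=db2host.example.com;PORT=50000;"
    "PROTOCOL=TCPIP;UID=db2inst1;PWD=secret;", "", "")

# Plain SQL against the Db2 catalog
stmt = ibm_db.exec_immediate(
    conn, "SELECT tabname FROM syscat.tables FETCH FIRST 5 ROWS ONLY")
row = ibm_db.fetch_assoc(stmt)
while row:
    print(row["TABNAME"])
    row = ibm_db.fetch_assoc(stmt)

# SQL/XML query using an XQuery expression inside XMLQUERY against a native XML column
stmt = ibm_db.exec_immediate(
    conn,
    'SELECT XMLQUERY(\'$d/customer/name/text()\' PASSING info AS "d") FROM customers')
row = ibm_db.fetch_tuple(stmt)
while row:
    print(row[0])
    row = ibm_db.fetch_tuple(stmt)

ibm_db.close(conn)
</syntaxhighlight>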
pureQuery is IBM's data access platform focused on applications that access data. pureQuery supports both Java and .NET. pureQuery provides access to data in databases and in-memory Java objects via its tools, APIs, and runtime environment as delivered in IBM Data Studio Developer and IBM Data Studio pureQuery Runtime.
Error processing
An important feature of Db2 computer programs is error handling. The SQL communications area (SQLCA) structure was once used exclusively within a Db2 program to return error information to the application program after every SQL statement was executed. The primary, but not the only, error diagnostic is held in the field SQLCODE within the SQLCA block.
The SQL return code values are:
0 means successful execution.
A positive number means successful execution with one or more warnings. An example is +100, which means no rows found.
A negative number means unsuccessful with an error. An example is -911, which means a lock timeout (or deadlock) has occurred, triggering a rollback.
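The return-code convention just listed can be captured in a few lines; this is only a sketch of the interpretation rules above, not part of any Db2 API.
<syntaxhighlight lang="python">
def describe_sqlcode(sqlcode: int) -> str:
    """Interpret a Db2 SQLCODE using the convention described above."""
    if sqlcode == 0:
        return "successful execution"
    if sqlcode > 0:
        # e.g. +100: no rows found
        return "successful execution with warning (+%d)" % sqlcode
    # e.g. -911: lock timeout or deadlock, the unit of work was rolled back
    return "unsuccessful execution, error %d" % sqlcode

for code in (0, 100, -911):
    print(code, "->", describe_sqlcode(code))
</syntaxhighlight>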
Later versions of Db2 added functionality and complexity to the execution of SQL. Multiple errors or warnings could be returned by the execution of an SQL statement; it may, for example, have initiated a database trigger and other SQL statements. Instead of the original SQLCA, error information should now be retrieved by successive executions of a GET DIAGNOSTICS statement.
See SQL return codes for a more comprehensive list of common SQLCODEs.
See also
Comparison of relational database management systems
Comparison of database tools
List of relational database management systems
List of column-oriented DBMSes
Data Language Interface
References
External links
IBM Db2 trial and downloads
Db2 - IBM Data for developers
Made in IBM Labs: New IBM Software Accelerates Decision Making in the Era of Big Data
What's new in DB2 10.5 for Linux, UNIX, and Windows
Db2 Tutorial
IBM DB2
Cross-platform software
Relational database management systems
IBM software
RDBMS software for Linux
Client-server database management systems
Proprietary database management systems
Db2 Express-C |
4003020 | https://en.wikipedia.org/wiki/Sun-1 | Sun-1 | Sun-1 was the first generation of UNIX computer workstations and servers produced by Sun Microsystems, launched in May 1982. These were based on a CPU board designed by Andy Bechtolsheim while he was a graduate student at Stanford University and funded by DARPA. The Sun-1 systems ran SunOS 0.9, a port of UniSoft's UniPlus V7 port of Seventh Edition UNIX to the Motorola 68000 microprocessor, with no window system. Early Sun-1 workstations and servers used the original Sun logo, a series of red "U"s laid out in a square, rather than the more familiar purple diamond shape used later.
The first Sun-1 workstation was sold to Solo Systems in May 1982. The Sun-1/100 was used in the original Lucasfilm EditDroid non-linear editing system.
Models
Hardware
The Sun-1 workstation was based on the Stanford University SUN workstation designed by Andy Bechtolsheim, a graduate student at Stanford (advised by Vaughan Pratt and Forest Baskett) and a co-founder of Sun Microsystems. At the heart of this design were the Multibus CPU, memory, and video display cards. The cards used in the Sun-1 workstation were a second-generation design with a private memory bus allowing memory to be expanded to 2 MB without performance degradation.
The Sun 68000 board introduced in 1982 was a powerful single-board computer. It combined a 10 MHz Motorola 68000 microprocessor, a Sun designed memory management unit (MMU), 256 KB of zero wait state memory with parity, up to 32 KB of EPROM memory, two serial ports, a 16-bit parallel port and an Intel Multibus (IEEE 796 bus) interface in a single , Multibus form factor.
By using the Motorola 68000 processor tightly coupled with the Sun-1 MMU the Sun 68000 CPU board was able to support a multi-tasking operating system such as UNIX. It included an advanced Sun designed multi-process two-level memory management unit with facilities for memory protection, code sharing and demand paging of memory. The Sun-1 MMU was necessary because the Motorola 68451 MMU did not always work correctly with the 68000 and could not always restore the processor state after a page fault.
The CPU board included 256 KB of memory which could be replaced or augmented with two additional memory cards for a total of 2 MB. Although the memory cards used the Multibus form factor, they only used the Multibus interface for power; all memory access was via the smaller private P2 bus. This was a synchronous private memory bus that allowed for simultaneous memory input/output transfers. It also allowed for full-performance, zero wait state operation of the memory. When installing the first 1 MB expansion board, either the 256 KB of memory on the CPU board or the first 256 KB on the expansion board had to be disabled.
On-board I/O included a dual serial port UART and a 16-bit parallel port. The serial ports were implemented with an Intel 8274 UART and later with a NEC D7201C UART. Serial port A was wired as a data communications equipment (DCE) port and had full modem control. It was also the console port if no graphical display was installed in the system. Serial port B was wired as a data terminal equipment (DTE) port and had no modem control. Both serial ports could also be used as terminal ports allowing three people to use one workstation, although two did not have graphical displays. The 16-bit parallel port was a special-purpose port for connecting 8-bit parallel port keyboard and 8-bit parallel port optical mouse for workstations with graphical displays. The parallel port was never used as a general purpose parallel printer port.
The CPU board included a fully compatible Multibus (IEEE 796 bus). It was an asynchronous bus that accommodated devices with various transfer rates while maintaining maximum throughput. It had 20 address lines so it could address up to 1 MB of Multibus memory and 1 MB of I/O locations although most I/O devices only decoded the first 64 KB of address space. The Sun CPU board fully supported multi-master functionality that allowed it to share the Multibus with other DMA devices.
The keyboard was a Micro Switch 103SD30-2, or a KeyTronic P2441 for the German market. The memory-mapped, bit-mapped frame buffer (graphics) board had a resolution of 1024×1024 pixels, but only 1024×800 was displayed on the monitor. The graphics board included hardware to accelerate raster operations. A Ball model HD17H 17-inch video display monitor was used. An Ethernet board was available, originally implementing the 3 Mbit/s Xerox PARC Ethernet specification, which was later upgraded to the 3Com 10 Mbit/s version. An Interphase SMD 2180 disk controller could be installed to connect up to four Fujitsu 84 MB M2313K or CDC 16.7 MB (8.35 MB fixed, 8.35 MB removable) 9455 Lark drives. All of the boards were installed in a 6 or 7-slot Multibus card cage.
Later documentation shows that a 13- or 19-inch color display was available. The color frame buffer had a resolution of 640×512 pixels, with 640×480 displayed on the monitor. The board could display 256 colors from a palette of 16 million. ½-inch 9-track reel-to-reel tape drives and QIC-02 ¼-inch cartridge tape drives were also added to the offering.
There was also a second generation Sun-1 CPU board referred to as the Sun-1.5 CPU board.
Sun-1 systems upgraded with Sun-2 Multibus CPU boards were identified with a U suffix to their model number.
References
Bibliography
External links
Sun Microsystems
The Sun Hardware Reference, Part 1
Online Sun Information Archive Sun-1 page
Sun Field Engineer Handbook, 20th edition
Pictures of a Sun1/100U
Sun-1 display at Stanford University's Gates Information Science
Sun-1 board images and manual PDFs
Sun 1 manuals at bitsavers.org
DARPA
Sun Microsystems
Sun servers
Sun workstations
68k architecture
Computer-related introductions in 1982
32-bit computers |
7492 | https://en.wikipedia.org/wiki/Capability%20Maturity%20Model | Capability Maturity Model | The Capability Maturity Model (CMM) is a development model created in 1986 after a study of data collected from organizations that contracted with the U.S. Department of Defense, who funded the research. The term "maturity" relates to the degree of formality and optimization of processes, from ad hoc practices, to formally defined steps, to managed result metrics, to active optimization of the processes.
The model's aim is to improve existing software development processes, but it can also be applied to other processes.
In 2006, the Software Engineering Institute at Carnegie Mellon University developed the Capability Maturity Model Integration, which has largely superseded the CMM and addresses some of its drawbacks.
Overview
The Capability Maturity Model was originally developed as a tool for objectively assessing the ability of government contractors' processes to implement a contracted software project. The model is based on the process maturity framework first described in IEEE Software and, later, in the 1989 book Managing the Software Process by Watts Humphrey. It was later published in a report in 1993 and as a book by the same authors in 1995.
Though the model comes from the field of software development, it is also used as a model to aid in business processes generally, and has also been used extensively worldwide in government offices, commerce, and industry.
History
Prior need for software processes
In the 1980s, the use of computers grew more widespread, more flexible and less costly. Organizations began to adopt computerized information systems, and the demand for software development grew significantly. Many processes for software development were in their infancy, with few standard or "best practice" approaches defined.
As a result, the growth was accompanied by growing pains: project failure was common, the field of computer science was still in its early years, and the ambitions for project scale and complexity exceeded the market capability to deliver adequate products within a planned budget. Individuals such as Edward Yourdon, Larry Constantine, Gerald Weinberg, Tom DeMarco, and David Parnas began to publish articles and books with research results in an attempt to professionalize the software-development processes.
In the 1980s, several US military projects involving software subcontractors ran over-budget and were completed far later than planned, if at all. In an effort to determine why this was occurring, the United States Air Force funded a study at the Software Engineering Institute (SEI).
Precursor
The first application of a staged maturity model to IT was not by CMU/SEI, but rather by Richard L. Nolan, who published the stages of growth model for IT organizations in 1973.
Watts Humphrey began developing his process maturity concepts during the later stages of his 27-year career at IBM.
Development at Software Engineering Institute
Active development of the model by the US Department of Defense Software Engineering Institute (SEI) began in 1986 when Humphrey joined the Software Engineering Institute located at Carnegie Mellon University in Pittsburgh, Pennsylvania after retiring from IBM. At the request of the U.S. Air Force he began formalizing his Process Maturity Framework to aid the U.S. Department of Defense in evaluating the capability of software contractors as part of awarding contracts.
The result of the Air Force study was a model for the military to use as an objective evaluation of software subcontractors' process capability maturity. Humphrey based this framework on the earlier Quality Management Maturity Grid developed by Philip B. Crosby in his book "Quality is Free". Humphrey's approach differed because of his unique insight that organizations mature their processes in stages based on solving process problems in a specific order. Humphrey based his approach on the staged evolution of a system of software development practices within an organization, rather than measuring the maturity of each separate development process independently. The CMM has thus been used by different organizations as a general and powerful tool for understanding and then improving general business process performance.
Watts Humphrey's Capability Maturity Model (CMM) was published in 1988 and as a book in 1989, in Managing the Software Process.
Organizations were originally assessed using a process maturity questionnaire and a Software Capability Evaluation method devised by Humphrey and his colleagues at the Software Engineering Institute.
The full representation of the Capability Maturity Model as a set of defined process areas and practices at each of the five maturity levels was initiated in 1991, with Version 1.1 being completed in January 1993. The CMM was published as a book in 1995 by its primary authors, Mark C. Paulk, Charles V. Weber, Bill Curtis, and Mary Beth Chrissis.
Capability Maturity Model Integration
The CMM model's application in software development has sometimes been problematic. Applying multiple models that are not integrated within and across an organization could be costly in training, appraisals, and improvement activities. The Capability Maturity Model Integration (CMMI) project was formed to sort out the problem of using multiple models for software development processes, thus the CMMI model has superseded the CMM model, though the CMM model continues to be a general theoretical process capability model used in the public domain.
Adapted to other processes
The CMM was originally intended as a tool to evaluate the ability of government contractors to perform a contracted software project. Though it comes from the area of software development, it can be, has been, and continues to be widely applied as a general model of the maturity of process (e.g., IT service management processes) in IS/IT (and other) organizations.
Model topics
Maturity models
A maturity model can be viewed as a set of structured levels that describe how well the behaviors, practices and processes of an organization can reliably and sustainably produce required outcomes.
A maturity model can be used as a benchmark for comparison and as an aid to understanding - for example, for comparative assessment of different organizations where there is something in common that can be used as a basis for comparison. In the case of the CMM, for example, the basis for comparison would be the organizations' software development processes.
Structure
The model involves five aspects:
Maturity Levels: a 5-level process maturity continuum - where the uppermost (5th) level is a notional ideal state where processes would be systematically managed by a combination of process optimization and continuous process improvement.
Key Process Areas: a Key Process Area identifies a cluster of related activities that, when performed together, achieve a set of goals considered important.
Goals: the goals of a key process area summarize the states that must exist for that key process area to have been implemented in an effective and lasting way. The extent to which the goals have been accomplished is an indicator of how much capability the organization has established at that maturity level. The goals signify the scope, boundaries, and intent of each key process area.
Common Features: common features include practices that implement and institutionalize a key process area. There are five types of common features: commitment to perform, ability to perform, activities performed, measurement and analysis, and verifying implementation.
Key Practices: The key practices describe the elements of infrastructure and practice that contribute most effectively to the implementation and institutionalization of the area.
Levels
There are five levels defined along the continuum of the model and, according to the SEI: "Predictability, effectiveness, and control of an organization's software processes are believed to improve as the organization moves up these five levels. While not rigorous, the empirical evidence to date supports this belief".
Initial (chaotic, ad hoc, individual heroics) - the starting point for use of a new or undocumented repeat process.
Repeatable - the process is at least documented sufficiently such that repeating the same steps may be attempted.
Defined - the process is defined/confirmed as a standard business process
Capable - the process is quantitatively managed in accordance with agreed-upon metrics.
Efficient - process management includes deliberate process optimization/improvement.
Within each of these maturity levels are Key Process Areas which characterise that level, and for each such area there are five factors: goals, commitment, ability, measurement, and verification. These are not necessarily unique to CMM, representing — as they do — the stages that organizations must go through on the way to becoming mature.
The model provides a theoretical continuum along which process maturity can be developed incrementally from one level to the next. Skipping levels is not allowed/feasible.
Level 1 - Initial It is characteristic of processes at this level that they are (typically) undocumented and in a state of dynamic change, tending to be driven in an ad hoc, uncontrolled and reactive manner by users or events. This provides a chaotic or unstable environment for the processes. (Example - a surgeon performing a new operation a small number of times - the levels of negative outcome are not known).
Level 2 - Repeatable It is characteristic of this level of maturity that some processes are repeatable, possibly with consistent results. Process discipline is unlikely to be rigorous, but where it exists it may help to ensure that existing processes are maintained during times of stress.
Level 3 - Defined It is characteristic of processes at this level that there are sets of defined and documented standard processes established and subject to some degree of improvement over time. These standard processes are in place. The processes may not have been systematically or repeatedly used - sufficient for the users to become competent or the process to be validated in a range of situations. This could be considered a developmental stage - with use in a wider range of conditions and user competence development the process can develop to next level of maturity.
Level 4 - Managed (Capable) It is characteristic of processes at this level that, using process metrics, effective achievement of the process objectives can be evidenced across a range of operational conditions. The suitability of the process in multiple environments has been tested and the process refined and adapted. Process users have experienced the process in multiple and varied conditions, and are able to demonstrate competence. The process maturity enables adaptations to particular projects without measurable losses of quality or deviations from specifications. Process Capability is established from this level. (Example - surgeon performing an operation hundreds of times with levels of negative outcome approaching zero).
Level 5 - Optimizing (Efficient) It is a characteristic of processes at this level that the focus is on continually improving process performance through both incremental and innovative technological changes/improvements. At maturity level 5, processes are concerned with addressing statistical common causes of process variation and changing the process (for example, to shift the mean of the process performance) to improve process performance. This would be done at the same time as maintaining the likelihood of achieving the established quantitative process-improvement objectives.
Between 2008 and 2019, about 12% of appraisals given were at maturity levels 4 and 5.
Critique
The model was originally intended to evaluate the ability of government contractors to perform a software project. It has been used for and may be suited to that purpose, but critics pointed out that process maturity according to the CMM was not necessarily mandatory for successful software development.
Software process framework
The software process framework documented is intended to guide those wishing to assess an organization's or project's consistency with the Key Process Areas. For each maturity level there are five checklist types:
{| class="wikitable"
|-
! Type
! Description
|-
| Policy
|Describes the policy contents and KPA goals recommended by the Key Process Areas.
|-
| Standard
|Describes the recommended content of select work products described in the Key Process Areas.
|-
| Process
| Describes the process information content recommended by the Key Process Areas. These are refined into checklists for:
Roles, entry criteria, inputs, activities, outputs, exit criteria, reviews and audits, work products managed and controlled, measurements, documented procedures, training, and tools
|-
| Procedure
| Describes the recommended content of documented procedures described in the Key Process Areas.
|-
| Level overview
| Provides an overview of an entire maturity level. These are further refined into checklists for:
Key Process Areas purposes, goals, policies, and standards; process descriptions; procedures; training; tools; reviews and audits; work products; measurements
|}
See also
Capability Immaturity Model
Capability Maturity Model Integration
People Capability Maturity Model
Testing Maturity Model
References
External links
CMMI Institute
Architecture Maturity Models at The Open Group
Software development process
Maturity models
Information technology management |
319453 | https://en.wikipedia.org/wiki/Zero-configuration%20networking | Zero-configuration networking | Zero-configuration networking (zeroconf) is a set of technologies that automatically creates a usable computer network based on the Internet Protocol Suite (TCP/IP) when computers or network peripherals are interconnected. It does not require manual operator intervention or special configuration servers. Without zeroconf, a network administrator must set up network services, such as Dynamic Host Configuration Protocol (DHCP) and Domain Name System (DNS), or configure each computer's network settings manually.
Zeroconf is built on three core technologies: automatic assignment of numeric network addresses for networked devices, automatic distribution and resolution of computer hostnames, and automatic location of network services, such as printing devices.
Background
Computer networks use numeric network addresses to identify communications endpoints in a network of participating devices. This is similar to the telephone network which assigns a string of digits to identify each telephone. In modern networking protocols, information to be transmitted is divided into a series of network packets. Every packet contains the source and destination addresses for the transmission. Network routers examine these addresses to determine the best network path in forwarding the data packet at each step toward its destination.
Similarly to telephones being labeled with their telephone number, it was a common practice in early networks to attach an address label to networked devices. The dynamic nature of modern networks, especially residential networks in which devices are powered up only when needed, calls for dynamic address assignment mechanisms that do not require user involvement for initialization and management. These systems automatically give themselves common names chosen either by the equipment manufacturer, such as a brand and model number, or chosen by users for identifying their equipment. The names and addresses are then automatically entered into a directory service.
Early computer networking was built upon technologies of the telecommunications networks and thus protocols tended to fall into two groups: those intended to connect local devices into a local area network (LAN), and those intended primarily for long-distance communications. The latter wide area network (WAN) systems tended to have centralized setup, where a network administrator would manually assign addresses and names. LAN systems tended to provide more automation of these tasks, so that new equipment could be added to a LAN with a minimum of operator and administrator intervention.
An early example of a zero-configuration LAN system is AppleTalk, a protocol introduced by Apple Inc. for the early Macintosh computers in the 1980s. Macs, as well as other devices supporting the protocol, could be added to the network by simply plugging them in; all further configuration was automated. Network addresses were automatically selected by each device using a protocol known as AppleTalk Address Resolution Protocol (AARP), while each machine built its own local directory service using a protocol known as Name Binding Protocol (NBP). NBP included not only a name, but the type of device and any additional user-provided information such as its physical location or availability. Users could look up any device on the network with the application Chooser, which filtered names based on the device type.
On Internet Protocol (IP) networks, the Domain Name System database for a network was initially maintained manually by a network administrator. Efforts to automate maintenance of this database led to the introduction of a number of new protocols providing automated services, such as the Dynamic Host Configuration Protocol (DHCP).
Address selection
Hosts on a network must be assigned IP addresses that uniquely identify them to other devices on the same network. On some networks there is a central authority that assigns these addresses as new devices are added. Mechanisms were introduced to handle this task automatically, and both IPv4 and IPv6 now include systems for address autoconfiguration, which allows a device to determine a safe address to use through simple mechanisms. For link-local addressing, IPv4 uses the special block 169.254.0.0/16, while IPv6 hosts use the prefix fe80::/10. More commonly, addresses are assigned by a DHCP server, often built into common networking hardware like computer hosts or routers.
Most IPv4 hosts use link-local addressing only as a last resort when a DHCP server is unavailable. An IPv4 host otherwise uses its DHCP-assigned address for all communications, global or link-local. One reason is that IPv4 hosts are not required to support multiple addresses per interface, although many do. Another is that not every IPv4 host implements distributed name resolution (e.g., multicast DNS), so discovering the autoconfigured link-local address of another host on the network can be difficult. Discovering the DHCP-assigned address of another host requires either distributed name resolution or a unicast DNS server with this information; some networks feature DNS servers that are automatically updated with DHCP-assigned host and address information.
IPv6 hosts are required to support multiple addresses per interface; moreover, every IPv6 host is required to configure a link-local address even when global addresses are available. IPv6 hosts may additionally self-configure additional addresses on receipt of router advertisement messages, thus eliminating the need for a DHCP server.
Both IPv4 and IPv6 hosts may randomly generate the host-specific part of an autoconfigured address. IPv6 hosts generally combine a prefix of up to 64 bits with a 64-bit EUI-64 derived from the factory-assigned 48-bit IEEE MAC address. The MAC address has the advantage of being globally unique, a basic property of the EUI-64. The IPv6 protocol stack also includes duplicate address detection to avoid conflicts with other hosts. In IPv4, the method is called link-local address autoconfiguration. However, Microsoft refers to this as Automatic Private IP Addressing (APIPA) or Internet Protocol Automatic Configuration (IPAC). The feature is supported in Windows since at least Windows 98.
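The IPv6 derivation described above can be sketched in a few lines of Python: the 48-bit MAC address is expanded to a modified EUI-64 interface identifier (ff:fe inserted in the middle, universal/local bit flipped) and combined with the link-local fe80::/64 prefix. The example MAC address is made up, and real stacks also run duplicate address detection, which is omitted here.
<syntaxhighlight lang="python">
# Sketch: deriving an IPv6 link-local address from a 48-bit MAC address
# via the modified EUI-64 rule outlined above. Duplicate address detection,
# which real hosts perform afterwards, is omitted.
import ipaddress

def mac_to_link_local(mac: str) -> ipaddress.IPv6Address:
    octets = [int(part, 16) for part in mac.split(":")]
    if len(octets) != 6:
        raise ValueError("expected a 48-bit MAC such as 00:11:22:33:44:55")
    octets[0] ^= 0x02                                    # flip the universal/local bit
    eui64 = octets[:3] + [0xFF, 0xFE] + octets[3:]       # insert ff:fe in the middle
    interface_id = int.from_bytes(bytes(eui64), "big")
    return ipaddress.IPv6Address((0xFE80 << 112) | interface_id)

print(mac_to_link_local("00:11:22:33:44:55"))            # fe80::211:22ff:fe33:4455
</syntaxhighlight>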
Name service discovery
Internet protocols use IP addresses for communications, but these are not easy for humans to use; IPv6 in particular uses very long strings of digits that are not easily entered manually. To address this issue, the internet has long used DNS, which allows human-readable names to be associated with IP addresses, and includes code for looking up these names from a hierarchical database system. Users type in domain names, such as example.org, which the computer's DNS software looks up in the DNS databases to retrieve an IP address, and then hands off that address to the protocol stack for further communications.
Looking up an address using DNS requires the IP address of the DNS server to be known. This has normally been accomplished by typing in the address of a known server into a field in one of the devices on the network. In early systems this was normally required on every device, but this has been pushed up one layer in the hierarchy to the DHCP servers or broadband devices like cable modems that receive this information from their internet service provider. This has reduced the user-side administration requirements and provides a key element of zero-configuration access.
DNS was intended to provide uniform names to groups of devices within the same administration realm, such as example.org, provided by a name service. Assigning an address to a local device, e.g., thirdfloorprinter.example.org, normally requires administrator access to the DNS server and is often accomplished manually. Additionally, traditional DNS servers are not expected to automatically correct for changes in configuration. For instance, if a printer is moved from one floor to another it might be assigned a new IP address by the local DHCP server.
To address the need for automatic configuration, Microsoft implemented NetBIOS Name Service, part of which is the Computer Browser Service already in Microsoft Windows for Workgroups 3.11 as early as 1992. NetBIOS Name Service is zero-configuration on networks with a single subnet and may be used in conjunction with a WINS server or a Microsoft DNS server that supports secure automatic registration of addresses. This system has small, but not zero, management overhead even on very large enterprise networks. The protocols NetBIOS can use are part of the Server Message Block (SMB) suite of open protocols which are also available on Linux and iOS, although Windows typically supports a wider range of so-called dialects which can be negotiated between Windows clients that support it. For example, Computer Browser Services running on server operating systems or later versions of Windows are elected as so-called master browser over those that are not running a server operating system or running older versions of Windows.
In 2000, Bill Manning and Bill Woodcock described the Multicast Domain Name Service, which spawned the implementations by Apple and Microsoft. Both implementations are very similar. Apple's Multicast DNS (mDNS) is published as a standards track proposal (RFC 6762), while Microsoft's Link-local Multicast Name Resolution (LLMNR) is published as informational (RFC 4795). LLMNR is included in every Windows version from Windows Vista onwards and acts as a side-by-side alternative for Microsoft's NetBIOS Name Service over IPv4 and as a replacement over IPv6, since NetBIOS is not available over IPv6. Apple's implementation is available as the Bonjour service since 2002 in Mac OS X v10.2. The Bonjour implementation (mDNSResponder) is available under the Apache 2 Open Source License and is included in Android Jelly Bean and later under the same license.
Use of either NetBIOS or LLMNR services on Windows is essentially automatic, since using standard DNS client APIs will result in the use of either NetBIOS or LLMNR depending on what name is being resolved (whether the name is a local name or not), the network configuration in effect (e.g. DNS suffixes in effect) and, in corporate networks, the policies in effect (whether LLMNR or NetBIOS are disabled), although developers may opt into bypassing these services for individual address lookups.
The mDNS and LLMNR protocols have minor differences in their approach to name resolution. mDNS allows a network device to choose a domain name in the local DNS namespace and announce it using a special multicast IP address. This introduces special semantics for the domain .local, which is considered a problem by some members of the IETF. The current LLMNR draft allows a network device to choose any domain name, which is considered a security risk by some members of the IETF. mDNS is compatible with DNS-SD as described in the next section, while LLMNR is not.
Service discovery
Name services such as mDNS, LLMNR and others do not provide information about the type of device or its status. A user looking for a nearby printer, for instance, might be hindered if the printer was given the name "Bob". Service discovery provides additional information about devices. Service discovery is sometimes combined with a name service, as in Apple's Name Binding Protocol and Microsoft's NetBIOS.
NetBIOS Service Discovery
NetBIOS on Windows allows individual hosts on the network to advertise services, such as file shares and printers. It also allows, for example, a network printer to advertise itself as a host sharing a printer device and any related services it supports. Depending on how a device is attached (directly to the network, or to the host which shares it) and which protocols are supported, Windows clients connecting to it may prefer to use SSDP or WSD rather than NetBIOS. NetBIOS is one of the providers on Windows implementing the more general discovery process dubbed function discovery, which includes built-in providers for PnP, Registry, NetBIOS, SSDP and WSD, of which the former two are local-only and the latter three support discovery of networked devices. None of these need any configuration for use on the local subnet. NetBIOS has traditionally been supported only in expensive printers for corporate use, though some entry-level printers with Wi-Fi or Ethernet support it natively, allowing the printer to be used without configuration even on very old operating systems.
WS-Discovery
Web Services Dynamic Discovery (WS-Discovery) is a technical specification that defines a multicast discovery protocol to locate services on a local network. It operates over TCP and UDP port 3702 and uses the IP multicast address 239.255.255.250. As the name suggests, the actual communication between nodes is done using web services standards, notably SOAP-over-UDP. Windows supports it in the form of Web Services for Devices and Devices Profile for Web Services. Many devices, such as HP and Brother printers, support it.
DNS-based service discovery
DNS-based Service Discovery (DNS-SD) allows clients to discover a named list of service instances and to resolve those services to hostnames using standard DNS queries. The specification is compatible with existing unicast DNS server and client software, but works equally well with mDNS in a zero-configuration environment. Each service instance is described using a DNS SRV and a DNS TXT record. A client discovers the list of available instances for a given service type by querying the DNS PTR record of that service type's name; the server returns zero or more names of the form <Service>.<Domain>, each corresponding to a SRV/TXT record pair. The SRV record resolves to the domain name providing the instance, while the TXT record can contain service-specific configuration parameters. A client can then resolve the A/AAAA record for the domain name and connect to the service.
Service types are allocated on a first-come, first-served basis. A service type registry was originally maintained by DNS-SD.org, but has since been merged into IANA's registry for DNS SRV records.
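The PTR-to-SRV/TXT-to-address flow described above is what DNS-SD browsing libraries automate. The sketch below uses the third-party python-zeroconf package (an assumption; it is not part of the Python standard library, and attribute names such as parsed_addresses reflect recent versions of that package) to browse for _http._tcp instances on the local network.
<syntaxhighlight lang="python">
# Sketch: browsing DNS-SD service instances with the third-party
# python-zeroconf package. get_service_info() performs the SRV/TXT
# and address resolution steps described above.
import time
from zeroconf import Zeroconf, ServiceBrowser

class PrintingListener:
    def add_service(self, zc, type_, name):
        info = zc.get_service_info(type_, name)
        if info:
            print(name, "->", info.server, info.port,
                  info.parsed_addresses(), info.properties)

    def update_service(self, zc, type_, name):
        pass

    def remove_service(self, zc, type_, name):
        print(name, "disappeared")

zc = Zeroconf()
browser = ServiceBrowser(zc, "_http._tcp.local.", PrintingListener())
try:
    time.sleep(5)          # give responders time to answer
finally:
    zc.close()
</syntaxhighlight>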
History
In 1997, Stuart Cheshire proposed adapting Apple's mature Name Binding Protocol to IP networks to address the lack of service discovery capability. Cheshire subsequently joined Apple and authored IETF draft proposals for mDNS and DNS-based Service Discovery, supporting the transition from AppleTalk to IP networking. In 2002, Apple announced an implementation of both protocols under the name Rendezvous (later renamed Bonjour). It was first included in Mac OS X 10.2, replacing the Service Location Protocol (SLP) used in 10.1. In 2013, the proposals were ratified as RFC 6762 and RFC 6763.
DNS-SD with multicast
mDNS uses packets similar to unicast DNS to resolve hostnames, except they are sent over a multicast link. Each host listens on the mDNS port, 5353, transmitted to a well-known multicast address, and resolves requests for the DNS record of its .local hostname (e.g. the A, AAAA or CNAME record) to its IP address. When an mDNS client needs to resolve a local hostname to an IP address, it sends a DNS request for that name to the well-known multicast address; the computer with the corresponding A/AAAA record replies with its IP address. The mDNS multicast address is 224.0.0.251 for IPv4 and ff02::fb for IPv6 link-local addressing.
DNS Service Discovery (DNS-SD) requests can also be sent using mDNS to yield zero-configuration DNS-SD. This uses DNS PTR, SRV and TXT records to advertise instances of service types, domain names for those instances, and optional configuration parameters for connecting to those instances. The SRV records can, however, resolve to .local domain names, which mDNS can in turn resolve to local IP addresses.
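At the packet level, a one-shot ("legacy") mDNS query is an ordinary DNS query sent to the multicast address and port given above; responders answer a query sent from a source port other than 5353 with a conventional unicast reply. The following standard-library sketch sends a PTR question for the service-enumeration name _services._dns-sd._udp.local and prints whatever replies arrive; it does not parse the DNS answers, and the details should be checked against RFC 6762/6763.
<syntaxhighlight lang="python">
# Sketch: a one-shot mDNS query built with only the standard library.
import socket
import struct

def encode_name(name):
    # DNS name encoding: length-prefixed labels, terminated by a zero byte
    out = b""
    for label in name.strip(".").split("."):
        out += bytes([len(label)]) + label.encode("ascii")
    return out + b"\x00"

header = struct.pack(">HHHHHH", 0, 0, 1, 0, 0, 0)   # ID=0, flags=0, one question
question = encode_name("_services._dns-sd._udp.local") + struct.pack(">HH", 12, 1)  # PTR, IN

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 255)
sock.settimeout(3)
sock.sendto(header + question, ("224.0.0.251", 5353))

try:
    while True:
        data, src = sock.recvfrom(4096)
        print("%d-byte mDNS response from %s" % (len(data), src[0]))
except socket.timeout:
    pass
</syntaxhighlight>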
Support
DNS-SD is used by Apple products, most network printers, many Linux distributions including Debian and Ubuntu, and a number of third-party products for various operating systems. For example, many OS X network applications written by Apple, including Safari, iChat, and Messages, can use DNS-SD to locate nearby servers and peer-to-peer clients. Windows 10 includes support for DNS-SD for applications written using JavaScript. Individual applications may include their own support in older versions of the operating system, such that most instant messaging and VoIP clients on Windows support DNS-SD. Some Unix, BSD, and Linux distributions also include DNS-SD. For example, Ubuntu ships Avahi, an mDNS/DNS-SD implementation, in its base distribution.
UPnP
UPnP has some protocol components with the purpose of service discovery.
SSDP
Simple Service Discovery Protocol (SSDP) is a UPnP protocol, used in Windows XP and later. SSDP uses HTTP notification announcements that give a service-type URI and a Unique Service Name (USN). Service types are regulated by the Universal Plug and Play Steering Committee. SSDP is supported by many printer, NAS and appliance manufacturers such as Brother. It is supported by certain brands of network equipment, and in many SOHO firewall appliances, where host computers behind it may pierce holes for applications. It is also used in home theater PC systems to facilitate media exchange between host computers and the media center.
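SSDP discovery itself is a small HTTP-over-UDP exchange: a client multicasts an M-SEARCH request and devices answer with unicast HTTP responses carrying LOCATION, ST and USN headers. The multicast address and port (239.255.255.250:1900) and the header layout below come from the UPnP specification rather than from this article, so treat the sketch as an assumption to check against that specification.
<syntaxhighlight lang="python">
# Sketch: an SSDP M-SEARCH discovery probe (HTTP over UDP).
import socket

MSEARCH = (
    "M-SEARCH * HTTP/1.1\r\n"
    "HOST: 239.255.255.250:1900\r\n"
    'MAN: "ssdp:discover"\r\n'
    "MX: 2\r\n"
    "ST: ssdp:all\r\n"
    "\r\n"
).encode("ascii")

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(3)
sock.sendto(MSEARCH, ("239.255.255.250", 1900))

try:
    while True:
        data, src = sock.recvfrom(4096)
        print("--- reply from %s ---" % src[0])
        print(data.decode("utf-8", errors="replace"))   # headers include LOCATION, ST, USN
except socket.timeout:
    pass
</syntaxhighlight>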
DLNA
Digital Living Network Alliance (DLNA) is another suite of standards that uses UPnP for the discovery of networked devices. DLNA has a long list of prominent manufacturers producing devices such as TVs, NAS devices and so forth that support it. DLNA is supported by all major operating systems. DLNA service discovery is layered on top of SSDP.
Efforts toward an IETF standard protocol
SLP is supported by Hewlett-Packard's network printers, Novell, and Sun Microsystems. SLP is described in RFC 2608 and related RFCs, and implementations are available for both Solaris and Linux.
AllJoyn
AllJoyn is an open-source software stack for a myriad of devices, ranging from IoT devices to full-size computers, for discovery and control of devices on networks (Wifi, Ethernet) and other links (Bluetooth, ZigBee, etc.). It uses mDNS and HTTP over UDP and other protocols.
Standardization
RFC 2608, the SLP standard for figuring out where to get services, was published in June 1999 by the SVRLOC IETF working group.
RFC 3927, a standard for choosing addresses for networked items, was published in March 2005 by the IETF Zeroconf working group. The group included individuals from Apple, Sun, and Microsoft.
LLMNR was submitted for official adoption in the IETF DNSEXT working group, however failed to gain consensus and thus was published as informational in January 2007.
Following the failure of LLMNR to become an Internet standard and given that mDNS/DNS-SD is used much more widely than LLMNR, Apple was asked by the IETF to submit the mDNS/DNS-SD specs for publication as Informational RFC as well.
In February 2013 mDNS and DNS-SD were published as Standards Track Proposals and .
Security issues
Because mDNS operates under a different trust model than unicast DNS, trusting the entire network rather than a designated DNS server, it is vulnerable to spoofing attacks by any system within the same broadcast domain. Like SNMP and many other network management protocols, it can also be used by attackers to quickly gain detailed knowledge of the network and its machines. Because of this, applications should still authenticate and encrypt traffic to remote hosts (e.g. via RSA, SSH, etc.) after discovering and resolving them through DNS-SD/mDNS. LLMNR suffers from similar vulnerabilities.
Major implementations
Apple Bonjour
Bonjour, from Apple, uses mDNS and DNS Service Discovery. Apple changed its preferred zeroconf technology from SLP to mDNS and DNS-SD between Mac OS X 10.1 and 10.2, though SLP continues to be supported by Mac OS X.
Apple's mDNSResponder has interfaces for C and Java and is available on BSD, Apple Mac OS X, Linux, other POSIX-based operating systems, and MS Windows. The Windows downloads are available from Apple's website.
Avahi
Avahi is a Zeroconf implementation for Linux and BSDs. It implements IPv4LL, mDNS and DNS-SD. It is part of most Linux distributions, and is installed by default on some. If run in conjunction with nss-mdns it also offers host name resolution.
Avahi also implements binary compatibility libraries that emulate Bonjour and the historical mDNS implementation Howl, so software made to use those implementations can also utilize Avahi through the emulation interfaces.
MS Windows CE 5.0
Microsoft Windows CE 5.0 includes Microsoft's own implementation of LLMNR.
Systemd
Systemd implements both mDNS and LLMNR in systemd-resolved.
Link-local IPv4 addresses
Where no DHCP server is available to assign a host an IP address, the host can select its own link-local address. Using a link-local address, hosts can communicate over this link, but only locally; access to other networks and the Internet is not possible. Several link-local IPv4 address implementations are available:
Apple Mac OS and MS Windows have supported link-local addresses since 1998. Apple released its open-source implementation in the Darwin bootp package.
Avahi contains an implementation of IPv4LL in the avahi-autoipd tool.
Zero-Conf IP (zcip)
BusyBox can embed a simple IPv4LL implementation.
Stablebox, a fork from Busybox, offers a slightly modified IPv4LL implementation named llad.
Zeroconf is a package based on Simple IPv4LL, a shorter implementation by Arthur van Hoff.
The above implementations are all stand-alone daemons or plugins for DHCP clients that only deal with link-local IP addresses. Another approach is to include support in new or existing DHCP clients:
Elvis Pfützenreuter has written a patch for the uDHCP client/server.
dhcpcd is an open-source DHCP client for Linux and BSD that includes IPv4LL support. It is included as standard in NetBSD.
Neither of these implementations addresses kernel issues like broadcasting ARP replies or closing existing network connections.
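To illustrate the address-selection step that these implementations perform, the sketch below picks a candidate address in 169.254.0.0/16 the way RFC 3927 suggests: pseudo-randomly, seeded from the interface's MAC address (the MAC shown is hypothetical), and excluding the reserved first and last /24 of the range. A conforming implementation must then probe the candidate with ARP, pick another candidate if a conflict is detected, and defend the address afterwards.
    import hashlib, random

    def candidate_linklocal_ipv4(mac, attempt=0):
        # Seed from the MAC so the same host tends to choose the same address after each reboot
        seed = int(hashlib.sha256(f"{mac}-{attempt}".encode()).hexdigest(), 16)
        host = random.Random(seed).randrange(256, 65280)   # skips 169.254.0.x and 169.254.255.x
        return f"169.254.{host >> 8}.{host & 0xFF}"

    # Hypothetical interface MAC; retry with attempt + 1 after an ARP conflict
    print(candidate_linklocal_ipv4("00:11:22:33:44:55"))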
See also
Bonjour Sleep Proxy
Wireless Zero Configuration
Peer Name Resolution Protocol (PNRP)
References
Notes
Sources
External links
A pure Java implementation of mDNS/DNS-SD.
A pure Python implementation of mDNS/DNS-SD.
A cross-platform (Linux, MS Windows, Apple Mac), unified Mono/.NET library for Zeroconf, supporting both Bonjour and Avahi.
A cross-platform wxWidgets-based service discovery module without external dependencies.
DNS-based Service Discovery.
Apple Inc. software
Computer configuration
Domain Name System
Network protocols
Windows communication and services |
19835580 | https://en.wikipedia.org/wiki/Nice%20model | Nice model | The Nice model is a scenario for the dynamical evolution of the Solar System. It is named for the location of the Observatoire de la Côte d'Azur in Nice, France, where it was initially developed in 2005. It proposes the migration of the giant planets from an initial compact configuration into their present positions, long after the dissipation of the initial protoplanetary disk. In this way, it differs from earlier models of the Solar System's formation. This planetary migration is used in dynamical simulations of the Solar System to explain historical events including the Late Heavy Bombardment of the inner Solar System, the formation of the Oort cloud, and the existence of populations of small Solar System bodies such as the Kuiper belt, the Neptune and Jupiter trojans, and the numerous resonant trans-Neptunian objects dominated by Neptune.
Its success at reproducing many of the observed features of the Solar System has brought it wide acceptance as the current most realistic model of the Solar System's early evolution, although it is not universally favoured among planetary scientists. Later research revealed a number of differences between the Nice model’s original predictions and observations of the current Solar System — such as the orbits of the terrestrial planets and the asteroids — leading to its modification.
Description
The original core of the Nice model is a triplet of papers published in the general science journal Nature in 2005 by an international collaboration of scientists: Rodney Gomes, Hal Levison, Alessandro Morbidelli, and Kleomenis Tsiganis. In these publications, the four authors proposed that after the dissipation of the gas and dust of the primordial Solar System disk, the four giant planets (Jupiter, Saturn, Uranus, and Neptune) were originally found on near-circular orbits between ~5.5 and ~17 astronomical units (AU), much more closely spaced and compact than in the present. A large, dense disk of small rock and ice planetesimals totalling about 35 Earth masses extended from the orbit of the outermost giant planet to some 35 AU.
Scientists understand so little about the formation of Uranus and Neptune that Levison states, "the possibilities concerning the formation of Uranus and Neptune are almost endless". However, it is suggested that this planetary system evolved in the following manner: Planetesimals at the disk's inner edge occasionally pass through gravitational encounters with the outermost giant planet, which change the planetesimals' orbits. The planet scatters inward the majority of the small icy bodies that it encounters, which in turn moves the planet outwards in response as it exchanges angular momentum with the scattered objects and conserves the angular momentum of the system. The inward-deflected planetesimals then successively encounter Uranus, Neptune, and Saturn, moving each outwards in turn by the same process. Despite the minute movement each exchange of momentum produces, cumulatively these planetesimal encounters shift (migrate) the orbits of the planets by significant amounts. This process continues until the planetesimals interact with the innermost and most massive giant planet, Jupiter, whose immense gravity sends them into highly elliptical orbits or even ejects them outright from the Solar System. This, in contrast, causes Jupiter to move slightly inward.
The low rate of orbital encounters governs the rate at which planetesimals are lost from the disk, and the corresponding rate of migration. After several hundreds of millions of years of slow, gradual migration, Jupiter and Saturn, the two inmost giant planets, cross their mutual 1:2 mean-motion resonance. This resonance increases their orbital eccentricities, destabilizing the entire planetary system. The arrangement of the giant planets alters quickly and dramatically. Jupiter shifts Saturn out towards its present position, and this relocation causes mutual gravitational encounters between Saturn and the two ice giants, which propel Neptune and Uranus onto much more eccentric orbits. These ice giants then plough into the planetesimal disk, scattering tens of thousands of planetesimals from their formerly stable orbits in the outer Solar System. This disruption almost entirely scatters the primordial disk, removing 99% of its mass, a scenario which explains the modern-day absence of a dense trans-Neptunian population. Some of the planetesimals are thrown into the inner Solar System, producing a sudden influx of impacts on the terrestrial planets: the Late Heavy Bombardment.
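For clarity, a 1:2 mean-motion resonance means that the outer planet completes one orbit for every two of the inner planet; by Kepler's third law this fixes the ratio of their semi-major axes (this worked relation is added here for illustration and is not part of the cited papers):
    \frac{P_\mathrm{Saturn}}{P_\mathrm{Jupiter}} = 2
    \quad\Longrightarrow\quad
    \frac{a_\mathrm{Saturn}}{a_\mathrm{Jupiter}}
      = \left(\frac{P_\mathrm{Saturn}}{P_\mathrm{Jupiter}}\right)^{2/3}
      = 2^{2/3} \approx 1.59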
Eventually, the giant planets reach their current orbital semi-major axes, and dynamical friction with the remaining planetesimal disc damps their eccentricities and makes the orbits of Uranus and Neptune circular again.
In some 50% of the initial models of Tsiganis and colleagues, Neptune and Uranus also exchange places. An exchange of Uranus and Neptune would be consistent with models of their formation in a disk that had a surface density that declined with distance from the Sun, which predicts that the masses of the planets should also decline with distance from the Sun.
Solar System features
Running dynamical models of the Solar System with different initial conditions for the simulated length of the history of the Solar System will produce the various populations of objects within the Solar System. As the initial conditions of the model are allowed to vary, each population will be more or less numerous, and will have particular orbital properties. Proving a model of the evolution of the early Solar System is difficult, since the evolution cannot be directly observed. However, the success of any dynamical model can be judged by comparing the population predictions from the simulations to astronomical observations of these populations. At the present time, computer models of the Solar System that are begun with the initial conditions of the Nice scenario best match many aspects of the observed Solar System.
The Late Heavy Bombardment
The crater record on the Moon and on the terrestrial planets is part of the main evidence for the Late Heavy Bombardment (LHB): an intensification in the number of impactors at about 600 million years after the Solar System's formation. In the Nice model, icy planetesimals are scattered onto planet-crossing orbits when the outer disc is disrupted by Uranus and Neptune, causing a sharp spike of impacts by icy objects. The migration of the outer planets also causes mean-motion and secular resonances to sweep through the inner Solar System. In the asteroid belt, these excite the eccentricities of the asteroids, driving them onto orbits that intersect those of the terrestrial planets, causing a more extended period of impacts by stony objects and removing roughly 90% of the belt's mass. The number of planetesimals that would reach the Moon is consistent with the crater record from the LHB. However, the orbital distribution of the remaining asteroids does not match observations. In the outer Solar System, the impacts onto Jupiter's moons are sufficient to trigger Ganymede's differentiation but not Callisto's. The impacts of icy planetesimals onto Saturn's inner moons are excessive, however, resulting in the vaporization of their ice.
Trojans and the asteroid belt
After Jupiter and Saturn cross the 2:1 resonance their combined gravitational influence destabilizes the Trojan co-orbital region allowing existing Trojan groups in the L4 and L5 Lagrange points of Jupiter and Neptune to escape and new objects from the outer planetesimal disk to be captured. Objects in the Trojan co-orbital region undergo libration, drifting cyclically relative to the L4 and L5 points. When Jupiter and Saturn are near but not in resonance, the location at which Jupiter passes Saturn relative to their perihelia circulates slowly. If the period of this circulation falls into resonance with the period at which the Trojans librate, then the libration range can increase until they escape. When this phenomenon occurs, the Trojan co-orbital region is "dynamically open" and objects can both escape and enter it. Primordial Trojans escape and a fraction of the numerous objects from the disrupted planetesimal disk temporarily inhabit it. Later when the separation of the Jupiter and Saturn orbits increases, the Trojan region becomes "dynamically closed", and the planetesimals in the Trojan region are captured, with many remaining today. The captured Trojans have a wide range of inclinations, which had not previously been understood, due to their repeated encounters with the giant planets. The libration angle and eccentricity of the simulated population also matches observations of the orbits of the Jupiter Trojans. This mechanism of the Nice model similarly generates the Neptune trojans.
A large number of planetesimals would have also been captured in Jupiter's mean-motion resonances as Jupiter migrated inward. Those that remained in a 3:2 resonance with Jupiter form the Hilda family. The eccentricity of other objects declined while they were in a resonance, and they escaped onto stable orbits in the outer asteroid belt, at distances greater than 2.6 AU, as the resonances moved inward. These captured objects would then have undergone collisional erosion, grinding the population away into progressively smaller fragments that can then be subject to the Yarkovsky effect, which causes small objects to drift into unstable resonances, and to Poynting–Robertson drag, which causes smaller grains to drift toward the Sun. These processes may have removed more than 90% of the original mass implanted into the asteroid belt. The size-frequency distribution of this simulated population following this erosion is in excellent agreement with observations. This agreement suggests that the Jupiter Trojans, the Hildas, and the spectral D-type asteroids, some of which are objects in the outer asteroid belt, are remnant planetesimals from this capture and erosion process. At least one dwarf planet may be a Kuiper-belt object that was captured by this process. A few recently discovered D-type asteroids have semi-major axes of less than 2.5 AU, which is closer than those that would be captured in the original Nice model.
Outer-system satellites
Any original populations of irregular satellites captured by traditional mechanisms, such as drag or impacts from the accretion disks, would be lost during the encounters between the planets at the time of global system instability. In the Nice model, the outer planets encounter large numbers of planetesimals after Uranus and Neptune enter and disrupt the planetesimal disk. A fraction of these planetesimals are captured by these planets via three-way interactions during encounters between planets. The probability for any planetesimal to be captured by an ice giant is relatively high, a few parts in ten million. These new satellites could be captured at almost any angle, so unlike the regular satellites of Saturn, Uranus, and Neptune, they do not necessarily orbit in the planets' equatorial planes. Some irregulars may have even been exchanged between planets. The resulting irregular orbits match well with the observed populations' semimajor axes, inclinations, and eccentricities. Subsequent collisions between these captured satellites may have created the suspected collisional families seen today. These collisions are also required to erode the population to the present size distribution.
Triton, the largest moon of Neptune, can be explained if it was captured in a three-body interaction involving the disruption of a binary planetoid. Such binary disruption would be more likely if Triton was the smaller member of the binary. However, Triton's capture would be more likely in the early Solar System when the gas disk would damp relative velocities, and binary exchange reactions would not in general have supplied the large number of small irregulars.
There were not enough interactions between Jupiter and the other planets to explain Jupiter's retinue of irregulars in the initial Nice model simulations that reproduced other aspects of the outer Solar System. This suggests either that a second mechanism was at work for that planet, or that the early simulations did not reproduce the evolution of the giant planets' orbits.
Formation of the Kuiper belt
The migration of the outer planets is also necessary to account for the existence and properties of the Solar System's outermost regions. Originally, the Kuiper belt was much denser and closer to the Sun, with an outer edge at approximately 30 AU. Its inner edge would have been just beyond the orbits of Uranus and Neptune, which were in turn far closer to the Sun when they formed (most likely in the range of 15–20 AU), and in opposite locations, with Uranus farther from the Sun than Neptune.
Gravitational encounters between the planets scatter Neptune outward into the planetesimal disk with a semi-major axis of ~28 AU and an eccentricity as high as 0.4. Neptune's high eccentricity causes its mean-motion resonances to overlap and orbits in the region between Neptune and its 2:1 mean-motion resonance to become chaotic. The orbits of objects between Neptune and the edge of the planetesimal disk at this time can evolve outward onto stable low-eccentricity orbits within this region. When Neptune's eccentricity is damped by dynamical friction, they become trapped on these orbits. These objects form a dynamically cold belt, since their inclinations remain small during the short time they interact with Neptune. Later, as Neptune migrates outward on a low-eccentricity orbit, objects that have been scattered outward are captured into its resonances and can have their eccentricities decline and their inclinations increase due to the Kozai mechanism, allowing them to escape onto stable higher-inclination orbits. Other objects remain captured in resonance, forming the plutinos and other resonant populations. These two populations are dynamically hot, with higher inclinations and eccentricities, owing to their being scattered outward and the longer period over which these objects interact with Neptune.
This evolution of Neptune's orbit produces both resonant and non-resonant populations, an outer edge at Neptune's 2:1 resonance, and a small mass relative to the original planetesimal disk. The excess of low-inclination plutinos in other models is avoided due to Neptune being scattered outward, leaving its 3:2 resonance beyond the original edge of the planetesimal disk. The differing initial locations, with the cold classical objects originating primarily from the outer disk, and capture processes, offer explanations for the bi-modal inclination distribution and its correlation with compositions. However, this evolution of Neptune's orbit fails to account for some of the characteristics of the orbital distribution. It predicts a greater average eccentricity in classical Kuiper belt object orbits than is observed (0.10–0.13 versus 0.07) and it does not produce enough higher-inclination objects. It also cannot explain the apparent complete absence of gray objects in the cold population, although it has been suggested that color differences arise in part from surface evolution processes rather than entirely from differences in primordial composition.
The shortage of the lowest-eccentricity objects predicted in the Nice model may indicate that the cold population formed in situ. In addition to their differing orbits the hot and cold populations have differing colors. The cold population is markedly redder than the hot, suggesting it has a different composition and formed in a different region. The cold population also includes a large number of binary objects with loosely bound orbits that would be unlikely to survive close encounter with Neptune. If the cold population formed at its current location, preserving it would require that Neptune's eccentricity remained small, or that its perihelion precessed rapidly due to a strong interaction between it and Uranus.
Scattered disc and Oort cloud
Objects scattered outward by Neptune onto orbits with semi-major axes greater than 50 AU can be captured in resonances, forming the resonant population of the scattered disc, or, if their eccentricities are reduced while in resonance, they can escape from the resonance onto stable orbits in the scattered disc while Neptune is migrating. When Neptune's eccentricity is large, its aphelion can reach well beyond its current orbit. Objects that attain perihelia close to or larger than Neptune's at this time can become detached from Neptune when its eccentricity is damped, reducing its aphelion and leaving them on stable orbits in the scattered disc.
Objects scattered outward by Uranus and Neptune onto larger orbits (roughly 5,000 AU) can have their perihelia raised by the galactic tide, detaching them from the influence of the planets and forming the inner Oort cloud with moderate inclinations. Others that reach even larger orbits can be perturbed by nearby stars, forming the outer Oort cloud with isotropic inclinations. Objects scattered by Jupiter and Saturn are typically ejected from the Solar System. Several percent of the initial planetesimal disc can be deposited in these reservoirs.
Modifications
The Nice model has undergone a number of modifications since its initial publication. Some changes reflect a better understanding of the formation of the Solar System, while others were made after significant differences between its predictions and observations were identified. Hydrodynamical models of the early Solar System indicate that the orbits of the giant planets would converge, resulting in their capture into a series of resonances. The slow approach of Jupiter and Saturn to the 2:1 resonance before the instability, and the smooth separation of their orbits afterwards, were also shown to alter the orbits of objects in the inner Solar System due to sweeping secular resonances. The first could result in the orbit of Mars crossing those of the other terrestrial planets, destabilizing the inner Solar System. Even if the first was avoided, the latter would still leave the orbits of the terrestrial planets with larger eccentricities. The orbital distribution of the asteroid belt would also be altered, leaving it with an excess of high-inclination objects. Other differences between predictions and observations included the capture of few irregular satellites by Jupiter, the vaporization of the ice from Saturn's inner moons, a shortage of high-inclination objects captured in the Kuiper belt, and the recent discovery of D-type asteroids in the inner asteroid belt.
The first modifications to the Nice model concerned the initial positions of the giant planets. Investigations of the behavior of planets orbiting in a gas disk using hydrodynamical models reveal that the giant planets would migrate toward the Sun. If the migration continued, it would have resulted in Jupiter orbiting close to the Sun, like recently discovered exoplanets known as hot Jupiters. Saturn's capture in a resonance with Jupiter prevents this, however, and the later capture of the other planets results in a quadruple resonant configuration with Jupiter and Saturn in their 3:2 resonance. A mechanism for a delayed disruption of this resonance was also proposed. Gravitational encounters with Pluto-massed objects in the outer disk would stir their orbits, causing an increase in eccentricities and, through a coupling of their orbits, an inward migration of the giant planets. During this inward migration, secular resonances would be crossed that altered the eccentricities of the planets' orbits and disrupted the quadruple resonance. A late instability similar to that of the original Nice model then follows. Unlike in the original Nice model, the timing of this instability is not sensitive to the planets' initial orbits or the distance between the outer planet and the planetesimal disk. The combination of resonant planetary orbits and the late instability triggered by these long-distance interactions was referred to as the Nice 2 model.
The second modification was the requirement that one of the ice giants encounters Jupiter, causing its semi-major axis to jump. In this jumping-Jupiter scenario, an ice giant encounters Saturn and is scattered inward onto a Jupiter-crossing orbit, causing Saturn's orbit to expand; it then encounters Jupiter and is scattered outward, causing Jupiter's orbit to shrink. This results in a step-wise separation of Jupiter's and Saturn's orbits instead of a smooth divergent migration. The step-wise separation of the orbits of Jupiter and Saturn avoids the slow sweeping of secular resonances across the inner Solar System that increases the eccentricities of the terrestrial planets and leaves the asteroid belt with an excessive ratio of high- to low-inclination objects. The encounters between the ice giant and Jupiter in this model allow Jupiter to acquire its own irregular satellites. Jupiter trojans are also captured following these encounters when Jupiter's semi-major axis jumps and, if the ice giant passes through one of the libration points scattering trojans, one population is depleted relative to the other. The faster traverse of the secular resonances across the asteroid belt limits the loss of asteroids from its core. Most of the rocky impactors of the Late Heavy Bombardment instead originate from an inner extension of the asteroid belt that is disrupted when the giant planets reach their current positions, with a remnant remaining as the Hungaria asteroids. Some D-type asteroids are embedded in the inner asteroid belt, within 2.5 AU, during encounters with the ice giant when it is crossing the asteroid belt.
Five-planet Nice model
The frequent ejection in simulations of the ice giant encountering Jupiter has led David Nesvorný and others to hypothesize an early Solar System with five giant planets, one of which was ejected during the instability. This five-planet Nice model begins with the giant planets in a 3:2, 3:2, 2:1, 3:2 resonant chain with a planetesimal disk orbiting beyond them. Following the breaking of the resonant chain Neptune first migrates outward into the planetesimal disk reaching 28 AU before encounters between planets begin. This initial migration reduces the mass of the outer disk enabling Jupiter's eccentricity to be preserved and produces a Kuiper belt with an inclination distribution that matches observations if 20 Earth-masses remained in the planetesimal disk when that migration began. Neptune's eccentricity can remain small during the instability since it only encounters the ejected ice giant, allowing an in situ cold-classical belt to be preserved. The lower mass planetesimal belt in combination with the excitation of inclinations and eccentricities by the Pluto-massed objects also significantly reduce the loss of ice by Saturn's inner moons. The combination of a late breaking of the resonance chain and a migration of Neptune to 28 AU before the instability is unlikely with the Nice 2 model. This gap may be bridged by a slow dust-driven migration over several million years following an early escape from resonance.
A recent study found that the five-planet Nice model has a statistically small likelihood of reproducing the orbits of the terrestrial planets. Although this implies that the instability occurred before the formation of the terrestrial planets and could not be the source of the Late Heavy Bombardment, the advantage of an early instability is reduced by the sizable jumps in the semi-major axis of Jupiter and Saturn required to preserve the asteroid belt.
See also
Formation and evolution of the Solar System
Grand tack hypothesis
Jumping-Jupiter scenario
Late Heavy Bombardment
Planetary migration
References
External links
Animation of the Nice model
Solving solar system quandaries is simple: Just flip-flop the position of Uranus and Neptune
Solar System dynamic theories
2005 introductions
2005 in science
21st century in Nice
Astronomy in France |
22260290 | https://en.wikipedia.org/wiki/Floating%20licensing | Floating licensing | Floating licensing, also known as concurrent licensing or network licensing, is a software licensing approach in which a limited number of licenses for a software application are shared among a larger number of users over time. When an authorized user wishes to run the application, they request a license from a central license server. If a license is available, the license server allows the application to run. When they finish using the application, or when the allowed license period expires, the license is reclaimed by the license server and made available to other authorized users.
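The checkout and reclamation behaviour described above can be sketched as a counted set of leases. The Python class below is an illustrative sketch, not any vendor's implementation, and all names in it are made up; it hands out at most a fixed number of concurrent license tokens and reclaims a seat either when the client checks the token back in or when its lease period expires.
    import threading, time, uuid

    class FloatingLicenseServer:
        def __init__(self, seat_count, lease_seconds=3600):
            self.seat_count = seat_count        # number of licenses purchased
            self.lease_seconds = lease_seconds  # allowed license period per checkout
            self.leases = {}                    # token -> expiry timestamp
            self.lock = threading.Lock()

        def checkout(self):
            now = time.time()
            with self.lock:
                # Reclaim seats whose lease period has elapsed
                self.leases = {t: exp for t, exp in self.leases.items() if exp > now}
                if len(self.leases) >= self.seat_count:
                    return None                 # all seats in use; deny or queue the request
                token = uuid.uuid4().hex
                self.leases[token] = now + self.lease_seconds
                return token

        def checkin(self, token):
            with self.lock:
                self.leases.pop(token, None)    # seat becomes available to other users

    # Two seats shared among three users: the third request is denied until a seat is returned
    server = FloatingLicenseServer(seat_count=2)
    a, b, c = server.checkout(), server.checkout(), server.checkout()
    print(a is not None, b is not None, c is None)   # True True True
    server.checkin(a)
    print(server.checkout() is not None)             # True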
The license server can manage licenses over a local area network, an intranet, virtual private network, or the Internet.
Floating licensing is often used for high-value applications in corporate environments, such as electronic design automation or engineering tools. However, its use is broadly expanding throughout the software industry.
An on-premises license server used to be the only way to enforce floating licensing models. A license server was required at each end user's location, and each computer or device in the network needed to connect to it. License files were bound to the host ID of the license server, usually its MAC (Ethernet) address, but could be made available to any client computer in the network, with the concurrent-user limit enforced by the license server. The number of licenses registered and installed on the license server limits the number of concurrent users.
Although on-premises license servers remain the traditional method of implementing floating licensing, modern implementations also make use of cloud-based and plug-and-play license servers.
See also
Concurrent user
Software metering
License manager
License borrowing
Node-locked licensing
Software license server
Software licenses
System administration
27684 | https://en.wikipedia.org/wiki/Steampunk | Steampunk | Steampunk is a subgenre of science fiction that incorporates retrofuturistic technology and aesthetics inspired by 19th-century industrial steam-powered machinery. Steampunk works are often set in an alternative history of the Victorian era or the American "Wild West", where steam power remains in mainstream use, or in a fantasy world that similarly employs steam power.
Steampunk most recognizably features anachronistic technologies or retrofuturistic inventions as people in the 19th century might have envisioned them — distinguishing it from Neo-Victorianism — and is likewise rooted in the era's perspective on fashion, culture, architectural style, and art. Such technologies may include fictional machines like those found in the works of H. G. Wells and Jules Verne. Other examples of steampunk contain alternative-history-style presentations of such technology as steam cannons, lighter-than-air airships, analog computers, or such digital mechanical computers as Charles Babbage's Analytical Engine.
Steampunk may also incorporate additional elements from the genres of fantasy, horror, historical fiction, alternate history, or other branches of speculative fiction, making it often a hybrid genre. As a form of speculative fiction, it explores alternative futures or pasts but can also address real-world social issues. The first known appearance of the term steampunk was in 1987, though it now retroactively refers to many works of fiction created as far back as the 1950s or earlier. A popular subgenre is Japanese steampunk, consisting of steampunk-themed manga and anime, with steampunk elements having appeared in mainstream manga since the 1940s.
Steampunk also refers to any of the artistic styles, clothing fashions, or subcultures that have developed from the aesthetics of steampunk fiction, Victorian-era fiction, art nouveau design, and films from the mid-20th century. Various modern utilitarian objects have been modded by individual artisans into a pseudo-Victorian mechanical "steampunk" style, and a number of visual and musical artists have been described as steampunk.
History
Precursors
Steampunk is influenced by and often adopts the style of the 19th-century scientific romances of Jules Verne, H. G. Wells, Mary Shelley, and Edward S. Ellis's The Steam Man of the Prairies. Several more modern works of art and fiction significant to the development of the genre were produced before the genre had a name. Titus Alone (1959), by Mervyn Peake, is widely regarded by scholars as the first novel in the genre proper, while others point to Michael Moorcock's 1971 novel The Warlord of the Air, which was heavily influenced by Peake's work. The film Brazil (1985) was an early cinematic influence, although it can also be considered a precursor to the steampunk offshoot dieselpunk. The Adventures of Luther Arkwright was an early (1970s) comic version of the Moorcock-style mover between timestreams.
In fine art, Remedios Varo's paintings combine elements of Victorian dress, fantasy, and technofantasy imagery. In television, one of the earliest manifestations of the steampunk ethos in the mainstream media was the CBS television series The Wild Wild West (1965–69), which inspired the later film.
Origin of the term
Although many works now considered seminal to the genre were published in the 1960s and 1970s, the term steampunk originated largely in the 1980s as a tongue-in-cheek variant of cyberpunk. It was coined by science fiction author K. W. Jeter, who was trying to find a general term for works by Tim Powers (The Anubis Gates, 1983), James Blaylock (Homunculus, 1986), and himself (Morlock Night, 1979, and Infernal Devices, 1987) — all of which took place in a 19th-century (usually Victorian) setting and imitated conventions of such actual Victorian speculative fiction as H. G. Wells' The Time Machine. In a letter to science fiction magazine Locus, printed in the April 1987 issue, Jeter wrote:
Modern steampunk
While Jeter's Morlock Night and Infernal Devices, Powers' The Anubis Gates, and Blaylock's Lord Kelvin's Machine were the first novels to which Jeter's neologism would be applied, the three authors gave the term little thought at the time. They were far from the first modern science fiction writers to speculate on the development of steam-based technology or alternative histories. Keith Laumer's Worlds of the Imperium (1962) and Ronald W. Clark's Queen Victoria's Bomb (1967) apply modern speculation to past-age technology and society. Michael Moorcock's Warlord of the Air (1971) is another early example. Harry Harrison's novel A Transatlantic Tunnel, Hurrah! (1973) portrays Britain in an alternative 1973, full of atomic locomotives, coal-powered flying boats, ornate submarines, and Victorian dialogue. The Adventures of Luther Arkwright (mid-1970s) was one of the first steampunk comics. In February 1980, Richard A. Lupoff and Steve Stiles published the first "chapter" of their 10-part comic strip The Adventures of Professor Thintwhistle and His Incredible Aether Flyer. In 2004, one anonymous author described steampunk as "Colonizing the Past so we can dream the future."
The first use of the word "steampunk" in a title was in Paul Di Filippo's 1995 Steampunk Trilogy, consisting of three short novels: "Victoria", "Hottentots", and "Walt and Emily", which, respectively, imagine the replacement of Queen Victoria by a human/newt clone, an invasion of Massachusetts by Lovecraftian monsters, and a love affair between Walt Whitman and Emily Dickinson.
Japanese steampunk
Japanese steampunk consists of steampunk manga comics and anime productions from Japan. Steampunk elements have consistently appeared in mainstream manga since the 1940s, dating back to Osamu Tezuka's epic science-fiction trilogy consisting of Lost World (1948), Metropolis (1949) and Nextworld (1951). The steampunk elements found in manga eventually made their way into mainstream anime productions starting in the 1970s, including television shows such as Leiji Matsumoto's Space Battleship Yamato (1974) and the 1979 anime adaptation of Riyoko Ikeda's manga Rose of Versailles (1972). Influenced by 19th-century European authors such as Jules Verne, steampunk anime and manga arose from a Japanese fascination with an imaginary fantastical version of old Industrial Europe, linked to a phenomenon called akogare no Pari ("the Paris of our dreams"), comparable to the West's fascination with an "exotic" East.
The most influential steampunk animator was Hayao Miyazaki, who was creating steampunk anime since the 1970s, starting with the television show Future Boy Conan (1978). His manga Nausicaä of the Valley of the Wind (1982) and its 1984 anime film adaptation also contained steampunk elements. Miyazaki's most influential steampunk production was the Studio Ghibli anime film Laputa: Castle in the Sky (1986), which became a major milestone in the genre and has been described by The Steampunk Bible as "one of the first modern steampunk classics." Archetypal steampunk elements in Laputa include airships, air pirates, steam-powered robots, and a view of steam power as a limitless but potentially dangerous source of power.
The success of Laputa inspired Hideaki Anno and Studio Gainax to create their first hit production, Nadia: The Secret of Blue Water (1990), a steampunk anime show which loosely adapts elements from Verne's Twenty Thousand Leagues Under the Sea, with Captain Nemo making an appearance. Based on a concept by Miyazaki, Nadia was influential on later steampunk anime such as Katsuhiro Otomo's anime film Steamboy (2004). Disney's animated steampunk film Atlantis: The Lost Empire (2001) was influenced by anime, particularly Miyazaki's works and possibly Nadia. Other popular Japanese steampunk works include Miyazaki's Studio Ghibli anime films Porco Rosso (1992) and Howl's Moving Castle (2004), Sega's video game and anime franchise Sakura Wars (1996) which is set in a steampunk version of Meiji/Taishō era Japan, and Square Enix's manga and anime franchise Fullmetal Alchemist (2001).
Relationships to retrofuturism, DIY craft and making
Steampunk has often been confused with retrofuturism. Indeed, both sensibilities recall "the older but still modern eras in which technological change seemed to anticipate a better world, one remembered as relatively innocent of industrial decline." For some scholars, retrofuturism is considered a strand of steampunk, one that looks at alternatives to historical imagination and is usually created with the same kinds of social protagonists and written for the same type of audiences.
One of steampunk's most significant contributions is the way in which it mixes digital media with traditional handmade art forms. As scholars Rachel Bowser and Brian Croxall put it, "the tinkering and tinker-able technologies within steampunk invite us to roll up our sleeves and get to work re-shaping our contemporary world." In this respect, steampunk bears more in common with DIY craft and making.
Art, entertainment, and media
Art and design
Many of the visualisations of steampunk have their origins with, among others, Walt Disney's film 20,000 Leagues Under the Sea (1954), including the design of the story's submarine the Nautilus, its interiors, and the crew's underwater gear; and George Pal's film The Time Machine (1960), especially the design of the time machine itself. This theme is also carried over to Six Flags Magic Mountain and Disney parks, in the themed area the "Screampunk District" at Six Flags Magic Mountain and in the designs of The Mysterious Island section of Tokyo DisneySea theme park and Disneyland Paris' Discoveryland area.
Aspects of steampunk design emphasise a balance between form and function. In this it is like the Arts and Crafts Movement. But John Ruskin, William Morris, and the other reformers in the late nineteenth century rejected machines and industrial production. On the other hand, steampunk enthusiasts present a "non-luddite critique of technology".
Various modern utilitarian objects have been modified by enthusiasts into a pseudo-Victorian mechanical "steampunk" style. Examples include computer keyboards and electric guitars. The goal of such redesigns is to employ appropriate materials (such as polished brass, iron, wood, and leather) with design elements and craftsmanship consistent with the Victorian era, rejecting the aesthetic of industrial design.
In 1994, the Paris Metro station at Arts et Métiers was redesigned by Belgian artist Francois Schuiten in steampunk style, to honor the works of Jules Verne. The station is reminiscent of a submarine, sheathed in brass with giant cogs in the ceiling and portholes that look out onto fanciful scenes.
The artist group Kinetic Steam Works brought a working steam engine to the Burning Man festival in 2006 and 2007. The group's founding member, Sean Orlando, created a Steampunk Tree House (in association with a group of people who would later form the Five Ton Crane Arts Group) that has been displayed at a number of festivals. The Steampunk Tree House is now permanently installed at the Dogfish Head Brewery in Milton, Delaware.
The Neverwas Haul is a three-story, self-propelled mobile art vehicle built to resemble a Victorian house on wheels. Designed by Shannon O’Hare, it was built by volunteers in 2006 and presented at the Burning Man festival from 2006 through 2015. When fully built, the Haul propelled itself at a top speed of 5 miles per hour and required a crew of ten people to operate safely. Currently, the Neverwas Haul makes her home at Obtainium Works, an "art car factory" in Vallejo, CA, owned by O’Hare and home to several other self-styled "contraptionists".
In May–June 2008, multimedia artist and sculptor Paul St George exhibited outdoor interactive video installations linking London and Brooklyn, New York, in a Victorian era-styled telectroscope. Utilizing this device, New York promoter Evelyn Kriete organised a transatlantic wave between steampunk enthusiasts from both cities, prior to White Mischief's Around the World in 80 Days steampunk-themed event.
In 2009, for Questacon, artist Tim Wetherell created a large wall piece that represented the concept of the clockwork universe. This steel artwork contains moving gears, a working clock, and a movie of the moon's terminator in action. The 3D moon movie was created by Antony Williams.
Steampunk became a common descriptor for homemade objects sold on the craft network Etsy between 2009 and 2011, though many of the objects and fashions bear little resemblance to earlier established descriptions of steampunk. Thus the craft network may not strike observers as "sufficiently steampunk" to warrant its use of the term. Comedian April Winchell, author of the book Regretsy: Where DIY meets WTF, cataloged some of the most egregious and humorous examples on her website "Regretsy". The blog was popular among steampunks and even inspired a music video that went viral in the community and was acclaimed by steampunk "notables".
From October 2009 through February 2010, the Museum of the History of Science, Oxford, hosted the first major exhibition of steampunk art objects, curated and developed by New York artist and designer Art Donovan, who also exhibited his own "electro-futuristic" lighting sculptures, and presented by Dr. Jim Bennett, museum director. From redesigned practical items to fantastical contraptions, this exhibition showcased the work of eighteen steampunk artists from across the globe. The exhibition proved to be the most successful and highly attended in the museum's history and attracted more than eighty thousand visitors. The event was detailed in the official artist's journal The Art of Steampunk, by curator Donovan.
In November 2010, The Libratory Steampunk Art Gallery was opened by Damien McNamara in Oamaru, New Zealand. Created from papier-mâché to resemble a large cave and filled with industrial equipment from yesteryear, rayguns, and general steampunk quirks, its purpose is to provide a place for steampunkers in the region to display artwork for sale all year long. A year later, a more permanent gallery, Steampunk HQ, was opened in the former Meeks Grain Elevator Building across the road from The Woolstore, and has since become a notable tourist attraction for Oamaru.
In 2012, the Mobilis in Mobili: An Exhibition of Steampunk Art and Appliance made its debut. Originally located at New York City's Wooster Street Social Club (itself the subject of the television series NY Ink), the exhibit featured working steampunk tattoo systems designed, with different approaches, by Bruce Rosenbaum of ModVic and owner of the Steampunk House, Joey "Dr. Grymm" Marsocci, and Christopher Conte. "[B]icycles, cell phones, guitars, timepieces and entertainment systems" rounded out the display. The opening-night exhibition featured a live performance by the steampunk band Frenchy and the Punk.
The stills at The Oxford Artisan Distillery are nicknamed "Nautilus" and "Nemo", named after the submarine and its captain in the Jules Verne 1870 science fiction novel Twenty Thousand Leagues Under the Seas. They were built in copper by South Devon Railway Engineering using a steampunk style.
Fashion
Steampunk fashion has no set guidelines but tends to synthesize modern styles with influences from the Victorian era. Such influences may include bustles, corsets, gowns, and petticoats; suits with waistcoats, coats, top hats and bowler hats (themselves originating in 1850 England), tailcoats and spats; or military-inspired garments. Steampunk-influenced outfits are usually accented with several technological and "period" accessories: timepieces, parasols, flying/driving goggles, and ray guns. Modern accessories like cell phones or music players can be found in steampunk outfits, after being modified to give them the appearance of Victorian-era objects. Post-apocalyptic elements, such as gas masks, ragged clothing, and tribal motifs, can also be included. Aspects of steampunk fashion have been anticipated by mainstream high fashion, the Lolita and aristocrat styles, neo-Victorianism, and the romantic goth subculture.
In 2005, Kate Lambert, known as "Kato", founded the first steampunk clothing company, "Steampunk Couture", mixing Victorian and post-apocalyptic influences. In 2013, IBM predicted, based on an analysis of more than a half million public posts on message boards, blogs, social media sites, and news sources, "that 'steampunk,' a subgenre inspired by the clothing, technology and social mores of Victorian society, will be a major trend to bubble up and take hold of the retail industry". Indeed, high fashion lines such as Prada, Dolce & Gabbana, Versace, Chanel, and Christian Dior had already been introducing steampunk styles on the fashion runways.
In episode 7 of Lifetime's Under the Gunn reality series, contestants were challenged to create avant-garde "steampunk chic" looks. America's Next Top Model tackled steampunk fashion in a 2012 episode where models competed in a steampunk themed photo shoot, posing in front of a steam train while holding a live owl.
Literature
The educational book Elementary BASIC – Learning to Program Your Computer in BASIC with Sherlock Holmes (1981), by Henry Singer and Andrew Ledgar, may have been the first fictional work to depict the use of Charles Babbage's Analytical Engine in an adventure story. The instructional book, aimed at young programming students, depicts Holmes using the engine as an aid in his investigations, and lists programs that perform simple data processing tasks required to solve the fictional cases. The book even describes a device that allows the engine to be used remotely, over telegraph lines, as a possible enhancement to Babbage's machine. Companion volumes—Elementary Pascal – Learning to Program Your Computer in Pascal with Sherlock Holmes and From Baker Street to Binary – An Introduction to Computers and Computer Programming with Sherlock Holmes—were also written.
In 1988, the first version of the science fiction tabletop role-playing game Space: 1889 was published. The game is set in an alternative history in which certain now discredited Victorian scientific theories were probable and led to new technologies. Contributing authors included Frank Chadwick, Loren Wiseman, and Marcus Rowland.
William Gibson and Bruce Sterling's novel The Difference Engine (1990) is often credited with bringing about widespread awareness of steampunk. This novel applies the principles of Gibson and Sterling's cyberpunk writings to an alternative Victorian era where Ada Lovelace and Charles Babbage's proposed steam-powered mechanical computer, which Babbage called a difference engine (a later, more general-purpose version was known as an Analytical Engine), was actually built, and led to the dawn of the information age more than a century "ahead of schedule". This setting was different from most steampunk settings in that it takes a dim and dark view of this future, rather than the more prevalent utopian versions.
Nick Gevers's original anthology Extraordinary Engines (2008) features newer steampunk stories by some of the genre's writers, as well as other science fiction and fantasy writers experimenting with neo-Victorian conventions. A retrospective reprint anthology of steampunk fiction was released, also in 2008, by Tachyon Publications. Edited by Ann and Jeff VanderMeer and appropriately entitled Steampunk, it is a collection of stories by James Blaylock, whose "Narbondo" trilogy is typically considered steampunk; Jay Lake, author of the novel Mainspring, sometimes labeled "clockpunk"; the aforementioned Michael Moorcock; as well as Jess Nevins, known for his annotations to The League of Extraordinary Gentlemen (first published in 1999).
Younger readers have also been targeted by steampunk themes, by authors such as Philip Reeve and Scott Westerfeld. Reeve's quartet Mortal Engines is set far in Earth's future where giant moving cities consume each other in a battle for resources, a concept Reeve coined as Municipal Darwinism. Westerfeld's Leviathan trilogy is set during an alternate First World War fought between the "clankers" (Central Powers), who use steam technology, and "darwinists" (Allied Powers), who use genetically engineered creatures instead of machines.
"Mash-ups" are also becoming increasingly popular in books aimed at younger readers, mixing steampunk with other genres. Suzanne Lazear's Aether Chronicles series mixes steampunk with faeries, and The Unnaturalists, by Tiffany Trent, combines steampunk with mythological creatures and alternate history.
While most of the original steampunk works had a historical setting, later works often place steampunk elements in a fantasy world with little relation to any specific historic era. Historical steampunk tends to be science fiction that presents an alternate history; it also contains real locales and persons from history with alternative fantasy technology. "Fantasy-world steampunk", such as China Miéville's Perdido Street Station, Alan Campbell's Scar Night, and Stephen Hunt's Jackelian novels, on the other hand, presents steampunk in a completely imaginary fantasy realm, often populated by legendary creatures coexisting with steam-era and other anachronistic technologies. However, the works of China Miéville and similar authors are sometimes referred to as belonging to the "New Weird" rather than steampunk.
Self-described author of "far-fetched fiction" Robert Rankin has incorporated elements of steampunk into narrative worlds that are both Victorian and re-imagined contemporary. In 2009, he was made a Fellow of the Victorian Steampunk Society.
The comic book series Hellboy, created by Mike Mignola, and the two Hellboy films featuring Ron Perlman and directed by Guillermo del Toro, all have steampunk elements. In the comic book and the first (2004) film, Karl Ruprecht Kroenen is a Nazi SS scientist who has an addiction to having himself surgically altered, and who has many mechanical prostheses, including a clockwork heart. The character Johann Krauss is featured in the comic and in the second film, Hellboy II: The Golden Army (2008), as an ectoplasmic medium (a gaseous form in a partly mechanical suit). This second film also features the Golden Army itself, which is a collection of 4,900 mechanical steampunk warriors.
Steampunk settings
Alternative world
Since the 1990s, the application of the steampunk label has expanded beyond works set in recognisable historical periods, to works set in fantasy worlds that rely heavily on steam- or spring-powered technology. One of the earliest short stories relying on steam-powered flying machines is "The Aerial Burglar" of 1844. An example from juvenile fiction is The Edge Chronicles by Paul Stewart and Chris Riddell.
Fantasy steampunk settings abound in tabletop and computer role-playing games. Notable examples include Skies of Arcadia, Rise of Nations: Rise of Legends, and Arcanum: Of Steamworks and Magick Obscura.
One of the first steampunk novels set in a Middle-earth-like world was the Forest of Boland Light Railway by BB, about gnomes who build a steam locomotive. 50 years later, Terry Pratchett wrote the Discworld novel Raising Steam, about the ongoing industrial revolution and railway mania in Ankh-Morpork.
The gnomes and goblins in World of Warcraft also have technological societies that could be described as steampunk, as they are vastly ahead of the technologies of men, but still run on steam and mechanical power.
The Dwarves of the Elder Scrolls series, described therein as a race of Elves called the Dwemer, also use steam-powered machinery, with gigantic brass-like gears, throughout their underground cities. However, magical means are used to keep ancient devices in motion despite the Dwemer's ancient disappearance.
The 1998 game Thief: The Dark Project, as well as the other sequels including its 2014 reboot, feature heavy steampunk-inspired architecture, setting, and technology.
Amidst the historical and fantasy subgenres of steampunk is a type that takes place in a hypothetical future or a fantasy equivalent of our future involving the domination of steampunk-style technology and aesthetics. Examples include Jean-Pierre Jeunet and Marc Caro's The City of Lost Children (1995), Turn A Gundam (1999–2000), Trigun, and Disney's film Treasure Planet (2002). In 2011, musician Thomas Dolby heralded his return to music after a 20-year hiatus with an online steampunk alternate fantasy world called the Floating City, to promote his album A Map of the Floating City.
American West
Another setting is "Western" steampunk, which overlaps with both the Weird West and science fiction Western subgenres. One of the earliest steampunk books set in America was The Steam Man of the Prairies by Edward S. Ellis. Recent examples include the TV show and the movie adaptation Wild Wild West, the Italian comics about Magico Vento, Devon Monk's Dead Iron, and the Big Thunder Mountain Railroad in Disneyland-style Disney Parks around the world.
Fantasy and horror
Kaja Foglio introduced the term "Gaslight Romance", also known as gaslamp fantasy, which John Clute and John Grant define as "steampunk stories ... most commonly set in a romanticised, smoky, 19th-century London, as are Gaslight Romances. But the latter category focuses nostalgically on icons from the late years of that century and the early years of the 20th century—on Dracula, Jekyll and Hyde, Jack the Ripper, Sherlock Holmes and even Tarzan—and can normally be understood as combining supernatural fiction and recursive fantasy, though some gaslight romances can be read as fantasies of history." Author/artist James Richardson-Brown coined the term steamgoth to refer to steampunk expressions of fantasy and horror with a "darker" bent.
Post-apocalyptic
Mary Shelley's The Last Man, set near the end of the 21st century after a plague had brought down civilization, was probably the ancestor of post-apocalyptic steampunk literature. Post-apocalyptic steampunk is set in a world where some cataclysm has precipitated the fall of civilization and steam power is once again ascendant, such as in Hayao Miyazaki's post-apocalyptic anime Future Boy Conan (1978, loosely based on Alexander Key's The Incredible Tide (1970)), where a war fought with superweapons has devastated the planet. Robert Brown's novel, The Wrath of Fate (as well as much of Abney Park's music) is set in a Victorianesque world where an apocalypse was set into motion by a time-traveling mishap. Cherie Priest's Boneshaker series is set in a world where a zombie apocalypse happened during the Civil War era. The Peshawar Lancers by S.M. Stirling is set in a post-apocalyptic future in which a meteor shower in 1878 caused the collapse of Industrialized civilization. The movie 9 (which might be better classified as "stitchpunk" but was largely influenced by steampunk) is also set in a post-apocalyptic world after a self-aware war machine ran amok. Steampunk Magazine even published a book called A Steampunk's Guide to the Apocalypse, about how steampunks could survive should such a thing actually happen.
Victorian
In general, this category includes any recent science fiction that takes place in a recognizable historical period (sometimes an alternate history version of an actual historical period) in which the Industrial Revolution has already begun, but electricity is not yet widespread, "usually Britain of the early to mid-nineteenth century or the fantasized Wild West-era United States", with an emphasis on steam- or spring-propelled gadgets. The most common historical steampunk settings are the Victorian and Edwardian eras, though some in this "Victorian steampunk" category are set as early as the beginning of the Industrial Revolution and as late as the end of World War I.
Some examples of this type include the novel The Difference Engine, the comic book series League of Extraordinary Gentlemen, the Disney animated film Atlantis: The Lost Empire, Scott Westerfeld's Leviathan trilogy, and the roleplaying game Space: 1889. The anime film Steamboy (2004) is another example of Victorian steampunk, taking place in an alternate 1866 where steam technology is far more advanced than reality. Some, such as the comic series Girl Genius, have their own unique times and places despite partaking heavily of the flavor of historic settings. Other comic series are set in a more familiar London, as in the Victorian Undead, which has Sherlock Holmes, Doctor Watson, and others taking on zombies, Doctor Jekyll and Mister Hyde, and Count Dracula, with advanced weapons and devices. Another example of this genre is the Tunnels novels by Roderick Gordon and Brian Williams. These are set in the modern day but with an underground Victorian world that is working to overthrow the world above. Detective graphic novel series Lady Mechanika is set in an alternative Victorian-like world.
Karel Zeman's film The Fabulous World of Jules Verne (1958) is a very early example of cinematic steampunk. Based on Jules Verne's novels, Zeman's film imagines a past that never was. Other early examples of historical steampunk in cinema include Hayao Miyazaki's anime films Laputa: Castle in the Sky (1986) and Howl's Moving Castle (2004), which contain many archetypal anachronisms characteristic of the steampunk genre.
"Historical" steampunk usually leans more towards science fiction than fantasy, but a number of historical steampunk stories have incorporated magical elements as well. For example, Morlock Night, written by K. W. Jeter, revolves around an attempt by the wizard Merlin to raise King Arthur to save the Britain of 1892 from an invasion of Morlocks from the future.
Paul Guinan's Boilerplate, a "biography" of a robot in the late 19th century, began as a website that garnered international press coverage when people began believing that Photoshop images of the robot with historic personages were real. The site was adapted into the illustrated hardbound book Boilerplate: History's Mechanical Marvel, which was published by Abrams in October 2009. Because the story was not set in an alternative history, and in fact contained accurate information about the Victorian era, some booksellers referred to the tome as "historical steampunk".
Asian (silkpunk)
Fictional settings inspired by Asian rather than Western history have been called "silkpunk". The term appears to originate with the author Ken Liu, who defined it as "a blend of science fiction and fantasy [that] draws inspiration from classical East Asian antiquity", with a "technology vocabulary (...) based on organic materials historically important to East Asia (bamboo, paper, silk) and seafaring cultures of the Pacific (coconut, feathers, coral)", rather than the brass and leather associated with steampunk. Liu used the term to describe his Dandelion Dynasty series, which began in 2015. Other works described as silkpunk include Neon Yang's Tensorate series of novellas, which began in 2017. Lyndsie Manusos of Book Riot has argued that the genre does "not fit in a direct analogy with steampunk. Silkpunk is technology and poetics. It is engineering and language."
Music
Steampunk music is very broadly defined. Abney Park's lead singer Robert Brown defined it as "mixing Victorian elements and modern elements". There is a broad range of musical influences that make up the Steampunk sound, from industrial dance and world music to folk rock, dark cabaret to straightforward punk, Carnatic to industrial, hip-hop to opera (and even industrial hip-hop opera), darkwave to progressive rock, barbershop to big band.
Joshua Pfeiffer (of Vernian Process) is quoted as saying, "As for Paul Roland, if anyone deserves credit for spearheading Steampunk music, it is him. He was one of the inspirations I had in starting my project. He was writing songs about the first attempt at manned flight, and an Edwardian airship raid in the mid-80s long before almost anyone else..." Thomas Dolby is also considered one of the early pioneers of retro-futurist (i.e., Steampunk and Dieselpunk) music. Amanda Palmer was once quoted as saying, "Thomas Dolby is to Steampunk what Iggy Pop was to Punk!"
Steampunk has also appeared in the work of musicians who do not specifically identify as Steampunk. For example, the music video of "Turn Me On", by David Guetta and featuring Nicki Minaj, takes place in a Steampunk universe where Guetta creates human droids. Another music video is "The Ballad of Mona Lisa", by Panic! at the Disco, which has a distinct Victorian Steampunk theme. A continuation of this theme has in fact been used throughout the 2011 album Vices & Virtues, in the music videos, album art, and tour set and costumes. In addition, the album Clockwork Angels (2012) and its supporting tour by progressive rock band Rush contain lyrics, themes, and imagery based around Steampunk. Similarly, Abney Park headlined the first "Steamstock" outdoor steampunk music festival in Richmond, California, which also featured Thomas Dolby, Frenchy and the Punk, Lee Presson and the Nails, Vernian Process, and others.
The music video for the Lindsey Stirling song "Roundtable Rival", has a Western Steampunk setting.
Television and films
The Fabulous World of Jules Verne (1958) and The Fabulous Baron Munchausen (1962), both directed by Karel Zeman, have steampunk elements. The 1965 television series The Wild Wild West, as well as the 1999 film of the same name, features many elements of advanced steam-powered technology set in the Wild West time period of the United States. Two Years' Vacation (or The Stolen Airship) (1967) directed by Karel Zeman contains steampunk elements.
The BBC series Doctor Who also incorporates steampunk elements. During season 14 of the show (in 1976), the formerly futuristic-looking interior set was replaced with a Victorian-styled wood-panel and brass affair. In the 1996 American co-production, the TARDIS interior was redesigned to resemble an almost Victorian library, with the central control console made up of an eclectic array of anachronistic objects. Modified and streamlined for the 2005 revival of the series, the TARDIS console continued to incorporate steampunk elements, including a Victorian typewriter and gramophone. Several storylines can be classed as steampunk, for example The Evil of the Daleks (1967), wherein Victorian scientists invent a time travel device. Dinner for Adele (1977), directed by Oldřich Lipský, involves steampunk contraptions. In the 1979 film Time After Time, Herbert George "H.G." Wells follows a surgeon named John Leslie Stevenson, who is suspected of being Jack the Ripper, into the future; each separately uses Wells's time machine to travel through time.
The Mysterious Castle in the Carpathians (1981), directed by Oldřich Lipský, contains steampunk elements. The 1982 American TV series Q.E.D. is set in Edwardian England and stars Sam Waterston as Professor Quentin Everett Deverill; the series title derives from his initials, by which he is primarily known, which also stand for the Latin phrase quod erat demonstrandum ("which was to be demonstrated"). The Professor is an inventor and scientific detective in the mold of Sherlock Holmes. The plot of the Soviet film Kin-dza-dza! (1986) centers on a desert planet, depleted of its resources, where an impoverished dog-eat-dog society uses steampunk machines, the movements and functions of which defy earthly logic.
In making his 1986 Japanese film Castle in the Sky, Hayao Miyazaki was heavily influenced by steampunk culture. The film features various airships and steam-powered contraptions, as well as a mysterious island that floats through the sky, kept aloft not through magic as in most stories, but by harnessing the physical properties of a rare crystal (analogous to the lodestone used in the Laputa of Swift's Gulliver's Travels), augmented by massive propellers, as befits the Victorian motif. The first Wallace & Gromit animation, A Grand Day Out (1989), features a space rocket in the steampunk style. The Adventures of Brisco County, Jr., a 1993 Fox Network science fiction Western set in the 1890s, features elements of steampunk as represented by the character Professor Wickwire, whose inventions were described as "the coming thing". The short-lived 1995 UPN TV show Legend, set in 1876 Arizona, features such classic inventions as a steam-driven "quadrovelocipede", a trigoggle and night-vision goggles (à la teslapunk), and stars John de Lancie as a thinly disguised Nikola Tesla.
Alan Moore and Kevin O'Neill's 1999 graphic novel series The League of Extraordinary Gentlemen (and its 2003 film adaptation) greatly popularised the steampunk genre.
Steamboy (2004) is a Japanese animated action film directed and co-written by Katsuhiro Otomo (Akira). It is a retro science-fiction epic set in a Steampunk Victorian England. It features steamboats, trains, airships and inventors. The 2004 film Lemony Snicket's A Series of Unfortunate Events contains Steampunk-esque themes, such as the costumery and vehicle interiors. The 2007 Syfy miniseries Tin Man incorporates a considerable number of steampunk-inspired themes into a re-imagining of L. Frank Baum's The Wonderful Wizard of Oz. Despite leaning more towards gothic influences, the "parallel reality" of Meanwhile, City, within the 2009 film Franklyn, contains many steampunk themes, such as costumery, architecture, minimal use of electricity (with a preference for gaslight), and absence of modern technology (such as there being no motorised vehicles or advanced weaponry, and the manual management of information with no use of computers).
The 2009–2014 Syfy television series Warehouse 13 features many steampunk-inspired objects and artifacts, including computer designs created by steampunk artisan Richard Nagy, a.k.a. "Datamancer". The 2010 episode of the TV series Castle entitled "Punked" (which first aired on October 11, 2010) prominently features the steampunk subculture and uses Los Angeles-area steampunks (such as the League of STEAM) as extras. The 2011 film The Three Musketeers has many steampunk elements, including gadgets and airships.
The Legend of Korra, a 2012–2014 Nickelodeon animated series, incorporates steampunk elements in an industrialized world with East Asian themes.
Penny Dreadful (2014) is a Gothic Victorian fantasy television series with steampunk props and costumes.
The 2015 GSN reality television game show Steampunk'd features a competition to create steampunk-inspired art and designs which are judged by notable Steampunks Thomas Willeford, Kato, and Matthew Yang King (as Matt King). Based on the work of cartoonist Jacques Tardi, April and the Extraordinary World (2015) is an animated movie set in a steampunk Paris. It features airships, trains, submarines, and various other steam-powered contraptions. Tim Burton's 2016 film Alice Through the Looking Glass features steampunk costumes, props, and vehicles.
Japanese anime Kabaneri of the Iron Fortress (2016) features a steampunk zombie apocalypse.
The American animated fantasy sitcom Disenchantment, created by Matt Groening for Netflix, features a steampunk country named Steamland, led by an odd industrialist named Alva Gunderson (voiced by Richard Ayoade), which first appears in the season 1 episode "The Electric Princess". The country is portrayed as egalitarian and driven by logic, governed by science rather than magic, in contrast to Dreamland, the home of the protagonist, Princess Bean. Steamland has cars, automatic lights, submarines, and other modern technologies, all of them steam-powered, along with references to Groening's other series, Futurama. It appears in three episodes of the show's second season, which show an explorers' club as part of the country's high society, flying zeppelins, and robots with light bulbs for heads that chase the protagonists through the streets. Some have even argued that Steamland is "dieselpunk inspired".
Video games
A variety of styles of video games have used steampunk settings.
Steel Empire (1992), a shoot 'em up game originally released as Koutetsu Teikoku on the Sega Mega Drive console in Japan, is considered to be the first steampunk video game. Designed by Yoshinori Satake and inspired by Hayao Miyazaki's anime film Laputa: Castle in the Sky (1986), Steel Empire is set in an alternate timeline dominated by steam-powered technology. The commercial success of Steel Empire, both in Japan and the West, helped propel steampunk into the video game market, and had a significant influence on later steampunk games. The most notable steampunk game it influenced is Final Fantasy VI (1994), a Japanese role-playing game developed by Squaresoft and designed by Hiroyuki Ito for the Super Nintendo Entertainment System. Final Fantasy VI was both critically and commercially successful, and had a considerable influence on later steampunk video games.
The Chaos Engine (1993) is a run-and-gun video game inspired by the Gibson/Sterling novel The Difference Engine (1990) and set in a Victorian steampunk age. Developed by the Bitmap Brothers, it was first released on the Amiga in 1993; a sequel was released in 1996. The graphic adventure puzzle video games Myst (1993), Riven (1997), Myst III: Exile (2001), and Myst IV: Revelation (2004), all produced by or under the supervision of Cyan Worlds, take place in an alternate steampunk universe where elaborate infrastructures have been built to run on steam power. The Elder Scrolls series (since 1994, last release in 2014) is a set of action role-playing games featuring an ancient, vanished race called the Dwemer, or Dwarves, whose steampunk technology is based on steam-powered levers and gears made of a copper–bronze-like material, maintained by magical techniques that have kept them in working order over the centuries.
Sakura Wars (1996), a visual novel and tactical role-playing game developed by Sega for the Saturn console, is set in a steampunk version of Japan during the Meiji and Taishō periods, and features steam-powered mecha robots. Thief: The Dark Project (1998), its sequels Thief II (2000) and Thief: Deadly Shadows (2004), and its reboot Thief (2014) are set in a steampunk metropolis. The 2001 computer role-playing game Arcanum: Of Steamworks and Magick Obscura mixed fantasy tropes with steampunk.
The Professor Layton series of games (2007 debut) has several entries showcasing steampunk machinery and vehicles; notably, Professor Layton and the Unwound Future features a quasi-steampunk future setting. Solatorobo (2010) is a role-playing video game developed by CyberConnect2, set in a floating island archipelago populated by anthropomorphic cats and dogs who pilot steampunk airships and engage in combat with robots. Resonance of Fate (2010) is a role-playing video game developed by tri-Ace and published by Sega for the PlayStation 3 and Xbox 360; it is set in a steampunk environment with combat involving guns.
Impossible Creatures (2003) is a real-time strategy game inspired by the works of H. G. Wells, especially The Island of Doctor Moreau. Developed by Relic Entertainment, it sees an adventurer building an army of genetically spliced animals to battle a mad scientist who has abducted his father. The player's headquarters is a steam-powered "Hovertrain" locomotive, which functions as both a science lab and mobile command center. Coal is a key resource in the game and must be burned to provide power to the player's many base buildings.
The SteamWorld series of games (2010 debut) has the player controlling steam-powered robots. Minecraft (2011) has a steampunk-themed texture pack. Terraria (2011), developed by Re-Logic, is a 2D open-world platform game in which the player controls a single character in a generated world; it features a Steampunker non-player character who sells items referencing steampunk.
LittleBigPlanet 2 (2011) has the world Victoria's Laboratory, run by Victoria von Bathysphere, which mixes steampunk themes with confections. Guns of Icarus Online (2012) is a multiplayer game with a steampunk theme. Dishonored (2012) and Dishonored 2 (2016) are set within a fictional world with heavy steampunk influences, wherein whale oil, as opposed to coal, serves as the catalyst of its industrial revolution.
Dishonored (2012 debut) is a series of stealth games with role-playing elements developed by Arkane Studios and widely considered a spiritual successor to the original Thief trilogy. It is set in the Empire of the Isles, a steampunk Victorian setting where technology and supernatural magic co-exist; steam-powered robots and mechanical combat suits appear as enemies, alongside the presence of magic. The major locations in the Isles include Dunwall, the Empire's capital city, which uses the burning of whale oil as its main fuel source, and Karnaca, which is powered by wind turbines fed by currents generated by a cleft mountain along the city's borders.
BioShock Infinite (2013) is a first-person shooter set in 1912 in the fictional city of Columbia, which uses technology to float in the sky, and contains many historical and religious scenes.
Code: Realize − Guardian of Rebirth (2014), a Japanese otome game for the PS Vita, is set in a steampunk Victorian London and features a cast including several historical figures with steampunk aesthetics. Code Name S.T.E.A.M. (2015), a Japanese tactical RPG for the Nintendo 3DS, is set in a steampunk fantasy version of London, where the player is a conscript in the strike force S.T.E.A.M. (short for Strike Team Eliminating the Alien Menace). They Are Billions (2017) is a steampunk strategy game in a post-apocalyptic setting; players build a colony and attempt to ward off waves of zombies. Frostpunk (2018) is a city-building game set in 1888, in a world where the Earth is in the midst of a great ice age; players must construct a city around a large steampunk heat generator, with many steampunk aesthetics and mechanics, such as a "Steam Core".
Toys
Mattel's Monster High dolls Robecca Steam and Hexiciah Steam.
The Pullip Dolls by Japanese manufacturer Dal have a steampunk range.
Hornby's world of Bassett-Lowke steampunk models
Culture and community
Because of the popularity of steampunk, there is a growing movement of adults who want to establish steampunk as a culture and lifestyle. Some fans of the genre adopt a steampunk aesthetic through fashion, home decor, music, and film. While steampunk is considered the amalgamation of Victorian aesthetic principles with modern sensibilities and technologies, it can be more broadly categorised as neo-Victorianism, described by scholar Marie-Luise Kohlke as "the afterlife of the nineteenth century in the cultural imaginary". The subculture has its own magazine, blogs, and online shops.
In September 2012, a panel, chaired by steampunk entertainer Veronique Chevalier and with panelists including magician Pop Hadyn and members of the steampunk performance group the League of STEAM, was held at Stan Lee's Comikaze Expo. The panel suggested that because steampunk was inclusive of and incorporated ideas from various other subcultures such as goth, neo-Victorian, and cyberpunk, as well as a growing number of fandoms, it was fast becoming a super-culture rather than a mere subculture. Other steampunk notables such as Professor Elemental have expressed similar views about steampunk's inclusive diversity.
Some have proposed a steampunk philosophy that incorporates punk-inspired anti-establishment sentiments typically bolstered by optimism about human potential. A 2004 Steampunk Manifesto, later republished in SteamPunk Magazine, lamented that most "so-called" steampunk was nothing more than dressed-up recreationary nostalgia and proposed that "authentic" steampunk would "take the levers of technology from the [technocrats] and powerful." American activist and performer Miriam Rosenberg Rocek impersonated anarcha-feminist Emma Goldman to inspire discussions around gender, society and politics. SteamPunk Magazine was edited and published by anarchists. Its founder, Margaret Killjoy, argued "there have always been radical politics at the core of steampunk." Diana M. Pho, a science-fiction editor and author of the multicultural steampunk blog Beyond Victoriana, similarly argued steampunk's "progressive roots" can be traced to its literary inspirations, including Verne's Captain Nemo. Steampunk authors Phenderson Djèlí Clark, Jaymee Goh, Dru Pagliassotti, and Charlie Stross consider their work political.
These views are not universally shared. Killjoy lamented that even some diehard enthusiasts believe steampunk "has nothing to offer but designer clothes." Pho argued many steampunk fans "don't like to acknowledge that their attitudes could be considered ideological." The largest online steampunk community, Brass Goggles, which is dedicated to what it calls the "lighter side" of steampunk, banned discussion about politics. Cory Gross, who was one of the first to write about the history and theory of steampunk, argued that the "sepia-toned yesteryear more appropriate for Disney and grandparents than a vibrant and viable philosophy or culture" denounced in the Steampunk Manifesto was in fact representative of the genre. Author Catherynne M. Valente called the punk in steampunk "nearly meaningless." Kate Franklin and James Schafer, who at the time managed one of the largest steampunk groups on Facebook, admitted in 2011 that steampunk hadn't created the "revolutionary, or even a particularly progressive" community they wanted. Blogger and podcaster Eric Renderking Fisk announced in 2017 that steampunk was no longer punk, since it had "lost the anti-authoritarian, anti-establishment aspects."
Others argued explicitly against turning steampunk into a political movement, preferring to see steampunk as "escapism" or a "fandom". In 2018, Nick Ottens, editor of the online alternate-history magazine Never Was, declared that the "lighter side" of steampunk had won out. To the extent that steampunk is politicized, it appears to be an American and British phenomenon. Continental Europeans and Latin Americans are more likely to consider steampunk a hobby than a cause.
Social events
June 19, 2005 marked the grand opening of the world's first steampunk club night, "Malediction Society", in Los Angeles. The event ran for nearly 12 years at The Monte Cristo nightclub, interrupted by a single year residency at Argyle Hollywood, until both the club night and The Monte Cristo closed in April 2017. Though the steampunk aesthetic eventually gave way to a more generic goth and industrial aesthetic, Malediction Society celebrated its roots every year with "The Steampunk Ball".
2006 saw the first "SalonCon", a neo-Victorian/steampunk convention. It ran for three consecutive years and featured artists, musicians (Voltaire and Abney Park), authors (Catherynne M. Valente, Ekaterina Sedia, and G. D. Falksen), salons led by people prominent in their respective fields, workshops and panels on steampunk—as well as a seance, ballroom dance instruction, and the Chrononauts' Parade. The event was covered by MTV and The New York Times. Since then, a number of popular steampunk conventions have sprung up the world over, with names like Steamcon (Seattle), the Steampunk World's Fair (Piscataway, New Jersey), Up in the Aether: The Steampunk Convention (Dearborn, Michigan), Steampunk NZ (Oamaru, New Zealand), and Steampunk Unlimited (Strasburg Railroad, Lancaster, Pennsylvania). Each year, on Mother's Day weekend, the city of Waltham, Massachusetts, turns over its city center and surrounding areas to host the Watch City Steampunk Festival, a US outdoor steampunk festival. In Kennebunk, Maine, the Brick Store Museum hosts the Southern Maine Steampunk Fair annually. During the first weekend of May, the Australian town of Nimmitabel celebrates Steampunk @ Altitude, with some 2,000 attendees.
In recent years, steampunk has also become a regular feature at San Diego Comic-Con International, with the Saturday of the four-day event being generally known among steampunks as "Steampunk Day", and culminating with a photo-shoot for the local press. In 2010, this was recorded in the Guinness Book of World Records as the world's largest steampunk photo shoot. In 2013, Comic-Con announced four official 2013 T-shirts, one of them featuring the official Rick Geary Comic-Con toucan mascot in steampunk attire. The Saturday steampunk "after-party" has also become a major event on the steampunk social calendar: in 2010, the headliners included The Slow Poisoner, Unextraordinary Gentlemen, and Voltaire, with Veronique Chevalier as Mistress of Ceremonies and special appearance by the League of STEAM; in 2011, UXG returned with Abney Park.
Steampunk has also sprung up recently at Renaissance Festivals and Renaissance Faires in the US. Some festivals have organised events or a "Steampunk Day", while others simply support an open environment for donning steampunk attire. The Bristol Renaissance Faire in Kenosha, Wisconsin, on the Wisconsin–Illinois border, featured a steampunk costume contest during the 2012 season, the previous two seasons having seen increasing participation in the phenomenon.
Steampunk also has a growing following in the UK and Europe. The largest European event is "Weekend at the Asylum", held at The Lawn, Lincoln, every September since 2009. Organised as a not-for-profit event by the Victorian Steampunk Society, the Asylum is a dedicated steampunk event which takes over much of the historical quarter of Lincoln, England, along with Lincoln Castle. In 2011, there were over 1000 steampunks in attendance. The event features the Empire Ball, Majors Review, Bazaar Eclectica, and the international Tea Duelling final.
The Surrey Steampunk Convivial was originally held in New Malden, but since 2019 has been held in Stoneleigh in southwestern London, within walking distance of H. G. Wells's home. The Surrey Steampunk Convivial started as an annual event in 2012, and now takes place thrice a year, and has spanned three boroughs and five venues. Attendees have been interviewed by BBC Radio 4 for Phill Jupitus and filmed by the BBC World Service. The West Yorkshire village of Haworth has held an annual Steampunk weekend since 2013, on each occasion as a charity event raising funds for Sue Ryder's "Manorlands" hospice in Oxenhope. In September 2021, Finland's first steampunk festival was held at the Väinö Linna Square and the Werstas Workers' House in Tampere, Pirkanmaa, Finland.
Other
A 2018 physics Ph.D. dissertation used the phrase "Quantum Steampunk" to describe the author's synthesis of some 19th century and current ideas. The term has not been widely adopted.
A 2012 conference paper on human factors in computing systems examines the use of steampunk as a design fiction for human-computer interaction (HCI). It concludes that "the practices of DIY and appropriation that are evident in Steampunk design provide a useful set of design strategies and implications for HCI".
Steampunk HQ, a museum and arts centre dedicated to steampunk in Oamaru, New Zealand, along with its associated art gallery (The Libratory), was the world's first steampunk museum. The town of Oamaru and the English city of Lincoln have both claimed the title of "Steampunk Capital of the World".
See also
Air pirate – Common stock character in steampunk
Dark academia
Notes
References
Further reading
External links
Steampunk Culture Documentary produced by Off Book
"Steampunk Hangar" at Lumberwoods, Unnatural History Museum An archive of unrealized, implausible inventions by authentic nineteenth- and twentieth-century inventors.
Fantasy genres
Science fiction fandom
Fantasy fandom
History of fashion
Retro style
Science fantasy
Science fiction culture
Science fiction genres
Science fiction themes
Subcultures
1980s neologisms
Works about the Industrial Revolution |
15905419 | https://en.wikipedia.org/wiki/Credit%20card%20fraud | Credit card fraud | Credit card fraud is an inclusive term for fraud committed using a payment card, such as a credit card or debit card. The purpose may be to obtain goods or services or to make payment to another account, which is controlled by a criminal. The Payment Card Industry Data Security Standard (PCI DSS) is the data security standard created to help financial institutions process card payments securely and reduce card fraud.
Credit card fraud can be authorised, where the genuine customer themselves processes payment to another account which is controlled by a criminal, or unauthorised, where the account holder does not provide authorisation for the payment to proceed and the transaction is carried out by a third party. In 2018, unauthorised financial fraud losses across payment cards and remote banking totalled £844.8 million in the United Kingdom, while banks and card companies prevented £1.66 billion in unauthorised fraud that year, the equivalent of £2 in every £3 of attempted fraud being stopped.
Credit cards are more secure than ever, with regulators, card providers and banks taking considerable time and effort to collaborate with investigators worldwide to ensure fraudsters are not successful. Cardholders' money is usually protected from scammers by regulations that make the card provider and bank accountable. The technology and security measures behind credit cards are becoming increasingly sophisticated, making it harder for fraudsters to steal money.
Means of payment card fraud
There are two kinds of card fraud: card-present fraud (not so common nowadays) and card-not-present fraud (more common). The compromise can occur in a number of ways and can usually occur without the knowledge of the cardholder. The internet has made database security lapses particularly costly, in some cases, millions of accounts have been compromised.
Stolen cards can be reported quickly by cardholders, but a compromised account's details may be held by a fraudster for months before any theft, making it difficult to identify the source of the compromise. The cardholder may not discover fraudulent use until receiving a statement. Cardholders can mitigate this fraud risk by checking their account frequently to ensure there are not any suspicious or unknown transactions.
When a credit card is lost or stolen, it may be used for illegal purchases until the holder notifies the issuing bank and the bank puts a block on the account. Most banks have free 24-hour telephone numbers to encourage prompt reporting. Still, it is possible for a thief to make unauthorized purchases on a card before the card is cancelled.
Prevention of payment card fraud
Card information is stored in a number of formats. Card numbers – formally the primary account number (PAN) – are often embossed or imprinted on the card, and a magnetic stripe on the back contains the data in a machine-readable format. Fields can vary, but the most common include the name of the cardholder, the card number, the expiration date, and the verification code (CVV).
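As an illustration of these fields, the sketch below models a simplified card record and checks the card number with the Luhn algorithm, the standard check-digit test that valid PANs satisfy. The record layout and field names are illustrative assumptions; the Luhn check itself is not discussed in this article.

```python
# Minimal sketch: the card fields named above plus a Luhn check-digit test.
from dataclasses import dataclass

@dataclass
class CardRecord:
    cardholder_name: str
    pan: str          # primary account number, digits only
    expiration: str   # e.g. "12/27"
    cvv: str          # verification code; not meant to be stored long-term

def luhn_valid(pan: str) -> bool:
    """Return True if the PAN passes the Luhn check-digit test."""
    digits = [int(d) for d in pan if d.isdigit()]
    checksum = 0
    # Double every second digit from the right; subtract 9 if the result exceeds 9.
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

# Example with a well-known test PAN (not a real account number).
print(luhn_valid("4111111111111111"))  # True
```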
In Europe and Canada, most cards are equipped with an EMV chip which requires a four- to six-digit PIN to be entered into the merchant's terminal before payment will be authorized. However, a PIN is not required for online transactions. In some European countries, buyers without a chip card may be asked for photo ID at the point of sale.
In some countries, a credit card holder can make a contactless payment for goods or services by tapping their card against an RFID or NFC reader, without the need for a PIN or signature, if the cost falls under a pre-determined limit. However, a stolen credit or debit card could be used for a number of smaller transactions before the fraudulent activity is flagged.
Card issuers maintain several countermeasures, including software that can estimate the probability of fraud. For example, a large transaction occurring a great distance from the cardholder's home might seem suspicious. The merchant may be instructed to call the card issuer for verification or to decline the transaction, or even to hold the card and refuse to return it to the customer.
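The toy sketch below captures the kind of heuristic described above (a large amount far from the cardholder's home looks suspicious). The thresholds, weights and function name are invented for illustration; real issuer systems are statistical models trained on transaction histories, not hand-written rules like these.

```python
# Illustrative rule-based fraud score, not an actual issuer algorithm.
def fraud_score(amount: float, distance_km: float, usual_max_amount: float) -> float:
    """Return a score in [0, 1]; higher means more suspicious."""
    score = 0.0
    if amount > usual_max_amount:
        score += 0.5                  # unusually large transaction
    if distance_km > 1000:
        score += 0.4                  # far from the cardholder's home
    if amount > usual_max_amount and distance_km > 1000:
        score += 0.1                  # both together is stronger evidence
    return min(score, 1.0)

# A large purchase made 5,000 km from home would be flagged for verification.
if fraud_score(amount=2500.0, distance_km=5000.0, usual_max_amount=300.0) > 0.8:
    print("Refer transaction to issuer for verification or decline")
```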
Types of payment card fraud
Application fraud
Application fraud takes place when a person uses stolen or fake documents to open an account in another person's name. Criminals may steal or fake documents such as utility bills and bank statements to build up a personal profile. When an account is opened using fake or stolen documents, the fraudster could then withdraw cash or obtain credit in the victim's name. To protect against this, individuals should keep their details private, store sensitive documents in a secure place and take care when disposing of personally identifiable information.
Account takeover
An account takeover refers to the act by which fraudsters will attempt to assume control of a customer's account (i.e. credit cards, email, banks, SIM card and more). Control at the account level offers high returns for fraudsters. According to Forrester, risk-based authentication (RBA) plays a key role in risk mitigation.
A fraudster uses parts of the victim's identity such as an email address to gain access to financial accounts. This individual then intercepts communication about the account to keep the victim blind to any threats. Victims are often the first to detect account takeover when they discover charges on monthly statements they did not authorize or multiple questionable withdrawals. There has been an increase in the number of account takeovers since the adoption of EMV technology, which makes it more difficult for fraudsters to clone physical credit cards.
Among the most common methods by which a fraudster will commit an account takeover are proxy-based "checker" one-click apps, brute-force botnet attacks, phishing, and malware. Other methods include dumpster diving to find personal information in discarded mail, and outright buying lists of "Fullz", a slang term for full packages of identifying information sold on the black market.
Social engineering fraud
Social engineering fraud can occur when a criminal poses as someone else which results in a voluntary transfer of money or information to the fraudster. Fraudsters are turning to more sophisticated methods of scamming people and businesses out of money. A common tactic is sending spoof emails impersonating a senior member of staff and trying to deceive employees into transferring money to a fraudulent bank account.
Fraudsters may use a variety of techniques in order to solicit personal information by pretending to be a bank or payment processor. Telephone phishing is the most common social engineering technique to gain the trust of the victim.
Businesses can protect themselves with a dual authorisation process for the transfer of funds that requires authorisation from at least two persons, and a call-back procedure to a previously established contact number, rather than any contact information included with the payment request. The bank must refund the customer for any unauthorised payment; however, it can refuse a refund if it can prove the customer authorised the transaction, or that the customer is at fault because they acted deliberately or failed to protect the details that allowed the transaction.
Skimming
Skimming is the theft of personal information which has been used in an otherwise normal transaction. The thief can procure a victim's card number using basic methods such as photocopying receipts, or more advanced methods such as using a small electronic device (skimmer) to swipe and store hundreds of victims' card numbers. Common scenarios for skimming are taxis, restaurants or bars where the skimmer has possession of the victim's payment card out of their immediate view. The thief may also use a small keypad to unobtrusively transcribe the three- or four-digit card security code, which is not present on the magnetic strip.
Call centers are another area where skimming can easily occur. Skimming can also occur at merchants when a third-party card-reading device is installed either outside or inside a card-swiping terminal. This device allows a thief to capture a customer's card information, including their PIN, with each card swipe.
Skimming is difficult for the typical cardholder to detect, but given a large enough sample, it is fairly easy for the card issuer to detect. The issuer collects a list of all the cardholders who have complained about fraudulent transactions, and then uses data mining to discover relationships among them and the merchants they use. Sophisticated algorithms can also search for patterns of fraud. Merchants must ensure the physical security of their terminals, and penalties for merchants can be severe if they are compromised, ranging from large fines by the issuer to complete exclusion from the system, which can be a death blow to businesses such as restaurants where credit card transactions are the norm.
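The following toy sketch illustrates the data-mining idea in the paragraph above: finding a merchant shared by most of the cardholders who reported fraud, a likely point of compromise. The data, threshold and variable names are illustrative assumptions, not an actual issuer's algorithm.

```python
# Toy "common point of purchase" analysis over invented example data.
from collections import Counter

reported_fraud = {
    "card_A": ["Cafe 9", "Gas Stop 12", "Grocer 3"],
    "card_B": ["Gas Stop 12", "Bookshop 7"],
    "card_C": ["Gas Stop 12", "Cafe 9"],
}

merchant_counts = Counter(
    merchant
    for merchants in reported_fraud.values()
    for merchant in set(merchants)      # count each merchant once per card
)

# Flag merchants that appear for most of the affected cards.
suspects = [m for m, n in merchant_counts.items()
            if n >= 0.8 * len(reported_fraud)]
print(suspects)  # ['Gas Stop 12']
```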
Instances of skimming have been reported where the perpetrator has placed a device over the card slot of an automated teller machine, which reads the magnetic strip as the user unknowingly passes their card through it. These devices are often used in conjunction with a miniature camera to read the user's personal identification number at the same time. This method is being used in many parts of the world, including South America (such as Argentina) and Europe.
Unexpected repeat billing
Online bill paying or internet purchases utilizing a bank account are a source for repeat billing known as "recurring bank charges". These are standing orders or banker's orders from a customer to honour and pay a certain amount every month to the payee. With E-commerce, especially in the United States, a vendor or payee can receive payment by direct debit through the ACH Network. While many payments or purchases are valid, and the customer has intentions to pay the bill monthly, some are known as Rogue Automatic Payments.
Another type of credit card fraud targets utility customers. Customers receive unsolicited in-person, telephone, or electronic communication from individuals claiming to be representatives of utility companies. The scammers alert customers that their utilities will be disconnected unless an immediate payment is made, usually involving the use of a reloadable debit card to receive payment. Sometimes the scammers use authentic-looking phone numbers and graphics to deceive victims.
Regulation and governance
United States
While not federally mandated in the United States, PCI DSS is mandated by the Payment Card Industry Security Standards Council, which is composed of major credit card brands and maintains it as an industry standard. Some states have incorporated the standard into their laws.
Proposed toughening of federal law
The US Department of Justice announced in September 2014 that it would seek to impose a tougher law to combat overseas credit card trafficking. Authorities say the current statute is too weak because it allows people in other countries to avoid prosecution if they stay outside the United States when buying and selling the data and do not pass their illicit business through the U.S. The Department of Justice has asked the US Congress to amend the current law so that it would be illegal for an international criminal to possess, buy or sell a stolen credit card issued by a U.S. bank, independent of geographic location.
Cardholder liability
In the US, federal law limits the liability of cardholders to $50 in the event of theft of the actual credit card, regardless of the amount charged on the card, if reported within 60 days of receiving the statement. In practice, many issuers will waive this small payment and simply remove the fraudulent charges from the customer's account if the customer signs an affidavit confirming that the charges are indeed fraudulent. If the physical card is not lost or stolen, but rather just the credit card account number itself is stolen, then federal law guarantees cardholders have zero liability to the credit card issuer.
United Kingdom
In the UK, credit cards are regulated by the Consumer Credit Act 1974 (amended 2006). This provides a number of protections and requirements. Any misuse of the card, unless deliberately criminal on the part of the cardholder, must be refunded by the merchant or card issuer.
The regulation of banks in the United Kingdom is undertaken by the Bank of England (BoE); the Prudential Regulation Authority (PRA), a division of the BoE; and the Financial Conduct Authority (FCA), which manages day-to-day oversight. There is no specific legislation or regulation that governs the credit card industry. However, the Association for Payment Clearing Services (APACS) is the institution that all settlement members are a part of. The organisation works under the Banking Consolidation Directive to provide a means by which transactions can be monitored and regulated. UK Finance is the association for the UK banking and financial services sector, representing more than 250 firms providing credit, banking and payment-related services.
Australia
In Australia, credit card fraud is considered a form of ‘identity crime’. The Australian Transaction Reports and Analysis Centre has established standard definitions in relation to identity crime for use by law enforcement across Australia:
The term identity encompasses the identity of natural persons (living or deceased) and the identity of bodies corporate
Identity fabrication describes the creation of a fictitious identity
Identity manipulation describes the alteration of one's own identity
Identity theft describes the theft or assumption of a pre-existing identity (or significant part thereof), with or without consent and whether, in the case of an individual, the person is living or deceased
Identity crime is a generic term to describe activities/offences in which a perpetrator uses a fabricated identity, a manipulated identity, or a stolen/assumed identity to facilitate the commission of a crime(s).
Losses
Estimates created by the Attorney-General's Department show that identity crime costs Australia upwards of $1.6 billion each year, with the majority, about $900 million, being lost by individuals through credit card fraud, identity theft and scams. In 2015, the Minister for Justice and Minister Assisting the Prime Minister for Counter-Terrorism, Michael Keenan, released the report Identity Crime and Misuse in Australia 2013–14. This report estimated that the total direct and indirect cost of identity crime was closer to $2 billion, which includes the direct and indirect losses experienced by government agencies and individuals, and the cost of identity crimes recorded by police.
Cardholder liability
The victim of credit card fraud in Australia, still in possession of the card, is not responsible for anything bought on it without their permission. However, this is subject to the terms and conditions of the account. If the card has been reported physically stolen or lost the cardholder is usually not responsible for any transactions not made by them, unless it can be shown that the cardholder acted dishonestly or without reasonable care.
Vendors vs merchants
To prevent vendors from being "charged back" for fraud transactions, merchants can sign up for services offered by Visa and MasterCard called Verified by Visa and MasterCard SecureCode, under the umbrella term 3-D Secure. This requires consumers to add additional information to confirm a transaction.
Often enough online merchants do not take adequate measures to protect their websites from fraud attacks, for example by being blind to sequencing. In contrast to more automated product transactions, a clerk overseeing "card present" authorization requests must approve the customer's removal of the goods from the premises in real time.
When a chargeback occurs, the merchant loses the payment, the fees for processing the payment, any currency conversion commissions, and the amount of the chargeback penalty. For obvious reasons, many merchants take steps to avoid chargebacks, such as not accepting suspicious transactions. This may spawn collateral damage, where the merchant additionally loses legitimate sales by incorrectly blocking legitimate transactions. Mail Order/Telephone Order (MOTO) merchants are implementing agent-assisted automation which allows the call center agent to collect the credit card number and other personally identifiable information without ever seeing or hearing it. This greatly reduces the probability of chargebacks and increases the likelihood that fraudulent chargebacks will be overturned.
Famous credit fraud attacks
Between July 2005 and mid-January 2007, a breach of systems at TJX Companies exposed data from more than 45.6 million credit cards. Albert Gonzalez is accused of being the ringleader of the group responsible for the thefts. In August 2009 Gonzalez was also indicted for the biggest known credit card theft to date — information from more than 130 million credit and debit cards was stolen at Heartland Payment Systems, retailers 7-Eleven and Hannaford Brothers, and two unidentified companies.
In 2012, about 40 million sets of payment card information were compromised by a hack of Adobe Systems. The information compromised included customer names, encrypted payment card numbers, expiration dates, and information relating to orders, Chief Security Officer Brad Arkin said.
In July 2013, press reports indicated four Russians and a Ukrainian were indicted in the U.S. state of New Jersey for what was called "the largest hacking and data breach scheme ever prosecuted in the United States." Albert Gonzalez was also cited as a co-conspirator of the attack, which saw at least 160 million credit card numbers stolen and more than $300 million in losses. The attack affected both American and European companies, including Citigroup, Nasdaq OMX Group, PNC Financial Services Group, Visa licensee Visa Jordan, Carrefour, J. C. Penney and JetBlue Airways.
Between 27 November 2013 and 15 December 2013, a breach of systems at Target Corporation exposed data from about 40 million credit cards. The information stolen included names, account numbers, expiry dates, and card security codes.
From 16 July to 30 October 2013, a hacking attack compromised about a million sets of payment card data stored on computers at Neiman-Marcus. A malware system, designed to hook into cash registers and monitor the credit card authorisation process (RAM-scraping malware), infiltrated Target's systems and exposed information from as many as 110 million customers.
On 8 September 2014, The Home Depot confirmed that their payment systems were compromised. They later released a statement saying that the hackers obtained a total of 56 million credit card numbers as a result of the breach.
On 15 May 2016, in a coordinated attack, a group of around 100 individuals used the data of 1,600 South African credit cards to steal US$12.7 million from 1,400 convenience stores in Tokyo within three hours. By acting on a Sunday and in a country other than that of the bank which issued the cards, they are believed to have gained enough time to leave Japan before the heist was discovered.
Countermeasures to combat card payment fraud
Countermeasures to combat credit card fraud include the following.
By Merchants
PAN truncation – not displaying the full primary account number on receipts (see the sketch after this list)
Tokenization (data security) – using a reference (token) to the card number rather than the real card number
Requesting additional information, such as a PIN, ZIP code, or Card Security Code
Performing geolocation validation, such as IP address
Use of Reliance Authentication, indirectly via PayPal, or directly via iSignthis or miiCard.
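A minimal sketch of the first two items above, PAN truncation and tokenization, using a toy in-memory vault. Real tokenization services and PCI DSS-compliant storage are far more involved; the function and variable names here are illustrative assumptions.

```python
# Toy PAN truncation and tokenization; illustrative only.
import secrets

_vault: dict[str, str] = {}   # token -> full PAN (in reality, a secured service)

def truncate_pan(pan: str) -> str:
    """Show only the last four digits, as on a truncated receipt."""
    return "*" * (len(pan) - 4) + pan[-4:]

def tokenize(pan: str) -> str:
    """Replace the real PAN with a random reference token."""
    token = secrets.token_hex(8)
    _vault[token] = pan
    return token

pan = "4111111111111111"      # well-known test number, not a real account
print(truncate_pan(pan))      # ************1111
token = tokenize(pan)
print(token, "is stored and transmitted instead of the real card number")
```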
By Card issuers
Fraud detection and prevention software that analyzes patterns of normal and unusual behavior as well as individual transactions in order to flag likely fraud. Profiles include such information as IP address. Technologies have existed since the early 1990s to detect potential fraud. One early market entrant was Falcon; other leading software solutions for card fraud include Actimize, SAS, BAE Systems Detica, and IBM.
Fraud detection and response business processes such as:
Contacting the cardholder to request verification
Placing preventative controls/holds on accounts that may have been victimized
Blocking card until transactions are verified by the cardholder
Investigating fraudulent activity
Strong Authentication measures such as:
Multi-factor Authentication, verifying that the account is being accessed by the cardholder through requirement of additional information such as account number, PIN, ZIP, challenge questions
Multi possession-factor authentication, verifying that the account is being accessed by the cardholder through requirement of additional personal devices, such as a smart watch or smart phone, for challenge–response authentication (a sketch of one such technique follows this list)
Out-of-band Authentication, verifying that the transaction is being done by the cardholder through a "known" or "trusted" communication channel such as text message, phone call, or security token device
Industry collaboration and information sharing about known fraudsters and emerging threat vectors
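As an illustration of the possession-factor idea above, the sketch below generates a time-based one-time password (TOTP, RFC 6238) of the kind produced by a security token device or phone app. TOTP is not named in the article; it stands in here only as one well-known example, and the shared secret is a placeholder.

```python
# Minimal TOTP generator (RFC 6238), for illustration of a possession factor.
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, interval: int = 30, digits: int = 6) -> str:
    counter = int(time.time()) // interval           # current time step
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

shared_secret = b"issuer-and-device-shared-secret"   # placeholder secret
print("One-time code:", totp(shared_secret))
# The issuer computes the same code; a match indicates the cardholder holds the device.
```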
By Banks and Financial Institutions
Internal self-banking area for the customer to carry out transactions regardless of weather conditions. The access door:
Identifies every cardholder that gains access to the designated area
Increases protection for customers during self-service procedures
Protects the ATMs and banking assets against unauthorized usage
The protected area can also be monitored by the bank's CCTV system
Cards use CHIP identification (e.g. PASSCHIP) to decrease the possibility of card skimming
By Governmental and Regulatory Bodies
Enacting consumer protection laws related to card fraud
Performing regular examinations and risk assessments of credit card issuers
Publishing standards, guidance, and guidelines for protecting cardholder information and monitoring for fraudulent activity
Regulation, such as that introduced in the SEPA and EU28 by the European Central Bank's 'SecuRe Pay' requirements and the Payment Services Directive 2 legislation.
By Cardholders
Reporting lost or stolen cards
Reviewing charges regularly and reporting unauthorized transactions immediately
Keeping a credit card within the cardholder's view at all times, such as in restaurants and taxis
Installing virus protection software on personal computers
Using caution when using credit cards for online purchases, especially on non-trusted websites
Keeping a record of account numbers, their expiration dates, and the phone number and address of each company in a secure place.
Not sending credit card information by unencrypted email
Not keeping written PINs with the credit card.
Additional technological features
3-D Secure
EMV
Point to Point Encryption
Strong authentication
True Link
See also
Carding (fraud)
Chargeback fraud
Chargeback insurance
Credit card hijacking
FBI
Financial crimes
Identity theft
Immigration and Customs Enforcement (ICE)
Internet fraud
Organized crime
Phishing
Predictive analytics
Reimbursement
Social Engineering
Traffic analysis
United States Postal Inspection Service
United States Secret Service
White-collar crime
References
External links
Federal Financial Institutions Examination Council (FFIEC) IT Booklets » Information Security » Appendix C: Laws, Regulations, and Guidance
Visa's fraud control basics for merchants
Mastercard's merchant training support
The Internet Crime Complaint Center (IC3) is a partnership between the Federal Bureau of Investigation (FBI) and the National White Collar Crime Center(NW3C).
Internet Fraud, with a section "Avoiding Credit Card Fraud", at the Federal Bureau of Investigation website
Avoiding Credit and Charge Card Fraud at U.S. Federal Trade Commission
US Federal Trade Commission Consumer Sentinel Network Report
Machine Learning for Credit Card Fraud Detection - Practical Handbook
Credit cards
Identity theft
Organized crime activity
Carding (fraud) |
40483537 | https://en.wikipedia.org/wiki/PlayStation%20TV | PlayStation TV | The PlayStation TV (abbreviated to PS TV), known in Japan and other parts of Asia as the PlayStation Vita TV or PS Vita TV, is a microconsole, and a non-handheld variant of the PlayStation Vita handheld game console. It was released in Japan on November 14, 2013, and Europe and Australia on November 14, 2014.
Controlled with either the DualShock 3 or DualShock 4 controllers, the PS TV is capable of playing many PlayStation Vita games and applications, either through physical cartridges or downloaded through the PlayStation Store. However, not all content is compatible with the device, since certain features in the PS Vita such as the gyroscope and microphone are not available on the PS TV. Nevertheless, the PS TV is able to emulate touch input for both the Vita's front and rear touchpads using the PS3 and PS4 controller.
In Japan, "PlayStation TV" was the name given to PlayStation 3 retail kiosks from 2006 to 2014, which consisted of a PS3 unit, an LCD monitor and a number of controllers.
History
Release
The system was released in Japan on November 14, 2013. The device on its own sold for 9,954 yen tax inclusive (about US$100), whilst a bundle version with an 8 GB memory card and DualShock 3 controller retailed for 14,994 yen (about US$150).
Andrew House, CEO of Sony Computer Entertainment, explained that Sony hoped to use the PS Vita TV to penetrate the Chinese gaming market, where video game consoles had previously been prohibited. The PS Vita TV was released in five other Southeast Asian countries and the special administrative region of Hong Kong on January 16, 2014. At E3 2014, the system was announced for North America and Europe, under the name PlayStation TV, for release in Q3 2014. Final release dates for the Western release were announced at Gamescom 2014.
System software update 3.15, released on April 30, 2014, enabled PS4 Remote Play functionality for the PS Vita TV. Following the system software version 3.30 update, the system can be used with PlayStation Network accounts originating from outside the original launch territories of Japan and Asia; the same update renamed the PS Vita TV system to PS TV within the system menus.
Open beta trials for PlayStation Now functionality on the PS TV began on October 14, 2014 in North America, the same day that PS TV was released there.
By the end of March, Sony had dropped the price of the PlayStation TV in Europe by 40%, to €59.99. That same week, sales increased by 1272%.
On February 28, 2016, Engadget reported that Sony had stopped shipping the PlayStation TV in Japan. Sony confirmed that shipments had also been discontinued in the Americas and Europe at the end of 2015, but would continue in Asia, contrary to reports.
Features
Instead of featuring a display screen, the console connects to a television via HDMI. Users can play using a DualShock 3 controller (with functionality for DualShock 4 controllers added with the 3.10 firmware update released on March 25, 2014), although due to the difference in features between the controller and the handheld, certain games are not compatible with PS Vita TV, such as those that are dependent on the system's microphone, camera, or gyroscopic features. The device is said to be compatible with over 100 PS Vita games, as well as various digital PlayStation Portable, PlayStation, and PC Engine titles, along with a selection of PlayStation 3 titles streamed from the PlayStation Now service. The device is technically referred to by Sony as the VTE-1000 series, to distinguish it from the handheld PCH-1000/2000 series PS Vita models.
According to Muneki Shimada, Sony Director of the Second Division of Software Development, the original PCH-1000 series PlayStation Vita already includes an upscaler that supports up to 1080i resolution; however, the idea of video output for the original Vita was scrapped in favor of releasing the PlayStation Vita TV as a separate device for television connectivity. The built-in scaler has been removed from the PCH-2000 series PlayStation Vita model.
The system supports Remote Play compatibility with the PlayStation 4, allowing players to stream games from the PlayStation 4 to a separate TV connected to PS Vita TV, and also allows users to stream content from video services such as Hulu and Niconico, as well as access the PlayStation Store. PS4 Remote Play functionality for the PS Vita TV gained full support with the release of the 1.70 PS4 firmware update. The device includes the software features of the PS Vita, such as the Web browser and email client. There are future plans for media server and DLNA support for remote video streaming and image/audio file transfer.
The console measures 6.5 cm by 10.5 cm, about the size of a pack of playing cards.
It is powered with (and ships with) the same model/type of power adapter that was used for the original PlayStation Portable.
Reception
PC World called the device an amazing invention, praising the opportunity to play Vita and PSP games on the big screen. IGN said the console "may be one of Sony's most exciting new products and could provide a critical edge for the PS4."
Various commentators have compared the device to set-top boxes—including media streaming devices (such as Apple TV and Chromecast) and other microconsoles, such as the Ouya. Time said the console could compete well against set-top box competitors with a quality library of games. At launch however, the game library was limited to a subset of PS Vita games, which negatively impacted early reviews.
The PlayStation TV, along with the PlayStation 4, won the 2014 Good Design Award from the Japan Institute of Design Promotion.
The PlayStation TV sold 42,172 units during its debut week of release in Japan. The PlayStation TV was heavily marketed alongside God Eater 2 which was released on the same day as the device, and placed at the top of the Japanese software sales charts for that week.
Compatibility and whitelist hack
Journalists criticized the platform's lack of compatibility with the Vita's overall software library. Sam Byford of The Verge commented: "Vita TV’s most egregious failure was that it failed at being a Vita. Many games’ reliance on dubious Vita features like the rear touchpad came back to haunt Sony, as vast swathes of the system's library was rendered incompatible with a regular gaming controller. And even some games that should by rights have worked just didn't, for whatever reason."
Andrew Hayward of IGN wrote: "Sadly, anyone with a large Vita library will surely find the incredibly massive holes in the PlayStation TV's compatibility list quite quickly. The 140+ compatible Vita games as of this writing represent a rather small chunk of those released in North America, and the omissions are baffling." The omissions included some of the Vita's heavy hitters such as Uncharted: Golden Abyss, Wipeout 2048, Assassin's Creed III: Liberation, Lumines: Electronic Symphony, Tearaway, Gravity Rush, Borderlands 2, The Sly Collection and others. Hayward also reported: "It's possible to navigate menus in games with a cursor via the DualShock 4's touch pad, so why didn't Sony enlist its studios and third-party partners to make sure the biggest and best Vita games were patched and ready to play on day one? That's not the kind of thing to ignore or worry about down the line." Richard Leadbetter of Eurogamer’s Digital Foundry shared these sentiments, arguing that games such as Wipeout 2048 didn't need to be blacklisted, as it only used the touchscreen for the menus and could be accessed using the DualShock 4's touch pointer emulation: "Life as a PlayStation TV owner can be pretty frustrating - especially when a vast array of mobile Vita titles that should work just fine on the under-utilised micro-console fail to load at all, blocked by their lack of inclusion on Sony's whitelist of approved titles." Sean Hollister of Gizmodo complained that even ports of PlayStation 3 games didn't work on the PS TV, including Sound Shapes, Flower and Guacamelee.
In 2015 and 2016, hackers found ways to overwrite the PS TV's Whitelist to allow any Vita game to load. But while all games will load, compatibility issues persist in certain Vita games reliant on touch and motion controls. For example, Uncharted: Golden Abyss is playable up until any mini game prompting the player to tilt the Vita side to side in order to balance Nathan Drake as he crosses a log. But for the most part, the Whitelist hack was well received by journalists, as many other games were playable from start to finish. Kyle Orland of Ars Technica reported: "some enterprising hackers have apparently gone a long way toward fixing this problem by increasing the PlayStation TV's software compatibility with a simple hack." Joel Hruska of ExtremeTech reported that "a full 30 additional titles" from his collection of Vita games were compatible with PS TV Whitelist hack. J.C. Torres confirmed that Assassin's Creed III: Liberation, Call of Duty: Black Ops Declassified, Hatsune Miku: Project DIVA F, Silent Hill: Book of Memories and Gravity Rush were among the titles compatible with the hack. He also reported: "Interestingly, Netflix also becomes compatible with this hack. Netflix is a curious case for the PS TV, as the app, which worked for the Vita, remains broken for the mini console. A tragedy for a set-top box device." Leadbetter rejoiced: "For the first time, you can play non-approved games like Wipeout 2048 and the Metal Gear Solid HD Collection on your handheld, then continue playing them at home on the big screen." He also felt "this exploit opens up some delicious gaming opportunities." Marliella Moon of Engadget even joked about the incompatibility of a stock PS TV in her review of the Whitelist hack: "If you have a PlayStation TV collecting dust in a cabinet somewhere, this might make it useful again."
See also
Lists of PlayStation Vita games (table has a column denoting compatibility)
References
External links
Official Hong Kong website
Official South Korea website
Official US website
Eighth-generation video game consoles
Microconsoles
PlayStation (brand)
PlayStation Vita
Products introduced in 2013
Regionless game consoles
Sony consoles |
36053652 | https://en.wikipedia.org/wiki/Timeline%20of%20scientific%20computing | Timeline of scientific computing | The following is a timeline of scientific computing, also known as computational science.
Before modern computers
18th century
Simpson rediscovers Simpson's rule, a century after Johannes Kepler (who derived it in 1615 after seeing it used for wine barrels).
1733 – The French naturalist Comte de Buffon poses his needle problem.
Euler devises a simple numerical method, now known as Euler's method, for integrating ordinary differential equations.
19th century
First formulation of Gram-Schmidt orthogonalisation by Laplace, to be further improved decades later.
In 1822, Babbage began work on a machine to compute values of polynomial functions automatically using the method of finite differences. This machine eventually became known as the Difference Engine.
Lovelace's note G on the Analytical Engine (1842) describes an algorithm for generating Bernoulli numbers. It is considered the first algorithm ever specifically tailored for implementation on a computer, and thus the first-ever computer programme. The engine was never completed, however, so her code was never tested.
Adams-Bashforth method published.
In applied mathematics, Jacobi develops a technique for solving equations numerically.
The Gauss–Seidel method is first published.
To help with computing tides, the Harmonic Analyser is built in 1886.
1900s (decade)
1900 – Runge's work, followed by that of Martin Kutta, leads to the Runge–Kutta method for approximating the integration of differential equations.
1910s (decade)
1910 – A-M Cholesky creates a matrix decomposition scheme.
Richardson extrapolation introduced.
1920s
1922 – Lewis Fry Richardson introduces numerical weather forecasting by manual calculation, using methods originally developed by Vilhelm Bjerknes as early as 1895.
1926 – Grete Hermann publishes foundational paper for computer algebra, which established the existence of algorithms (including complexity bounds) for many of the basic problems of abstract algebra, such as ideal membership for polynomial rings.
1926 – Adams–Moulton method published.
1927 – Douglas Hartree creates what is later known as the Hartree–Fock method, the first ab initio quantum chemistry methods. However, manual solutions of the Hartree–Fock equations for a medium-sized atom were laborious and small molecules required computational resources far beyond what was available before 1950.
1930s
This decade marks the first major strides to a modern computer, and hence the start of the modern era.
Fermi's Rome physics research group (informally known as I ragazzi di Via Panisperna) develops statistical algorithms, based on Comte de Buffon's work, that would later become the foundation of the Monte Carlo method. See also FERMIAC.
Shannon explains how to use electric circuits to do Boolean algebra in "A Symbolic Analysis of Relay and Switching Circuits".
John Vincent Atanasoff and Clifford Berry create the first electronic non-programmable, digital computing device, the Atanasoff–Berry Computer, from 1937-42.
Complex number calculator created by Stibitz.
1940s
1947 – Monte Carlo simulation (voted one of the top 10 algorithms of the 20th century) invented at Los Alamos by von Neumann, Ulam and Metropolis.
George Dantzig introduces the simplex method (voted one of the top 10 algorithms of the 20th century) in 1947.
Ulam and von Neumann introduce the notion of cellular automata.
Turing formulates the LU decomposition method.
A. W. H. Phillips invents the MONIAC hydraulic computer at LSE, better known as "Phillips Hydraulic Computer".
First hydro simulations occurred at Los Alamos.
1950s
First successful weather predictions on a computer occurred.
Hestenes, Stiefel, and Lanczos, all from the Institute for Numerical Analysis at the National Bureau of Standards, initiate the development of Krylov subspace iteration methods. Voted one of the top 10 algorithms of the 20th century.
Equations of State Calculations by Fast Computing Machines introduces the Metropolis–Hastings algorithm.
Molecular dynamics is invented by Bernie Alder and Wainwright.
A. S. Householder invents his eponymous matrices and transformation method (voted one of the top 10 algorithms of the 20th century).
1953 – Enrico Fermi, John Pasta, Stanislaw Ulam, and Mary Tsingou discover the Fermi–Pasta–Ulam–Tsingou problem through computer simulations of a vibrating string.
A team led by John Backus develops the FORTRAN compiler and programming language at IBM's research centre in San Jose, California. This sped the adoption of scientific programming, and is one of the oldest extant programming languages, as well as one of the most popular in science and engineering.
1960s
1960 – First recorded use of the term "finite element method" by Ray Clough to describe the earlier methods of Richard Courant, Alexander Hrennikoff and Olgierd Zienkiewicz in structural analysis.
1961 – John G.F. Francis and Vera Kublanovskaya invent QR factorization (voted one of the top 10 algorithms of the 20th century).
1963 – Edward Lorenz discovers the butterfly effect on a computer, attracting interest in chaos theory.
1961 – Using computational investigations of the 3-body problem, Michael Minovitch formulates the gravity assist method.
1964 – Molecular dynamics invented independently by Aneesur Rahman.
1965 – fast Fourier transform developed by James W. Cooley and John W. Tukey.
1964 – Walter Kohn, with Lu Jeu Sham and Pierre Hohenberg, instigates the development of density functional theory, for which he shares the 1998 Nobel Chemistry Prize with John Pople. This contribution is arguably the earliest work to which Nobels were given for a computer program or computational technique.
First regression calculations in economics.
1970s
1975 – Benoit Mandelbrot coins the term "fractal" to describe the self-similarity found in the Fatou, Julia and Mandelbrot sets. Fractals become the first mathematical visualization tool extensively explored with computing.
1977 – Kenneth Appel and Wolfgang Haken prove the four colour theorem, the first theorem to be proved by computer.
1980s
Fast multipole method (voted one of the top 10 algorithms of the 20th century) invented by Vladimir Rokhlin and Leslie Greengard.
Car–Parrinello molecular dynamics is developed by Roberto Car and Michele Parrinello.
1990s
1990 – In computational genomics and sequence analysis, the Human Genome Project, an endeavour to sequence the entire human genome, begins.
1998 – The Kepler conjecture is all but certainly proved algorithmically by Thomas Hales.
The appearance of the first research grids using volunteer computing – GIMPS (1996), distributed.net (1997) and Seti@Home (1999).
2000s
2000 – The Human Genome Project completes a rough draft of the human genome.
2002 – The BOINC architecture is launched.
2003 – The Human Genome Project is completed.
2010s
Foldit players solve virus structure, one of the first cases of a game solving a scientific question.
See also
Scientific computing
History of computing
History of mathematics
Timeline of mathematics
Timeline of algorithms
Timeline of computational physics
Timeline of computational mathematics
Timeline of numerical analysis after 1945
History of computing hardware
References
External links
SIAM (Society for Industrial and Applied Mathematics) News. Top 10 Algorithms of the 20th Century.
The History of Numerical Analysis and Scientific Computing @ SIAM (Society for Industrial and Applied Mathematics)
IEEE Milestones
Computational science
43597924 | https://en.wikipedia.org/wiki/Infogix%2C%20Inc. | Infogix, Inc. | Infogix, Inc. is a multinational data controls and analytics software company for businesses to manage, analyze and monitor their data for business operations. The company is based in the United States with headquarters in Naperville, Illinois and primarily serves clients in the healthcare, financial services, property and casualty insurance, telecommunications and retail industries.
The enterprise was founded by Madhaven K. Nayar in 1982 as Unitech Systems, Inc. The company began to work on automated data controls software. In 2005, Unitech Systems was renamed Infogix, after growing to serve many companies in the Fortune 100 and Global 2000, including Wells Fargo, Target, Progressive Insurance and Verizon Wireless.
Infogix has undergone multiple organizational changes since its inception. On June 1, 2012, Infogix announced H.I.G. Capital, a global private equity firm, had recapitalized the business. On January 10, 2014, both Infogix and H.I.G. publicly announced a completed acquisition of Agilis International, Inc., a provider of predictive customer and operational analytics.
In 2016, H.I.G. Capital sold Infogix to Thoma Bravo.
In 2018, Infogix acquired DATUM.
History
Infogix was founded in 1982 as Unitech Systems, Inc. with a focus on creating continuous, automated controls software for the mainframe to assist with large amounts of balance and reconciliation. In the early 2000s Unitech Systems began extending its offerings and launching distributed products and the company changed its name to Infogix, Inc. in 2005.
The name Infogix was chosen based on the concept of the focal point where information, logic and exchange meet.
With the new name came a new company logo. The gold checkmark in the current logo represents the company reaching to achieve the gold standard, and the equal symbol is a reference to the reconciliation technology that started the company in 1982.
Infogix introduced its Business Operations Management solution in 2012 based on feedback from some of its largest customers. The company had successfully worked with IT and finance departments for 30 years, but organizations were sharing stories of successfully using Infogix products in other areas of the business, as well. Use of the software solution across the enterprise led the company to introduce its Business Operations Management solution.
Infogix’ current offerings include their Controls Suite (Infogix Assure, ACR/Summary, ACR/Detail, Infogix ER); Visibility Suite (Infogix Insight, Infogix Nexix and Nexix Mobile, Infogix Perceive); and Analytics Suite (RevMind, NetMind, DataMind).
On June 1, 2012, Infogix announced that H.I.G. Capital had recapitalized the business.
Infogix announced its acquisition of Agilis International, a provider of advanced analytics solutions, on January 9, 2014. Founded in 2003 in Rockville, MD, Agilis currently operates as a division of Infogix.
In 2021, Infogix was acquired by Precisely Software Incorporated.
Corporate Leadership
References
Software companies of the United States
Companies based in Naperville, Illinois
Software companies established in 1982
Multinational companies headquartered in the United States
1982 establishments in Illinois
2016 mergers and acquisitions |
311487 | https://en.wikipedia.org/wiki/Catacomb%203-D | Catacomb 3-D | Catacomb 3-D (also known as Catacomb 3-D: A New Dimension, Catacomb 3-D: The Descent, and Catacombs 3) is the third in the Catacomb series of video games (created by the founders of id Software), and the first of these games to feature 3D computer graphics. The game was originally published by Softdisk under the Gamer's Edge label, and is a first-person shooter with a dark fantasy setting. The player takes control of the high wizard Petton Everhail, descending into the catacombs of the Towne Cemetery to defeat the evil lich Nemesis and rescue his friend Grelminar.
Catacomb 3-D is a landmark title in terms of first-person graphics. The game was released in November 1991 and is arguably the first example of the modern, character-based first-person shooter genre, or at least it was a direct ancestor to the games that popularized the genre. It was released for MS-DOS with EGA graphics. The game introduced the concept of showing the player's hand in the three-dimensional viewpoint, and an enhanced version of its technology was later used for the more successful and well-known Wolfenstein 3D. The game's more primitive technological predecessor was Hovertank 3D.
Production
The origin of the games is Catacomb by John Carmack for the PC and Apple II. This was a two-dimensional game utilizing a third-person view from above, released in 1989–1990. It was followed up with Catacomb II, which used the same game engine with new levels. The first release of Catacomb 3-D was called Catacomb 3-D: A New Dimension, but it was later re-released as Catacomb 3-D: The Descent, as well as Catacombs 3 for a re-release as commercially packaged software (the earlier versions had been released by other means such as disk magazines and downloads). The game creators were John Carmack, John Romero, Jason Blochowiak (programmers), Tom Hall (creative director), Adrian Carmack (artist), and Robert Prince (musician). The game was programmed using the Borland C++ programming language.
id Software's use of texture mapping in Catacomb 3-D was influenced by Ultima Underworld (still in development at Catacomb 3-D's release). Conflicting accounts exist regarding the extent of this influence, however. In the book Masters of Doom, author David Kushner asserts that the concept was discussed only briefly during a 1991 telephone conversation between Underworld developer Paul Neurath and John Romero. However, Paul Neurath has stated multiple times that John Carmack and John Romero had seen the game's 1990 CES demo, and recalled a comment from Carmack that he could write a faster texture mapper.
Catacomb Adventure Series
Catacomb 3-D was followed by three games, in the so-called Catacomb Adventure Series. They were not developed by id Software but internally by Softdisk with a new staff for Gamer's Edge, who also made the later Dangerous Dave sequels. All of the games, including the original Catacomb titles, are now distributed legally by Flat Rock Software through their own web store and via GoG.com. Flat Rock also released the source code for the games under GNU GPL-2.0-or-later in June 2014, in a manner similar to the releases done by id and partners. This has led to the creation of the source port Reflection Catacomb, also called Reflection Keen due to shared support for Keen Dreams, which ports all of the 3D Catacomb games to modern systems. Another project, CatacombGL, is an enhanced OpenGL port for Microsoft Windows.
The credits for the series are Mike Maynard, James Row, Nolan Martin (programming), Steven Maines (art direction), Carol Ludden, Jerry Jones, Adrian Carmack (art production), James Weiler, Judi Mangham (quality assurance), and id Software (3D imaging effects). The series' development head, Greg Malone, later became creative director for Duke Nukem 3D and also worked on Shadow Warrior for 3D Realms. Department heads Mike Maynard and Jim Row, meanwhile, would co-found JAM Productions (soon joined by Jerry Jones), the creators of Blake Stone using an enhanced Wolfenstein 3D engine.
The series also introduced an item called crystal hourglasses, which would temporarily freeze time and allow the player to stage shots to destroy enemies upon the resumption of normal time, pre-dating later bullet time features in games such as Requiem: Avenging Angel and Max Payne.
Catacomb Abyss
Catacomb Abyss is the sequel to Catacomb 3-D, and featured the same main character in a new adventure: since his defeat, some of Nemesis' minions have built a mausoleum in his honour. Fearful of the dark mage's return, the townspeople hire Everhail to descend below and end the evil. The environments are more varied than in Catacomb 3D, featuring crypts, gardens, mines, aqueducts, volcanic regions and various other locales. It was the only game in the series that was distributed as shareware, released by Softdisk in 1992.
Catacomb Armageddon
Catacomb Armageddon is the sequel to Catacomb Abyss, only now set in the present day. The levels featured, among others, towns, forests, temples, torture chambers, an ant colony, and a crystal maze. It was developed by Softdisk and was later republished by Froggman under the title Curse of the Catacombs.
Catacomb Apocalypse
Catacomb Apocalypse is the final game in the Catacomb Adventure Series. It was set in the distant future, accessible via time portals, and mixed fantasy and science fiction elements, pitting players against robotic necromancers and the like. It is also the only game in the trilogy to have a hub system, though one was present in the original Catacomb 3D. It was developed by Softdisk and later republished by Froggman under the title Terror of the Catacombs.
Reception
According to John Romero, the team felt it lacked the coolness and fun of Commander Keen, although the 3D technology was interesting to work with. Computer Gaming World in May 1993 called The Catacomb Abyss "very enjoyable" despite the "minimal" EGA graphics and sound. The magazine stated in February 1994 that Terror of the Catacombs's "Playability is good, almost addictive, and offers bang for the buck in spite of its lackluster" EGA graphics. Transend Services Ltd. sold over 1,000 copies of the game in the first month of its release.
References
External links
id's look back at Catacomb 3D
1991 video games
DOS games
Amiga games
Amiga CD32 games
First-person shooters
Sprite-based first-person shooters
Video games with 2.5D graphics
Wolfenstein 3D engine games
Id Software games
Commercial video games with freely available source code
Games commercially released with DOSBox
Video games developed in the United States
Video games scored by Bobby Prince
Video games set in cemeteries
Softdisk |
577918 | https://en.wikipedia.org/wiki/Sistina%20Software | Sistina Software | Sistina Software was a US company that focused on storage solutions designed around a Linux platform. It originated in the University of Minnesota.
Their three primary offerings were Global File System (GFS), logical volume management (LVM) and device mapper (DM).
Sistina Software was acquired by Red Hat in December 2003 for $31 million in stock. After the acquisition, GFS was merged into Red Hat Cluster Suite and open-sourced.
GFS
GFS is a cluster file system on Linux that allows servers to transparently access a single file system on a storage area network (SAN). Its highlights are performance and reliability (journaling filesystem, scalability through parallelism, etc.).
LVM
LVM has become a part of the Linux kernel. It is a subsystem which allows arbitrary physical storage to be recognized as a virtual disk device. The physical storage can be remote, or it can even consist of multiple physical devices, but LVM abstracts those distinctions away from the operating system user. LVM also provides services for backing up data.
References
Red Hat |
14703407 | https://en.wikipedia.org/wiki/OpenGEU | OpenGEU | OpenGEU was a free computer operating system based upon the popular Ubuntu Linux distribution, which in turn is based on Debian. OpenGEU combined the strengths and ease of use of GNOME desktop environment with the lightweight, and graphical eye candy features of the Enlightenment window manager into a unique and user-friendly desktop. While OpenGEU was originally derived from Ubuntu, the design of the user gave it a significantly different appearance to the user, with original art themes, software and tools.
Geubuntu
Initially called Geubuntu (a mix of GNOME, Enlightenment and Ubuntu), OpenGEU was an unofficial re-working of Ubuntu. The name change from Geubuntu to OpenGEU occurred on 21 January 2008 in order to remove the "-buntu" suffix from its name. This was done in respect for Ubuntu's own trademark policies, which require all officially recognized Ubuntu derivatives to be based upon software found only in the official Ubuntu repositories–a criterion not met by OpenGEU.
Installation
Installation of OpenGEU was generally performed via a Live CD, which allowed the user to first test OpenGEU on their system prior to installation (albeit with a performance limit from loading applications off the disk). This is particularly useful for testing hardware compatibility and driver support. The CD also contained the Ubiquity installer, which guided the user through the permanent installation process. Due to the fact that OpenGEU used "Ubiquity," the installation process was nearly identical to that of Ubuntu. Alternatively, users could download a disk image of the CD from an online source which could then be written to a physical medium or run from a hard drive via UNetbootin. Another option was to add the OpenGEU repositories to an established Ubuntu-based system and install OpenGEU via the package manager.
Programs
Default environment
As described above, OpenGEU includes software from both the GNOME and Enlightenment projects. Unlike Ubuntu, which uses Metacity or Compiz 3D, OpenGEU used Enlightenment DR17 as its primary window manager for its rich two-dimensional features, such as real transparency and desktop animation options. Starting with OpenGEU 8.10 Luna Serena, a port of Compiz called Ecomorph has been available for 3D effects, as well.
Themes manager
Starting with OpenGEU 8.04.1 Luna Crescente, the GEUTheme application became default in the distribution. This tool showed a list of installed OpenGEU themes to the user, enabling the user to browse through them and select one with one-click ease. GEUTheme could fetch new themes from the internet or from an expansion CD. The tool also had advanced customization abilities, helping the user to install and customize many aspects of the OpenGEU desktop environment (including icon themes, GTK+, ETK, E17 and EWL themes, wallpapers, fonts, etc.). It was also possible for the user to create new OpenGEU themes, as well as to export, import, and share them. The creation of this tool marked the first availability of a Desktop Effects Manager, similar to that of Ubuntu, for an Enlightenment desktop; this manager was incorporated into the GEUTheme application.
Additional components
Since the distribution was an Ubuntu derivative, the range of available software was almost identical to that of Ubuntu and the other related Ubuntu projects. Additional repositories were created by the OpenGEU development team, and were pre-enabled for the distribution's use. Enlightenment 17 software was compiled, re-packaged in the .deb packaging format, and uploaded to the repositories and a number of new software packages were developed by the OpenGEU team itself: the OpenGEU Themes Manager, eTray, e17-settings-daemon and several E17 modules.
Themes
The E17 window manager used a number of different libraries to render GUI applications. To ensure every application shares the same look on the desktop, it was necessary to develop themes for various libraries that utilize the same art and graphics. This was a difficult and time-consuming task compared to that of GNOME, where all the GTK+ applications use the same default GTK+ libraries to render widgets. OpenGEU therefore developed a way to allow easy switching from one theme to another, and accordingly change the graphics of every desktop component at the same time. Every OpenGEU theme was also capable of changing the look of icon sets and wallpapers. In other words, OpenGEU themes were just a set of sub-themes for all of the different libraries used in the distribution (Edje, ETK, EWL, GTK+), designed so that the user would not notice any change in the appearance when opening their various chosen applications.
Sunshine and Moonlight
OpenGEU was presented as an artistic distribution. Its two main signature themes were Sunshine and Moonlight, though a number of alternative themes were also available.
Project focus
OpenGEU focused on reducing minimum hardware requirements, such as by providing two alternative methods to enable compositing effects without any particular hardware or driver requirement.
The primary OpenGEU concept was that of building a complete and universally accessible E17 desktop—filling all of the missing parts in E17 with GNOME tools, while maintaining the speed of the distribution–for usability on any system.
Remixes
OpenGeeeU 8.10 Luna Serena was released March 23, 2009. It's a version of OpenGEU modified to work on the Asus EeePC. OpenGeeeU 8.10 uses EasyPeasy as its platform, so it is optimized for netbooks and it includes all of the drivers and fixes needed for EeePC to work out of the box.
Release history
The end
A while after publishing Quarto di Luna in January 2010, the project announced a switch from Ubuntu to Debian as its base, but a public release never came. As of August 2012, the website has been taken down along with the forums, mailing lists and other information, indicating that the project has disbanded.
Reviews and citations
OpenGEU has been independently reviewed by a number of on- and off-line Linux magazines:
Full Circle Magazine
Softpedia
Linux.com
DistroWatch
Dedoimedo.com
See also
Bodhi Linux
References
External links
OpenGEU's page on Launchpad
Ubuntu derivatives
Linux distributions |
37305157 | https://en.wikipedia.org/wiki/%28316179%29%202010%20EN65 | (316179) 2010 EN65 | is a trans-Neptunian object orbiting the Sun. However, with a semi-major axis of 30.8 AU, the object is actually a jumping Neptune trojan, co-orbital with Neptune, as the giant planet has a similar semi-major axis of 30.1 AU. The body is jumping from the Lagrangian point into via . , it is 54 AU from Neptune. By 2070, it will be 69 AU from Neptune.
Discovery
was discovered on 7 March 2010, by David L. Rabinowitz and Suzanne W. Tourtellotte using the 1.3-meter Small and Medium Research Telescope System (SMARTS) at Cerro Tololo Observatory in Chile.
Orbit
The object follows a rather eccentric orbit (eccentricity 0.31) with a semi-major axis of 30.72 AU and an inclination of 19.3°. Its orbit is well determined, with images dating back to 1989.
Physical properties
2010 EN65 is a fairly large minor body with an absolute magnitude of 7.17; its diameter estimate is based on an assumed albedo of 0.08.
Jumping trojan
2010 EN65 is another co-orbital of Neptune, the second brightest after Neptune's quasi-satellite. It is currently transitioning from librating around the Lagrangian point L4 to librating around L5. This unusual trojan-like behavior is termed "jumping trojan".
Numbering and naming
This minor planet was numbered by the Minor Planet Center on 7 February 2012. It has not yet been named. If named, it will follow the naming scheme already established with 385571 Otrera and 385695 Clete, which is to name these objects after figures related to the Amazons, an all-female warrior tribe that fought in the Trojan War on the side of the Trojans against the Greeks.
References
External links
List of Centaurs and Scattered Disk Objects, Minor Planet Center
List of Trans Neptunian Objects, Minor Planet Center
Discoveries by David L. Rabinowitz
Discoveries by Suzanne W. Tourtellotte
22948674 | https://en.wikipedia.org/wiki/Wolfenstein%20RPG | Wolfenstein RPG | Wolfenstein RPG is a first-person shooter and role-playing video game developed by id Software and Fountainhead Entertainment, released in September 2008 for mobile phones and in May 2009 for iOS.
Plot
While the original Wolfenstein 3D contained Nazi castles full of swastikas and sour-looking Hitler portraits, Wolfenstein RPG is decidedly lighter in tone, with mutant chickens, romance novels, and a playful giant named Gunther. Sgt. William "B.J." Blazkowicz, of the Wolfenstein series of video games, is being held captive by the Axis military. He must now escape his captors and try to save the world by defeating the Paranormal Division. To stop the Axis' diabolically evil Paranormal Division, he must escape prison, navigate towns, and infiltrate Castle Wolfenstein. Along the way he can use tools and items he comes across, such as boots, fists and toilets, and he can inflict serious damage with weapons such as a flamethrower, a rocket launcher, and a Tesla.
Gameplay
The gameplay follows the recipe from Doom RPG as it is shown in the first person while being a turn-based role-playing game rather than a shooter and puts emphasis on the plot. Combat and movement are turn-based, allowing the player time to select their responses in combat. The player turns at 90 degree angles and moves space by space. One step or action by the player allows all other characters in the area to take one step or action themselves. The game takes advantage of its deliberately slow pace, encouraging players to take their time and check out every little corner, read the books on every bookshelf, and destroy all the furniture to see if anything is hidden within. Levels include underground passages and weapon development laboratories plus a level involving a moving vehicle.
The game also includes two mini games: the card game War as well as Chicken Kicking, where the player is awarded points for kicking a chicken into a score area.
Development
The development of Wolfenstein RPG was a long and difficult task involving id Software, Firemint Software and Electronic Arts, and took many months. The mobile version was released in late 2008. EA Mobile announced the availability of Wolfenstein RPG, a new take on the classic game originally created by id, on the App Store on August 14, 2009. Wolfenstein RPG was a worldwide release to all territories that host the iTunes App Store, including Germany. It is compatible with iPhone and iPod touch, and the minimum requirement is iPhone OS 2.2.1 or later. Wolfenstein RPG is the fourth generation of turn-based titles under EA Mobile. John Carmack, founder and technical director at id Software, said that "the App Store version is dramatically better than on any other platform, with by an order of magnitude more media in high resolution graphics and audio, all rendered fast and smooth with hardware OpenGL graphics acceleration."
The game was available for most JRE-capable mobile phones, as well as the various iDevices. The mobile versions and the iOS version have some differences, but they are all largely the same game except that the iOS version has improved sound and graphics, and is more accessible to most gamers than the JRE version. The iPhone version recycles a lot of sound and music from Return to Castle Wolfenstein but the graphics are all new, taking on an exaggerated comic book style similar to Orcs & Elves. It is different from id Software's Wolfenstein Classic. Wolfenstein Classic is a fast-paced retro FPS, while Wolfenstein RPG is a turn-based action RPG that sees you exploring Castle Wolfenstein square by square. It is much like id's other casual RPGs Doom RPG and Orcs 'n' Elves.
Reception
Wolfenstein RPG received generally favorable reviews upon its release, holding a score of 87.50% on GameRankings and 75 on GameSpot based on a dozen reviews by major video game critics. The game was praised for its weapon variety, humour, unhurried turn-based pace, attention to detail, classic Wolfenstein style, and nicely blended RPG elements, while the lack of animation was pointed out as a shortcoming. AppSpy gave the game a rating of 5, which means "great", describing its advantages as "controls work well", "very user friendly", "looks and sounds great", and "remains uniquely Wolfenstein while being updated". IGN gave the game 8.5/10, with IGN's Levi Buchanan praising the art direction, noting that it is "more cartoon-y than the mobile game", and calling it a "polished production". Pocket Gamer reviewed the game 8/10, pointing out what makes it distinctive: "a distinctly different pace" and "wonderful black humour".
References
External links
Wolfenstein RPG on EA Mobile
2008 video games
Electronic Arts games
IOS games
Mobile games
Role-playing video games
Wolfenstein
Id Software games
Video games developed in the United States
Video games about World War II alternate histories
Java platform games
Video games with 2.5D graphics
Sprite-based first-person shooters |
153977 | https://en.wikipedia.org/wiki/Domain%20Name%20System-based%20blackhole%20list | Domain Name System-based blackhole list | A Domain Name System-based blackhole list, Domain Name System blacklist (DNSBL) or real-time blackhole list (RBL) is a service for operation of mail servers to perform a check via a Domain Name System (DNS) query whether a sending host's IP address is blacklisted for email spam. Most mail server software can be configured to check such lists, typically rejecting or flagging messages from such sites.
A DNSBL is a software mechanism, rather than a specific list or policy. Dozens of DNSBLs exist. They use a wide array of criteria for listing and delisting addresses. These may include listing the addresses of zombie computers or other machines being used to send spam, Internet service providers (ISPs) who willingly host spammers, or those which have sent spam to a honeypot system.
Since the creation of the first DNSBL in 1998, the operation and policies of these lists have frequently been controversial, both in Internet advocacy circles and occasionally in lawsuits. Many email systems operators and users consider DNSBLs a valuable tool to share information about sources of spam, but others including some prominent Internet activists have objected to them as a form of censorship. In addition, a small number of DNSBL operators have been the target of lawsuits filed by spammers seeking to have the lists shut down.
History
The first DNSBL was the Real-time Blackhole List (RBL), created in 1997, at first as a Border Gateway Protocol (BGP) feed by Paul Vixie, and then as a DNSBL by Eric Ziegast as part of Vixie's Mail Abuse Prevention System (MAPS); Dave Rand at Abovenet was its first subscriber. The very first version of the RBL was not published as a DNSBL, but rather a list of networks transmitted via BGP to routers owned by subscribers so that network operators could drop all TCP/IP traffic for machines used to send spam or host spam supporting services, such as a website. The inventor of the technique later commonly called a DNSBL was Eric Ziegast while employed at Vixie Enterprises.
The term "blackhole" refers to a networking black hole, an expression for a link on a network that drops incoming traffic instead of forwarding it normally. The intent of the RBL was that sites using it would refuse traffic from sites which supported spam — whether by actively sending spam, or in other ways. Before an address would be listed on the RBL, volunteers and MAPS staff would attempt repeatedly to contact the persons responsible for it and get its problems corrected. Such effort was considered very important before black-holing all network traffic, but it also meant that spammers and spam supporting ISPs could delay being put on the RBL for long periods while such discussions went on.
Later, the RBL was also released in a DNSBL form and Paul Vixie encouraged the authors of sendmail and other mail software to implement RBL support in their clients. These allowed the mail software to query the RBL and reject mail from listed sites on a per-mail-server basis instead of black-holing all traffic.
Soon after the advent of the RBL, others started developing their own lists with different policies. One of the first was Alan Brown's Open Relay Behavior-modification System (ORBS). This used automated testing to discover and list mail servers running as open mail relays—exploitable by spammers to carry their spam. ORBS was controversial at the time because many people felt running an open relay was acceptable, and that scanning the Internet for open mail servers could be abusive.
In 2003, a number of DNSBLs came under denial-of-service (DoS) attacks. Since no party has admitted to these attacks nor been discovered responsible, their purpose is a matter of speculation. However, many observers believe the attacks are perpetrated by spammers in order to interfere with the DNSBLs' operation or hound them into shutting down. In August 2003, the firm Osirusoft, an operator of several DNSBLs including one based on the SPEWS data set, shut down its lists after suffering weeks of near-continuous attack.
Technical specifications for DNSBLs came relatively late, in RFC 5782.
URI DNSBLs
A Uniform Resource Identifier (URI) DNSBL is a DNSBL that lists the domain names and sometimes also IP addresses which are found in the "clickable" links contained in the body of spams, but generally not found inside legitimate messages.
URI DNSBLs were created when it was determined that much spam made it past spam filters during that short time frame between the first use of a spam-sending IP address and the point where that sending IP address was first listed on major sending-IP-based DNSBLs.
In many cases, such elusive spams contain in their links domain names or IP addresses (collectively referred to as a URIs) where that URI was already spotted in previously caught spam and where that URI is not found in non-spam e-mail.
Therefore, when a spam filter extracts all URIs from a message and checks them against a URI DNSBL, then the spam can be blocked even if the sending IP for that spam has not yet been listed on any sending IP DNSBL.
Of the three major URI DNSBLs, the oldest and most popular is SURBL. After SURBL was created, some of the volunteers for SURBL started the second major URI DNSBL, URIBL. In 2008, another long-time SURBL volunteer started another URI DNSBL, ivmURI. The Spamhaus Project provides the Spamhaus Domain Block List (DBL) which they describe as domains "found in spam messages". The DBL is intended as both a URIBL and RHSBL, to be checked against both domains in a message's envelope and headers and domains in URLs in message bodies. Unlike other URIBLs, the DBL only lists domain names, not IP addresses, since Spamhaus provides other lists of IP addresses.
URI DNSBLs are often confused with RHSBLs (Right Hand Side BLs), but they are different. A URI DNSBL lists domain names and IPs found in the body of the message. An RHSBL lists the domain names used in the "from" or "reply-to" e-mail address. RHSBLs are of debatable effectiveness, since many spams either use forged "from" addresses or use "from" addresses containing popular freemail domain names such as @gmail.com, @yahoo.com, or @hotmail.com. URI DNSBLs are more widely used than RHSBLs, are very effective, and are used by the majority of spam filters.
Principle
To operate a DNSBL requires three things: a domain to host it under, a nameserver for that domain, and a list of addresses to publish.
It is possible to serve a DNSBL using any general-purpose DNS server software. However, this is typically inefficient for zones containing large numbers of addresses, particularly DNSBLs that list entire Classless Inter-Domain Routing netblocks. Because of the large resource consumption when using software designed for the general role of a Domain Name Server, there are role-specific software applications designed specifically for serving DNS blacklists.
The hard part of operating a DNSBL is populating it with addresses. DNSBLs intended for public use usually have specific, published policies as to what a listing means, and must be operated accordingly to attain or sustain public confidence.
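As a rough sketch of the data side of this, the following Python fragment (an illustration only, not taken from any particular DNSBL implementation) shows how a list of IPv4 addresses and listing reasons might be turned into the reversed-octet entry names that a DNSBL zone answers for; the zone name dnsbl.example.net, the sample addresses and the reasons are all placeholder assumptions.

# Minimal sketch (illustration only): turn a set of listed IPv4 addresses into
# the reversed-octet entry names a DNSBL zone would answer for. The zone name,
# addresses and reasons below are placeholders, not real listings.
LISTED = {
    "192.0.2.23": "open relay reported by a spam trap",
    "203.0.113.7": "repeated spam-trap hits",
}
ZONE = "dnsbl.example.net"

def zone_entries(listed, zone):
    for ip, reason in listed.items():
        reversed_ip = ".".join(reversed(ip.split(".")))
        # Each listed IP becomes one name; it answers with an address in
        # 127.0.0.0/8 (127.0.0.2 = generic listing) plus a TXT reason.
        yield f"{reversed_ip}.{zone}", "127.0.0.2", reason

for name, answer, reason in zone_entries(LISTED, ZONE):
    print(name, answer, reason)

In practice, large lists are regenerated and reloaded by purpose-built tools, and the answers follow the 127.0.0.0/8 conventions described in the next section.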
DNSBL queries
When a mail server receives a connection from a client, and wishes to check that client against a DNSBL (let's say, dnsbl.example.net), it does more or less the following:
Take the client's IP address—say, 192.168.42.23—and reverse the order of octets, yielding 23.42.168.192.
Append the DNSBL's domain name: 23.42.168.192.dnsbl.example.net.
Look up this name in the DNS as a domain name ("A" record). This will return either an address, indicating that the client is listed; or an "NXDOMAIN" ("No such domain") code, indicating that the client is not.
Optionally, if the client is listed, look up the name as a text record ("TXT" record). Most DNSBLs publish information about why a client is listed as TXT records.
Looking up an address in a DNSBL is thus similar to looking it up in reverse-DNS. The differences are that a DNSBL lookup uses the "A" rather than "PTR" record type, and uses a forward domain (such as dnsbl.example.net above) rather than the special reverse domain in-addr.arpa.
There is an informal protocol for the addresses returned by DNSBL queries which match. Most DNSBLs return an address in the 127.0.0.0/8 IP loopback network. The address 127.0.0.2 indicates a generic listing. Other addresses in this block may indicate something specific about the listing—that it indicates an open relay, proxy, spammer-owned host, etc. For details see RFC 5782.
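A minimal sketch of the client-side check described in the steps above, using only the Python standard library; dnsbl.example.net and the sample address are the placeholders from the example, and production code would also fetch the TXT record, apply timeouts and cache results.

import socket

def check_dnsbl(client_ip, dnsbl_zone="dnsbl.example.net"):
    """Return the DNSBL answer address if client_ip is listed, else None."""
    # Steps 1-2: reverse the octets and append the list's domain name.
    reversed_ip = ".".join(reversed(client_ip.split(".")))
    query_name = f"{reversed_ip}.{dnsbl_zone}"
    try:
        # Step 3: an "A" record answer (normally in 127.0.0.0/8) means "listed".
        return socket.gethostbyname(query_name)
    except socket.gaierror:
        # NXDOMAIN or any other resolution failure is treated as "not listed".
        return None

print(check_dnsbl("192.168.42.23"))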
URI DNSBL
A URI DNSBL query (and an RHSBL query) is fairly straightforward. The domain name to query is prepended to the DNS list host as follows:
example.net.dnslist.example.com
where dnslist.example.com is the DNS list host and example.net is the queried domain. Generally if an A record is returned the name is listed.
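The following sketch illustrates the same idea for a URI DNSBL, assuming the placeholder list host dnslist.example.com from the example above and a deliberately simplified regular expression for pulling domains out of a message body; real filters extract and normalise URIs far more carefully.

import re
import socket

URI_DNSBL = "dnslist.example.com"  # placeholder list host, not a real service

# Deliberately simplified domain extraction for illustration purposes.
DOMAIN_RE = re.compile(r"https?://([A-Za-z0-9.-]+)")

def listed_domains(message_body, dnsbl=URI_DNSBL):
    hits = []
    for domain in sorted(set(DOMAIN_RE.findall(message_body))):
        try:
            socket.gethostbyname(f"{domain}.{dnsbl}")  # an A record answer means "listed"
            hits.append(domain)
        except socket.gaierror:
            pass  # NXDOMAIN: the domain is not on this list
    return hits

print(listed_domains("Visit http://example.net/offer or http://example.org/ today"))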
DNSBL policies
Different DNSBLs have different policies. DNSBL policies differ from one another on three fronts:
Goals. What does the DNSBL seek to list? Is it a list of open-relay mail servers or open proxies—or of IP addresses known to send spam—or perhaps of IP addresses belonging to ISPs that harbor spammers?
Nomination. How does the DNSBL discover addresses to list? Does it use nominations submitted by users? Spam-trap addresses or honeypots?
Listing lifetime. How long does a listing last? Are they automatically expired, or only removed manually? What can the operator of a listed host do to have it delisted?
Types
In addition to the different types of listed entities (IP addresses for traditional DNSBLs, host and domain names for RHSBLs, URIs for URIBLs) there is a wide range of semantic variations between lists as to what a listing means. List maintainers themselves have been divided on the issues of whether their listings should be seen as statements of objective fact or subjective opinion and on how their lists should best be used. As a result, there is no definitive taxonomy for DNSBLs. Some names defined here (e.g. "Yellow" and "NoBL") are varieties that are not in widespread use and so the names themselves are not in widespread use, but should be recognized by many spam control specialists.
White List
A listing is an affirmative indication of essentially absolute trust
Black List
A listing is a negative indication of essentially absolute distrust
Grey List
Most frequently seen as one word (greylist or greylisting) not involving DNSBLs directly, but using temporary deferral of mail from unfamiliar sources to allow for the development of a public reputation (such as DNSBL listings) or to discourage speed-focused spamming. Occasionally used to refer to actual DNSBLs on which listings denote distinct non-absolute levels and forms of trust or distrust.
Yellow List
A listing indicates that the source is known to produce a mixture of spam and non-spam to a degree that makes checking other DNSBLs of any sort useless.
NoBL List
A listing indicates that the source is believed to send no spam and should not be subjected to blacklist testing, but is not quite as trusted as a whitelisted source.
Usage
Most message transfer agents (MTA) can be configured to absolutely block or (less commonly) to accept email based on a DNSBL listing. This is the oldest usage form of DNSBLs. Depending on the specific MTA, there can be subtle distinctions in configuration that make list types such as Yellow and NoBL useful or pointless because of how the MTA handles multiple DNSBLs. A drawback of using the direct DNSBL support in most MTAs is that sources not on any list require checking all of the DNSBLs being used with relatively little utility to caching the negative results. In some cases this can cause a significant slowdown in mail delivery. Using White, Yellow, and NoBL lists to avoid some lookups can be used to alleviate this in some MTAs.
DNSBLs can be used in rule based spam analysis software like Spamassassin where each DNSBL has its own rule. Each rule has a specific positive or negative weight which is combined with other types of rules to score each message. This allows for the use of rules that act (by whatever criteria are available in the specific software) to "whitelist" mail that would otherwise be rejected due to a DNSBL listing or due to other rules. This can also have the problem of heavy DNS lookup load for no useful results, but it may not delay mail as much because scoring makes it possible for lookups to be done in parallel and asynchronously while the filter is checking the message against the other rules.
It is possible with some toolsets to blend the binary testing and weighted rule approaches. One way to do this is to first check white lists and accept the message if the source is on a white list, bypassing all other testing mechanisms. A technique developed by Junk Email Filter uses Yellow Lists and NoBL lists to mitigate the false positives that occur routinely when using black lists that are not carefully maintained to avoid them.
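A simplified sketch of that blended approach, with made-up list names, weights and threshold; the point is only to show a whitelist hit short-circuiting further lookups while blacklist hits contribute weighted scores, roughly in the spirit of SpamAssassin-style rules rather than reproducing any particular product's behaviour.

import socket

# Hypothetical list names, rule weights and threshold, for illustration only.
WHITELISTS = ["white.example.net"]
WEIGHTED_BLACKLISTS = {"bl-one.example.net": 2.5, "bl-two.example.net": 1.0}
SPAM_THRESHOLD = 3.0

def is_listed(ip, zone):
    name = ".".join(reversed(ip.split("."))) + "." + zone
    try:
        socket.gethostbyname(name)
        return True
    except socket.gaierror:
        return False

def sender_score(ip):
    # A whitelist hit is accepted outright, bypassing all further lookups.
    if any(is_listed(ip, zone) for zone in WHITELISTS):
        return 0.0
    # Otherwise, each blacklist that lists the sender adds its rule weight.
    return sum(weight for zone, weight in WEIGHTED_BLACKLISTS.items() if is_listed(ip, zone))

ip = "192.168.42.23"
print("reject" if sender_score(ip) >= SPAM_THRESHOLD else "accept")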
Some DNSBLs have been created for uses other than filtering email for spam, but rather for demonstration, informational, rhetorical, and testing control purposes. Examples include the "No False Negatives List," "Lucky Sevens List," "Fibonacci's List," various lists encoding GeoIP information, and random selection lists scaled to match coverage of another list, useful as a control for determining whether that list's effects are distinguishable from random rejections.
Criticism
Some end-users and organizations have concerns regarding the concept of DNSBLs or the specifics of how they are created and used. Some of the criticisms include:
Legitimate emails blocked along with spam from shared mailservers. When an ISP's shared mailserver has one or more compromised machines sending spam, it can become listed on a DNSBL. End-users assigned to that same shared mailserver may find their emails blocked by receiving mailservers using such a DNSBL. In May 2016, the SORBS system was blocking the SMTP servers of Telstra Australia, Australia's largest internet service provider. This is no surprise, as at any one time there would be thousands of computers connected to this mail server infected by zombie-type viruses sending spam. The effect is to cut off all the legitimate emails from the users of the Telstra Australia system.
Lists of dynamic IP addresses. This type of DNSBL lists IP addresses submitted by ISPs as dynamic and therefore presumably unsuitable to send email directly; the end-user is supposed to use the ISP's mailserver for all sending of email. But these lists can also accidentally include static addresses, which may be legitimately used by small-business owners or other end-users to host small email servers.
Lists that include "spam-support operations", such as MAPS RBL. A spam-support operation is a site that may not directly send spam, but provides commercial services for spammers, such as hosting of Web sites that are advertised in spam. Refusal to accept mail from spam-support operations is intended as a boycott to encourage such sites to cease doing business with spammers, at the expense of inconveniencing non-spammers who use the same site as spammers.
Some lists have unclear listing criteria and delisting may not happen automatically nor quickly. A few DNSBL operators will request payment (e.g. uceprotect.net) or donation (e.g. SORBS). Some of the many listing/delisting policies can be found in the Comparison of DNS blacklists article.
Because lists have varying methods for adding IP addresses and/or URIs, it can be difficult for senders to configure their systems appropriately to avoid becoming listed on a DNSBL. For example, the UCEProtect DNSBL seems to list IP addresses merely once they have validated a recipient address or established a TCP connection, even if no spam message is ever delivered.
Despite the criticisms, few people object to the principle that mail-receiving sites should be able to reject undesired mail systematically. One person who does is John Gilmore, who deliberately operates an open mail relay. Gilmore accuses DNSBL operators of violating antitrust law.
A number of parties, such as the Electronic Frontier Foundation and Peacefire, have raised concerns about some use of DNSBLs by ISPs. One joint statement issued by a group including EFF and Peacefire addressed "stealth blocking", in which ISPs use DNSBLs or other spam-blocking techniques without informing their clients.
Lawsuits
Spammers have pursued lawsuits against DNSBL operators on similar grounds:
In 2003, EMarketersAmerica.org filed a lawsuit against a number of DNSBL operators in a Florida court. Backed by spammer Eddy Marin, the company claimed to be a trade organization for email marketers and that DNSBL operators Spamhaus and SPEWS were engaged in restraint of trade. The suit was eventually dismissed for lack of standing.
In 2006, a U.S. court ordered Spamhaus to pay $11.7 million in damages to the spammer e360 Insight LLC. The order was a default judgment, as Spamhaus, which is based in the UK, had refused to recognize the court's jurisdiction and did not defend itself in the e360 lawsuit. In 2011, the decision was overturned by the United States Court of Appeals for the Seventh Circuit.
See also
Comparison of DNS blacklists
DNSWL
Email spam
Notes
References
External links
Blacklist Monitor - Weekly statistics of success and failure rates for specific blacklists
How to Create a DNSBL - Tutorial on how to create a DNSBL (DNS Black List)
Spamming
Spam filtering
Internet terminology |
54367124 | https://en.wikipedia.org/wiki/ZeniMax%20v.%20Oculus | ZeniMax v. Oculus | ZeniMax v. Oculus is a civil lawsuit filed by ZeniMax Media against Oculus VR on charges of theft of intellectual property relating to Oculus' virtual reality device, the Oculus Rift. The matter was settled with a private out-of-court agreement by December 2018.
Background
The Oculus Rift is a virtual reality headset that was developed by Oculus VR. The company was founded by Palmer Luckey, who had a keen interest in head-mounted displays and had developed a prototype of the Oculus Rift by 2012. Concurrently, John Carmack, at the time of id Software, a subsidiary under ZeniMax Media, also was interested in head-mounted displays, saw Luckey's prototype for the Rift and modified it, showcasing the updated prototype at Electronic Entertainment Expo 2012 (E3) that June using a modified version of id's Doom 3.
Following E3 2012, Oculus VR launched a Kickstarter to fund further development of the Rift, ultimately raising more than $2.4 million through it, one of the largest crowdfunding ventures at that time. The attention led to additional venture funding, with more than $91 million invested by 2013. During this, Carmack left id Software to become the Chief Technology Officer for Oculus VR. In March 2014, Mark Zuckerberg announced that Facebook had acquired Oculus VR for $2 billion.
Filings
Around May 2014, shortly after Facebook made its deal to acquire Oculus VR, the Wall Street Journal reported that ZeniMax had sent two letters to Facebook and Oculus VR, asserting that any technology contributions Carmack had made towards VR while he was still an employee of id Software, including the "VR testbed" that Carmack had frequently demonstrated, were within the intellectual property (IP) of id and ZeniMax. In a statement, ZeniMax said that they "provided necessary VR technology and other valuable assistance to Palmer Luckey and other Oculus employees in 2012 and 2013 to make the Oculus Rift a viable VR product, superior to other VR market offerings."
ZeniMax stated that a 2012 non-disclosure agreement (NDA) and a non-ownership agreement that covered VR technology and signed by Luckey, prior to Oculus VR's formation, would cover any of Carmack's contributions to VR.
ZeniMax contends that they had attempted to resolve these issues with Oculus prior to their acquisition by Facebook "whereby ZeniMax would be compensated for its intellectual property through equity ownership in Oculus but were unable to reach a satisfactory resolution". Oculus denied the claims, stating that "It's unfortunate, but when there's this type of transaction, people come out of the woodwork with ridiculous and absurd claims".
ZeniMax formally filed a lawsuit against Luckey and Oculus VR on May 21, 2014 in the United States District Court for the Northern District of Texas, seeking a jury trial. The lawsuit contended that Luckey and Oculus used ZeniMax's "trade secrets, copyrighted computer code, and technical know-how relating to virtual reality technology", as provided by Carmack, to develop the Oculus Rift product, and sought financial damages for contract breach, copyright infringement, and unfair competition. ZeniMax also charged that Oculus, through Carmack, was able to hire several former ZeniMax/id Software employees who also had technical knowledge of its VR technology, which would allow them to rapidly fine-tune the VR testbed system to create the Rift. In its filings, ZeniMax revealed it had "invested tens of millions of dollars in research and development" into VR technology, and that because they felt "Oculus and Luckey lacked the necessary expertise and technical know-how to create a viable virtual reality headset", they "sought expertise and know-how from Zenimax".
Oculus initially responded to the charges by stating: "The lawsuit filed by ZeniMax has no merit whatsoever. As we have previously said, ZeniMax did not contribute to any Oculus technology. Oculus will defend these claims vigorously." The company filed its formal response on June 25, 2014, stating that ZeniMax "falsely claims ownership in Oculus VR technology in a transparent attempt to take advantage of the Oculus VR sale to Facebook". Oculus stated that prior to the acquisition by Facebook, "ZeniMax never raised any claim of infringement against Oculus VR, undoubtedly because ZeniMax never has contributed any intellectual property or technology to Oculus VR".
The response accused ZeniMax's filing of "deliberately misstating some facts and omitting others" and asserted that "there is not a line of ZeniMax code or any of its technology in any Oculus VR product". Oculus' response included photographs and documents that demonstrated it had been working on its own VR technology as early as August 2010. The response further contended that the key document of ZeniMax's suit, the NDA signed by Luckey, was "never finalized", and thus is not a "valid and enforceable agreement".
ZeniMax amended the case to include Facebook among the defendants on August 29, 2014. ZeniMax charged that Facebook intended to "leverage and commercially exploit Oculus's virtual reality technology — which is built upon ZeniMax's unlawfully misappropriated intellectual property — for the financial benefit of Facebook's core business of online social networking and advertising."
Oculus and Facebook attempted to have the case dismissed, but the presiding judge disagreed, and, in August 2015, allowed the case to proceed to a jury trial that started in August 2016.
District Court trial
During the discovery phase, ZeniMax had sought a deposition from Zuckerberg, believing he had "unique knowledge" of the Facebook-Oculus deal. At request from Facebook, the judge ruled that Zuckerberg must provide a deposition, but only after lesser-ranking employees had been deposed as to have a "less intrusive discovery" process.
In August 2016, it was discovered that ZeniMax had further modified their complaint, specifically adding by name Carmack as Oculus's CTO, and Brendan Iribe as Oculus' CEO. The updated complaint alleged that during his last days at id Software, Carmack "copied thousands of documents from a computer at ZeniMax to a USB storage device", and later after his employment was terminated he "returned to ZeniMax's premises to take a customized tool for developing VR Technology belonging to ZeniMax that itself is part of ZeniMax's VR technology". ZeniMax's complaint charged that Iribe had directed Oculus to "[disseminate] to the press the false and fanciful story that Luckey was the brilliant inventor of VR technology who had developed that technology in his parents' garage", when "Luckey lacked the training, expertise, resources, or know-how to create commercially viable VR technology", thus aiding in the IP theft from ZeniMax.
Further, the updated complaint asserted that Facebook had more involvement, as it knew or had reason to know that Oculus' claims on the VR IP were false. Oculus responded, "This complaint filed by ZeniMax is one-sided and conveys only ZeniMax's interpretation of the story." The court had a computer forensic expert evaluate the contents of Carmack's computer, and on October 28, 2016, the expert reported that from their findings "statements and representations that have been sworn to and are before the court are factually inaccurate". The court ordered Oculus to comply with providing previously-redacted communications it had with Carmack as a result, as well as requesting Samsung to provide information on its Samsung Gear VR, which was co-created with Oculus.
The jury trial started on January 9, 2017. During the trial, Zuckerberg testified in court that he believed the allegations from ZeniMax Media were false. In the plaintiff's closing arguments, ZeniMax's lawyers believed that they should receive $2 billion in compensation for Oculus' actions, and an additional $4 billion in punitive damages.
Decision
The jury trial completed on February 2, 2017, with the jury finding that Luckey had violated the NDA he had with ZeniMax, and awarding $500 million to ZeniMax. However, the jury found that Oculus, Facebook, Luckey, Iribe, and Carmack did not misappropriate or steal trade secrets, though ZeniMax continued to publicly assert otherwise. Oculus was ordered to pay $200 million for breaking the non-disclosure agreement and an additional $50 million for copyright infringement; for the false designation of origin charges, Oculus and Luckey were each ordered to pay $50 million, while Iribe was held responsible for $150 million.
Appeals and additional actions
While Oculus said "the jury found decisively in our favor" over the issue of trade secrets, the company planned to file an appeal on the other charges. Carmack stated that he disagreed with the decision, particularly on ZeniMax's "characterization, misdirection, and selective omissions" regarding his behavior, adding that he had accounted for all the data in his possession. Carmack took issue with one of ZeniMax's expert witnesses, who testified that non-literal copying, the act of creating a program with similar functions but using different computer code, constitutes a copyright violation.
ZeniMax stated that it was considering seeking a court-ordered halt to sales of all Oculus Rift units in light of the jury decision, and on February 24, 2017, filed an injunction to have the court halt sales of the Oculus Rift and development kits. Oral arguments for these injunctions were held on June 19, 2017. ZeniMax argued that either Oculus should halt sales of the Rift, or that ZeniMax should otherwise receive 20% of all Rift sales over the next ten years. Oculus, in addition to its own filings towards a new trial, requested that the judge throw out the jury verdict or reduce the penalty to $50 million. In June 2018, the judge overseeing the case agreed to cut the damages owed by Oculus in half, to $250 million plus interest, while denying ZeniMax their request to halt Oculus sales.
By December 2018, ZeniMax stated that they had reached a settlement agreement with Facebook and Oculus for an undisclosed amount, pending approval by the federal court.
Other lawsuits
In a separate lawsuit filed in March 2017, Carmack asserted that ZeniMax did not complete its payment to him for the acquisition of id Software, and asked the court to award him the remaining $22 million he stated the company owed him. ZeniMax's lawyers asserted that, based on the Oculus case, ZeniMax had not been found in violation of Carmack's employment agreement after it acquired id Software, and that this new suit was "without merit". By October 2018, Carmack stated that he and ZeniMax had reached an agreement and that "Zenimax has fully satisfied their obligations to me", and this lawsuit was subsequently dropped.
In May 2017, ZeniMax filed a new lawsuit against Samsung over the Gear VR. In addition to the previous legal decision over IP issues related to the Oculus technology used in the Gear VR, ZeniMax also contends in the new suit that Carmack had allowed Matt Hooper, who had recently been fired as a creative director from id, into id's facilities after hours without permission to work out an "attack plan" for developing mobile VR, which would ultimately lead to the Gear VR device. ZeniMax seeks punitive damages as well as a share of profits from the Gear VR.
References
Virtual reality
Facebook litigation
ZeniMax Media
Id Software
Oculus Rift
Intellectual property law
2017 in United States case law |
29214348 | https://en.wikipedia.org/wiki/List%20of%20computer%20books | List of computer books | List of computer-related books which have articles on Wikipedia for themselves or their writers.
Programming
Bjarne Stroustrup - The C++ Programming Language
Brian W. Kernighan, Rob Pike - The Practice of Programming
Donald Knuth - The Art of Computer Programming
Ellen Ullman - Close to the Machine
Ellis Horowitz - Fundamentals of Computer Algorithms
Eric Raymond - The Art of Unix Programming
Gerald M. Weinberg - The Psychology of Computer Programming
James Gosling - The Java Programming Language
Joel Spolsky - The Best Software Writing I
Keith Curtis - After the Software Wars
Richard M. Stallman - Free Software, Free Society
Richard P. Gabriel - Patterns of Software
Richard P. Gabriel - Innovation Happens Elsewhere
Hackers and hacker culture
Steven Levy - Hackers: Heroes of the Computer Revolution
Douglas Thomas - Hacker Culture
Open Sources: Voices from the Open Source Revolution
Suelette Dreyfus - Underground: Hacking, Madness and Obsession on the Electronic Frontier
Eric S. Raymond - The New Hacker's Dictionary
Sam Williams - Free as in Freedom
Bruce Sterling - The Hacker Crackdown
Kevin Mitnick - Ghost in the Wires
Malcolm Nance - The Plot to Hack America
Internet
Jack Goldsmith, Tim Wu - Who Controls the Internet? Illusions of a Borderless World
Douglas Rushkoff - Cyberia: Life in the Trenches of Hyperspace
Books
Computer
Computer books
Books about computer hacking |
1952952 | https://en.wikipedia.org/wiki/Link%20aggregation | Link aggregation | In computer networking, link aggregation is the combining (aggregating) of multiple network connections in parallel by any of several methods, in order to increase throughput beyond what a single connection could sustain, to provide redundancy in case one of the links should fail, or both. A link aggregation group (LAG) is the combined collection of physical ports.
Other umbrella terms used to describe the concept include trunking, bundling, bonding, channeling or teaming.
Implementation may follow vendor-independent standards such as Link Aggregation Control Protocol (LACP) for Ethernet, defined in IEEE 802.1AX or the previous IEEE 802.3ad, but also proprietary protocols.
Motivation
Link aggregation increases the bandwidth and resilience of Ethernet connections.
Bandwidth requirements do not scale linearly. Ethernet bandwidths historically have increased tenfold each generation: 10 megabit/s, 100 Mbit/s, 1000 Mbit/s, 10,000 Mbit/s. If one started to bump into bandwidth ceilings, then the only option was to move to the next generation, which could be cost prohibitive. An alternative solution, introduced by many of the network manufacturers in the early 1990s, is to use link aggregation to combine two physical Ethernet links into one logical link. Most of these early solutions required manual configuration and identical equipment on both sides of the connection.
There are three single points of failure inherent to a typical port-cable-port connection, in either a computer-to-switch or a switch-to-switch configuration: the cable itself or either of the ports the cable is plugged into can fail. Multiple logical connections can be made, but many of the higher level protocols were not designed to fail over completely seamlessly. Combining multiple physical connections into one logical connection using link aggregation provides more resilient communications.
Architecture
Network architects can implement aggregation at any of the lowest three layers of the OSI model. Examples of aggregation at layer 1 (physical layer) include power line (e.g. IEEE 1901) and wireless (e.g. IEEE 802.11) network devices that combine multiple frequency bands. OSI layer 2 (data link layer, e.g. Ethernet frame in LANs or multi-link PPP in WANs, Ethernet MAC address) aggregation typically occurs across switch ports, which can be either physical ports or virtual ones managed by an operating system. Aggregation at layer 3 (network layer) in the OSI model can use round-robin scheduling, hash values computed from fields in the packet header, or a combination of these two methods.
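As a rough illustration of the round-robin option mentioned above, the following minimal Python sketch cycles outgoing packets across a fixed set of links; it is an illustrative assumption only, with hypothetical link names and packet labels rather than anything taken from a real network stack.

from itertools import cycle

def round_robin_scheduler(links):
    """Yield the aggregated links in strict rotation, one per outgoing packet."""
    return cycle(links)

# Hypothetical example: three aggregated links
links = ["eth0", "eth1", "eth2"]
scheduler = round_robin_scheduler(links)
for i in range(7):
    print(f"packet-{i} -> {next(scheduler)}")  # packets alternate eth0, eth1, eth2, eth0, ...

Because consecutive packets of one conversation can leave on different links, per-packet schemes like this trade even utilization for possible out-of-order delivery, which is the caveat noted in the next paragraph.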
Regardless of the layer on which aggregation occurs, it is possible to balance the network load across all links. However, in order to avoid out-of-order delivery, not all implementations take advantage of this. Most methods provide failover as well.
Combining can either occur such that multiple interfaces share one logical address (i.e. IP) or one physical address (i.e. MAC address), or it allows each interface to have its own address. The former requires that both ends of a link use the same aggregation method, but has performance advantages over the latter.
Channel bonding is differentiated from load balancing in that load balancing divides traffic between network interfaces on per network socket (layer 4) basis, while channel bonding implies a division of traffic between physical interfaces at a lower level, either per packet (layer 3) or a data link (layer 2) basis.
IEEE link aggregation
Standardization process
By the mid 1990s, most network switch manufacturers had included aggregation capability as a proprietary extension to increase bandwidth between their switches. Each manufacturer developed its own method, which led to compatibility problems. The IEEE 802.3 working group took up a study group to create an interoperable link layer standard (i.e. encompassing the physical and data-link layers both) in a November 1997 meeting. The group quickly agreed to include an automatic configuration feature which would add in redundancy as well. This became known as Link Aggregation Control Protocol (LACP).
802.3ad
Most gigabit channel-bonding schemes use the IEEE standard of Link Aggregation, which was formerly clause 43 of the IEEE 802.3 standard, added in March 2000 by the IEEE 802.3ad task force. Nearly every network equipment manufacturer quickly adopted this joint standard over their proprietary standards.
802.1AX
The 802.3 maintenance task force report for the 9th revision project in November 2006 noted that certain 802.1 layers (such as 802.1X security) were positioned in the protocol stack below Link Aggregation which was defined as an 802.3 sublayer. To resolve this discrepancy, the 802.3ax (802.1AX) task force was formed, resulting in the formal transfer of the protocol to the 802.1 group with the publication of IEEE 802.1AX-2008 on 3 November 2008.
Link Aggregation Control Protocol
Within the IEEE Ethernet standards, the Link Aggregation Control Protocol (LACP) provides a method to control the bundling of several physical links together to form a single logical link. LACP allows a network device to negotiate an automatic bundling of links by sending LACP packets to their peer, a directly connected device that also implements LACP.
LACP Features and practical examples
Maximum number of bundled ports allowed in the port channel: Valid values are usually from 1 to 8.
LACP packets are sent to the multicast group MAC address 01:80:C2:00:00:02
During the LACP detection period, LACP packets are transmitted every second
Keep-alive mechanism for link member: (default: slow = 30s, fast=1s)
Selectable load-balancing mode is available in some implementations
LACP mode:
Active: Enables LACP unconditionally.
Passive: Enables LACP only when an LACP device is detected. (This is the default state)
Advantages over static configuration
Failover occurs automatically: When a link has an intermediate failure, for example in a media converter between the devices, a peer system may not perceive any connectivity problems. With static link aggregation, the peer would continue sending traffic down the link causing the connection to fail.
Dynamic configuration: The device can confirm that the configuration at the other end can handle link aggregation. With static link aggregation, a cabling or configuration mistake could go undetected and cause undesirable network behavior.
Practical notes
LACP works by sending frames (LACPDUs) down all links that have the protocol enabled. If it finds a device on the other end of a link that also has LACP enabled, that device will independently send frames along the same links in the opposite direction enabling the two units to detect multiple links between themselves and then combine them into a single logical link. LACP can be configured in one of two modes: active or passive. In active mode, LACPDUs are sent 1 per second along the configured links. In passive mode, LACPDUs are not sent until one is received from the other side, a speak-when-spoken-to protocol.
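The active/passive distinction can be pictured with a small simulation. The Python sketch below is only a schematic of the speak-when-spoken-to behaviour described above; it does not implement the real LACPDU format or state machines, and the class and method names are invented for the example.

class LacpPort:
    def __init__(self, name, mode):
        assert mode in ("active", "passive")
        self.name, self.mode = name, mode
        self.heard_peer = False

    def tick(self):
        # Called once per transmit interval; active ports always talk,
        # passive ports stay silent until they have heard the peer.
        if self.mode == "active" or self.heard_peer:
            return f"LACPDU from {self.name}"
        return None

    def receive(self, pdu):
        if pdu is not None:
            self.heard_peer = True

a, b = LacpPort("switch-A", "active"), LacpPort("switch-B", "passive")
for second in range(3):
    pdu_a, pdu_b = a.tick(), b.tick()
    b.receive(pdu_a)
    a.receive(pdu_b)
    print(second, pdu_a, pdu_b)
# The passive port replies only after hearing the active one, which is why
# two passive ports facing each other would never bring the aggregate up.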
Proprietary link aggregation
In addition to the IEEE link aggregation substandards, there are a number of proprietary aggregation schemes including Cisco's EtherChannel and Port Aggregation Protocol, Juniper's Aggregated Ethernet, AVAYA's Multi-Link Trunking, Split Multi-Link Trunking, Routed Split Multi-Link Trunking and Distributed Split Multi-Link Trunking, ZTE's Smartgroup, Huawei's Eth-Trunk, and Connectify's Speedify. Most high-end network devices support some form of link aggregation. Software-based implementations – such as the *BSD lagg package, Linux bonding driver, Solaris dladm aggr, etc. – exist for many operating systems.
Linux bonding driver
The Linux bonding driver provides a method for aggregating multiple network interface controllers (NICs) into a single logical bonded interface of two or more so-called (NIC) slaves. The majority of modern Linux distributions come with a Linux kernel which has the Linux bonding driver integrated as a loadable kernel module and the ifenslave (if = [network] interface) user-level control program pre-installed. Donald Becker programmed the original Linux bonding driver. It came into use with the Beowulf cluster patches for the Linux kernel 2.0.
Modes for the Linux bonding driver (network interface aggregation modes) are supplied as parameters to the kernel bonding module at load time. They may be given as command-line
arguments to the insmod or modprobe commands, but are usually specified in a Linux distribution-specific configuration file. The behavior of the single logical bonded interface depends upon its specified bonding driver mode. The default parameter is balance-rr.
Round-robin (balance-rr) Transmit alternate network packets in sequential order from the first available NIC slave through the last. This mode provides load balancing and fault tolerance. This mode can cause congestion control issues due to the packet reordering it can introduce.
Active-backup (active-backup) Only one NIC slave in the bond is active. A different slave becomes active if, and only if, the active slave fails. The single logical bonded interface's MAC address is externally visible on only one NIC (port) to simplify forwarding in the network switch. This mode provides fault tolerance.
XOR (balance-xor) Transmit network packets based on a hash of the packet's source and destination. The default algorithm only considers MAC addresses (layer2). Newer versions allow selection of additional policies based on IP addresses (layer2+3) and TCP/UDP port numbers (layer3+4). This selects the same NIC slave for each destination MAC address, IP address, or IP address and port combination, respectively. Single connections will have guaranteed in order packet delivery and will transmit at the speed of a single NIC. This mode provides load balancing and fault tolerance.
Broadcast (broadcast) Transmit network packets on all slave network interfaces. This mode provides fault tolerance.
IEEE 802.3ad Dynamic link aggregation (802.3ad, LACP) Creates aggregation groups that share the same speed and duplex settings. Utilizes all slave network interfaces in the active aggregator group according to the 802.3ad specification. This mode is similar to the XOR mode above and supports the same balancing policies. The link is set up dynamically between two LACP-supporting peers.
Adaptive transmit load balancing (balance-tlb) Linux bonding driver mode that does not require any special network-switch support. The outgoing network packet traffic is distributed according to the current load (computed relative to the speed) on each network interface slave. Incoming traffic is received by one currently designated slave network interface. If this receiving slave fails, another slave takes over the MAC address of the failed receiving slave.
Adaptive load balancing (balance-alb) includes balance-tlb plus receive load balancing (rlb) for IPv4 traffic, and does not require any special network switch support. The receive load balancing is achieved by ARP negotiation. The bonding driver intercepts the ARP Replies sent by the local system on their way out and overwrites the source hardware address with the unique hardware address of one of the NIC slaves in the single logical bonded interface, such that different network peers use different MAC addresses for their network packet traffic.
Linux Team driver
The Linux Team driver provides an alternative to bonding driver. The main difference is that Team driver kernel part contains only essential code and the rest of the code (link validation, LACP implementation, decision making, etc.) is run in userspace as a part of teamd daemon.
Usage
Network backbone
Link aggregation offers an inexpensive way to set up a high-speed backbone network that transfers much more data than any single port or device can deliver. Link aggregation also allows the network's backbone speed to grow incrementally as demand on the network increases, without having to replace everything and deploy new hardware.
Most backbone installations install more cabling or fiber optic pairs than is initially necessary, even if they have no immediate need for the additional cabling. This is done because labor costs are higher than the cost of the cable, and running extra cable reduces future labor costs if networking needs change. Link aggregation can allow the use of these extra cables to increase backbone speeds for little or no extra cost if ports are available.
Order of frames
When balancing traffic, network administrators often wish to avoid reordering Ethernet frames. For example, TCP suffers additional overhead when dealing with out-of-order packets. This goal is approximated by sending all frames associated with a particular session across the same link. Common implementations use L2 or L3 hashes (i.e. based on the MAC or the IP addresses), ensuring that the same flow is always sent via the same physical link.
However, this may not provide even distribution across the links in the trunk when only a single or very few pairs of hosts communicate with each other, i.e. when the hashes provide too little variation. It effectively limits the client bandwidth in aggregate to its single member's maximum bandwidth per communication partner. In extreme, one link is fully loaded while the others are completely idle. For this reason, an even load balancing and full utilization of all trunked links is almost never reached in real-life implementations. More advanced switches can employ an L4 hash (i.e. using TCP/UDP port numbers), which may increase the traffic variation across the links – depending on whether the ports vary – and bring the balance closer to an even distribution.
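A hedged sketch of this kind of flow hashing is shown below in Python. The tuple layout and modulo selection mirror the layer 2+3 and layer 3+4 policies in spirit only; real implementations use their own hash functions, and the addresses and port numbers here are made up for illustration.

import zlib

def pick_link(flow, links, use_ports=False):
    """Map a flow to one aggregated link; the same flow always maps to the same link."""
    src, dst, sport, dport = flow
    key = f"{src}-{dst}"
    if use_ports:          # roughly a "layer 3+4" style policy
        key += f"-{sport}-{dport}"
    return links[zlib.crc32(key.encode()) % len(links)]

links = ["eth0", "eth1", "eth2", "eth3"]
flows = [("10.0.0.1", "10.0.0.2", port, 80) for port in (40000, 40001, 40002, 40003)]
print([pick_link(f, links) for f in flows])                  # address-only hash: every flow lands on one link
print([pick_link(f, links, use_ports=True) for f in flows])  # adding ports usually spreads the flows out

With only one pair of hosts, the address-only policy pins every flow to a single member link, which is exactly the aggregate-bandwidth limitation described above; including port numbers gives the hash enough variation to approach an even distribution.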
Maximum throughput
Multiple switches may be utilized to optimize for maximum throughput in a multiple network switch topology, when the switches are configured in parallel as part of an isolated network between two or more systems. In this configuration, the switches are isolated from one another. One reason to employ a topology such as this is for an isolated network with many hosts (a cluster configured for high performance, for example), using multiple smaller switches can be more cost effective than a single larger switch. If access beyond the network is required, an individual host can be equipped with an additional network device connected to an
external network; this host then additionally acts as a gateway. The network interfaces 1 through 3 of computer cluster node A, for example, are connected via separate network switches 1 through 3 with network interfaces 1 through 3 of computer cluster node B; there are no inter-connections between the network switches 1 through 3. The Linux bonding driver mode typically employed in configurations of this type is balance-rr; the balance-rr mode allows individual connections between two hosts to effectively utilize greater than one interface's bandwidth.
Use on network interface cards
NICs trunked together can also provide network links beyond the throughput of any one single NIC. For example, this allows a central file server to establish an aggregate 2-gigabit connection using two 1-gigabit NICs teamed together. Note the data signaling rate will still be 1Gbit/s, which can be misleading depending on methodologies used to test throughput after link aggregation is employed.
Microsoft Windows
Microsoft Windows Server 2012 supports link aggregation natively. Previous Windows Server versions relied on manufacturer support of the feature within their device driver software. Intel, for example, released Advanced Networking Services (ANS) to bond Intel Fast Ethernet and Gigabit cards. Nvidia also supports "teaming" with their Nvidia Network Access Manager/Firewall Tool. HP also has a teaming tool for HP branded NICs which allows for non-EtherChanneled NIC teaming or supports several modes of EtherChannel (port aggregation) including 802.3ad with LACP. In addition, there is a basic layer-3 aggregation (available at least from Windows XP SP3) that allows servers with multiple IP interfaces on the same network to perform load balancing, and home users with more than one internet connection to increase connection speed by sharing the load on all interfaces.
Broadcom offers advanced functions via Broadcom Advanced Control Suite (BACS), via which the teaming functionality of BASP ("Broadcom Advanced Server Program") is available, offering 802.3ad static LAGs, LACP, and "smart teaming" which doesn't require any configuration on the switches to work. It is possible to configure teaming with BACS with a mix of NICs from different vendors as long as at least one of them is Broadcom and the other NICs have the required capabilities to create teaming.
Linux and UNIX
Linux, FreeBSD, NetBSD, OpenBSD, macOS, OpenSolaris and commercial Unix distributions such as AIX implement Ethernet bonding (trunking) at a higher level, and can hence deal with NICs from different manufacturers or drivers, as long as the NIC is supported by the kernel.
Virtualization platforms
Citrix XenServer and VMware ESX have native support for link-aggregation. XenServer offers both static LAGs as well as LACP. vSphere 5.1 (ESXi) supports both static LAGs and LACP natively with their virtual distributed switch.
For Microsoft's Hyper-V, bonding or teaming isn't offered from the hypervisor or OS level, but the above-mentioned methods for teaming under Windows apply to Hyper-V as well.
Limitations
Single switch
With the modes balance-rr, balance-xor, broadcast and 802.3ad, all physical ports in the link aggregation group must reside on the same logical switch, which, in most common scenarios, will leave a single point of failure when the physical switch to which all links are connected goes offline. The modes active-backup, balance-tlb, and balance-alb can also be set up with two or more switches. But after failover (like all other modes), in some cases, active sessions may fail (due to ARP problems) and have to be restarted.
However, almost all vendors have proprietary extensions that resolve some of this issue: they aggregate multiple physical switches into one logical switch. The Split multi-link trunking (SMLT) protocol allows multiple Ethernet links to be split across multiple switches in a stack, preventing any single point of failure and additionally allowing all switches to be load balanced across multiple aggregation switches from the single access stack. These devices synchronize state across an Inter-Switch Trunk (IST) such that they appear to the connecting (access) device to be a single device (switch block) and prevent any packet duplication. SMLT provides enhanced resiliency with sub-second failover and sub-second recovery for all speed trunks (10 Mbit/s, 100 Mbit/s, 1,000 Mbit/s, and 10 Gbit/s) while operating transparently to end-devices.
Same link speed
In most implementations, all the ports used in an aggregation consist of the same physical type, such as all copper ports (10/100/1000BASE‑T), all multi-mode fiber ports, or all single-mode fiber ports. However, all the IEEE standard requires is that each link be full duplex and all of them have an identical speed (10, 100, 1,000 or 10,000 Mbit/s).
Many switches are PHY independent, meaning that a switch could have a mixture of copper, SX, LX, LX10 or other GBICs. While maintaining the same PHY is the usual approach, it is possible to aggregate a 1000BASE-SX fiber for one link and a 1000BASE-LX (longer, diverse path) for the second link, but the important thing is that the speed will be 1 Gbit/s full duplex for both links. One path may have a slightly longer propagation time but the standard has been engineered so this will not cause an issue.
Ethernet aggregation mismatch
Aggregation mismatch refers to not matching the aggregation type on both ends of the link. Some switches do not implement the 802.1AX standard but support static configuration of link aggregation. Therefore, link aggregation between similarly statically configured switches will work but will fail between a statically configured switch and a device that is configured for LACP.
Examples
Ethernet
On Ethernet interfaces, channel bonding requires assistance from both the Ethernet switch and the host computer's operating system, which must "stripe" the delivery of frames across the network interfaces in the same manner that I/O is striped across disks in a RAID 0 array. For this reason, some discussions of channel bonding also refer to Redundant Array of Inexpensive Nodes (RAIN) or to "redundant array of independent network interfaces".
Modems
In analog modems, multiple dial-up links over POTS may be bonded. Throughput over such bonded connections can come closer to the aggregate bandwidth of the bonded links than can throughput under routing schemes which simply load-balance outgoing network connections over the links.
DSL
Similarly, multiple DSL lines can be bonded to give higher bandwidth; in the United Kingdom, ADSL is sometimes bonded to give for example 512kbit/s upload bandwidth and 4 megabit/s download bandwidth, in areas that only have access to 2 megabit/s bandwidth.
DOCSIS
Under the DOCSIS 3.0 and 3.1 specifications for data over cable TV (CATV) systems, multiple channels may be bonded. Under DOCSIS 3.0, up to 32 downstream and 8 upstream channels may be bonded. These are typically 6 or 8 MHz wide. DOCSIS 3.1 defines more complicated arrangements involving aggregation at the level of subcarriers and larger notional channels.
Wireless Broadband
Broadband bonding is a type of channel bonding that refers to aggregation of multiple channels at OSI layer four or above. Channels bonded can be wired links such as a T-1 or DSL line. Additionally, it is possible to bond multiple cellular links for an aggregated wireless bonded link.
Previous bonding methodologies resided at lower OSI layers, requiring coordination with telecommunications companies for implementation. Broadband bonding, because it is implemented at higher layers, can be done without this coordination.
Commercial implementations of Broadband Channel Bonding include:
Wistron AiEdge Corporation's U-Bonding Technology
Mushroom Networks' Broadband Bonding Service
Connectify's Speedify fast bonding VPN - software app for multiple platforms: PC, Mac, iOS and Android
Peplink's SpeedFusion Bonding Technology
Viprinet's Multichannel VPN Bonding Technology
Elsight's Multichannel Secure Data Link
Synopi's Natiply Internet Bonding Technology
Wi-Fi
On 802.11 (Wi-Fi), channel bonding is used in Super G technology, referred to as 108 Mbit/s. It bonds two channels of standard 802.11g, which has a 54 Mbit/s data signaling rate.
On IEEE 802.11n, a mode with a channel width of 40 MHz is specified. This is not channel bonding, but a single channel with double the older 20 MHz channel width, thus using two adjacent 20 MHz bands. This allows direct doubling of the PHY data rate from a single 20 MHz channel, but the MAC and user-level throughput also depends on other factors so may not double.
See also
Inverse multiplexing
MC-LAG
Spanning Tree Protocol
References
General
Tech Tips - Bonding Modes
External links
IEEE P802.3ad Link Aggregation Task Force
Mikrotik link Aggregation / Bonding Guide
Configuring a Shared Ethernet Adapter (SEA) - IBM
Managing VLANs on mission-critical shared Ethernet adapters - IBM
Network overview by Rami Rosen (section about bonding)
Ethernet
Link protocols
Bonding protocols
Network performance
Network architecture |
40936742 | https://en.wikipedia.org/wiki/Loomio | Loomio | Loomio is decision-making software and web service designed to assist groups with collaborative, consensus-focused decision-making processes. It is a free software web application, where users can initiate discussions and put up proposals. As the discussions progress to initiating a proposal, the group receives feedback through an updatable pie chart or other data visualizations. Loomio is basically a web based forum (has optional email delivery interface) with tools to facilitate conversations and decision making processes from starting and holding conversations to reaching outcome.
Loomio was built by a small core group of developers based out of Wellington, New Zealand. Most of the work was done by this core group, but more than 70 contributors from around the world have occasionally made small contributions.
In 2014, Loomio raised over $100,000 via a crowdfunding effort to develop Loomio 1.0. The Loomio web interface supports mobile access and other enhancements. As of 2016, Loomio was used in more than 100 countries, with the software being translated into 35 languages.
History
Loomio emerged from the Occupy movement. In 2012, it launched its first prototype, which was used in the Occupy movement in New Zealand. After using the first prototype there, the team behind Loomio felt that it would be easier to give everyone a voice with online software, leading to the launch of Loomio 1.0. Since the launch of Loomio 1.0, Loomio has stopped using Occupy hand-signals in the interface. It has since been developed into a social enterprise, Loomio Cooperative Limited, and has been linked to the popular trend of "platform cooperativism" while also targeting mainstream markets.
Operation
The top-level organizational structure in Loomio is the group. A group is made up of members, who are granted permission to that group. Groups can be both public and private, permitting for both privacy and transparency.
Within the groups, members create discussions on specific topics. During a discussion, members of the group post comments and create proposals.
Proposals solicit feedback from members on a specific proposition. Members can either agree, disagree, abstain, or block. Blocking is essentially a strong form of disagreement.
Funding
Loomio is funded through contracts with government and business, and donations from its users.
Reception
Loomio has been used by the Wellington City Council for discussion with their citizens.
The Pirate Party of Hellas used Loomio to create 461 groups, covering 18 federal departments, 13 regions of Greece, 23 prefectures, and hundreds of counties and municipalities. The Internet Party of New Zealand also used Loomio to develop policy during the campaign for the 2014 General Election.
El Partido Pirata de Chile has also adopted Loomio through its own fork called Lumio, offering a slightly different translation into Spanish for the voting options, aimed both at emphasizing the importance of consensus and at improving language style by using verbs in the first person singular (Concuerdo, Discrepo, Me abstengo y Solicito Reformular). Additionally, the PPCL has promoted the use of Lumio in different areas of political discussion and group coordination inside and outside the Party.
Loomio won the MIX Prize Digital Freedom Challenge in April 2014.
Product Overview
There are three target items Loomio utilizes to create its collaborative working environment:
Groups: Within the "group" settings, administrators are able to manage user membership and grant access controls to specific employees, create subgroups through which managers can allocate breakout rooms to specific departments, and utilize other applications simultaneously with Loomio's active integration system.
EU specific
Loomio has committed to address the needs and regulations of the EU market. In 2018 the EU General Data Protection Regulation (GDPR) came into effect and Loomio confirmed that it is GDPR compliant. In 2019 Loomio started planning a Loomio.eu service to use EU based hosting in response to requests from EU users.
Integrations
Recent versions include more features of integration with other software. It is possible to connect Loomio group notifications to Slack, Microsoft Teams and Mattermost. It is also possible to add Single Sign On (SSO) using a central authentication provider such as Microsoft Azure Active Directory.
Projects using Loomio
Prominent projects that have used Loomio for collaborative work based on democratic process:
2013: Diaspora*
2014: Real democracy
2015: Podemos (Spanish political party)
2016: Students for Cooperation
2020: The Apache Software Foundation as of early 2020
2020: Consultative Council (Poland)
2020: Pirate Party of Belgium
See also
Collaborative software
Deliberative democracy
E-democracy
E-participation
Online consultation
Online deliberation
Public sphere
References
External links
Free Software Directory entry
Free software
Decision-making software
Social enterprises
Software using the GNU AGPL license
Software companies of New Zealand
2012 software
Cooperatives in New Zealand |
7933796 | https://en.wikipedia.org/wiki/Steve%20Vickers%20%28computer%20scientist%29 | Steve Vickers (computer scientist) | Steve Vickers (born c. 1953) is a British mathematician and computer scientist. In the early 1980s, he wrote ROM firmware and manuals for three home computers, the Sinclair ZX81 and ZX Spectrum and the Jupiter Ace. The latter was produced by Jupiter Cantab, a short-lived company Vickers formed together with Richard Altwasser, after the two had left Sinclair Research. Since the late 1980s, Vickers has been an academic in the field of geometric logic, writing over 30 papers in scholarly journals on mathematical aspects of computer science. His book Topology via Logic has been influential over a range of fields (extending even to theoretical physics, where Christopher Isham of Imperial College London has cited Vickers as an early influence on his work on topoi and quantum gravity). In October 2018, he retired as senior lecturer at the University of Birmingham. As announced on his university homepage, he continues to supervise PhD students at the university and focus on his research.
Education
Vickers graduated from King's College, Cambridge with a degree in mathematics and completed a PhD at Leeds University, also in mathematics.
Sinclair Research
In 1980 he started working for Nine Tiles, which had previously written the Sinclair BASIC for the ZX80. He was responsible for the adaptation of the 4K ZX80 ROM into the 8K ROM used in the ZX81 and also wrote the ZX81 manual. He then wrote most of the ZX Spectrum ROM, and assisted with the user documentation.
Vickers left in 1982 to form "Rainbow Computing Co." with Richard Altwasser. The company became Jupiter Cantab and they were together responsible for the development of the commercially unsuccessful Jupiter ACE, a competitor to the similar Sinclair ZX Spectrum.
Academia
Originally at the Department of Computing at Imperial College London, Vickers later joined the Department of Pure Mathematics at the Open University before moving to the School of Computer Science at the University of Birmingham, where he is currently a senior lecturer and the research student tutor of the School of Computer Science.
Research
Vickers' main interest lies within geometric logic. His book Topology via Logic introduces topology from the point of view of some computational insights developed by Samson Abramsky and Mike Smyth. It stresses the point-free approach and can be understood as dealing with theories in the so-called geometric logic, which was already known from topos theory and is a more stringent form of intuitionistic logic. However, the book was written in the language of classical mathematics.
Extending the ideas to toposes (as generalised spaces) he found himself channelled into constructive mathematics in a geometric form and in Topical Categories of Domains he set out a geometrisation programme of, where possible, using this geometric mathematics as a tool for treating point-free spaces (and toposes) as though they had "enough points". Much of his subsequent work has been in case studies to show that, with suitable techniques, it was indeed possible to do useful mathematics geometrically. In particular, a notion of "geometric transformation of points to spaces" gives a natural fibrewise treatment of topological bundles. A recent project of his has been to connect this with the topos approaches to physics as developed by Chris Isham and others (see Doering and Isham's What is a Thing? Topos Theory in the Foundations of Physics) at Imperial College, and Klaas Landsman's group at Radboud University Nijmegen (see Heunen, Landsman and Spitters' A Topos for Algebraic Quantum Theory).
Bibliography
Steven Vickers, "An induction principle for consequence in arithmetic universes", Journal of Pure and Applied Algebra 216 (8–9), ISSN 0022-4049, pp. 1705 – 2068, 2012.
Jung, Achim and Moshier, M. Andrew and Vickers, Steven, "Presenting dcpos and dcpo algebras", in Bauer, A. and Mislove, M., Proceedings of the 24th Conference on the Mathematical Foundations of Programming Semantics (MFPS XXIV), pp. 209–229, Electronic Notes in Theoretical Computer Science, Elsevier, 2008.
Steven Vickers, "Cosheaves and connectedness in formal topology", Annals of Pure and Applied Logic, ISSN 0168-0072, 2009.
Steven Vickers, "A localic theory of lower and upper integrals", Mathematical Logic Quarterly, 54 (1), pp. 109–103, 2008.
Steven Vickers, "Locales and toposes as spaces", in Aiello, Marco and Pratt-Hartmann, Ian E. and van Benthem, Johan F.A.K., Springer, Handbook of Spatial Logics, Springer, 2007, , Chapter 8, pp. 429–496.
Palmgren, Erik and Vickers, Steven, "Partial Horn logic and cartesian categories", Annals of Pure and Applied Logic, 145 (3), pp. 314–353, ISSN 0168-0072, 2007.
Steven Vickers, "Localic completion of generalized metric spaces I, Theory and Applications of Categories", ISSN 1201-561X, 14, pp. 328–356, 2005.
Steven Vickers, "Localic completion of generalized metric spaces II: Powerlocales, Journal of Logic and Analysis", ISSN 1759-9008, 1 (11), pp. 1–48, 2009.
Steven Vickers, "The double powerlocale and exponentiation: a case study in geometric logic", Theoretical Computer Science, ISSN 0304-3975, vol. 316, pp. 297–321, 2004.
Steven Vickers, "Topical Categories of Domains", in Winskel, Proceedings of the CLICS workshop, Aarhus, Computer Science Department, Aarhus University, 1992.
Vickers, S. J., "Topology via Constructive Logic", in Moss and Ginzburg and de Rijke, Logic, Language and Computation Vol II, Proceedings of conference on Information-Theoretic Approaches to Logic, Language, and Computation, 1996, , 157586181X, CSLI Publications, Stanford, pp. 336–345, 1999.
Vickers, S. J., "Toposes pour les vraiment nuls", in Edalat, A. and Jourdan, S. and McCusker, G., Advances in Theory and Formal Methods of Computing 1996, , Imperial College Press, London, pp. 1–12, 1996.
Vickers, S. J., "Toposes pour les nuls", Techreport Doc96/4, Department of Computing, Imperial College London, (first published in Semantics Society Newsletter no. 4).
Broda, K. and Eisenbach, S. and Khoshnevisan, H. and Vickers, S.J., "Reasoned Programming", , Prentice Hall, International Series in Computer Science, 1994.
Johnstone, P. T. and Vickers, S. J., "Preframe Presentations Present", in Carboni, A. and Pedicchio, M.C. and Rosolini, G., Category Theory – Proceedings, Como 1990, , 0-387-54706-1, Lecture Notes in Mathematics, 1488, Springer-Verlag, 1991.
Steven Vickers, "Topology Via Logic", Cambridge University Press, , 1996.
Doring, Andreas and Isham, Chris, "What is a Thing?: Topos Theory in the Foundations of Physics", in Bob Coecke, New Structures in Physics, Chapter 13, pp. 753–940, Lecture Notes in Physics, 813, Springer, 2011, , (also see arXiv:0803.0417v1.)
Heunen, Chris and Landsman, Nicolaas P. and Spitters, Bas, A Topos for Algebraic Quantum Theory, 2009, Communications in Mathematical Physics, 291 (1), pp. 63–110, ISSN 0010-3616 (Print) 1432-0916 (Online).
References
External links
Steve Vickers' homepage at the University of Birmingham
An interview with Richard Altwasser and Steven Vickers
Year of birth missing (living people)
Living people
Alumni of King's College, Cambridge
Alumni of the University of Leeds
British computer programmers
British technology writers
Sinclair Research
Academics of Imperial College London
Academics of the Open University
Academics of the University of Birmingham |
3359032 | https://en.wikipedia.org/wiki/Internet%20Protocol%20Control%20Protocol | Internet Protocol Control Protocol | In computer networking, Internet Protocol Control Protocol (IPCP) is a Network Control Protocol (NCP) for establishing and configuring Internet Protocol over a Point-to-Point Protocol link. IPCP is responsible for configuring the IP addresses as well as for enabling and disabling the IP protocol modules on both ends of the point-to-point link. IPCP uses the same packet exchange mechanism as the Link Control Protocol. IPCP packets may not be exchanged until PPP has reached the Network-Layer Protocol phase, and any IPCP packets received before this phase is reached should be silently discarded. IPCP has the NCP protocol code number 0x8021.
Each of the two endpoints of a PPP connection must send an IPCP configure request to its peer because the TCP/IP options are independent for each direction of a PPP connection.
A PPP endpoint can request a specific IP address from its peer. It can also ask the peer to suggest an IP address by requesting the address 0.0.0.0; the peer then sends its suggestion in an IPCP Nak packet, which the first peer must subsequently request in order to complete the negotiation. In practice, in protocols like PPPoE which is commonly used in home broadband connections, the latter method (request suggestion, nak with suggestion, request suggested address) is used to set the IP address of the ISP's client endpoint (i.e. the customer-premises equipment), while the former method (request address) is used to inform the client of the ISP endpoint IP (provider edge equipment).
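The request/Nak/request exchange described above can be sketched as a simple message trace. The Python fragment below is a simplified illustration of the flow only; it does not build real IPCP packets, and the function name and addresses are invented for the example.

def ipcp_address_negotiation(client_request, isp_suggestion):
    """Illustrative Configure-Request / Configure-Nak / Configure-Request exchange."""
    steps = [("client -> isp", "Configure-Request", client_request)]
    if client_request == "0.0.0.0":
        # The peer rejects the placeholder and suggests an address in a Nak.
        steps.append(("isp -> client", "Configure-Nak", isp_suggestion))
        # The client must then request the suggested address to complete negotiation.
        client_request = isp_suggestion
        steps.append(("client -> isp", "Configure-Request", client_request))
    steps.append(("isp -> client", "Configure-Ack", client_request))
    return steps

for step in ipcp_address_negotiation("0.0.0.0", "203.0.113.7"):
    print(step)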
A similar NCP, the IPv6 Control Protocol exists for IPv6. It can be used together with IPCP on the same PPP connection for a dual stack link. (When interfacing newer and older equipment that doesn't support IPv6 one sees LCP ProtRej messages for protocol 0x8057 from the side that doesn't support IPV6CP.)
IP Frame
After the configuration is done, the link is able to carry IP data as the payload of PPP frames. The PPP protocol field value for IP (0x0021) indicates that IP data is being carried.
IPCP header:
Code.
8 bits.
Specifies the function to be performed.
Identifier.
8 bits.
Used to match requests and replies.
Length.
16 bits.
Size of the packet including the header.
Data.
Variable length.
Zero or more bytes of data as indicated by the Length.
This field may contain one or more Options.
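As a rough illustration of this four-byte header layout, the minimal Python sketch below packs and parses the Code, Identifier and Length fields with the standard struct module. The field values and payload bytes are sample data chosen for the example, not taken from a real capture.

import struct

def build_ipcp_packet(code, identifier, data=b""):
    # Code (8 bits), Identifier (8 bits), Length (16 bits, includes the 4-byte header), then Data.
    return struct.pack("!BBH", code, identifier, 4 + len(data)) + data

def parse_ipcp_header(packet):
    code, identifier, length = struct.unpack("!BBH", packet[:4])
    return code, identifier, length, packet[4:length]

pkt = build_ipcp_packet(code=1, identifier=0x2A, data=b"\x03\x06\x00\x00\x00\x00")
print(parse_ipcp_header(pkt))   # (1, 42, 10, b'\x03\x06\x00\x00\x00\x00')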
Configuration Options
IPCP Configuration Options allow negotiation of desirable Internet Protocol parameters. IPCP uses the same Configuration Option format defined for the Link Control Protocol (LCP), with a separate set of Options.
IPCP Configuration Options:
Option.
8 bits.
Length.
8 bits.
Data.
Variable length.
IP-Compression-Protocol
IP-Address
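A short Python sketch of walking the Option/Length/Data layout above follows. The numeric type codes used in the example (2 for IP-Compression-Protocol, 3 for IP-Address) reflect the conventional IPCP assignments, but the mapping and the sample bytes should be treated as illustrative rather than authoritative.

OPTION_NAMES = {2: "IP-Compression-Protocol", 3: "IP-Address"}

def parse_options(data):
    # Walk Option/Length/Data triples; Length covers the whole option, including its 2-byte header.
    options, i = [], 0
    while i + 2 <= len(data):
        opt_type, opt_len = data[i], data[i + 1]
        if opt_len < 2:          # malformed option; stop rather than loop forever
            break
        options.append((OPTION_NAMES.get(opt_type, opt_type), data[i + 2:i + opt_len]))
        i += opt_len
    return options

# Hypothetical Data field carrying a single IP-Address option for 192.0.2.1
print(parse_options(bytes([3, 6, 192, 0, 2, 1])))   # [('IP-Address', b'\xc0\x00\x02\x01')]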
Microsoft
In the Microsoft implementation, "Common IPCP options include an IP address and the IP addresses of DNS and NetBIOS name servers."
See also
Dynamic Host Configuration Protocol
References
: The Internet Protocol Control Protocol (IPCP)
: PPP Link Control Protocol (LCP) Extensions
: The Point-to-Point Protocol (PPP)
: PPP Internet Protocol Control Protocol Extensions for Name Server Addresses
: IP Version 6 over PPP defines the core of IPV6CP, with extensions defined in
: A Model of IPv6/IPv4 Dual Stack Internet Access Service — discusses the combination IPCP and IPV6CP
Internet protocols |
38563279 | https://en.wikipedia.org/wiki/List%20of%20computer-assisted%20organic%20synthesis%20software | List of computer-assisted organic synthesis software | Computer software for computer-assisted organic synthesis (CAOS) is used in organic chemistry and computational chemistry to facilitate the tasks of designing and predicting chemical reactions. The CAOS problem reduces to identifying a series of chemical reactions which can, from starting materials, produce a desired target molecule. CAOS algorithms typically use two databases: a first one of known chemical reactions and a second one of known starting materials (i.e., typically molecules available commercially). Desirable synthetic plans cost less, have high yield, and avoid using hazardous reactions and intermediates. Typically cast as a planning problem, significant progress has been made in CAOS.
Some examples of CAOS packages are:
IBM Rxn -
AiZynthFinder - A freely accessible open source retrosynthetic planning tool developed as a collaboration between AstraZeneca and the University of Bern. AiZynthFinder predicts synthetic routes to a given target compound, and can be retrained on a users own dataset whether from public or proprietary sources.
Manifold - Compound searching and retrosynthesis planning tool freely accessible to academic users, developed by PostEra
Spaya - Retrosynthesis planning tool freely accessible provided by Iktos
WODCA – no trial version; proprietary software
Organic Synthesis Exploration Tool (OSET) – open-source software, abandoned
CHIRON – no trial version; proprietary software
SynGen – demo version; proprietary software; a unique program for automatic organic synthesis generation; focuses on generating the shortest, lowest cost synthetic routes for a given target organic compound, and is thus a useful tool for synthesis planning
LHASA – demo available but not linked (?); proprietary software
SYLVIA – demo version; proprietary software; rapidly evaluates the ease of synthesis of organic compounds; can prioritize thousands of structures (e.g., generated by de novo design experiments or retrieved from large virtual compound libraries) according to their synthetic complexity
ChemPlanner (formerly ARChem – Route Designer) – an expert system to help chemists design viable synthetic routes for their target molecules; the knowledge base of reaction rules is algorithmically derived from reaction databases, and commercially available starting materials are used as termination points for the retrosynthetic search
ICSYNTH – demo available; proprietary software; A computer aided synthesis design tool that enables chemists to generate synthetic pathways for a target molecule, and a multistep interactive synthesis tree; at its core is an algorithmic chemical knowledge base of transform libraries that are automatically generated from reaction databases.
Chematica (Now known as Synthia)
ASKCOS – Open-source suite of synthesis planning and computational chemistry tools.
See also
Comparison of software for molecular mechanics modeling
Molecular design software
Molecule editor
Molecular modeling on GPU
List of software for nanostructures modeling
Semi-empirical quantum chemistry methods
Computational chemical methods in solid state physics, with periodic boundary conditions
Valence bond programs
References
Organic chemistry
Computational chemistry software
Molecular modelling software
Physics software
Lists of software |
1395666 | https://en.wikipedia.org/wiki/Forced | Forced | Forced is a single-player and co-op action role-playing game developed by BetaDwarf, released in October 2013 for Windows, OS X and Linux through the Steam platform as well as Wii U. It is about gladiators fighting for their freedom in a fantasy arena where they are assisted by a spirit-like character called Balfus. Gameplay consists of selecting a weapon class and abilities to combat the various enemies of each arena, while solving puzzles using the help of Balfus. BetaDwarf was formed by a small group of students in 2011, who began developing the game in an unused classroom in Aalborg University – Copenhagen, Denmark. They were removed months later and launched a successful Kickstarter campaign involving an Imgur picture which documented their progress. Forced received moderate to favorable reviews with most critics praising its competitive gameplay and puzzle-system. The game's weak plot, technical glitches and excess difficulty were the negative highlights. It won the Level Up 2013 Intel award and BetaDwarf received the Danish Developer Of The Year (2013) for it.
Gameplay
Forced is an action role-playing game having a top-down view for up to four players in the combat arena. Players have to select a weapon, each of which has 16 unlockable abilities, similar to a character class. Though single player mode is also available, the game is mostly about cooperative gameplay by selecting a weapon class and abilities which complement the other players', while combating demons of various sizes and solving environmental puzzles—using a will-o-wisp-like companion called Balfus the Spirit Mentor.
There are four weapon classes: the Storm Bow, the Volcanic Hammer, the Spirit Knives, and the Frost Shield; each have the tactical roles of a long range attacker, slow melee attacker, fast melee attacker and a tank respectively. The active and passive abilities available for each class allow some form of customization. The weapon class and selected abilities (which also have cooldowns) can be changed before the start of any arena level. Each player must choose a different weapon class. Balfus can be made to interact with the environment's spiritual plane by ordering him to activate or trigger props like healing pedestals or set off a stunning blast from traps. He can travel with the characters, float in space or can be called to their location. Good positioning and communication will help the players use Balfus efficiently; the need to do this while facing waves of enemies, makes the game more challenging.
Gems are rewarded after the completion of each arena trial. They can be used to enhance the weapons by unlocking more ability slots and new abilities at regular intervals. Each trial contains three gems as a reward: the first requires completion, the second is tied to a specific challenge, and the third is a time trial. If the arena boss is too difficult, it is possible to complete all of these challenges to earn extra gems. Forced has a Mark Combat System, in which weapons place marks on enemies. A greater number of marks causes certain abilities to have stronger effects, making it more effective to hit a group of enemies a few times before using an ability rather than using it at the start. Though the difficulty increases in proportion to the number of players and sharing Balfus requires communication between them, multiplayer is easier and lets each player focus on specific abilities and tactical roles instead of being forced to cover every possibility, as in single player; also, if a player dies, the trial can still be completed as long as another player survives until the end.
Plot
The players are cast as slaves who are forced to fight in a fantasy gladiator arena, which is the source of the game's title. The slaves come from a village where people are bred solely to be gladiators and must fight for the pleasure of demons to win their freedom. The players have the help of Balfus, a Spirit Mentor, and need to overcome the challenges and defeat the guardians of each arena. Spirit Mentors guide gladiators through the arena, and Balfus is revealed to have done so for previous gladiators who died before the players' arrival. Balfus remains the source of drama, since the protagonists are silent throughout the game and the antagonists do not go beyond taunting them.
In their first arena fight, the players defeat the guardian called "Wrathhoof", who refuses to accept defeat and let them pass to the next arena. Despite Balfus' warning that this would be against the rules, Wrathhoof continues to attack the players, who then kill him. The players and Balfus try to keep the killing of a guardian a secret and move on to the next arena, where the next guardian, Slarth, discovers what they did. Slarth and the following guardian, Graw, are revealed to be former gladiators themselves, with Balfus as their former mentor. The players then defeat and kill Slarth. After Graw is killed, Balfus decides to end the gladiator event by killing the remaining guardian, Mordar, and the final guardian, called "The Master".
Development
Forced uses the Unity engine. BetaDwarf was formed by a small group of students in 2010, who moved into an unused classroom in Aalborg University – Copenhagen, Denmark and began developing the game. After seven months, the university discovered them when a lecturer accidentally walked into the room. They were removed and launched a successful Kickstarter campaign involving a picture on Imgur that documented their progress as a team; they were then able to set up an office in Copenhagen. Steffen Kabbelgaard, Game Director and CEO of BetaDwarf, credited the 1996 video game Crash Bandicoot as an inspiration for the campaigns and gem rewards in Forced. A demo of the game was available at Gamescom and PAX Prime in 2013. A beta version of Forced was initially released on Steam Early Access for Windows, OS X and Linux in the same year. The full release followed on October 24, 2013, and also covered the Wii U. On March 19, 2014, BetaDwarf announced that the game would be available on Xbox One.
Reception
Forced won the Level Up 2013 Intel award and BetaDwarf received the Danish Developer Of The Year at the 2013 Spilprisen game awards held by the Danish Producers Association. It received moderate to favorable reviews. Brittany Vincent of Hardcore Gamer gave the game a 4/5, calling it "a gleeful return to form for cooperative play." Jim Rossignol from Rock, Paper, Shotgun called it a "competently produced game" but said that it "simply lacks flair, and combined with the slightly awkward mechanics in co-op play, means it never feels wholly convincing." Lena LeRay from IndieGames.com felt the voice acting was mediocre and acknowledged the lack of depth in the plot but said that the "engaging gameplay" compensated for it. Bob Richardson from RPGFan also highlighted these issues in addition to various technical glitches in multiplayer. Richardson praised the gameplay, the puzzle system and the skill customization, giving the game 80% rating. He said, "Forced is purely an intrinsic experience: defeat is the result of lack of cooperation and skill, and victory is directly related to teamwork, communication, and aptitude."
Zach Welhouse from RPGamer gave it a 3/5 calling the combat "complex and rewarding" and multiplayer "a good balance of tactics and adrenaline." He also praised the music, Balfus's character and called the puzzle-solving system "unique" but noted the bare plot and game difficulty as its negative highlights. Welhouse commented, "The sheer number of ways to die makes it difficult to tell how much of Forced is unfair and how much is a series of lessons in avoiding dangerous situations to unlock a new ability is a compelling system for squeezing the most effort out of a player." Jason Venter from GameSpot, gave it 5/10 and said, "It's a challenging game with built-in reasons to revisit familiar areas, but it's also too demanding for its own good, and the results are more frustrating than satisfying." Mike Gunn from NZGamer.com gave it 8.6/10 and said, "Such a simple game, but one with a lot of tactical and strategic depth."
References
External links
"How we lived together for 3 years while making FORCED", by Steffen Kabbelgaard (CEO of BetaDwarf) on Gamasutra blogs, with the Imgur picture which began their Kickstarter campaign
Role-playing video games
Action role-playing video games
Kickstarter-funded video games
Linux games
MacOS games
PlayStation 4 games
PlayStation Network games
Video games developed in Denmark
Wii U games
Windows games
2013 video games
Xbox One games |
1403168 | https://en.wikipedia.org/wiki/Bendix%20G-15 | Bendix G-15 | The Bendix G-15 is a computer introduced in 1956 by the Bendix Corporation, Computer Division, Los Angeles, California. The G-15 has a drum memory of 2,160 29-bit words, along with 20 words used for special purposes and rapid-access storage.
The base system, without peripherals, cost $49,500. A working model cost around $60,000 (over $500,000 by today's standards). It could also be rented for $1,485 per month. It was meant for scientific and industrial markets. The series was gradually discontinued when Control Data Corporation took over the Bendix computer division in 1963.
The chief designer of the G-15 was Harry Huskey, who had worked with Alan Turing on the ACE in the United Kingdom and on the SWAC in the 1950s. He made most of the design while working as a professor at Berkeley and other universities. David C. Evans was one of the Bendix engineers on the G-15 project. He would later become famous for his work in computer graphics and for starting up Evans & Sutherland with Ivan Sutherland.
Architecture
The G-15 was inspired by the Automatic Computing Engine (ACE). It is a serial-architecture machine in which the main memory is a magnetic drum. It uses the drum as a recirculating delay-line memory, in contrast to the analog delay-line implementations of other serial designs. Each track has a set of read and write heads; as soon as a bit is read off a track, it is re-written on the same track a certain distance away. The length of the delay, and thus the number of words on a track, is determined by the spacing of the read and write heads, the delay corresponding to the time required for a section of the drum to travel from the write head to the corresponding read head. Under normal operation, data are written back without change, but this data flow can be intercepted at any time, allowing the machine to update sections of a track as needed.
This arrangement allows the designers to create "delay lines" of any desired length. In addition to the twenty "long lines" of 108 words each, there are four more short lines of four words each. These short lines recycle at 27 times the rate of the long lines, allowing fast access to frequently needed data. Even the machine's accumulators are implemented as drum lines: three double-word lines are used for intermediate storage and for double-precision addition, multiplication, and division, in addition to a single one-word accumulator. This use of the drum rather than flip-flops for the registers helped to reduce the vacuum tube count.
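The recirculating-track arrangement can be illustrated with a short simulation. The sketch below is only an illustrative model, not Bendix code: the 108-word long line and 29-bit word width follow the figures given in this article, the physical spacing of the read and write heads is abstracted away, and the type and function names (drum_line, drum_step) are invented for the example.

```c
#include <stdint.h>
#include <stdio.h>

#define WORDS_PER_LONG_LINE 108   /* words held by one "long line" */

/* Illustrative model of one recirculating drum track: every word read
 * is normally written straight back, so the track behaves as a loop of
 * memory that can be updated "in flight". */
typedef struct {
    uint32_t words[WORDS_PER_LONG_LINE];   /* 29-bit words */
    int pos;                               /* slot under the read head */
} drum_line;

/* Advance one word time: return the word under the read head and
 * rewrite it with `replacement` (pass the same word back to leave the
 * line unchanged, or a new word to update that slot). */
static uint32_t drum_step(drum_line *d, uint32_t replacement) {
    uint32_t out = d->words[d->pos];
    d->words[d->pos] = replacement & 0x1FFFFFFFu;   /* keep 29 bits */
    d->pos = (d->pos + 1) % WORDS_PER_LONG_LINE;
    return out;
}

int main(void) {
    drum_line line = { .pos = 0 };
    line.words[5] = 0x12345u;

    /* One revolution: recirculate every word unchanged, except slot 40,
     * which is intercepted and overwritten as it passes the heads. */
    for (int slot = 0; slot < WORDS_PER_LONG_LINE; slot++) {
        uint32_t current = line.words[line.pos];
        drum_step(&line, slot == 40 ? 0x1ABCDEFu : current);
    }
    printf("slot 5 = %05X, slot 40 = %07X\n",
           (unsigned)line.words[5], (unsigned)line.words[40]);
    return 0;
}
```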
A consequence of this design was that, unlike other computers with magnetic drums, the G-15 does not retain its memory when it is shut off. The only permanent tracks are two timing tracks recorded on the drum at the factory. The second track is a backup, as the timing tracks are liable to erasure if one of their amplifier tubes shorts out.
The serial nature of the G-15's memory was carried over into the design of its arithmetic and control circuits. The adders work on one binary digit at a time, and even the instruction word was designed to minimize the number of bits in an instruction that needed to be retained in flip-flops (to the extent of leveraging another one-word drum line used exclusively for generating address timing signals).
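The bit-at-a-time arithmetic can be sketched in a few lines of C. This is a generic illustration of serial addition, not a model of the actual G-15 circuitry: the operands are processed least-significant bit first, a single carry bit stands in for the carry flip-flop, and the 29-bit width comes from the word size described in this article. The function name serial_add29 is invented for the example.

```c
#include <stdint.h>
#include <stdio.h>

/* Add two 29-bit words one bit at a time, least-significant bit first,
 * the way a serial arithmetic unit does: at each bit time only the two
 * incoming bits and a one-bit carry "flip-flop" are combined. */
static uint32_t serial_add29(uint32_t a, uint32_t b) {
    uint32_t sum = 0;
    unsigned carry = 0;                      /* the carry flip-flop */
    for (int bit = 0; bit < 29; bit++) {
        unsigned ai = (a >> bit) & 1u;
        unsigned bi = (b >> bit) & 1u;
        unsigned s  = ai ^ bi ^ carry;       /* sum bit for this word time */
        carry       = (ai & bi) | (carry & (ai ^ bi));
        sum |= (uint32_t)s << bit;
    }
    return sum;                              /* any final carry is dropped */
}

int main(void) {
    printf("%u\n", serial_add29(123456u, 654321u));   /* prints 777777 */
    return 0;
}
```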
The G-15 has 180 vacuum tube packs and 3000 germanium diodes. It has a total of about 450 tubes (mostly dual triodes). Its magnetic drum memory holds 2,160 words of twenty-nine bits. Average memory access time is 14.5 milliseconds, but its instruction addressing architecture can reduce this dramatically for well-written programs. Its addition time is 270 microseconds (not counting memory access time). Single-precision multiplication takes 2,439 microseconds and double-precision multiplication takes 16,700 microseconds.
Peripherals
One of the G-15's primary output devices is the typewriter with an output speed of about 10 characters per second for numbers (and lower-case hexadecimal characters u-z) and about three characters per second for alphabetical characters. The machine's limited storage precludes much output of anything but numbers; occasionally, paper forms with pre-printed fields or labels were inserted into the typewriter. A faster typewriter unit was also available.
The high-speed photoelectric paper tape reader (250 hexadecimal digits per second on five-channel paper tape for the PR-1; 400 characters from 5-8 channel tape for the PR-2) read programs (and occasionally saved data) from tapes that were often mounted in cartridges for easy loading and unloading. Not unlike magnetic tape, the paper tape data are blocked into runs of 108 words or less, since that is the maximum read size. A cartridge can contain multiple blocks, up to 2,500 words (~10 kilobytes).
While there is an optional high-speed paper tape punch (the PTP-1 at 60 digits per second) for output, the standard punch operates at 17 hex characters per second (510 bytes per minute).
Optionally, the AN-1 "Universal Code Accessory" included the "35-4" Friden Flexowriter and HSR-8 paper tape reader and HSP-8 paper tape punch. The mechanical reader and punch can process paper tapes up to eight channels wide at 110 characters per second.
The CA-1 "Punched Card Coupler" can connect one or two IBM 026 card punches (which were more often used as manual devices) to read cards at 17 columns per second (approximately 12 full cards per minute) or punch cards at 11 columns per second (approximately 8 full cards per minute). Partially full cards were processed more quickly with an 80-column-per-second skip speed). The more expensive CA-2 Punched Card Coupler reads and punches cards at a 100-card-per-minute rate.
The PA-3 pen plotter runs at 1 inch per second with 200 increments per inch on a paper roll 1 foot wide by 100 feet long. The optional retractable pen-holder eliminates "retrace lines".
The MTA-2 can interface up to four drives for half-inch Mylar magnetic tapes, which can store as many as 300,000 words (in blocks no longer than 108 words). The read/write rate is 430 hexadecimal digits per second; the bidirectional search speed is 2500 characters per second.
The DA-1 differential analyzer facilitates the solution of differential equations. It contains 108 integrators and 108 constant multipliers, operating at 34 updates per second.
Software
A problem peculiar to machines with serial memory is the latency of the storage medium: instructions and data are not always immediately available and, in the worst case, the machine must wait for the complete recirculation of a delay line to obtain data from a given memory address. The problem is addressed in the G-15 by what the Bendix literature calls "minimum-access coding". Each instruction carries with it the address of the next instruction to be executed, allowing the programmer to arrange instructions such that when one instruction completes, the next instruction is about to appear under the read head for its line. Data can be staggered in a similar manner. To aid this process, the coding sheets include a table containing numbers of all addresses; the programmer can cross off each address as it is used.
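The scheduling arithmetic behind minimum-access coding can be expressed as a small calculation. The sketch below is only an illustration of the principle, assuming a 108-word long line as described above; the execution time in word times and the function names (best_next_slot, wasted_word_times) are hypothetical and chosen for the example.

```c
#include <stdio.h>

#define WORDS_PER_LINE 108   /* word slots on one long line */

/* Given the slot an instruction occupies and how many word times it
 * takes to execute, return the slot arriving under the read head just
 * as the instruction finishes.  Placing the next instruction there
 * avoids waiting for the drum to come around again. */
static int best_next_slot(int current_slot, int exec_word_times) {
    return (current_slot + exec_word_times) % WORDS_PER_LINE;
}

/* Word times wasted if the next instruction is placed elsewhere; in
 * the worst case the slot has just passed and the program waits for
 * almost a full revolution. */
static int wasted_word_times(int ideal_slot, int actual_slot) {
    return (actual_slot - ideal_slot + WORDS_PER_LINE) % WORDS_PER_LINE;
}

int main(void) {
    int ideal = best_next_slot(17, 4);       /* instruction at slot 17, 4 word times */
    printf("ideal next slot: %d\n", ideal);  /* 21 */
    printf("penalty if placed at slot 20: %d word times\n",
           wasted_word_times(ideal, 20));    /* 107: the slot just passed */
    return 0;
}
```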
A symbolic assembler, similar to the IBM 650's Symbolic Optimal Assembly Program (SOAP), was introduced in the late 1950s and includes routines for minimum-access coding. Other programming aids include a supervisor program, a floating-point interpretive system named "Intercom", and ALGO, an algebraic language designed from the 1958 Preliminary Report of the ALGOL committee. Users also developed their own tools, and a variant of Intercom suited to the needs of civil engineers is said to have circulated.
Floating-point arithmetic is implemented in software. The "Intercom" series of languages provides an easier-to-program virtual machine that operates in floating point. Instructions in Intercom 500, 550, and 1000 are numerical, six or seven digits in length. Instructions are stored sequentially; the emphasis is on convenience rather than speed. Intercom 1000 even has an optional double-precision version.
As mentioned above, the machine uses hexadecimal numbers, but the user never has to deal with this in normal programming: user programs work with decimal numbers, while the operating system resides in the higher addresses.
Significance
The G-15 is sometimes described as the first personal computer, because it has the Intercom interpretive system. The title is disputed by other machines, such as the LINC and the PDP-8, and some maintain that only microcomputers, such as those which appeared in the 1970s, can be called personal computers. Nevertheless, the machine's low acquisition and operating costs, and the fact that it does not require a dedicated operator, meant that organizations could allow users complete access to the machine.
Over 400 G-15s were manufactured. About 300 G-15s were installed in the United States and a few were sold in other countries such as Australia and Canada. The machine found a niche in civil engineering, where it was used to solve cut and fill problems. Some have survived and have made their way to computer museums or science and technology museums around the world.
Huskey received one of the last production G15s, fitted with a gold-plated front panel.
This was the first computer that Ken Thompson ever used.
A Bendix G-15 was used at Fremont High School (Oakland Unified School District) in the 1964-65 school year for the senior seminar math class. Students were taught the fundamentals of programming. One such exercise was the calculation of a square root using Newton's method of successive approximation. A Bendix G-15 was still in use for the UC Berkeley extension summer class in programming, at Oakland Technical High School, in 1970.
See also
List of vacuum tube computers
Bendix G-20
References
External links
The Bendix G-15
Bendix G15 computer
Another G-15 reference
Bendix G-15 documentation
photo
info page with photo
Describes Harry Huskey's involvement with ACE
Extensive G15 site list, photos & technical info
1950s computers
Vacuum tube computers
Minicomputers
Computer-related introductions in 1956
Science and technology in Greater Los Angeles
1956 in California
Serial computers
Bendix Corporation |
11905123 | https://en.wikipedia.org/wiki/Lynx%20Software%20Technologies | Lynx Software Technologies | Lynx Software Technologies, Inc. (formerly LynuxWorks) is a San Jose, California software company founded in 1988. Lynx specializes in secure virtualization and open, reliable, certifiable real-time operating systems (RTOSes). Originally known as Lynx Real-Time Systems, the company changed its name to LynuxWorks in 2000 after acquiring, and merging with, ISDCorp (Integrated Software & Devices Corporation), an embedded systems company with a strong Linux background. In May 2014, the company changed its name to Lynx Software Technologies.
Over 30 years of processor evolution, Lynx has crafted and adapted platform architectures for builders of safety- and security-critical software systems. Lynx embraced open standards from its inception, with its original RTOS, LynxOS, offering embedded developers a UNIX-like user model and standard POSIX interfaces. LynxOS-178 is developed and certified to the FAA DO-178C DAL A safety standard and received the first and only FAA Reusable Software Component certificate for an RTOS. It supports ARINC API and FACE standards.
Lynx has created technology that has been deployed in thousands of designs and millions of products made by leading communications, industrial, transportation, avionics, aerospace/defense and consumer electronics companies. In 1989, LynxOS, the company's flagship RTOS, was selected for use in the NASA/IBM Space Station Freedom project. Lynx Software Technologies operating systems are also used in medical, industrial and communications systems around the world.
In early 2020, Lynx announced that the TR3 modernization program for the Joint Strike Fighter had adopted Lynx’s LYNX MOSA.ic software development framework. The F-35 Lightning II Program (also known as the Joint Strike Fighter Program) is the US Department of Defense's focal point for defining affordable next-generation strike aircraft weapon systems. It is intended to replace a wide range of existing fighter, strike, and ground attack aircraft for the United States, the United Kingdom, Italy, Canada, Australia, the Netherlands, and their allies. After a competition between the Boeing X-32 and the Lockheed Martin X-35, a final design was chosen based on the X-35. This is the F-35 Lightning II, which will replace various tactical aircraft.
The company’s technology is also used in medical, industrial and communications systems around the world by companies such as Airbus, Bosch, Denso, General Dynamics, Lockheed Martin, Raytheon, Rohde & Schwarz and Toyota.
Operating system evolution and history
LynxOS is the company's real-time operating system. It is UNIX-compatible and POSIX-compliant. It features predictable worst-case response time, preemptive scheduling, real-time priorities, ROMable kernel, and memory locking. LynxOS 7.0 is marketed as a "military grade", general purpose multi-core hard real-time operating system, and is intended for developers to embed security features during the design process, rather than adding security features after development. LynxOS and LynxOS-178 have been deployed in millions of safety-critical applications worldwide, including multiple military and aerospace systems.
In 2003, the company introduced the LynxOS-178 real-time operating system, a specialized version of LynxOS geared toward avionics applications that require certification to industry standards such as DO-178B. LynxOS-178 is a commercial off-the-shelf (COTS) RTOS that fully satisfies the objectives of the DO-178B level A specification and meets requirements for Integrated Modular Avionics (IMA) developers. LynxOS-178 is a native POSIX, hard real-time partitioning operating system developed and certified to FAA DO-178B/C DAL A safety standards. It is the only Commercial-off-the-Shelf (COTS) OS to be awarded a Reusable Software Component (RSC) certificate from the FAA for re-usability in DO-178B/C certification projects. LynxOS-178 is the primary host for real-time POSIX and FACE applications within the LYNX MOSA.ic development and integration framework. LynxOS-178 satisfies the PSE 53/54 profiles for both dedicated and multi-purpose real-time as well as FACE applications.
The LynxSecure Hypervisor ("bare metal," type 1) and separation kernel was released in 2005. Within the LYNX MOSA.ic development framework, it acts as a programmable processor partitioning system leveraging hardware virtualization capabilities of modern multi-core processors to isolate computing resources.
In February 2019, Lynx announced LYNX MOSA.ic (pronounced “mosaic”), a software development framework for rapidly building security- and safety-critical software systems out of independent application modules. Designed to deliver on the vision of the Modular Open Systems Approach (MOSA), its focus is to enable developers to collapse existing development cycles to create, certify, and deploy robust, secure platforms for manned and unmanned autonomous systems.
Lynx Software Technologies' patents on LynxOS technology include patent #5,469,571, "Operating System Architecture using Multiple Priority Light Weight kernel Task-based Interrupt Handling," November 21, 1995, and patent #5,594,903, "Operating System architecture with reserved memory space resident program code identified in file system name space," January 14, 1997.
References
External links
Software companies based in California
Linux companies
Real-time operating systems
Embedded operating systems
Companies based in San Jose, California |
13880633 | https://en.wikipedia.org/wiki/OpenProj | OpenProj | OpenProj was an open-source project management software application.
It has not been updated since 2008 and is not supported. Serena Software asked users to use ProjectLibre instead.
History and status
Marc O'Brien, Howard Katz, and Laurent Chretienneau developed OpenProj at Projity in 2007. It moved out of beta with Version 1.0 on January 10, 2008.
In late 2008, Projity was acquired by Silver Lake Partners (the private equity firm) via its subsidiary at that time, Serena Software.
In November 2008, support and development of OpenProj appeared suspended. There were a few later commits to the CVS with regressions, but no improvements. It is no longer compatible with Microsoft Project.
Serena/Projity also developed a software as a Service (SaaS) project software, Projects On Demand. (Projects On Demand service ended on June 11, 2011.)
In 2012, the founders of OpenProj announced that they had forked the OpenProj codebase and started a different implementation.
Serena announced and posted online that users should avoid downloading OpenProj and instead download ProjectLibre.
The initial release of ProjectLibre occurred in August 2012. ProjectLibre has been completely rewritten and thus technically ceased to be a fork.
Features
The current version includes:
Earned value costing
Gantt charts
PERT graphs
Resource breakdown structure (RBS) charts
Task usage reports
Work breakdown structure (WBS) charts
Popularity
It has been downloaded over 4,000,000 times in over 142 countries. Three months after the beta version release, on SourceForge an average of 60,000 copies a month were downloaded. With a SourceForge activity percentile of 99.964, at number 15 it was listed just ahead of the popular messaging application Pidgin. In May 2008 the total number of downloads on SourceForge reached 500,000.
Bugs
OpenProj has not been supported for over 10 years. Serena Software previously issued a warning and asked users to use ProjectLibre. As of version 1.4, bugs in the software generally manifest only for users attempting more advanced features. For example, tasks may mysteriously start at a certain time (they behave as if they have a 'Start no earlier than' constraint even though none exists, and the project start date is not a constraint), links show gaps, and fixed cost for summary tasks neither sums nor is editable. Sometimes these errors are solved by restarting the software, but others are persistent.
Compared to Microsoft Project, which it closely emulates, OpenProj has a similar user interface (UI) and a similar approach to constructing a project plan: create an indented task list or work breakdown structure (WBS), set durations, create links (either by (a) mouse drag, (b) selection and then button-down, or (c) manually typing in the "predecessor" column), and assign resources. The columns (fields) are the same as in Microsoft Project, so users of either program should be broadly comfortable using the other. Cost handling is the same: labour hourly rates, material usage, and fixed costs are all provided.
However, there are small differences in the UI (comments apply to version 1.4) that take some adaptation for those familiar with Microsoft Project: OpenProj can't link upward with method (c), inserting tasks is more difficult than in Microsoft Project, and OpenProj can't create resources on the fly (they have to be created first in the resource sheet). There are also several more serious limitations, the chief of these being the unavailability of the more detailed views and reports typical of Microsoft Project. For example, though the fields exist for cost, there is no quick way to show them other than to insert them manually. This requires a relatively advanced user: someone who knows what the fields might be called and how to use them.
Licensing
Some features of OpenProj are limited to users acquiring a purchased license; users running OpenProj for free receive a slightly limited feature set. For example, OpenProj (v1.4) does not allow the in-house exporting of PDF output, though the usefulness of such a feature is questionable. It is possible to circumvent the reduced feature set using external software, though as with all paid software, donation or purchase is appreciated by the developers.
ProjectLibre
The original founders of OpenProj started to develop a complementary cloud version called ProjectLibre in 2012, comparable to Microsoft Project Server for Microsoft Project. During development they realized that the fact that OpenProj had not been updated by Serena Software since 2008 would become problematic to their goal, so they needed to rewrite the program.
See also
Comparison of project management software
Microsoft Project
ProjectLibre
References
External links
Free software programmed in Java (programming language)
Free project management software
Java platform software |
36842421 | https://en.wikipedia.org/wiki/Gene%20Rondo | Gene Rondo | Winston Lara (28 May 1943 – 12 June 1994), better known by his stage name Gene Rondo, was a Jamaican reggae singer. After first recording as part of the duo Gene & Roy in Jamaica, he relocated to London where he continued to record until the 1980s, including several album releases in the 1970s, both solo and as a member of The Undivided. He was sometimes credited as Gene Laro or Winston Laro.
Biography
Born in Greenwich Farm, Kingston, Jamaica in 1943, Rondo entered the music business in the late 1950s, successfully competing in the Vere Johns Opportunity Hour talent contest with his partner Satch. He recorded a single in Jamaica as part of the duo Gene & Roy ("Little Queenie"/"Squeeze Me"), before relocating to London in 1962, where he studied as a classical singer in Hammersmith.
In 1965 he formed the band Abashack, with whom he toured the UK, and went on to record singles on the Giant and Jolly labels of Stamford Hill-based R&B Records in 1968. He then recorded for Dandy Livingstone's Trojan Records sub-label Downtown, releasing several singles in 1969 and 1970. In 1970 he recorded his debut album, On My Way, for Trojan. He went on to record for Magnet Records, including contributing four tracks to the Reggae Desire album in 1974.
In 1972 he formed the pop-reggae band The Undivided (which later evolved into Undivided Roots), who released an album (Listen to the World) for Decca. He also recorded as a solo reggae vocalist for several UK-based producers including Clement Bushay, Dennis Harris (for whom he recorded duets with T.T. Ross), and Count Shelly. Rondo accompanied Susan Cadogan for her performance of "Hurt So Good" on Top of the Pops.
In the mid-1970s, Rondo adopted the Rastafari faith, and recorded more roots-oriented tracks such as "A Land Far Away" and "Give All the Praise to Jah". Rondo co-produced (with Bunny Lee) Delroy Wilson's Nice Times album in 1983 and contributed backing vocals to Alton Ellis's 25th Silver Jubilee album in 1984. He continued to record into the 1980s, and also set up the Roots Pool community centre and studio in North London. He was a major contributor to the British Reggae Artists Famine Appeal (BRAFA) and the charity single "Let's Make Africa Green Again", which raised funds for the Save the Children Fund. He continued to concentrate on community work in the latter half of the 1980s and early 1990s.
Rondo died from cancer in St Joseph's Hospice, Hackney in June 1994. A memorial concert was held featuring artists such as Alton Ellis, Prince Lincoln, Justin Hinds, Dennis Alcapone, Owen Gray, and Carroll Thompson.
Discography
Albums
On My Way (1970), Trojan
Memories (1977), Venture
Singles
"Little Queenie"/"Squeeze Me" (1961), Magico - Gene & Roy
"Grey Lies" (1968), Giant - with Herbie Gray and the Cool-Tans
"Mary Mary"/"Baby Baby" (1968), Jolly
"Lover's Question" (1969), Downtown
"Sentimental Reason" (1969), Downtown
"Spreading Peace" (1970), Downtown
"Goodnight My Love" (1970), Downtown - credited to Winston Laro
"I Need Your Love" (1970), Bread - credited to Gene Laro, B-side of "Susanne" by Del Davis
"Happy Birthday Sweet Sixteen" (1972), Count Shelly
"Wanna Be Like Daddy" (1972), Downtown
"Each Moment" (1973), Magnet
"Prisoner of Love" (1973), Magnet
"This is Love" (1973), Magnet
"Rebel Woman" (1974), Queen Bee
"A Different World" (1974), RG
"Valley Of Tears" (1974), Magnet
"Oh Sweet Africa" (197?), Magnet
"Reggae Desire" (197?), Magnet
"Jim Dandy" (1975), Jamatel
"Impossible Dream" (1975), Faith
"Declaration Of Rights" (1975), Third World - B-side of "Buggis Mood" by Buggis
"Little Things Mean A Lot" (1975), Dip - Gene Rondo & T.T. Ross
"Miss Grace" (1975), Wild Flower - Gene Rondo & T.T. Ross
"My Dream is Yours" (1975), Comedy International
"If I Could Say What's On My Mind" (1975), Comedy International
"Try Me" (197?), Dip
"Ramblin' Man" (1976), Trojan
"Domestic Affair" (1977), Third World
"Time" (1978), Trans Universal - with the Star-Keys, B-side of Dennis Alcapone's "Truth & Rights"
"Jah Jah Worker" (1978), Burning Sounds - with Militant Barry
"Golden Love" (1977), Paradise
"If I Would Say" (1979), Pentagon
"In My Life" (1979), Jamaica Sound/RCA Victor
"Since I Fell For You"/"If You Take My Love" (1979), Jamaica Sound
"Something on My Mind" (197?), Jamaica Sound - Gene Laro & Dillinger
"No One But You" (1985), Roots Pool
"Miss Grace", Music Scene
"Yah Mo Be There (Jah Will Be There)" (1987), BMDI
Compilation appearances
Reggae Desire (1974), Magnet: "Pretty Blue Eyes", "This Is Love", "Mary Mary", "Oh Sweet Africa"
References
1943 births
1994 deaths
Jamaican reggae singers
20th-century Jamaican male singers
Musicians from Kingston, Jamaica |
165544 | https://en.wikipedia.org/wiki/LADSPA | LADSPA | LADSPA is an acronym for Linux Audio Developer's Simple Plugin API. It is an application programming interface (API) standard for handling audio filters and audio signal processing effects, licensed under LGPL-2.1-or-later. It was originally designed for Linux through consensus on the Linux Audio Developers Mailing List, but works on a variety of other platforms. It is used in many free audio software projects and there is a wide range of LADSPA plug-ins available.
LADSPA exists primarily as a header file written in the programming language C.
There are many audio plugin standards and most major modern software synthesizers and sound editors support a variety. The best known standard is probably Steinberg's Virtual Studio Technology. LADSPA is unusual in that it attempts to provide only the "Greatest Common Divisor" of other standards. This means that its scope is limited, but it is simple and plugins written using it are easy to embed in many other programs. The standard has changed little with time, so compatibility problems are rare.
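Because the whole standard is essentially a single C header (ladspa.h), a complete plugin can be quite small. The following sketch of a mono gain plugin is illustrative rather than authoritative: the label, plugin name and UniqueID are placeholders invented for the example (real plugins use identifiers assigned via ladspa.org), the port range hints are left empty for brevity, and the build command shown in the comment is only a typical invocation.

```c
/* gain.c — a minimal LADSPA plugin sketch (mono gain).
 * Typical build: gcc -fPIC -shared gain.c -o gain_plugin.so
 * The descriptor fields and callbacks follow ladspa.h; the UniqueID
 * below is a placeholder, not a registered plugin ID. */
#include <stdlib.h>
#include <ladspa.h>

enum { PORT_GAIN = 0, PORT_IN = 1, PORT_OUT = 2, PORT_COUNT = 3 };

typedef struct {                       /* one instance of the plugin */
    LADSPA_Data *gain, *in, *out;      /* host-supplied port buffers */
} Gain;

static LADSPA_Handle instantiate(const LADSPA_Descriptor *d,
                                 unsigned long sample_rate) {
    (void)d; (void)sample_rate;
    return calloc(1, sizeof(Gain));
}

static void connect_port(LADSPA_Handle h, unsigned long port,
                         LADSPA_Data *buf) {
    Gain *g = (Gain *)h;
    if (port == PORT_GAIN) g->gain = buf;
    else if (port == PORT_IN) g->in = buf;
    else if (port == PORT_OUT) g->out = buf;
}

static void run(LADSPA_Handle h, unsigned long n) {
    Gain *g = (Gain *)h;
    for (unsigned long i = 0; i < n; i++)
        g->out[i] = g->in[i] * *g->gain;   /* the entire DSP "algorithm" */
}

static void cleanup(LADSPA_Handle h) { free(h); }

static const LADSPA_PortDescriptor port_descs[PORT_COUNT] = {
    LADSPA_PORT_INPUT  | LADSPA_PORT_CONTROL,   /* gain knob */
    LADSPA_PORT_INPUT  | LADSPA_PORT_AUDIO,     /* audio in  */
    LADSPA_PORT_OUTPUT | LADSPA_PORT_AUDIO      /* audio out */
};
static const char *const port_names[PORT_COUNT] = { "Gain", "Input", "Output" };
static const LADSPA_PortRangeHint port_hints[PORT_COUNT] = { {0,0,0}, {0,0,0}, {0,0,0} };

static const LADSPA_Descriptor descriptor = {
    .UniqueID = 9999,                  /* placeholder ID for the example */
    .Label = "simple_gain",
    .Name = "Simple Gain (example)",
    .Maker = "Example",
    .Copyright = "None",
    .PortCount = PORT_COUNT,
    .PortDescriptors = port_descs,
    .PortNames = port_names,
    .PortRangeHints = port_hints,
    .instantiate = instantiate,
    .connect_port = connect_port,
    .run = run,
    .cleanup = cleanup
};

/* The single entry point a LADSPA host looks for in the shared library. */
const LADSPA_Descriptor *ladspa_descriptor(unsigned long index) {
    return index == 0 ? &descriptor : NULL;
}
```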
DSSI extends LADSPA to cover instrument plugins.
LV2 is a successor, based on LADSPA and DSSI, but permitting easy extensibility, allowing custom user interfaces, MIDI messages, and custom extensions.
Competing technologies
Apple Inc.'s Audio Units
Digidesign's Real Time AudioSuite
Avid Technology's Avid Audio eXtension
Microsoft's DirectX plugin
Steinberg's Virtual Studio Technology
References
External links
ladspa.org
Application programming interfaces
Free audio software
Free software programmed in C
Music software plugin architectures
Linux APIs
Audio libraries |
8894207 | https://en.wikipedia.org/wiki/Windows%20Ultimate%20Extras | Windows Ultimate Extras | Windows Ultimate Extras were optional features offered by Microsoft to users of the Ultimate edition of Windows Vista and are accessible via Windows Update. Ultimate Extras replaced the market role of Microsoft Plus!, a product sold for prior consumer releases of Microsoft Windows. According to Microsoft's Barry Goffe, the company's goal with Ultimate Extras was to delight customers who purchased the Ultimate edition of Windows Vista, the most expensive retail edition of the operating system.
Windows Ultimate Extras have been discontinued as of Windows 7 and the operating system also removes all installed extras during an upgrade from Windows Vista Ultimate.
Contents
Microsoft released a total of nine Ultimate Extras for users of Windows Vista Ultimate.
BitLocker and EFS enhancements
The BitLocker Drive Preparation Tool utility and the Secure Online Key Backup utility were among the first Ultimate Extras to be made available, and were released to coincide with the general retail availability of Windows Vista. BitLocker Drive Preparation Tool prepares the hard drive to be encrypted with BitLocker, whereas Secure Online Key Backup enabled users to create an off-site backup of their BitLocker recovery password and Encrypting File System recovery certificates at Digital Locker, as part of the Windows Marketplace digital distribution platform. Secure Online Key Backup was rendered inoperable after Digital Locker shut down in August 2009.
Multilingual User Interface language packs
Unlike previous versions of Windows, Windows Vista is language-independent; the language architecture separates the language resources for the user interface from the binary code of the operating system. Support for installing additional languages is included in the Enterprise and Ultimate editions of Windows Vista. In the Ultimate edition, the functionality is made available through Windows Update as Ultimate Extras.
Microsoft stated that 16 languages were made available on January 30, 2007. The company released the remaining language packs on October 23, 2007 for a total of 35 language packs. An additional 36th language pack version is available for Windows Vista that supports traditional Chinese characters with the Hong Kong encoding character set.
Microsoft Tinker
Microsoft Tinker is a puzzle game where players must navigate a robot through mazes and obstacles. A total of 60 levels are included, and players can create their own levels with a level editor.
Hold 'Em
Hold 'Em is a poker card game released on January 29, 2007 that is fundamentally similar to Texas hold 'em. Hold 'Em allows users to play against up to five computer players and up to three levels of difficulty, and also allows users to customize aspects of the game's appearance; the game relies on DirectX to produce hardware-accelerated 3D animations and effects. For optimal performance, Hold 'Em requires a computer with a Windows Experience Index rating of 2.0 or higher.
According to Paul Thurrott, Hold 'Em was originally intended to be bundled alongside the premium games—Chess Titans, Mahjong Titans, and InkBall—included by default with the Home Premium and Ultimate editions of Windows Vista, but was instead made an Ultimate Extra because of its gambling themes.
Windows sound schemes
A total of three sound schemes for Windows Vista were released: Ultimate Extras Glass, Ultimate Extras Pearl, and Microsoft Tinker. The first two were made available on April 22, 2008, while the latter was made available on the same day as Microsoft Tinker. The Glass and Pearl sound schemes are similar to the Default sound scheme included in Windows Vista as they were also developed in accordance with the design language and principles of the Windows Aero graphical user interface.
Windows DreamScene
Windows DreamScene is a utility that enables MPEG and WMV videos to be displayed as desktop backgrounds. DreamScene requires that the Windows Aero graphical user interface be enabled in order to function as the feature relies on the Desktop Window Manager to display videos on the desktop.
Proposed extras
Additional extras were also proposed but not released, including a podcast creation application, a game performance optimization utility, custom themes, exclusive access to online content and services, Windows Movie Maker effects and transitions, templates for Windows DVD Maker, digital publications, and the Group Shot photo manipulation application developed by Microsoft Research and shown by Bill Gates at the Consumer Electronics Show in 2007.
Although not considered to be Ultimate Extras by the company, the Ultimate Extras team also released two wallpapers for users of Windows Vista Ultimate. Titled Start and Strands, the wallpapers were based on the design of the Windows Vista Ultimate retail packaging and were made available in three different display resolutions.
Critical reception
Reaction to Windows Ultimate Extras was mixed. While Microsoft was praised for creating a value proposition for users who purchased the most expensive edition of Windows Vista, the company was criticized for its delays during delivery of updates, perceived lack of quality of delivered updates, and a lack of transparency regarding their development. Early on, there were concerns that the features would not live up to users' expectations. The company announced several Ultimate Extras in January 2007, but only a fraction of these were released five months later. After months without an official update since January, Microsoft released an apology for the delays, stating that it intended to ship the remaining features before the end of summer of 2007. The delays between consecutive updates and months of silence had led to speculation that the development team within the company responsible for the features had been quietly disbanded.
When Microsoft announced its intentions to release the remaining Ultimate Extras and released an apology for delays, Paul Thurrott stated that the company had "dropped the ball" with the features. Ed Bott wrote that Ultimate Extras were "probably the biggest mistake Microsoft made with Vista," and that the company would downplay the Ultimate edition of Windows 7 as a result. Bott would later list them among his "decade's worth of Windows mistakes."
Microsoft was also criticized for changing the description for Ultimate Extras within the operating system. The offerings slated to be made available were initially described as "cutting-edge programs," "innovative services," and "unique publications," but the description for the features within the Control Panel applet was later modified in Windows Vista Service Pack 1 to be more modest; this was interpreted as an attempt made by the company to avoid fulfilling prior expectations.
Emil Protalinski of Ars Technica wrote that the Ultimate edition of Windows Vista "would have looked just fine without the joke that is 'Ultimate Extras'" and that the features were supposed to provide an incentive for consumers to purchase that edition, "not give critics something to point and laugh at." In the second part of his review of Windows 7, Peter Bright of Ars Technica wrote that "the value proposition of the Ultimate Extras was nothing short of piss-poor." Bright would later criticize Microsoft's decision not to release Internet Explorer 10 for Windows Vista, but would go on to state that this was still "not as bad as the Ultimate Extras farce."
See also
List of games included with Windows
Windows Anytime Upgrade
Windows Easy Transfer
Windows Vista editions
References
Discontinued Windows components
Ultimate Extras |
25131410 | https://en.wikipedia.org/wiki/3CX%20Phone%20System | 3CX Phone System | 3CX Phone System is a software-based private branch exchange (PBX) phone system developed and marketed by the company 3CX. It is based on the SIP (Session Initiation Protocol) standard and enables extensions to make calls via the public switched telephone network (PSTN) or via Voice over Internet Protocol (VoIP) services, whether on premises, in the cloud, or via a cloud service owned and operated by 3CX. The system is available for Windows, Linux and Raspberry Pi, and supports standard SIP soft/hard phones, VoIP services, faxing, voice and web meetings, as well as traditional PSTN phone lines.
History
The 3CX solution was developed by 3CX, an international VoIP IP PBX software development company, as an open-standards, software-based PBX. First published as a free IP PBX product in 2006, it was intended to provide a VoIP solution for use in a Microsoft Windows environment.
The first commercial edition of the product was launched in 2007. Reviews of the product have noted its easy configuration, management, and hardware compatibility. Smith on VoIP commented in a blog post about 3CX that it was very easy to use. In 2009, it was featured on the internet TV show Hak5. On 7 June 2017, 3CX released version 15.5 with a feature set spanning WebRTC video conferencing, presence, chat, voicemail and a new Web Client. In the following years, Live Chat, SMS and Facebook integrations were added, as well as Microsoft Teams integration in the latest version, V18.
Features
3CX Phone System consists of a number of software-based components: the PBX itself, accessed and managed via a web-based management console, a softphone for Windows, and smartphone clients for iOS and Android. The phone system can be used with either SIP phones or the clients, or a combination of the two. The PBX provides unified communications functionality including presence, chat, voicemail to email, fax to email, integrated video conferencing, call conferencing, Live Chat, SMS, and additional integrations with Facebook and a number of CRM platforms.
Release History
Table created according to the "3CX Phone System Build History".
See also
Voice modem
Comparison of VoIP software
List of SIP software
IP PBX
References
Communication software
Telephone exchange equipment |
58512652 | https://en.wikipedia.org/wiki/Electric%20grid%20security | Electric grid security | Electric grid security in the US refers to the activities that utilities, regulators, and other stakeholders undertake to secure the national electricity grid. The American electrical grid is going through one of the largest changes in its history: the move to smart grid technology. The smart grid allows energy customers and energy providers to more efficiently manage and generate electricity. Like other new technologies, the smart grid also introduces new concerns about security.
Utility owners and operators (whether investor-owned, municipal, or cooperative) typically are responsible for implementing system improvements with regards to cybersecurity. Executives in the utilities industry are beginning to recognize the business impact of cybersecurity.
The electric utility industry in the U.S. leads a number of initiatives to help protect the national electric grid from threats. The industry partners with the federal government, particularly the National Institute of Standards and Technology, the North American Electric Reliability Corporation, and federal intelligence and law enforcement agencies.
Electric grids can be targets of military or terrorist activity. When American military leaders created their first air war plan against the Axis in 1941, Germany's electric grid was at the top of the target list.
Issue overview
The North American electrical power grid is a highly connected system. The ongoing modernization of the grid is generally referred to as the "smart grid". Reliability and efficiency are two key drivers of the development of the smart grid. Another example is the ability for the electrical system to incorporate renewable energy sources such as wind power and geothermal power. One of the key issues for electric grid security is that these ongoing improvements and modernizations have created more risk to the system. As an example, one risk specifically comes from the integration of digital communications and computer infrastructure with the existing physical infrastructure of the power grid.
According to the academic journal IEEE Security & Privacy Magazine, "The smart grid . . . uses intelligent transmission and distribution networks to deliver electricity. This approach aims to improve the electric system's reliability, security, and efficiency through two-way communication of consumption data and dynamic optimization of electric-system operations, maintenance, and planning."
Government oversight
In the U.S., the Federal Energy Regulatory Commission (FERC) is in charge of the cybersecurity standards for the bulk power system. The system includes systems necessary for operating the interconnected grid.
Investor-owned utilities operate under a different authority, state public utility commissions. This falls outside of FERC's jurisdiction.
Cybersecurity
In 2016, members of the Russian hacker organization "Grizzly Steppe" infiltrated the computer system of a Vermont utility company, Burlington Electric, exposing the vulnerability of the nation's electric grid to attacks. The hackers did not disrupt the state's electric grid, however. Burlington Electric discovered malware code in a computer system that was not connected to the grid.
As of 2018, two evolutions are taking place in the power economic sector. These evolutions could make it harder for utilities to defend from a cyber threat. First, hackers have become more sophisticated in their attempts to disrupt electric grids. "Attacks are more targeted, including spear phishing efforts aimed at individuals, and are shifting from corporate networks to include industrial control systems." Second, the grid is becoming more and more distributed and connected. The growing "Internet of Things" world could make it so that every device could be a potential vulnerability.
Terrorist attack risk
As of 2006, over 200,000 miles of transmission lines rated at 230 kV or higher existed in the United States. The main problem is that it is impossible to secure the whole system from terrorist attacks. The impact of such an attack, however, would likely be minimal, because it would disrupt only a small portion of the overall grid. For example, an attack that destroys a regional transmission tower would have only a temporary impact. The modern-day electric grid system is capable of restoring equipment damaged by natural disasters such as tornadoes, hurricanes, ice storms, and earthquakes in a generally short period of time, owing to the resiliency of the national grid to such events. "It would be difficult for even a well-organized large group of terrorists to cause the physical damage of a small- to moderate-scale tornado."
Potential solutions
Today the utility industry is advancing cybersecurity with a series of initiatives. They are partnering with federal agencies. The goal is to improve sector-wide resilience to both physical and cyber threats. The industry is also working with National Institute of Standards and Technology, the North American Electric Reliability Corporation, and federal intelligence and law enforcement agencies.
In 2017, electric companies spent $57.2 billion on grid security.
In September 2018, Brien Sheahan, chairman and CEO of the Illinois Commerce Commission and a member of the U.S. Department of Energy (DOE) Nuclear Energy Advisory Committee, and Robert Powelson, a former Federal Energy Regulatory Commission (FERC) commissioner, wrote in a published piece in Utility Dive that cyberthreats to the national power system require stronger national standards and more collaboration between levels of government. Shortly before their article, the U.S. Department of Homeland Security confirmed that Russian hackers had targeted the control rooms of American public utilities. The electric distribution system has become increasingly networked and interconnected. Critical public services depend on the system: water delivery, financial institutions, hospitals, and public safety. To prevent disruption to the network, Sheahan and Powelson recommended national standards and collaboration between federal and state energy regulators.
Some utility companies have cybersecurity-specific practices or teams. Baltimore Gas and Electric conducts regular drills with its employees. It also shares cyber-threat related information with industry and government partners. Duke Energy put together a corporate incident response team that is devoted to cybersecurity 24 hours a day. The unit works closely with government emergency management and law enforcement.
Some states have cybersecurity procedures and practices:
New Jersey: Utilities are required to put together comprehensive cybersecurity plans.
Pennsylvania: Utilities must keep physical and cybersecurity, emergency response and business continuity plans. They also have to report severe cyberattacks.
Texas: The state's public utility commission conducts annual security audits.
In December 2018, U.S. Senators Cory Gardner and Michael Bennet introduced legislation intended to improve grid security nation-wide. The bills would create a $90 million fund that would be distributed to states to develop energy security plans. The legislation would also require the U.S. Energy Department to identify any vulnerabilities to cyberattacks in the nation's electrical power grid.
In March 2019, Donald Trump issued an executive order that directed federal agencies to prepare for attacks involving an electromagnetic pulse. In May 2020, he issued an executive order that bans the use of grid equipment manufactured by a foreign adversary.
Electricity Subsector Coordinating Council
The Electricity Subsector Coordinating Council (ESCC) is the main liaison organization between the federal government and the electric power industry. Its mission is to coordinate efforts to prepare for, and respond to, national-level disasters or threats to critical infrastructure. The ESCC is composed of electric company CEOs and trade association leaders from all segments of the industry. Its federal government counterparts include senior administration officials from the White House, relevant cabinet agencies, federal law enforcement, and national security organizations.
See also
Smart grids by country
High-voltage transformer fire barriers
References
Further reading
Campbell, Richard J. "Electric Grid Cybersecurity." Congressional Research Service. 2018-09-04.
Katz, Jeff. "10 Grid Security Considerations for Utilities." SecurityIntelligence. 2016-11-10.
"Framework for Improving Critical Infrastructure Cybersecurity." National Institute of Standards and Technology. 2014-02-12.
Gheorghiu, Iulia. "What are utilities doing about the growing need for grid security?" UtilityDIVE. 2018-05-22.
"Growing cyber threats demand comprehensive grid security." IBM.
Public utilities
Computer security
Electric power |
2079997 | https://en.wikipedia.org/wiki/Statistica | Statistica | Statistica is an advanced analytics software package originally developed by StatSoft and currently maintained by TIBCO Software Inc.
Statistica provides data analysis, data management, statistics, data mining, machine learning, text analytics and data visualization procedures.
Overview
Statistica is a suite of analytics software products and solutions originally developed by StatSoft and acquired by Dell in March 2014. The software includes an array of data analysis, data management, data visualization, and data mining procedures; as well as a variety of predictive modeling, clustering, classification, and exploratory techniques. Additional techniques are available through integration with the free, open source R programming environment.
Different packages of analytical techniques are available in six product lines.
History
Statistica originally derives from a set of software packages and add-ons that were initially developed during the mid-1980s by StatSoft. Following the 1986 release of Complete Statistical System (CSS) and the 1988 release of Macintosh Statistical System (MacSS), the first DOS version (trademarked in capitals as STATISTICA) was released in 1991. In 1992, the Macintosh version of Statistica was released.
Statistica 5.0 was released in 1995. It ran on both the new 32-bit Windows 95/NT and the older version of Windows (3.1). It featured many new statistics and graphics procedures, a word-processor-style output editor (combining tables and graphs), and a built-in development environment that enabled the user to easily design new procedures (e.g., via the included Statistica Basic language) and integrate them with the Statistica system.
Statistica 5.1 was released in 1996 followed by Statistica CA '97 and Statistica '98 editions.
Statistica 6, released in 2001, was based on the COM architecture and included multithreading and support for distributed computing.
Statistica 9 was released in 2009, supporting 32-bit and 64-bit computing.
Statistica 10 was released in November 2010. This release featured further performance optimizations for the 64-bit CPU architecture, as well as multithreading technologies, integration with Microsoft SharePoint, Microsoft Office 2010 and other applications, the ability to generate Java and C# code, and other GUI and kernel improvements.
Statistica 12 was released in April 2013 and features a new GUI, performance improvements when handling large amounts of data, a new visual analytic workspace, a new database query tool as well as several analytics enhancements.
Localized versions of Statistica (including the entire family of products) are available in Chinese (both Traditional and Simplified), Czech, English, French, German, Italian, Japanese, Polish, Russian, and Spanish. Documentation is available in Arabic, Chinese, Czech, English, French, German, Hungarian, Italian, Japanese, Korean, Polish, Portuguese, Russian, Spanish, and other languages.
Acquisition history
Statistica was acquired by Dell in March 2014. In November 2016, Dell sold off several pieces of its software group, and Francisco Partners and Elliott Management Corporation acquired Statistica as part of its purchase of Quest Software from Dell. On May 15, 2017, TIBCO Software Inc. announced it entered into an agreement to acquire Statistica.
Release history
List of releases:
PsychoStat - 1984
Statistical Supplement for Lotus 1-2-3 - 1985
StatFast/Mac - 1985
CSS 1 - 1987
CSS 2 - 1988
MacSS - 1988
STATISTICA/DOS - 1991
STATISTICA/Mac - 1992
STATISTICA 4.0 - 1993
STATISTICA 4.5 - 1994
STATISTICA 5.0 - 1995
STATISTICA 5.1 - 1996
STATISTICA 5.5 - 1999
STATISTICA 6.0 - 2001
STATISTICA 7.0 - 2004
STATISTICA 7.1 - 2005
STATISTICA 8.0 - 2007
STATISTICA 9.0 - 2009
STATISTICA 9.1 - 2009
STATISTICA 10.0 - 2010
STATISTICA 11.0 - 2012
STATISTICA 12.0 - 2013
Statistica 12.5 - April 2014
Statistica 12.6 - December 2014
Statistica 12.7 - May 2015
Statistica 13.0 - Sept 2015
Statistica 13.1 - June 2016
Statistica 13.2 - Sep 30, 2016
Statistica 13.3 - June, 2017
Statistica 13.3.1 - November, 2017
Statistica 13.4 - May 2018
Statistica 13.5 - November 2018
Statistica 13.6 - November 2019
Statistica 14.0 - December 2020
Graphics
Statistica includes analytic and exploratory graphs in addition to standard 2- and 3-dimensional graphs. Brushing actions (interactive labeling, marking, and data exclusion) allow for investigation of outliers and exploratory data analysis.
User interface
Operation of the software typically involves loading a table of data and applying statistical functions from pull-down menus or (in versions starting from 9.0) from the ribbon bar. The menus then prompt for the variables to be included and the type of analysis required. It is not necessary to type command prompts. Each analysis may include graphical or tabular output and is stored in a separate workbook.
See also
Comparison of statistical packages
StatSoft
References
Further reading
Hill, T., and Lewicki, P. (2007). STATISTICS Methods and Applications. Tulsa, OK: StatSoft.
Nisbet, R., Elder, J., and Miner, G. (2009). Handbook of Statistical Analysis and Data Mining Applications. Burlington, MA: Academic Press (Elsevier).
External links
TIBCO Data Science (formerly known as Statistica)
StatSoft Homepage
Electronics statistics textbook online
Statistical software
Science software for Windows
Windows-only software |
39739125 | https://en.wikipedia.org/wiki/System%20generation | System generation | In computing, system generation or sysgen is the process of creating a particular unique instance of an operating system by combining user-specified options and parameters with manufacturer-supplied general-purpose program code to produce an operating system tailored for a particular hardware and software environment.
Some other programs have similar processes, although not usually called "sysgen." For example, IBM's Customer Information Control System (CICS) was installed through a process called CICSGEN.
Rationale
A large general-purpose program such as an operating system has to provide support for every variation of central processing unit (CPU) that it might run on, for all supported main memory sizes, and for all possible configurations of input/output (I/O) equipment. No single installation requires all of this support, so system generation provides a process for selecting only the options and features actually required on any one system.
Sysgen produces a system that is most efficient in terms of CPU time, main memory requirements, I/O activity, and/or disk space. Often these parameters can be traded off, for example to generate a system that requires less memory at the expense of increased disk I/O operations.
See also
System Generation (OS)
References
System software |
58919970 | https://en.wikipedia.org/wiki/1993%20United%20States%20Senate%20hearings%20on%20video%20games | 1993 United States Senate hearings on video games | On December 9, 1993, and March 5, 1994, members of the combined United States Senate Committees on Governmental Affairs and the Judiciary held congressional hearings with several spokespersons for companies in the video game industry, including Nintendo and Sega, concerning violence in video games and its perceived impact on children. The hearings were a result of concerns raised by members of the public over the 1993 releases of Night Trap and Mortal Kombat, and later Doom, which was released after the first hearing. Besides general concerns related to violence in video games, the situation had been inflamed by a moral panic over gun violence, as well as the state of the industry and an intense rivalry between Sega and Nintendo.
The hearings, led by Senators Joe Lieberman and Herb Kohl, took the video game companies to task for the realistic depiction of violence in video games, and threatened that Congress would take action to regulate the industry if they did not take steps themselves. As a result, the American video game industry created the Interactive Digital Software Association (now known as the Entertainment Software Association) in July 1994 to serve as an advocacy group for the industry, and subsequently formed the Entertainment Software Rating Board (ESRB) to provide content ratings on video games sold at retail in North America.
Background
Since as early as the 1970s, video games have been criticized for violent content that psychologically influences players. In 1982, Surgeon General C. Everett Koop asserted that video games might be affecting the health and well-being of young people and were potentially addictive. Until the 1990s, however, the perceived target market for video games was generally children, and manufacturers of video games typically did not include high levels of graphic violence in their games. Most computer game software was sold through toy stores like Toys 'R' Us or general retail outlets like Sears and Wal-Mart, rather than computer stores.
State of the industry
By 1993, the video game industry had recovered from the 1983 crash, and was estimated to be worth . Video game consoles had reached the 16-bit era with the ability to support higher resolution graphics. Alongside this, video games had started to draw older players, creating a market for games with more mature content, both on home consoles and in arcades.
During this period, two key players were Nintendo and Sega. Nintendo had been instrumental in reviving the North American market after the crash with the introduction of the Nintendo Entertainment System (NES); in 1990, Nintendo sales accounted for 90% of the US market. Its successor system, the Super Nintendo Entertainment System (SNES), was released in 1991. However, to avoid repeating one of the issues that caused the crash, Nintendo took care to limit and review what third-party games could be made for its platforms, so as to avoid a glut of poor games. Sega, which had already released its Sega Genesis in the US in 1989, found its sales lagging behind Nintendo's. In an aggressive campaign aimed at the United States market, Sega heavily pushed the game Sonic the Hedgehog, with its titular character intended to become Sega's mascot and a rival to Nintendo's Mario. Sega was also less selective about which games it allowed on the platform, admitting many more third-party games onto the system to increase its library in contrast to Nintendo's. Further, Sega used marketing language purposely aimed at Nintendo, such as "Sega does what Nintendon't". By 1992, Sega had obtained 65% of the US gaming market, overtaking Nintendo's dominant position. A strong rivalry between Nintendo and Sega subsequently formed, referred to as the "Console Wars", which continued through the next decade and into the fifth generation of consoles, after which Sega dropped out of the hardware market and became principally a game developer and publisher, at times working collaboratively with Nintendo.
Existing content ratings system
Prior to 1994, the video game industry did not have a unified content rating system. Entering 1993, the Children's Advertising Review Unit (CARU) of the Better Business Bureau had heard rumblings from politicians that the content of video games was under scrutiny, and sent word to its board members, Sega and Nintendo. Sega agreed that it should have a ratings system, and developed its Videogame Rating Council in June 1993 in association with independent educators, psychologists, child development experts and sociologists. The Videogame Rating Council had three principal ratings for games released on the Genesis platform: GA for General Audiences, MA-13 for games intended for those thirteen years and older, and MA-17 for those 17 years and older. Sega debuted the system in June 1993, just prior to the release of Mortal Kombat for the Genesis, knowing the ratings system would help mitigate concerns over the violent content in the game. Nintendo did not have a ratings system, but as it had control over the cartridge manufacturing process, it would only publish games it felt were appropriate for a family console. Nintendo also refused to use Sega's ratings solution because of their continued corporate rivalry.
A further difficulty for the industry was that there was no established trade group for the video game industry. While many of the game publishers belonged to the Software Publishers Association, this group represented more practical software interests, such as word processors and spreadsheets, and the Association did not hold the entertainment software membership in high regard.
Mortal Kombat
Fighting games had become a lucrative property after the release of Capcom's arcade Street Fighter II: The World Warrior in 1991, which established many conventions of the genre. While Sega lured many of Nintendo's third-party developers to break their exclusivity agreements to publish games only for the Nintendo consoles, Capcom remained loyal to Nintendo, licensing only a few titles to Sega to publish, and only the SNES received the home console port of Street Fighter II in 1992, which was said to help keep the SNES sales ahead of the Genesis in the United States for that year.
Numerous other fighting games followed, trying to capitalize on Street Fighter II's success; the most notable was Midway's Mortal Kombat, first released as an arcade game in 1992. Mortal Kombat was highly controversial at its release: the game featured photo-realistic sprites of its characters, graphic spurting of blood on many hits, and a number of "fatalities" such as decapitation or impaling a body on spikes. Despite advertising indicating that the game was meant for mature audiences, the perception that video games were still aimed at children led parents and other concerned groups to criticize the violence within the game. At this point, however, Mortal Kombat was only an arcade game, making it relatively easy to segregate from other arcade games if necessary. Greg Fischbach, the co-founder and CEO of Acclaim Entertainment, the company that had secured the license for home console versions of the game, said that while the negative press attention helped to boost the game's popularity, "[w]e didn't want that press or publicity", and recognized that the industry might need to take steps to quell similar problems in the future.
Because of its success in the arcade, Acclaim Entertainment started to bring Mortal Kombat to various home consoles. Both Sega and Nintendo sought the game for their consoles, the Genesis and the SNES, respectively. Both companies recognized the issue with the level of gore in the game, but took very different approaches. Sega, consistent with its attempt to capture as much of the market as possible, tried to keep as much of the arcade game's visual gore in place. While the Genesis version of Mortal Kombat as shipped eliminated the blood and fatalities, they could be activated via a well-publicized cheat code. As the game as shipped did not include the blood and gore, Sega labelled it with its MA-13 VRC rating. Nintendo, on the other hand, wanted to keep games on its system appropriate for families and children, and required Acclaim to change the red blood to grey sweat, edit the fatalities, and alter other parts of the game's artwork to remove elements like severed heads on spikes. Sega's version of Mortal Kombat outsold Nintendo's by a factor of five. This furthered the existing rivalry between the two companies.
Night Trap
Night Trap is a 1992 game developed by Digital Pictures and released on the Sega CD, a CD-ROM attachment for the Sega Genesis. Night Trap is presented as an interactive movie, using full-motion video to show scenes and allowing players to choose their next option, creating divergence in the story. The game's narrative centered on the disappearance of teenage girls (with Dana Plato in a starring role) at a winery estate, tied in with the appearance of vampire-like beings that feed on young females. The video scenes consequently often veered into sexually suggestive territory as well as violence during various encounters.
Night Trap drew criticism worldwide, which nonetheless helped to publicize it. Similar issues were brought up in the United Kingdom, with former Sega of Europe development director Mike Brogan noting that "Night Trap got Sega an awful lot of publicity... Questions were even raised in the UK Parliament about its suitability. This came at a time when Sega was capitalizing on its image as an edgy company with attitude, and this only served to reinforce that image." In retrospect, however, much of this criticism was deemed misguided. Steven L. Kent writes that "Reading the transcripts of the 1993 hearings, it is hard to believe that anybody had ever actually played Night Trap. Few people bothered to acknowledge that the goal of Night Trap was not to kill women but to save them from vampires. Players did not even kill the vampires—they simply trapped them in Rube Goldberg–like booby traps. Nearly everyone who referred to Night Trap mentioned a scene in which a girl in a rather modest teddy is caught by the vampires and killed. The scene was meant to show players that they had lost and allowed too many vampires into the house." Jeremy Parish of 1UP.com noted that "its game objectives were mischaracterized either through ignorance or deliberate obfuscation, transforming it from bland and barely titillating FMV adventure to child-corrupting sexual boogeyman."
Lethal Enforcers
Lethal Enforcers is a 1992 arcade game by Konami that used light guns; the player takes the role of a police officer who must lethally take down criminals while avoiding killing civilians and fellow police officers or being shot by criminals. The game was rendered using photo-realistic imagery, which drew some concern. It was ported to home consoles the next year (on Sega systems in late 1993, and on the SNES by early 1994). These games shipped with the Konami Justifier, a plastic light gun modeled after a revolver.
Moral panic around gun violence
In the months prior to the hearings, there had been a small moral panic in America over gun-related crime, which fueled the concerns related to violent video games. According to the Bureau of Justice Statistics, gun-related homicides in 1993 had reached their highest levels since the 1970s. Both Congress and the Justice Department were looking to reduce the amount of violence on television.
Congressional concern
The violence in video games became a concern after Mortal Kombat's home console release in September 1993. Bill Andresen, a former chief of staff to Senator Joe Lieberman, had been asked by his son to purchase the Sega version of Mortal Kombat for him. Andresen was appalled by the amount of violence in the game and approached Lieberman on the matter. Lieberman was also shocked by the content of the game and began gathering more information. Lieberman stated that he had also heard about Night Trap, evaluated the game himself, and likewise recognized its content as problematic.
On December 1, 1993, Lieberman held a press conference alongside other children's advocates, including Bob Keeshan, the actor who played "Captain Kangaroo". Lieberman stated his intention to open a congressional hearing the following week to address the issue of violent video games and the lack of content ratings, and his plans to introduce a ratings body through legislation to regulate the video game industry. During the conference, he showed footage from Mortal Kombat, Night Trap, and other games. Lieberman's research concluded that the average video game player at the time was between seven and twelve years old, and that video game publishers were marketing violence to children. Lieberman commented on the sales of Mortal Kombat to date, which had sold 3 million copies by that point and was estimated to bring in over by the end of the year, which he argued demonstrated the industry's willingness to use violence to cater to children. Of Night Trap, Lieberman said "I looked at that game, too, and there was a classic. It ends with this attack scene on this woman in lingerie, in her bathroom. I know that the creator of the game said it was all meant to be a satire of Dracula; but nonetheless, I thought it sent out the wrong message." Lieberman stated "Few parents would buy these games for their kids if they knew what was in them" and "We're talking about video games that glorify violence and teach children to enjoy inflicting the most gruesome forms of cruelty imaginable." Lieberman subsequently stated that while he'd "like to ban all the violent video games", he knew this would conflict with the First Amendment, and instead wanted to seek a solution involving a content ratings system, which he felt would not impede First Amendment rights.
Hearings
First hearing
The first hearing was held on December 9, 1993, in front of the combined Governmental Affairs and Judiciary Senate committees. At the time, the Senate was in recess, so the only Senators present were Lieberman, Kohl and Byron Dorgan.
Hours before the hearing, representatives of the video game industry announced that they had agreed to endorse and develop an industry-standard video game content ratings system, in an attempt to defuse the bad publicity of the hearings. This announcement was referred to several times throughout the first hearing.
Speaking witnesses to the panel included:
Dr. Parker Page of the Children's Television Resource and Education Center
Professor Eugene F. Provenzo of the University of Miami and author of the recently published Video Kids: Making Sense of Nintendo (1991)
Robert Chase of National Education Association
Marilyn Droz of the National Coalition on Television Violence
Howard Lincoln, vice president of Nintendo of America
Bill White, vice president of Sega of America
Ilene Rosenthal, General Counsel, Software Publishers Association
Dawn Weiner, Video Software Dealers Association
Craig Johnson, Past-President, Amusement and Music Operators Association
The first half of the hearing was devoted to the experts on education and child psychology. These four discussed their concerns and findings on the impact of violence in video games on children. Chase stated "Electronic games, because they are active rather than passive, can do more than desensitize impressionable children to violence. They actually encourage violence as the resolution of first resort by rewarding participants for killing one's opponents in the most grisly ways imaginable." Provenzo reiterated the findings from his book Video Kids, stated that recent games had become "overwhelmingly violent, sexist, and racist", affirmed that he felt that with games like Night Trap the industry was "endorsing violence", and further called for a ratings system. Droz stated that children "need action, but they do not need to find murder as a form of entertainment".
The second half of the hearing focused on the industry representatives. During the hearing, Sega and Nintendo continued their ongoing rivalry, each accusing the other of being the reason behind the hearings. Nintendo's Lincoln led off his part of the testimony by acknowledging Nintendo's action to remove some of the violence from Mortal Kombat, which observers felt earned Lincoln more respect from Lieberman than Sega's White received. One of White's key points was the transformation of the video game industry's audience from primarily children to adults, and that Night Trap was meant only for adults. Lincoln rejected these claims, telling White, "I can't just sit here and allow you to be told that the video game industry has been transformed from children [as primary consumers] to adults." White referred to statistics collected from warranty cards on hardware and game sales, which Sega kept and Nintendo would also have kept, to justify the older demographics of current video games. Further, Lincoln asserted that Sega only developed its rating system after the release of Night Trap, and only started to label its games after the game was criticized by consumers. White responded by showing a videotape of violent video games on the SNES and stressed the importance of rating video games, something Nintendo at this point lacked.
Of Lethal Enforcers, Lieberman criticized the design of the Konami Justifier, which resembled a revolver; an infamous still from C-SPAN's coverage of the hearings shows Lieberman holding up the Justifier to discuss its realistic appearance. Sega's White countered that Nintendo had a similar light gun product on the market, the Super Scope, which was shaped like a bazooka. Lieberman asserted that both the Justifier and the Super Scope looked too much like real weapons and should not be in the hands of children.
Lieberman was also critical of how video game companies approached advertising. During the hearing, he showed a Sega television advertisement in which a school-aged child wins several video games over others and then makes the other students obey his commands. Lieberman also expressed concern about Sega's Videogame Rating Council: while the ratings were reasonable, there was no standardization of how they were displayed, sometimes being printed only on the game cartridge itself, thus preventing parents from being able to review them before purchase.
Kohl warned the video game publishers that "If you don't do something about [content ratings], we will", referring to a bill Lieberman had been drafting that would have the government become involved in a ratings system. By the end of the hearing, Sega and Nintendo said they would commit themselves to working with retail outlets, including Sears and Toys 'R' Us, to create a voluntary content ratings system to denote any violence or sexual content in their games, to be modeled after the film rating system created by the Motion Picture Association of America. At the conclusion of the first hearing, Lieberman decided there would be a second session in a few months to review the industry's progress on this effort.
Video Game Rating Act of 1994
Following the December 1993 hearing, Senator Lieberman, co-sponsored by Kohl and Dorgan, introduced the Video Game Rating Act of 1994 (S. 1823) to the Senate on February 3, 1994;
the equivalent bill (H.R. 3785) was introduced to the House of Representatives by Tom Lantos. The Act, if passed, would have established an Interactive Entertainment Rating Commission, a five-member panel appointed by the President. This commission would then have coordinated with the video game industry to develop a ratings system and a method of disseminating information about violent and sexually explicit content to potential buyers. Lieberman asserted that the bill had been presented to pressure the video game industry into taking voluntary action to come up with a ratings system itself, and that he had no plans to follow through on the bill should the industry come to an agreement.
Interim events
As a result of the Congressional hearings, Night Trap started to generate more sales. According to Digital Pictures founder Tom Zito, "You know, I sold 50,000 units of Night Trap a week after those hearings." Two weeks before Christmas 1993, Night Trap was removed from store shelves in the US's two largest toy store chains, Toys "R" Us and Kay-Bee Toys, after they received numerous complaints. Michael Goldstein, the vice president of Toys 'R' Us, stated in mid-December that this was "a decision we made several weeks ago with the concurrence of Sega, which agrees with our decision". Sega withdrew Night Trap from all retail markets in January 1994, but not before selling over 250,000 copies. Bill White, Sega Vice President of Marketing, stated that Night Trap was pulled because the continued controversy surrounding it prevented constructive dialogue about an industry-wide rating system. Sega also stated at the time that it would later release a censored version pending the establishment of an industry-wide ratings system.
Second hearing
The second hearing was held on March 5, 1994, and included the following speakers:
Congressman Tom Lantos
Jack Heistand, Senior Vice President for Electronic Arts
Mary Evan, Vice President of Store Operations, Babbages
Chuck Kerby, Divisional Merchandise Manager, Wal-Mart
Steve Loenigsberg, President, American Amusement Machine Association
R.A. Green, III, Amusement and Music Operators Association
Heistand presented himself as part of the newly formed Interactive Entertainment Industry Rating Commission, the industry group working to establish the desired ratings system. He reported to the joint committee that seven companies, including Electronic Arts, Sega, Nintendo, Atari, Acclaim, Philips and 3DO, representing about 60% of video game software in the United States, had committed to developing an industry software ratings board. Heistand said they anticipated agreeing on the ratings standards by June 1994, so that by November of that year (in time for the holiday shopping season) they would be able to rate all new games coming to market going forward, an estimated 2,500 games per year. However, Heistand also reported that the industry considered reviewing all previously released games to be too much effort. He further cautioned that the system might not take off if they could not get software developers outside the group to also sign on to support the ratings system.
Lieberman acknowledged the proposed system as a critical step towards helping parents make informed decisions, but cautioned that until the rating system was in place, the Senators would not be removing the proposed bill from their agenda. Retailers such as Wal-Mart, Toys "R" Us, and Babbages agreed that they would only stock games that had received these ratings, though they had not yet decided how to handle selling games rated for adults to children. The Senators still expressed concern at the type of content the industry was willing to produce. Kohl stated "Let me give you my honest perspective on this issue: Violent video games that degrade women are harmful to our children and are garbage...But we live by and cherish a Constitution that prevents government from censoring material. So we will try to live with a rating system". Lieberman stated, "If the video game industry had practiced self-restraint before now, we wouldn't be here today".
Reactions
By April 1994, the coalition of companies represented by Heistand had established the Interactive Digital Software Association (IDSA), with Acclaim's Fischbach serving as its initial CEO. One of the first tasks taken on by the IDSA was to establish the promised rating system. While Sega offered its existing VRC as a basis, Nintendo, among others, steadfastly refused, as it did not want to deal with anything created by its main competitor. Instead, a vendor-independent solution was developed, the Entertainment Software Rating Board (ESRB), with a new set of rating standards developed in conjunction with parents and educators. The ESRB ratings system was modeled after that of the Motion Picture Association of America, defining five age-related categories but also adding a set of descriptive terms that would appear next to the rating to describe the specific content found in the game.
The ESRB was formally introduced to Congress in July 1994 to show that the industry had met Lieberman's goal, and the Board became officially active on September 13, 1994. Lieberman stated in a 2017 interview that the video game industry "actually came up with a rating system that I think at the time — I honestly haven't been back to this in a long time — was the best. Much better than the movies." The IDSA also became the industry's trade association, helping to advocate for the industry to the government and other groups. Eventually, the IDSA was renamed the Entertainment Software Association and launched the industry's principal trade show, the Electronic Entertainment Expo.
While The 3DO Company had stated it would help back an industry-wide solution to content ratings, it concurrently developed its own 3DO Rating System for games released on the 3DO platform. The 3DO system was voluntary and, unlike the ESRB, allowed the publisher to select the rating. Publishers of 3DO games were split on whether to use the ESRB or 3DO system. Ultimately, 3DO exited the hardware business around 1996, nullifying the need for the 3DO Rating System.
Separately, the Software Publishers Association (now the Software and Information Industry Association), the Association of Shareware Professionals, and other groups representing developers of video game software on personal computers felt that the proposed ESRB system, which was based principally on age ratings, was not sufficient; they wanted to inform parents of the specific types of content that would be in their games. These groups developed the Recreational Software Advisory Council (RSAC) in 1994, which rated games in three areas: violence, sexual content, and language, with each rated on five levels. The following year, the RSAC system was extended to Internet sites as the "RSACi" system. By 1999, RSAC and RSACi were transitioned to the Internet Content Rating Association, dedicated specifically to rating Internet content, while software developers adopted the ESRB system for their games.
Legacy
Senator Lieberman continued to monitor the video game industry following the hearings, as part of his broader position on violent content in the entertainment industries. In 1997, he stated that one of his intentions with the 1993 hearings was to have the industry regulate how much violence it was putting into its games by having them implement the ratings system, but felt that "[t]he rating system has not stopped game producers from putting out some very violent games."
Mortal Kombat II was released to arcades in its final form by January 1994, and to home consoles later that year. Among other changes, the game added a variation on "Fatalities" called "Friendships", which would occur if the player performed a separate move mechanically similar to a fatality; in such a case, the winning fighter would perform a non-hostile action, such as giving the defeated fighter a virtual present. According to John Tobias, co-creator of Mortal Kombat, these Friendships were added in response to the Congressional hearings. When the home console versions were released, after the establishment of the ESRB, Nintendo did not take issue with the amount of violence in the game and allowed it to be released without any changes on the SNES. The SNES version of Mortal Kombat II outsold the Sega Genesis version that year.
id Software's Doom, a first-person shooter in which the player fights hellish creatures and which features graphic violence, was released on December 10, 1993, the day after the first hearing. Outside of the hearings, parents and other organizations raised concerns about the violence in this game as well. While Doom was not mentioned in either Senate hearing, it would come up again in 1999 following the Columbine High School massacre, where the perpetrators had described their planned attack as something straight out of Doom. As a result, Doom is frequently classified along with Mortal Kombat, Night Trap, and Lethal Enforcers as an early example of the violent video games highlighted by the media.
At the time of the hearings, video games had not been established as a protected form of speech under the First Amendment to the United States Constitution, though Lieberman and others had expressed concern that censoring violent games would raise First Amendment issues and so sought the ratings approach instead. Since the formation of the ESRB, attempts have been made by lawmakers at the federal and state levels to restrict video game sales by their ESRB rating, principally in regard to their level of violence. In a landmark case in 2011, the Supreme Court of the United States ruled in Brown v. Entertainment Merchants Association that video games are an art form protected by the First Amendment. The ruling found that while states can pass laws to block the sale of "obscene" video games to minors, violence does not fall within the Miller test of what is considered obscene.
One of Howard Lincoln's statements during the first hearing was "Let me say for the record, I want to state that Night Trap will never appear on a Nintendo System." The statement was jokingly referred to in 2018 when a remake of Night Trap for its 25th anniversary was announced for release on the Nintendo Switch among other systems.
References
External links
ESRB official FAQ
1993 controversies in the United States
Censorship in the United States
Entertainment Software Association
Investigations and hearings of the United States Congress
Video game controversies
Violence in video games |
32543240 | https://en.wikipedia.org/wiki/Firefox%20OS | Firefox OS | Firefox OS (project name: Boot to Gecko, also known as B2G) is a discontinued open-source operating system made for smartphones, tablet computers, smart TVs and dongles designed by Mozilla and external contributors. It is based on the rendering engine of the Firefox web browser, Gecko, and on the Linux kernel. It was first commercially released in 2013.
Firefox OS was designed to provide a complete, community-based alternative operating system, for running web applications directly or those installed from an application marketplace. The applications use open standards and approaches such as JavaScript and HTML5, a robust privilege model, and open web APIs that can communicate directly with hardware, e.g. cellphone hardware. As such, Mozilla with Firefox OS competed with commercially developed operating systems such as Apple's iOS, Google's Android, Microsoft's Windows Phone, BlackBerry's BlackBerry 10, Samsung's/Linux Foundation's Tizen and Jolla's Sailfish OS. In December 2015, Mozilla announced it would stop development of new Firefox OS smartphones, and in September 2016 announced the end of development. Successors to Firefox OS include the discontinued B2G OS and Acadine Technologies' H5OS as well as KaiOS Technologies' KaiOS and Panasonic's My Home Screen.
History
Firefox OS was publicly demonstrated in February 2012, on Android-compatible smartphones. By December 16, 2014, fourteen operators in 28 countries throughout the world offered Firefox OS phones.
On December 8, 2015, Mozilla announced that it would stop sales of Firefox OS smartphones through carriers. Mozilla later announced that Firefox OS smartphones would be discontinued by May 2016, as the development of "Firefox OS for smartphones" would cease after the release of version 2.6. Around the same time, it was reported that Acadine Technologies, a startup founded by Li Gong (former president of Mozilla Corporation) with various other former Mozilla staff among its employees, would take over the mission of developing carrier partnerships, for its own Firefox OS derivative H5OS.
In January 2016 Mozilla announced that Firefox OS would power Panasonic's UHD TVs (as previously announced Firefox OS "would pivot to connected devices"). In September 2016 Mozilla announced that work on Firefox OS had ceased, and that all B2G-related code would be removed from mozilla-central.
Project inception and roll-out
Commencement of project
On July 25, 2011, Andreas Gal, Director of Research at Mozilla Corporation, announced the "Boot to Gecko" Project (B2G) on the mozilla.dev.platform mailing list. The project proposal was to "pursue the goal of building a complete, standalone operating system for the open web" in order to "find the gaps that keep web developers from being able to build apps that are in every way the equals of native apps built for the iPhone, Android, and Windows Phone 7." The announcement identified these work areas: new web APIs to expose device and OS capabilities such as telephone and camera, a privilege model to safely expose these to web pages, applications to prove these capabilities, and low-level code to boot on an Android-compatible device.
This led to much blog coverage. According to Ars Technica, "Mozilla says that B2G is motivated by a desire to demonstrate that the standards-based open Web has the potential to be a competitive alternative to the existing single-vendor application development stacks offered by the dominant mobile operating systems."
In 2012, Andreas Gal expanded on Mozilla's aims. He characterized the then-current set of mobile operating systems as "walled gardens" and presented Firefox OS as more accessible: "We use completely open standards and there’s no proprietary software or technology involved." (That changed in 2014; see Digital rights management (DRM), below.) Gal also said that because the software stack is entirely HTML5, there was already a large number of established developers. This assumption underpinned Mozilla's WebAPI effort: a set of intended W3C standards that attempt to bridge the capability gap between native frameworks and web applications. The goal of these efforts was to enable developers to build applications using WebAPI that would then run in any standards-compliant browser without the need to rewrite their application for each platform.
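As a rough illustration of that idea (a sketch, not an excerpt from Mozilla's documentation), the snippet below drives device hardware from plain JavaScript using two of the standardized device-facing Web APIs promoted in that effort, the Battery Status API and the Vibration API; the surrounding page structure and the "status" element are hypothetical.

// Minimal sketch: reading device state and triggering hardware purely
// through standard Web APIs, with no platform-specific native code.
function reportBattery() {
  // Battery Status API: navigator.getBattery() resolves to a BatteryManager.
  if (navigator.getBattery) {
    navigator.getBattery().then(function (battery) {
      var text = Math.round(battery.level * 100) + "% " +
                 (battery.charging ? "(charging)" : "(discharging)");
      document.getElementById("status").textContent = text;
    });
  }
}

function buzz() {
  // Vibration API: vibrate for 200 ms where the hardware supports it.
  if (navigator.vibrate) {
    navigator.vibrate(200);
  }
}

document.addEventListener("DOMContentLoaded", function () {
  reportBattery();
  document.getElementById("status").addEventListener("click", buzz);
});

The same page would run unmodified in any standards-compliant browser, which is the portability argument Gal was making.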
Development history
In July 2012, Boot to Gecko was rebranded as 'Firefox OS', after Mozilla's well-known desktop browser, Firefox, and screenshots began appearing in August 2012.
In September 2012, the analyst firm Strategy Analytics forecast that Firefox OS would account for 1% of the global smartphone market in 2013, its first year of commercial availability.
In February 2013, Mozilla announced plans for its global commercial roll-out of Firefox OS.
Mozilla announced at a press conference before the start of Mobile World Congress in Barcelona that the first wave of Firefox OS devices would be available to consumers in Brazil, Colombia, Hungary, Mexico, Montenegro, Poland, Serbia, Spain and Venezuela. Mozilla also announced that LG Electronics, ZTE, Huawei and TCL Corporation had committed to making Firefox OS devices.
In December 2013, new features were added with the 1.2 release, including conference calling, silent SMS authentication for mobile billing, improved push notifications, and three state settings for Do Not Track.
Async Pan and Zoom (APZ), included in version 1.3, was intended to improve user-interface responsiveness.
With version 1.3T, work was done to optimize Firefox OS to run on 128 MB platforms. At least one 128 MB device appears to use that version, though it may be unfinished.
In 2015, Mozilla ported Firefox OS (an "experimental version") to MIPS32 to work in a sub-$100 tablet (which can also run Android 4.4 KitKat). Mozilla also worked on developing the OS for smart feature phones.
Firefox OS was discontinued in January 2017.
Digital rights management (DRM)
In 2014, Gal announced a change in course, writing that future versions of the Firefox browser would include DRM. Implementation of DRM in the Firefox browser began with version 38.
In August 2015, attempts by Matchstick TV (based on Firefox OS) to add DRM caused the demise of Matchstick, a decision that Boing Boing called "suicide-by-DRM".
Demonstrations
At Mobile World Congress 2012, Mozilla and Telefónica announced that the Spanish telecommunications provider intended to deliver "open Web devices" in 2012, based on HTML5 and these APIs.
Mozilla also announced support for the project from Adobe and Qualcomm, and that Deutsche Telekom’s Innovation Labs would join the project.
Mozilla demonstrated a "sneak preview" of the software and apps running on Samsung Galaxy S II phones (replacing their usual Android operating system).
In August 2012, a Nokia employee demonstrated the OS running on a Raspberry Pi.
Firefox OS is compatible with a number of devices, including Otoro, PandaBoard, Emulator (ARM and x86), Desktop, Nexus S, Nexus S 4G, Samsung Galaxy S II, Galaxy Nexus and Nexus 4. A MIPS port was created by Imagination Technologies in March 2015.
In December 2012, Mozilla rolled out another update and released Firefox OS Simulator 1.0, which can be downloaded as an add-on for Firefox. The latest version of Firefox OS Simulator, version 4.0, was released on July 3, 2013 and announced on July 11, 2013.
Mozilla's planned US$25 Firefox smartphone, displayed at MWC, was built by Spreadtrum. Mozilla collaborated with four handset makers and five wireless carriers to provide five Firefox-powered smartphones in Europe and Latin America, with cellphone launches being led by UK marketer John D. Bernard. In India, Mozilla planned a launch at $25 in partnership with Intex and Spice, but the price ended up being $33 (converted from 1,999 rupees).
Core technologies
The initial development work involves three major software layers:
Gonk – platform denomination for a combination of the Linux kernel and the HAL from Android
Gecko – the web browser engine and application run-time services layer
Gaia – an HTML5 layer and user-interface system
Gonk
Gonk consists of a Linux kernel and a user-space hardware abstraction layer (HAL). The kernel and several user-space libraries are common open-source projects: Linux, libusb, BlueZ, etc. Some other parts of the HAL are shared with the Android project: GPS and camera, among others. Gonk is basically an extremely simple Linux distribution and is therefore, from Gecko's perspective, simply another porting target; there is a port of Gecko to Gonk, just as there is a port of Gecko to OS X and a port of Gecko to Android. However, since the development team has full control over Gonk, the developers can fully expose to Gecko all the features and interfaces required for a comprehensive mobile platform, which are not accessible on other mobile OSes. For example, using Gonk, Gecko can obtain direct access to the full telephony stack and the display framebuffer, access it does not have on any other OS.
Gecko
Gecko is the web browser engine of Firefox OS. Gecko implements open standards for HTML, CSS, and JavaScript. Gecko includes a networking stack, graphics stack, layout engine, virtual machine (for JavaScript), and porting layers.
Gaia
Gaia is the user interface of Firefox OS and controls everything drawn to the screen. Gaia includes by default implementations of a lock screen, home screen, telephone dialer and contacts application, text-messaging application, camera application and gallery support, plus the classic phone apps: mail, calendar, calculator and marketplace. Gaia is written entirely in HTML, CSS, and JavaScript. It interfaces with the operating system through Open Web APIs, which are implemented by Gecko. Because it uses only standard web APIs, it can work on other OSes and other web browsers.
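For illustration, apps in this stack described themselves to the system through a JSON manifest (conventionally a file named manifest.webapp); the sketch below shows the general shape of such a manifest, with the app name, paths, URL and permission wording being hypothetical and field details varying between Firefox OS versions.

{
  "name": "Example Notes",
  "description": "A hypothetical note-taking app, used here only for illustration",
  "launch_path": "/index.html",
  "icons": {
    "128": "/img/icon-128.png"
  },
  "developer": {
    "name": "Example Developer",
    "url": "https://example.org"
  },
  "type": "web",
  "permissions": {
    "storage": {
      "description": "Needed to keep notes available offline"
    }
  }
}

Gaia reads this metadata to place the app on the home screen, while Gecko uses the declared type and permissions to decide which Open Web APIs the app may call.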
Release history
Forks
Panasonic continues to develop the operating system for use in their Smart TVs, which run My Home Screen, powered by the Firefox OS.
Acadine Technologies has derived their H5OS from Firefox OS as well. Li Gong, the founder of the company, had overseen the development of Firefox OS while serving as president of Mozilla Corporation.
A fork called KaiOS has been used on a few feature phones, including Alcatel's OneTouch Go Flip (known as Cingular Flip 2 on AT&T), Reliance Jio's JioPhone (LYF F30C), and Intex's Turbo+ 4G. The system brings support for 4G LTE, Wi-Fi, GPS, and HTML5-based apps onto non-touch devices with an optimized user interface, less memory usage, and longer battery life.
B2G OS
B2G OS (Boot 2 Gecko) was a community-developed mobile operating system, and the successor to Firefox OS. It follows the Firefox OS goal of providing a complete, community-based alternative operating system, that runs software as web applications. Its mobile apps therefore use open web standards and programming languages such as JavaScript and HTML5, a robust privilege model, and open web APIs that can communicate directly with the device's hardware.
It is now the basis of KaiOS, which as of January 2019 had over 17 percent of the Indian mobile phone market and was the third most popular phone OS. KaiOS is closed-source.
History
B2G OS was forked from Firefox OS following Mozilla's decision to discontinue support for its mobile operating system. According to Ari Jaaksi and David Bryant, the decision was made because, "in order to evolve quickly and enable substantial new architectural changes in Gecko, Mozilla’s Platform Engineering organization needs to remove all B2G-related code from mozilla-central."
B2G OS is no longer maintained.
Comparison with Android
Firefox OS used the Linux kernel, as Android does, and ran the Gecko engine on top of it to render the screen output. Apps were written using HTML5, CSS and JavaScript, the same cooperating languages used to build web pages. In essence, apps on Firefox OS were web apps, and the OS could be thought of as a web browser that stored content offline. Android's apps, on the other hand, are coded in Java using Android Studio, and Android also enjoys greater maturity and support. Despite these differences, Firefox OS did feature all the essentials required to use a smartphone. Firefox OS's first official device launch in Germany came in 2014 with the Alcatel One Touch Fire, which had a 3.5-inch HVGA screen, a Cortex-A5 processor, 256 MB of RAM and 512 MB of storage. As of December 2015, Mozilla had launched 12 smartphones across 24 countries.
Criticisms
Chris Ziegler of the technology website The Verge wrote that Firefox OS would take app distribution back to the pre-iPhone era, requiring application developers to deal with multiple carriers and their app stores. At the Mobile World Congress, Mozilla's CEO Gary Kovacs said that Firefox OS had the advantage that users need not install an app to use it. Mozilla sought to make the most of this with the search functionality built into Firefox OS, a core feature of the platform.
Janne Lindqvist, a mobile security researcher at the Rutgers University WINLAB, expressed concern about the discovery mechanism of a Web-based platform, but a Mozilla spokesperson stated that Mozilla required developers to "package downloadable apps in a zip file that has been cryptographically signed by the store from which it originated, assuring that it has been reviewed." In addition, "apps coming back from search are given only limited access to device programming interfaces and applications, unless the user grants permission for further access."
Devices
Officially and unofficially supported devices
The structural similarities between Firefox OS and Android allow the Mozilla platform to run on a number of devices that ship with Android. While some ports of Firefox OS are hardly different from their original versions, others are heavily modified to fit the device in question. A number of devices were made specifically for Firefox OS, some aimed at developers and others sold as consumer phones, and there are also desktop emulators for testing both apps and the OS itself.
Firefox OS specific devices for developers:
Geeksphone Keon
Geeksphone Peak
T2Mobile Flame
ZTE Open
ZTE Open C
Firefox OS specific devices for consumers:
Alcatel Onetouch FireC 4020D
APC Paper
Cherry Mobile Ace
Intex Cloud FX
KDDI Fx0
Spice Fire One MI FX1
Spice Fire One Mi-FX 2
Symphony GoFox F15
Zen 105 Firefox
ZTE Open II
Firefox OS has been ported to the following devices:
HTC Explorer
Huawei Ascend G510
Huawei Ascend Y300
Ingenic JZ4780 based devices (2015)
Moto G
Nexus 4
Nexus 5
Nexus 7 (2013)
Samsung Galaxy S6 Edge+
Sony Xperia E
Sony Xperia SP
Sony Xperia T2 Ultra
Sony Xperia Z3
See also
H5OS
KaiOS
OpenFlint – open streaming technology for Firefox OS using the Matchstick dongle
Stagefright (bug) – security bug fixed in Firefox OS 2.2, but mostly known to affect Android
MeeGo
Sailfish OS
WebOS
Comparison of mobile operating systems
Comparison of Firefox OS devices
References
External links
2013 software
ARM operating systems
Discontinued operating systems
Embedded Linux distributions
Free mobile software
Gecko-based software
Mobile Linux
Mozilla
Smartphones
Software that uses XUL
X86 operating systems
Linux distributions |
43513477 | https://en.wikipedia.org/wiki/Kdump%20%28Linux%29 | Kdump (Linux) | kdump is a feature of the Linux kernel that creates crash dumps in the event of a kernel crash. When triggered, kdump exports a memory image (also known as vmcore) that can be analyzed for the purposes of debugging and determining the cause of a crash. The dumped image of main memory, exported as an Executable and Linkable Format (ELF) object, can be accessed either directly through /proc/vmcore during the handling of a kernel crash, or it can be automatically saved to a locally accessible file system, to a raw device, or to a remote system accessible over network.
Internals
In the event of a kernel crash, kdump preserves system consistency by booting another Linux kernel, which is known as the dump-capture kernel, and using it to export and save a memory dump. As a result, the system boots into a clean and reliable environment instead of relying on an already crashed kernel that may cause various issues, such as causing file system corruption while writing a memory dump file. To implement this "dual kernel" layout, kdump uses kexec for "warm" booting into the dump-capture kernel immediately after the kernel crash, using kexec's ability to boot "over" the currently running kernel while avoiding the execution of a bootloader and hardware initialization performed by the system firmware (BIOS or UEFI). A dump-capture kernel can be either a separate Linux kernel image built specifically for that purpose, or the primary kernel image can be reused on architectures that support relocatable kernels.
The contents of main memory (RAM) are preserved while booting into and running the dump-capture kernel by reserving a small amount of RAM in advance, into which the dump-capture kernel is preloaded so none of the RAM used by the primary kernel is overwritten when a kernel crash is handled. This reserved amount of RAM is used solely by the dump-capture kernel and is otherwise unused during normal system operation. Some architectures, including x86 and ppc64, require a small fixed-position portion of RAM to boot a kernel regardless of where it is loaded; in this case, kexec creates a copy of that portion of RAM so it is also accessible to the dump-capture kernel. Size and optional position of the reserved portion of RAM are specified through the crashkernel kernel boot parameter, and the kexec command-line utility is used after the primary kernel boots to preload a dump-capture kernel image and its associated initrd image into the reserved portion of RAM.
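As a rough sketch of those two steps on a typical system (the image paths, reservation size and appended options below are placeholders that vary by distribution), the setup looks approximately like this:

# Step 1: reserve RAM for the dump-capture kernel by adding a crashkernel=
# option to the primary kernel's command line in the bootloader, e.g.:
#   crashkernel=256M

# Step 2: after the primary kernel has booted, preload the dump-capture
# kernel and its initrd into the reserved region; the -p (--load-panic)
# flag tells kexec that this kernel should run only when the system panics.
kexec -p /boot/vmlinuz-kdump \
      --initrd=/boot/initramfs-kdump.img \
      --append="root=/dev/sda1 irqpoll maxcpus=1 reset_devices"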
In addition to the functionality that is part of the Linux kernel, additional userspace utilities support the kdump mechanism, including the kexec utility mentioned above. Besides the official utilities, which are provided as a patch to kexec's suite of userspace utilities, some Linux distributions provide additional tools that simplify the configuration of kdump's operation, including the setup of automated saving of memory dump files. Created memory dump files can be analyzed using the GNU Debugger (gdb), or by using Red Hat's dedicated crash utility.
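As an illustrative example of the analysis step (the debug-symbol vmlinux path and the vmcore location are placeholders that differ between distributions), a saved dump can be opened with the crash utility like this:

# Open the saved dump; crash needs the kernel image with debug symbols
# plus the vmcore file written from the dump-capture environment.
crash /usr/lib/debug/lib/modules/$(uname -r)/vmlinux \
      /var/crash/example-host-2024-01-01/vmcore

# Inside the crash prompt, commonly used commands include:
#   bt     backtrace of the task that triggered the panic
#   log    kernel message buffer captured at crash time
#   ps     processes that existed when the kernel crashed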
History
kdump functionality, together with kexec, was merged into the Linux kernel mainline in kernel version 2.6.13, which was released on August 29, 2005.
See also
debugfs – a Linux kernel RAM-based file system specifically designed for debugging purposes
kdump (BSD) – a BSD utility for viewing trace files generated by the ktrace utility
Linux kernel oops – a potentially non-fatal deviation from correct behavior of the Linux kernel
ProcDump – a utility for creating core dumps of applications based on performance triggers
References
External links
Kdump, a Kexec-based Kernel Crash Dumping Mechanism, IBM, 2005, by Vivek Goyal, Eric W. Biederman, and Hariprasad Nellitheertha
Using Kdump for examining Linux kernel crashes, June 21, 2017, by Pratyush Anand
Kdump: Usage and internals, Red Hat, June 2017, by Pratyush Anand and Dave Young
Free software programmed in C
Linux kernel features
Unix programming tools |
47485471 | https://en.wikipedia.org/wiki/Giorgio%20Ausiello | Giorgio Ausiello | Giorgio Ausiello is an Italian computer scientist. Born in 1941, he graduated in Physics in 1966 under the supervision of Corrado Böhm. From 1966 to 1980 he served as a researcher at the Italian National Research Council (CNR). In 1980 he became Professor of Compilers and Operating Systems at Sapienza University of Rome, and since 1990 he has been Professor of Theoretical Computer Science in the Department of Computer, Control and Management Engineering, where until recently he led the research group on Algorithm Engineering. At the academic level, Giorgio Ausiello has served as Chairman of the degree in Computer Engineering, Director of the Graduate School, then member of the Academic Senate and finally Chairman of the Research Committee of Sapienza University. In 2012 he was named Professor Emeritus of Sapienza University of Rome.
Throughout his research career Ausiello has addressed various research domains, ranging from the theory of programming to algorithms and complexity. His major scientific contributions concern database theory, the approximability of NP-hard optimization problems, dynamic and online algorithms, graph algorithms, and directed hypergraph algorithms. Most of this research has been carried out in cooperation with some of the main European academic groups in the context of EU research projects.
Ausiello has contributed to several initiatives for the development of theoretical computer science in Italy and in Europe. In 1972 he was among the founders of the European Association for Theoretical Computer Science (EATCS), of which he was President from 2006 to 2009. In 2014 he was named a Fellow of the EATCS. In 1997, with Jozef Gruska, he took part in the creation of the IFIP Technical Committee for 'Foundations of Computer Science' (IFIP-TC1) and was the first Chairman of TC1.
From 2001 to 2015 Ausiello was Editor-in-Chief of the journal Theoretical Computer Science Series A (Algorithms, Automata, Complexity and Games). He is also co-Editor-in-Chief of the Springer series "Advanced Research in Computing and in Software Science" (ARCoSS, a subline of LNCS), a member of the Advisory Board of the "Monograph series of EATCS", a member of the Editorial Board of the International Journal of Foundations of Computer Science, and a member of the Editorial Board of Computer Science Review. He was elected a member of Academia Europaea in 1996. In 2004 he became Doctor Honoris Causa of Paris-Dauphine University.
At the international level he has served as the Italian national representative on the Board of EU IST research programs (1988-1994 and 2006-2009) and as a member of the Board of Trustees of the International Computer Science Institute, Berkeley, USA (1997-2001). In Italy he has consulted for some of the main research institutions in the field. From 1979 to 1994 he was involved in the major national research efforts in informatics as a member of the scientific boards of the CNR projects "Informatics", "Robotics" and "Information Systems and Parallel Computing".
Books
G. Ausiello 'Complessità di calcolo delle funzioni', Boringhieri, 1974.
G. Ausiello, A. Marchetti-Spaccamela, M. Protasi 'Teoria e progetto di algoritmi fondamentali', Franco Angeli, 1985.
G. Ausiello, P. Crescenzi, G. Gambosi, V. Kann, A. Marchetti-Spaccamela, M. Protasi 'Complexity and Approximation. Combinatorial Optimization Problems and their Approximability Properties', Springer, 1999.
G. Ausiello, R. Petreschi 'The Power of Algorithms' Springer, 2013.
G. Ausiello, F. d'Amore, G. Gambosi, L. Laura 'Linguaggi, Modelli, Complessità', Franco Angeli, 2014.
References
External links
1941 births
Living people
Italian computer scientists
Sapienza University of Rome faculty
People from Dogliani |
12791001 | https://en.wikipedia.org/wiki/ISO/IEC%2027003 | ISO/IEC 27003 | ISO/IEC 27003 Information technology — Security techniques — Information security management systems — Guidance. It is part of the ISO/IEC family of standards for information security management systems (ISMS), a systematic approach to securing sensitive information. It provides guidance for a robust approach to managing information security (infosec) and building resilience. It was published on February 1, 2010, and revised in April 2017. It is currently not certifiable and has not been translated into Spanish.
This standard belongs to the ISO/IEC 27000 series (more information can be found in ISO/IEC 27000). The ISO/IEC 27003 standard provides guidance for all the requirements of ISO/IEC 27001, but it does not give detailed descriptions of “monitoring, measurement, analysis and evaluation” or of information security risk management. It also provides recommendations, possibilities and permissions in relation to those requirements. It is not the intention of this standard to provide general guidance on all aspects of information security.
What is the standard about?
This standard is about:
This document provides explanation and guidance on ISO/IEC 27001:2013.
This standard is applicable to all types of organizations regardless of size.
Terms and structure
The terms and definitions given in this standard are defined within the standard ISO/IEC 27000. The ISO/IEC 27003 standard is structured as follows:
Leadership
Planning
Support
Operation
Performance evaluation
Improvement
In addition, it has one annex (A):
Annex A - (informative) Policy framework
References
External links
ISO Website
Information assurance standards
27003 |
476390 | https://en.wikipedia.org/wiki/Hamakor | Hamakor | Hamakor (), officially known as Hamakor – Israeli Society for Free and Open Source Software (), is an Israeli non-profit organization dedicated to the advancement of free and open source software in Israel.
Hamakor was founded in January 2003. Its primary purpose is to give an official face to the decentralized open source community wherever such a face is needed.
Background and formation
Several members of the open-source community, most notably Gilad Ben-Yossef and Doron Ofek, pioneered the idea of forming an official, legally recognized organization. They wanted to counter difficulties with bodies outside the open-source world and to address the inherently decentralized way open source is developed and advocated. Two main bodies were expected to deal with such an organization: the media, which is accustomed to having someone to call for a comment, and the Knesset, where standing up in front of legislators requires answering the implicit question "who am I and why should you listen to me?". It was felt that, as individuals, the community's ability to make a difference suffered in comparison with an organized body.
The idea matured for some time. The final push to form the actual organization came from a desire by Gilad Ben-Yossef, circa July 2002, to see Revolution OS in Israel. It turned out that the only way to see the movie was to rent a cinema hall and to pay the distributor. Gilad decided to open the event to the public and to charge a small fee for entrance to cover costs. The event, labeled "August Penguin", took place on the first Friday of August, 2002. The event proved a financial success, in that an extra 2000 NIS were recorded after expenses. Gilad pledged to use that money to start the official organization. This also affected Hamakor's charter, adding to it the capacity to act as a recognized money-handler on behalf of the community for organizing events or any other monetary activity.
Meaning of the name
The name Hamakor was suggested by Ira Abramov, after a series of Linux-related names had been proposed, as a name that is not specifically tied to Linux (as opposed to free software at large) on the one hand, yet hides a Linux-specific reference on the other. The word "Makor" in Hebrew means "source" (as in open source), but also "beak", a hidden reference to Tux being a bird.
The original name as submitted to the registrar of non-profits was "Hamakor – The Israeli Society for Free and Open Source Software". The registrar struck down "the", claiming Hamakor has no right to claim exclusivity.
Two levels of membership
Most of the points in Hamakor's charter were either fairly straightforwardly based on the reasons for forming it or a result of legal requirements. One point that was heavily debated, however, was whether anyone could become a full voting member. Hamakor was intended to be a representative of the community, and the concern raised was that commercial companies would try to take control of the organization. As a result, two restrictions were put in place. The first was that membership was limited to individuals only. The second was the creation of a two-level membership: Hamakor has members, who have full voting capacity, and friends. A friend is an individual who enjoys all the benefits (such as there are) a member has except for one – a friend cannot vote on charter changes or in elections for the board.
To move from a friend status to a full member status one has to show some activity promoting free/open source software. The definition of "some" has changed over time. During some periods the board required actually writing non-trivial amounts of code, while in others the board deemed helping a friend install Linux on his computer enough.
In most cases, almost anyone who wished to become a full member could do so.
August Penguin
Having a gathering on the first Friday of August has become a tradition, and August Penguin has turned into Hamakor's yearly conference. Due to the way the organization was formed, there has been one more August Penguin conference than years of Hamakor's existence – the unusual situation of an organization's yearly conference pre-dating the organization's founding.
In 2005, an attempt was made to aim the conference more at a professional audience. As part of this, it was moved to Thursday (Friday is part of the weekend in Israel). In 2006, the conference was moved back to Friday, but due to scheduling problems it took place on the second Friday of August. The conference is no longer about watching a movie, and features lectures from various sources. The conference also accepts sponsors, with the Israeli Internet Association being the most consistent one over the years.
Meeting the objectives
It can be claimed that Hamakor has met the original objectives behind its forming. Journalists contact either Hamakor directly or known people within the open source community for comments about events. Hamakor was active in blocking several DMCA-like law proposals in Israel, and managed to affect (alongside the Israeli chapter of the Internet Society) the renewed Copyright law in Israel, both in blocking anti-circumvention text from entering it and in keeping pro-customer text in it; it even managed to extend the law's explicit reverse-engineering allowance to cover security audits as a legitimate reason to reverse engineer. Alongside August Penguin, it can be claimed that all of the original goals set for the society were met.
Since Hamakor operates with a balanced budget, and since the above activity is reactive in nature, some feel that Hamakor should take it upon itself to do more. To date, Hamakor has not managed to overcome the gap between the centralized way a legal entity must act and the decentralized way a community acts. Hamakor's actions are based solely on volunteers and, as such, are time-constrained. Hamakor was involved in several initiatives that were attempted but either never managed to make a real difference or are taking years to show actual results. One such initiative was forming Linux-based classrooms in schools – several schools came along and Hamakor managed to get equipment donated (or, in some cases, bought it), but a lack of local volunteers made the project grind to a halt almost completely.
Another aspect of this contradiction is the fact that while a lot of free software activity takes place in Israel, it is not directly Hamakor's doing. This is not a problem if Hamakor is limited to its original objectives, but when these objectives are expanded, resource limitation becomes a visible problem.
Activities
Hamakor's primary activities include organization of the yearly August Penguin convention and support for the Israeli "Welcome to Linux" tradition of a yearly series of presentations and Linux Installfests that introduce Linux to beginners. The latter was started by the Haifa Linux Club but is now also conducted by the Tel Aviv Linux Club and the Jerusalem Linux Club.
Hamakor prize
Hamakor awards a prize annually, recognizing an Israeli who has made a significant and important contribution to Free Software. The awardee is determined in an Academy Award-like process, with all Hamakor members and friends eligible to vote for the most deserving person, who is typically announced at the annual August Penguin event. In 2010, the prize was given in several categories: personal contribution, team/group or institute contribution, and contribution to the education system.
Elected board members
Hamakor's board consists of three elected members. The following people have served on Hamakor's board: (the year listed is the year of election)
Gilad Ben-Yossef (2003, 2005)
Shachar Shemesh (2003, 2006, 2007)
Doron Ofek (2003, 2009)
Muli Ben-Yehuda (2003)
Nadav Har'El (2004)
Alon Altman (2004, 2005)
Ori Idan (2004)
Orna Agmon (2005)
Dotan Mazor (2006)
Nadav Vinik (2006, 2007, 2008)
Ram-On Agmon (2007, 2008, 2009, 2011)
Eyal Levin (2008, 2009, 2011)
Lior Kaplan (2009, 2011)
Adir Abraham (2011)
References
External links
Hamakor home page – Hebrew
Hamakor home page – English
Free and open-source software organizations
Non-profit organizations based in Israel |
6509398 | https://en.wikipedia.org/wiki/Paul%20Dourish | Paul Dourish | Paul Dourish (born 1966) is a computer scientist best known for his work and research at the intersection of computer science and social science. Born in Scotland, he holds the Steckler Endowed Chair of Information and Computer Science at the University of California, Irvine, where he joined the faculty in 2000, and where he directs the Steckler Center for Responsible, Ethical, and Accessible Technology. He is a Fellow of the AAAS, the ACM, and the British Computer Society, and is a two-time winner of the ACM CSCW "Lasting Impact" award, in 2016 and 2021. Dourish has published three books and over 100 scientific articles, and holds 19 US patents.
Life
Born and raised in Glasgow, Scotland, Dourish studied at St Aloysius' College. He then received a B.Sc. in Artificial Intelligence and Computer Science from the University of Edinburgh in 1989. He moved to work at Rank Xerox EuroPARC (later the Xerox Research Center Europe) in Cambridge, UK, during which time he completed a Ph.D. in Computer Science at University College London (UCL).
After completing his Ph.D., he moved to California to work for Apple Computer in Cupertino. He worked in Apple's research laboratories until they closed 10 months later, and then at Xerox's Palo Alto Research Center.
In 2000, Dourish moved to Southern California, when he joined the faculty at the University of California, Irvine. Since then, he has remained a full professor of Informatics. He has held visiting positions at Intel, Microsoft, Stanford University, MIT, the IT University of Copenhagen, and the University of Melbourne.
Work
His published work is primarily in the areas of Human-Computer Interaction, Computer supported cooperative work, and Ubiquitous computing. He is the author of over 100 scientific publications, and holds 19 US patents. He is amongst the most prolific and widely cited scholars in Human-Computer Interaction; Microsoft's academic search system lists him as the fourth most influential author in the area while Google Scholar calculates his h-index at over 50.
His research tends to draw on both technical and social domains, and to speak to the relationship between them. His research topics have included the role of informal awareness in supporting coordination in collaborative systems, the relationship between 'place' and 'space' in information systems, and methodological questions about the use of ethnographic techniques in information systems design.
At UC Irvine, he is a teaching professor of Informatics in the Donald Bren School of Information and Computer Sciences department, where he is a member of the Laboratory for Ubiquitous Computing and Interaction (LUCI), and in the interdisciplinary graduate program in Arts Computation Engineering. In addition to his appointment in Informatics, he has courtesy appointments in Computer Science and Anthropology. From 2004–2006, he was Associate Director at the California Institute for Telecommunications and Information Technology.
He co-directs the Center for Social Computing, one of Intel Corporation's US science and technology centers. Based at UC Irvine, this center involves academic partners from NYU, Cornell, Georgia Tech, and Indiana University.
At UC Irvine, Dourish is also a member of:
The divisional council of the California Institute for Telecommunications and Information Technology
The Center for Cyber-Security and Privacy
The Institute for Software Research
The Center for Organizational Research
The Center for Computer Games and Virtual Worlds
The Center for Unconventional Security Affairs
The Center for Biomedical Informatics
The advisory board of the Center for Ethnography and the Institute for Money, Technology, and Financial Inclusion
The executive board of the UC-wide Pacific Rim Research Program
Along with being a member of the aforementioned organizations, Dourish is a "co-conspirator" in the Laboratory for Ubiquitous Computing and Interaction, a faculty associate of the Center for Research on Information Technology and Organizations, and a co-coordinator of the People and Practices PAPR@UCI initiative.
Awards
In 2008, he was elected to the CHI Academy in recognition of his contributions to Human-Computer Interaction. Dourish won the Diana Forsythe Prize, awarded under the auspices of the American Medical Informatics Association, in 2002, and an IBM Faculty Award in 2006. He was also awarded the National Science Foundation CAREER Award in 2002. Dourish has received a $201,000 grant to conduct research on people's online participation in social movements, a $400,000 grant to research how the creative design process works when a team is split across different cultures, and a $247,000 grant to research how social media ties into death in real life.
In 2015 he was named a fellow of the Association for Computing Machinery "for contributions in social computing and human-computer interaction."
Research
Dourish mainly performs research in three specific areas of human-computer interaction (HCI). This includes work under ubiquitous computing (ubicomp), computer-supported cooperative work (CSCW), and Social Studies of Science and Technology. Dourish combines this technical research with sociology, anthropology, and cultural studies in an effort that he calls "embodied interaction."
One of Dourish's most recognized contributions has been bringing sociological and phenomenological understandings of human activity to the design of technological systems. For example, his work on spatiality in virtual worlds and computer mediated communication has emphasized how people—in interaction with systems and with one another—evolve new understandings of space, media, and relationships. He also drew on Schutzian phenomenology to argue that tangible computing and social computing share an underlying emphasis on people as embodied, social actors. Emphasizing people as social and embodied points to the importance of how individuals are constituted through their interactions and movements in space with other people. This model is counterposed to models of the person in Human-Computer Interaction that focus exclusively on people's cognitive capabilities.
Previous projects that Dourish has worked on include studies of privacy and spatiality. In this first study, Dourish emphasized privacy as "something that people do rather than something that people have". He was interested in how people rate information and activities based on privacy and risk. Through the studies, he sought knowledge of private practice as a social phenomenon. His second study involved the impact on shaping spatiality by information technologies. His goal was to study spatiality as a social and cultural production.
Dourish's recent work has dealt with information technology use in trans-national and trans-cultural contexts. For example, his work on postcolonial computing has tried to unpack how assumptions about technology and knowledge drawn from Western or industrialized-nation experiences shape (or misshape) technology design. In the process, he has worked with indigenous Australian people, Chinese gamers, people moving between Thailand and the US, and Indian people regarding IT design. These new settings led Dourish and his team to dismiss the presumption that "everyone is or wants to be just like us". The new experience also helped to challenge current technological practices by exposing the assumptions made in familiar settings.
Dourish is interested and intrigued by opportunities presented through design as potential means of ethnographic engagement. He combines social theory, empirical examination, and technology design with varying emphasis throughout his projects.
Publications
Dourish has published three books. He published "Where the Action Is: The Foundations of Embodied Interaction" (MIT Press) in 2001. This book explores the relationship between phenomenological sociology and interaction design, particularly with reference to physically embodied computation and ubiquitous computing. He proposes Tangible computing and Social computing as two different aspects of the same program of investigation, named embodiment.
His second book, "Divining a Digital Future: Mess and Mythology in Ubiquitous Computing," written in collaboration with Genevieve Bell, is an exploration of the social and cultural aspects of ubiquitous computing, with a particular focus on the disciplinary and methodological issues that have shaped the ubiquitous computing research agenda. It was published by MIT Press in 2011.
His third book, "The Stuff of Bits: An Essay on the Materialities of Information," explores the "material arrangements” of various digital objects—that is, how information is represented and interpreted. Through a series of case studies, featuring digital artifacts and practices such as emulation, spreadsheets, databases, and computer networks, he connects the representation of information to broader issues of human experience, touching on “questions of power, policy, and polity in the realm of the digital." The book was published by MIT Press in 2017.
In addition to the three books, he has published conference proceedings, journal papers, conference papers, book chapters, technical reports, essay & position papers, editorial activities, and patents. A full list of his publications can be found at Paul Dourish. Many of the patents that he holds involve document management.
Teaching
Paul Dourish is a professor of informatics, computer science, and anthropology at UC Irvine. Some classes Dourish teaches are Ubiquitous Computing and Interaction, Social Analysis of Computerization, and Human-Computer Interaction. His Ubiquitous Computing and Interaction class focuses on how humans obtain information and interact using computers. His Social Analysis of Computerization class focuses on how the internet, information, and technology affect our everyday lives. Finally, his Research in Computer-Human Interaction class examines the interactions between users and their devices, and is suited both to a student theoretically studying the field to write a dissertation and to a student wanting to apply these ideas to their own products.
See also
Lucy Suchman
Terry Winograd
Mark Weiser
Bonnie Nardi
Genevieve Bell
Béatrice Galinon-Mélénec
Critical technical practice
Selected bibliography
Dourish, P. 2001. Where the Action Is: The Foundations of Embodied Interaction. Cambridge: MIT Press.
Dourish, P. 2004. What We Talk About When We Talk About Context. Personal and Ubiquitous Computing, 8(1), 19–30.
Dourish, P. and Anderson, K. 2006. Collective Information Practice: Exploring Privacy and Security as Social and Cultural Phenomena. Human-Computer Interaction, 21(3), 319–342.
References
External links
Dourish's UC personal home page
LUCI The Laboratory for Ubiquitous Computing and Interaction
Dourish's personal website
Dourish's awards
Dourish's UC Irvine faculty information
1966 births
Living people
American computer scientists
Scottish computer scientists
British computer scientists
Human–computer interaction researchers
University of California, Irvine faculty
Ubiquitous computing researchers
People educated at St Aloysius' College, Glasgow
Scientists from Glasgow
Fellows of the Association for Computing Machinery
Alumni of the University of Edinburgh
Scientists at PARC (company) |
54035610 | https://en.wikipedia.org/wiki/BlackBerry%20Mobile | BlackBerry Mobile | BlackBerry Mobile is a trading name that was used by TCL Communication before 2020 to sell BlackBerry branded devices in all world markets, excluding Indonesia, India, Bangladesh, Sri Lanka and Nepal.
BlackBerry Limited, the creator of the BlackBerry brand, decided in 2016 to cease competing in the smartphone market directly and to focus on making security software. As a result, TCL Communication took charge of manufacturing, distributing, and designing BlackBerry-branded devices for the global market. The BlackBerry KEYone was the first device made under the BlackBerry Mobile brand, although it was partially designed by BlackBerry Limited.
In February 2020, it was announced that TCL Corporation would stop manufacturing the devices on August 31, 2020, coinciding with the end of its access to the BlackBerry license. The last phone it developed was the BlackBerry Key2 LE.
Device software
Devices made under BlackBerry Mobile ship running Android, along with security software provided by BlackBerry Limited. This suite of software includes DTEK, BlackBerry Messenger, and BlackBerry Hub. The software also performs a "secure boot" at start-up to ensure that the Android system has not been tampered with. Many of these features are comparable to those from BlackBerry 10, BlackBerry Ltd's former flagship operating system.
History
In the early 2000s, Research In Motion Limited, also known as RIM, became dominant in the mobile industry under the BlackBerry brand, holding a globally dominant position in the smartphone market.
The BlackBerry brand saw its strongest growth in 2007. Afterward, RIM slowly lost dominance as many consumers moved towards devices like Apple's iPhone and the Samsung Galaxy, with their all-touchscreen form factor. The physical QWERTY keyboard on a smartphone remained a signature feature of BlackBerry – the feature after which the brand was named.
RIM renamed itself BlackBerry Limited and set a new strategy focused on improving its brand. The BlackBerry Priv was launched in 2015 as its first device running Android. The device came with a full touchscreen and a QWERTY keyboard underneath.
In 2016, BlackBerry Ltd outsourced production to TCL, which manufactured the BlackBerry DTEK50 and DTEK60. Later in 2016, BlackBerry announced that it was moving away from in-house manufacturing and production to become a software security company. In December 2016, TCL was chosen as the global licensee of the BlackBerry brand.
At CES 2017, TCL showed off the rumored BlackBerry 'Mercury', although it did not state any specifications of the device. Ahead of Mobile World Congress in Barcelona, Spain, TCL officially announced the device under its official name, the BlackBerry KeyOne. This device was designed by BlackBerry Ltd. rather than TCL, and is sold under the "BlackBerry Mobile" brand.
In 2020, BlackBerry signed a new licensing agreement for smartphones with the US-based startup company, OnwardMobility. The company never released a device before shutting down in 2022.
Devices
BlackBerry KeyOne (2017)
BlackBerry Motion (2017)
BlackBerry Key2 (2018)
BlackBerry Key2 LE (2018)
BlackBerry Evolve (2018)
BlackBerry 5G (202X)
See also
BlackBerry DTEK50
BlackBerry DTEK60
References
BlackBerry |
21918942 | https://en.wikipedia.org/wiki/Trade%20Control%20and%20Expert%20System | Trade Control and Expert System | The Trade Control and Expert System (TRACES) is a web-based veterinary certification tool used by the European Union for controlling the import and export of live animals and animal products within and across its borders. Its network falls under the responsibility of the European Commission. TRACES constitutes a key element of how the European Union facilitates trade and improves health protection for the consumer, as laid down in the First Pillar principle. Other countries use computer networks to provide veterinary certification, but TRACES is the only supranational network working at a continental scale of 28 countries and almost 500 million people.
Background
Since the end of the nineteenth century, following the development of modern veterinary medicine and food safety, European states have built, in parallel with customs structures, veterinary inspection stations located at the borders, known as Border Inspection Posts. There, all goods of animal origin, including live animals, are checked in order to avoid outbreaks of zoonoses and epizootics.
At the establishment of the European Economic Community (EEC) in 1958, each country used its own national legislation to set standards for the health of internationally traded animals and their products.
In the 1990s, in accordance with its first pillar, the European Union began studying how to provide a European-scale computer network dedicated to food safety and animal health, with the aim of strengthening the single European market and protecting consumers.
The TRACES network started up in April 2004 as a replacement for the older ANIMO and SHIFT networks.
Features
The first mention of TRACES was in the decision of the Commission 2003/623/CE of 19 August 2003.
It is based on an internet network linking the veterinary authorities of member states and participating non-EU countries. Through it, central and local authorities, border inspection posts and economic operators are connected.
It provides the electronic sanitary certificates mandatory for tracking goods and live animals: the Common Veterinary Entry Document (CVED), as defined in Commission decision 2003/279/CE of 15 April 2003 for products (CVED P) and in Commission regulation 2004/282/CE of 18 February 2004 for live animals (CVED A).
TRACES sends an electronic message from the departure point to the transfer point and the arrival point to notify them that a consignment is arriving. Similarly, every point concerned sends a message to the other points, which enables a well-developed follow-up of the movement of the consignment (goods or animals).
It provides the ad-hoc European Union legislation, manages the non-EU country establishment list which is the agreed-upon list for importing into the EU, and keeps on file the rejected consignments and the reason for rejection.
Economic operators are able to start the process electronically by filling in the first part of the mandatory certificates for importing goods and animals into the EU.
Its next step will be electronic certification without any paperwork. At the moment, the legal basis for the exchange of goods or live animals between non-EU countries and the EU is a paper certificate, even though decision 2004/292/CE has made the use of TRACES mandatory for member states and economic operators since 31 December 2004.
TRACES uses all the official languages of the EU, plus Russian. The Directorate-General for Health and Food Safety, Directorate G, unit G5, sector TRACES, is in charge of the workload.
History
Before TRACES, the EU tried twice to create a computer-based network dedicated to food safety and animal health for exchange of goods and live animals via the ANIMO and SHIFT networks.
ANIMO
ANIMO (Animal Movement System), a computer-based tracking system for animal movements, was set up in 1990. Its creation was triggered by directive 90/425/CEE.
On 15 July 1991, the directive of the Council 91/496/CEE defined the veterinary checks to be carried out on imported goods from non-EU countries.
Commission decision 91/398/CEE of 19 July 1991 concerned a computer-based network linking veterinary authorities (ANIMO).
The Commission launched an invitation to tender in December 1991. On 3 December 1991, the decision of the Commission 91/638/CEE concerned itself with the designation of the host centre.
On 2 July 1992, Commission 92/373/CEE stated that ANIMO was to be hosted in Belgium.
On 25 September of that year, Commission 92/486/CEE stated that the common host would work with member states.
On 21 December 1992, Commission decision 93/70/CEE specified the messages ANIMO would send, using its own coding system, which differs from the ISO code used by the World Customs Organisation and now in use in TRACES.
Finally on 4 June 2002, Commission 2002/459/CE defined the list of ANIMO units and repealed Decision 287/2000/CE.
ANIMO was used by the member states, Switzerland, Norway, Iceland, Andorra, San Marino, Slovenia, Malta and Cyprus. ANIMO was only able to send messages and lacked interactivity with veterinary authorities. It was able to trace the origin of animals and goods in the event of problems and to warn veterinary authorities if the relevant data had been added to the system, but this was not done systematically. The system lacked a database on European legislation about importation from non-EU countries; this resulted in a loss of time at border inspection posts, as one had to wait for the proper legislation to be found. ANIMO was also devoted only to live animals. It did not track data concerning rejected animals or goods, so a rejected consignment could attempt entry at another border post. Additionally, ANIMO did not track the movement of animals or goods within the EU. Therefore, the Commission developed another tool, the SHIFT network.
SHIFT
SHIFT, the system to assist with the health controls of import of items of veterinary concern at frontier inspection posts from third countries, was first developed after the publication of decision 88/192/CEE of the Council on 28 March 1988.
Article 1 stated, "The Commission shall be responsible for drawing up a programme for the development of computerization of veterinary importation procedures (Shift project)."
Decision 92/438/CEE of the Council specified the computerisation of veterinary import procedures (SHIFT project), amended Directives 90/675/EEC, 91/496/EEC, 91/628/EEC and Decision 90/424/EEC, and repealed Decision 88/192/EEC. This decision again gave the Commission the responsibility of organising a computer network.
SHIFT was designed to electronically manage the sanitary aspects of animal and animal products coming from non-EU countries. It was divided into three parts:
The Community Import Requirement Database (CIRD) dispatched to veterinary officials at border inspection posts the legislation necessary for imports. It was also supposed to check the validity of consignment data. The impossibility of updating this database in real time was the main reason for its failure.
The Rejected Consignments System (RCS), a database which tracked information regarding rejected animals and animal products to prevent them from attempting to enter the EU through the border at a different location. This worked as a prototype in Greece and Belgium.
The List Management System (LMS) recorded details of establishments in non-EU countries which had been approved to import into the EU by the veterinary authority of their country.
The failure of ANIMO and SHIFT
SHIFT never operated except partially in Belgium and Greece. Both ANIMO and SHIFT failed to provide a useful tool to strengthen food safety and secure animal health in Europe. Member states were probably not ready to delegate responsibility in such sensitive matters as food safety and public health. ANIMO and SHIFT predated common use of the internet, and staff at border inspection posts may not have been aware of or concerned about being part of a national and European safety network. The Commission itself was only mildly enthusiastic.
The establishment of TRACES
After ANIMO and SHIFT proved to be particularly ineffective during the outbreak of swine fever at the end of the 1990s, the European Parliament in Resolution A5-0396/2000 of 13 December 2000:
Again, in 2002, following the 2001 foot-and-mouth disease outbreak in the UK, European Parliament stated:
The decision of the Commission 24/2003/EC of 30 December 2002 foresaw the elaboration of the new computer system and the 19 August 2003 decision 2003/623/CE announced the development of an integrated computerized veterinary system known as TRACES:
Parameters of the new network included:
An Internet-based architecture between member states' veterinary structures (especially border inspection posts), members states' central veterinary authorities, the European Commission and the central authorities and local inspection posts of non-EU countries;
An access to EU legislation;
Tracking of rejected goods or live animals;
Management of non-EU countries lists of approved establishments to import into the EU.
TRACES was developed with in-house competencies, not with an external host centre. It is under the responsibility of the Directorate-General for Health and Consumer Protection, or DG-SANCO.
All European member states were required to use TRACES as of 1 January 2005.
Working flow
TRACES functions in all of the official languages of the EU except Irish, as well as in Chinese, Croatian, Icelandic, Norwegian, Russian and Turkish.
Providing certificates
TRACES provides electronic veterinary and sanitary certificates, which can also be printed and which are mandatory for accompanying consignments during import and movement within the EU. These certificates follow both live animals and animal products as they travel to and through the EU.
Certificates of intra-community trade, Regulation 599/2004 of the Commission;
Certificates for importing into the EU from a non-EU country, Decision 2007/240/CE of the Commission;
Common Veterinary Entry Document for animals, Regulation 282/2004/CE of the Commission and for products, Regulation 136/2004/CE of the Commission.
Notification
Notification is simply an extemporaneous exchange of information as defined in Council Directive 90/425/CEE, laid down in Articles 4, 8, 10 and 20.
At each step of transport – at the border inspection post, for example – TRACES sends an electronic message to whoever is concerned by the movement. If a major public health or animal health problem is identified during an inspection, this notification is twinned with a notification in the RASFF alert network.
Management of non-EU countries' establishments lists
Non-EU establishments must be approved by the veterinary authorities of their country before being listed by the Commission, a procedure that allows them to import into the EU (Regulation 854/2004 of the European Parliament). When filling in the certificate, the economic operator only has to call up his own establishment in the list and tick the box.
Serious threats
TRACES provides the EU legislation covering the required fields for each certificate, and imposes the applicable physical checks and reinforced checks. In case of a serious threat or disease outbreak, the Commission can activate the necessary safeguard measures via TRACES, under Commission decision 94/360/CE of 20 May 1994, which deals with reinforced checks and safeguard measures.
Traceability
Traceability is the core element of TRACES. It tracks every importation into, or movement within, the EU of animals or animal products. In case of any problem, movements can be traced instantaneously. Data about rejected consignments, and the reasons for rejection, are also kept.
Notes
External links
Animal disease control
European Commission projects
Information technology organizations based in Europe |
35101726 | https://en.wikipedia.org/wiki/SIMPL | SIMPL | Synchronous Interprocess Messaging Project for LINUX (SIMPL) is a free and open-source project that provides QNX-style synchronous message passing on Linux by adding a library that uses user-space techniques such as shared memory and Unix pipes to implement SendMssg/ReceiveMssg/ReplyMssg inter-process messaging mechanisms.
Mechanism
A client thread sending a message is BLOCKED (its execution is temporarily suspended) until the server thread receives the message, processes it, and executes a reply. When the server thread replies, the client thread becomes READY (unblocked). The server thread typically loops, waiting to receive a message from a client thread.
Blocking synchronizes the client thread's execution: blocking the client implicitly schedules the server thread for execution, without requiring the explicit process-control work by the kernel to determine which thread to run next that other forms of IPC need.
The send and receive operations are blocking and synchronous; the reply does not block, since the client thread is already blocked waiting for the reply and no additional synchronization is required. The server thread replies to the client and continues running while the kernel and/or networking code asynchronously passes the reply data to the client thread and marks it READY for execution.
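To make the pattern concrete, the following minimal C sketch emulates the Send/Receive/Reply flow with two ordinary Unix pipes. It is an illustration of the blocking behaviour described above, not SIMPL code: the actual SendMssg/ReceiveMssg/ReplyMssg calls and their signatures are not shown, and error handling is kept to a minimum.

/* Client/server Send-Receive-Reply sketch using plain POSIX pipes, not the SIMPL library. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int to_server[2], to_client[2];
    if (pipe(to_server) == -1 || pipe(to_client) == -1)
        return 1;

    if (fork() == 0) {                               /* client process */
        char reply[64];
        const char *msg = "ping";
        write(to_server[1], msg, strlen(msg) + 1);   /* "Send" the request */
        read(to_client[0], reply, sizeof(reply));    /* block until the "Reply" arrives */
        printf("client got reply: %s\n", reply);
        _exit(0);
    }

    /* server process: a real server would loop here */
    char request[64];
    read(to_server[0], request, sizeof(request));    /* "Receive" blocks until a client sends */
    printf("server received: %s\n", request);
    const char *reply = "pong";
    write(to_client[1], reply, strlen(reply) + 1);   /* "Reply"; the server keeps running */
    wait(NULL);
    return 0;
}

As in the description above, the client is blocked from the moment it sends until the server has received, processed, and replied, so no extra synchronization between the two is needed.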
Advantages of Synchronized Message Passing
Synchronized Message Passing has the following advantages:
Simple coding model simplifies the task of partitioning a complex system and aids in testing
Inherent thread synchronization coordinates the execution of communicating programs
No data buffering is required
Simplification of network interactions - threads can be in different programs on different machines
Limitations
SIMPL does not appear to be thread safe.
Similar projects
One other QNX-inspired synchronous message passing project is available for Linux: SRR IPC (for Send/Receive/Reply) by Sam Roberts and Andrew Thomas of Cogent Real-Time Systems, Inc., which is related to the SIMPL project and adds a QNX-compatible API layer. SRR is a loadable kernel module designed to be QNX API compatible in order to facilitate porting of code.
See also
Distributed computing
Inter-process communication
References
External links
SIMPL
SourceForge, SIMPL-Synchronous Interprocess Messaging
Amazon- Programming the SIMPL Way
SRR Module The srripc Linux Kernel Module, Version 1.4.43 January 13, 2010
SRR -- QNX API compatible message passing for Linux
Cogent DataHub software download page (including SRR Kernel Module)
Distributed computing architecture
Inter-process communication
Linux kernel features |
1034724 | https://en.wikipedia.org/wiki/Theodore%20Ts%27o | Theodore Ts'o | Theodore (Ted) Yue Tak Ts'o (曹子德) (born 1968) is an American software engineer mainly known for his contributions to the Linux kernel, in particular to file systems. He is the primary developer and maintainer of e2fsprogs, the userspace utilities for the ext2, ext3, and ext4 filesystems, and is a maintainer of the ext4 file system.
Biography
Ts'o graduated from MIT with a degree in computer science in 1990, after which he worked in MIT's Information Systems (IS) department until 1999. During this time he was project leader of the Kerberos V5 team.
In 1994, Ts'o created the /dev/random Linux device node and the corresponding kernel driver, which was Linux's (and Unix's) first kernel interface that provided high-quality cryptographic random numbers to user programs. /dev/random works without access to a hardware random number generator, allowing user programs to depend upon its existence. Separate daemons such as rngd take random numbers from such hardware and make them accessible via /dev/random. Since its creation, /dev/random and /dev/urandom have become standard interfaces on Unix, Linux, BSD, and macOS systems.
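As a simple illustration of how a user program consumes this interface (generic POSIX code, not part of the kernel driver itself), random bytes can be read from the device node like any other file; the 16-byte buffer size here is arbitrary:

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    unsigned char buf[16];
    FILE *f = fopen("/dev/urandom", "rb");   /* non-blocking pool; /dev/random also works */
    if (f == NULL) {
        perror("open /dev/urandom");
        return EXIT_FAILURE;
    }
    if (fread(buf, 1, sizeof(buf), f) != sizeof(buf)) {
        fprintf(stderr, "short read from /dev/urandom\n");
        fclose(f);
        return EXIT_FAILURE;
    }
    fclose(f);
    for (size_t i = 0; i < sizeof(buf); i++)
        printf("%02x", buf[i]);              /* print the random bytes as hex */
    printf("\n");
    return EXIT_SUCCESS;
}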
After MIT IS, Ts'o went to work for VA Linux Systems for two years. In late 2001 he joined IBM, where he worked on improvements in the Linux kernel's performance and scalability. After working on a real-time kernel at IBM, Ts'o joined the Linux Foundation in late 2007 for a two-year fellowship. He initially served as Chief Platform Strategist, before becoming Chief Technology Officer in 2008. Ts'o also served as Treasurer for USENIX until 2008, and has chaired the annual Linux Kernel Developers Summit.
In 2010 Ts'o moved to Google, saying he would be working on "kernel, file system, and storage stuff".
Ts'o is a Debian Developer, maintaining several packages, mostly filesystem-related ones, including e2fsprogs since March 2003. He was a member of the Security Area Directorate for the Internet Engineering Task Force, and was one of the chairs for the IPsec working group. He was one of the founding board members for the Free Standards Group.
Awards
Ts'o was awarded the Free Software Foundation's 2006 Award for the Advancement of Free Software.
References
Further reading
1968 births
American chief technology officers
American people of Chinese descent
Computer programmers
Free software programmers
Geeknet
Google employees
Linux kernel programmers
Linux people
Living people
MIT School of Engineering alumni
Open source people
People in information technology |
41740706 | https://en.wikipedia.org/wiki/Nanolinux | Nanolinux | NanoLinux is an open source, free and very lightweight Linux distribution that requires only 14 MB of disk space, including tiny versions of the most common desktop applications and several games. It is based on the Core version of the Tiny Core Linux distribution and uses Busybox, Nano-X instead of X.Org, FLTK 1.3.x as the default GUI toolkit, and SLWM (super-lightweight window manager). The included applications are mainly based on FLTK.
Applications included in the distribution
Nanolinux includes several lightweight applications, including:
Dillo graphical web browser
FlWriter word processor
Sprsht spreadsheet application
FLTDJ personal information manager
AntiPaint painting application
Fluff file manager
NXterm terminal emulator
Flcalc calculator
FlView image viewer
Fleditor text editor
FlChat IRC client
FlMusic CD player
FlRadio internet radio
Webserver, mount tool, system statistics, package install utility.
The distribution also includes several games, such as Tuxchess, Checkers, NXeyes, Mastermind, Sudoku and Blocks.
Support for TrueType fonts and UTF-8 is also provided. Nanolinux is distributed as Live CD ISO images; installation on flash disk and hard disk is documented on its Wiki pages.
System requirements
Minimal configuration:
The Live CD version without swapfile requires 64 MB of RAM and 14 MB of disk space.
Reviews
Juan Luis Bermúdez (in Spanish)
linceus (in Spanish)
serg666 (in Russian)
Pedro Pinto (in Portuguese)
Taringa (in Spanish)
See also
Comparison of Linux live distributions
Lightweight Linux distribution
List of Linux distributions that run from RAM
External links
Nanolinux official site
References
Nanolinux on Softpedia
RedesZone
wordpress.com
Light-weight Linux distributions
Operating system distributions bootable from read-only media
Linux distributions without systemd
Linux distributions |