1271785
https://en.wikipedia.org/wiki/RBBS-PC
RBBS-PC
RBBS-PC (an acronym for Remote Bulletin Board System for the Personal Computer) was a public domain, open-source BBS software program. It was written entirely in BASIC by a large team of people, starting with Russell Lane and later enhanced by Tom Mack, Ken Goosens and others. It supported messaging conferences, questionnaires, doors (through the dropfile), and much more. History In 1982, Larry Jordan of the Capital PC Users Group started modifying some existing BBS software that had been ported from CP/M by Russell Lane. The first major release of this effort, RBBS-PC CPC09, in May 1983 was written in interpreted BASIC and included the Xmodem file transfer protocol added by Jordan. In June 1983, Jordan turned over maintenance and enhancements to Tom Mack and Ken Goosens. The first release under Mack, version 10.0, was released July 4, 1983. New versions and features were released steadily throughout the rest of the 1980s. The final complete version, 17.4, was released March 22, 1992. Since version 17.4, at least four other code paths have been developed, and some work has been done to unify them and to develop version 18.0. Dan Drinnon's CDOR Mods and Mapleleaf versions were further enhanced by beta testers Mike Moore and Bob Manapeli, who used Ken Goosens' LineBled program to manipulate the source code into endless variations of the program. Philosophy From the beginning of RBBS-PC's development, the authors of the software had two goals, as stated in the RBBS-PC documentation: To show what could be done with the BASIC language and that "real programmers can/do program in BASIC." To open a new medium of communication that gave anyone with a personal computer the ability to communicate freely. This idea was summarized as "Users helping users for free to help the free exchange of information." References External links RBBS-PC files The BBS Software Directory - RBBS Bulletin board system software DOS software Pre–World Wide Web online services Computer-related introductions in 1983 Public-domain software with source code
480658
https://en.wikipedia.org/wiki/List%20of%20web%20service%20specifications
List of web service specifications
There are a variety of specifications associated with web services. These specifications are in varying degrees of maturity and are maintained or supported by various standards bodies and entities. These specifications are the basic web services framework established by first-generation standards represented by WSDL, SOAP, and UDDI. Specifications may complement, overlap, and compete with each other. Web service specifications are occasionally referred to collectively as "WS-*", though there is not a single managed set of specifications that this consistently refers to, nor a recognized owning body across them all. Web service standards listings These sites contain documents and links about the different Web services standards identified on this page. IBM Developerworks: Standard and Web Service innoQ's WS-Standard Overview () MSDN .NET Developer Centre: Web Service Specification Index Page OASIS Standards and Other Approved Work Open Grid Forum Final Document XML CoverPage W3C's Web Services Activity XML specification XML (eXtensible Markup Language) XML Namespaces XML Schema XPath XQuery XML Information Set XInclude XML Pointer Messaging specification SOAP (formerly known as Simple Object Access Protocol) SOAP-over-UDP SOAP Message Transmission Optimization Mechanism WS-Notification WS-BaseNotification WS-Topics WS-BrokeredNotification WS-Addressing WS-Transfer WS-Eventing WS-Enumeration WS-MakeConnection Metadata exchange specification JSON-WSP WS-Policy WS-PolicyAssertions WS-PolicyAttachment WS-Discovery WS-Inspection WS-MetadataExchange Universal Description Discovery and Integration (UDDI) WSDL 2.0 Core WSDL 2.0 SOAP Binding Web Services Semantics (WSDL-S) WS-Resource Framework (WSRF) Security specification WS-Security XML Signature XML Encryption XML Key Management (XKMS) WS-SecureConversation WS-SecurityPolicy WS-Trust WS-Federation WS-Federation Active Requestor Profile WS-Federation Passive Requestor Profile Web Services Security Kerberos Binding Web Single Sign-On Interoperability Profile Web Single Sign-On Metadata Exchange Protocol Security Assertion Markup Language (SAML) XACML Privacy P3P Reliable messaging specifications WS-ReliableMessaging WS-Reliability WS-RM Policy Assertion Resource specifications Web Services Resource Framework WS-Resource WS-BaseFaults WS-ServiceGroup WS-ResourceProperties WS-ResourceLifetime WS-Transfer WS-Fragment Resource Representation SOAP Header Block Web services interoperability (WS-I) specification These specifications provide additional information to improve interoperability between vendor implementations. 
WS-I Basic Profile WS-I Basic Security Profile Simple Soap Binding Profile Business process specifications WS-BPEL WS-CDL Web Service Choreography Interface (WSCI) WS-Choreography XML Process Definition Language Web Services Conversation Language (WSCL) Transaction specifications WS-BusinessActivity WS-AtomicTransaction WS-Coordination WS-CAF WS-Transaction WS-Context WS-CF WS-TXM Management specifications WS-Management WS-Management Catalog WS-ResourceTransfer WSDM Presentation-oriented specification Web Services for Remote Portlets Draft specifications WS-Provisioning – Describes the APIs and schemas necessary to facilitate interoperability between provisioning systems in a consistent manner using Web services Other Devices Profile for Web Services (DPWS) ebXML Standardization ISO/IEC 19784-2:2007 Information technology -- Biometric application programming interface -- Part 2: Biometric archive function provider interface ISO 19133:2005 Geographic information -- Location-based services -- Tracking and navigation ISO/IEC 20000-1:2005 Information technology -- Service management -- Part 1: Specification ISO/IEC 20000-2:2005 Information technology -- Service management -- Part 2: Code of practice ISO/IEC 24824-2:2006 Information technology -- Generic applications of ASN.1: Fast Web Services ISO/IEC 25437:2006 Information technology -- Telecommunications and information exchange between systems -- WS-Session -- Web Services for Application Session Services See also Web service References Specifications Web service specifications
9459985
https://en.wikipedia.org/wiki/Alexander%20Reinefeld
Alexander Reinefeld
Alexander Reinefeld (born 1957) is a German computer scientist and games researcher. He is the head of computer science at the Zuse Institute Berlin. His contributions to the field include the NegaScout algorithm. Biography Alexander Reinefeld studied physics at the Technical University of Braunschweig and computer science at the University of Hamburg and, during two one-year visits to Edmonton, at the University of Alberta. In 1982 he completed his Diplom (equivalent to an MSc) in computer science, and in 1987 he received his Ph.D. at the University of Hamburg. From 1983 to 1987 he worked as a scientific employee, and from 1989 to 1992 as an assistant, at the University of Hamburg. From 1987 to 1990 he gained industry experience as a management consultant in the areas of systems analysis, databases and compiler building. In 1992, Reinefeld collaborated with the Paderborn Center for Parallel Computing (PC²) at the University of Paderborn. Since 1998, Alexander Reinefeld has led the Computer Science division at the Zuse Institute Berlin (ZIB). He is a member of the Gesellschaft für Informatik, the ACM, the IEEE Computer Society and the German university association Deutscher Hochschulverband (DHV), and he holds the Chair of Parallel and Distributed Systems at the Humboldt University of Berlin. Search algorithms In 1983 Alexander Reinefeld introduced the NegaScout search algorithm, an improvement of Judea Pearl's Scout (a brief sketch of the algorithm appears below). Ten years later, in 1993, Reinefeld attempted to resuscitate Stockman's SSS* algorithm and proposed an improvement of the recursive RecSSS*, initially developed by Subir Bhattacharya and Amitava Bagchi. Despite promising results on some trees of depth 8, the space (memory) requirements were still too high, and after research by Aske Plaat, Wim Pijls and Arie de Bruin reformulated SSS* and Dual* as MT, a sequence of null-window alpha–beta searches with a transposition table, SSS* was finally declared "dead" by Pijls and De Bruin in 1996. Chess programs In 1979 at the University of Hamburg, motivated and supported by his advisor Frieder Schwenkel, Alexander Reinefeld designed the chess program Murks, partly implemented in microcode for an Interdata M85 minicomputer. Reinefeld claimed that world chess champion Mikhail Botvinnik played against Murks during his visit. In 1980/81, a team of four students, Manfred Allers, Dirk Hauschildt, Dieter Steinwender and Alexander Reinefeld, ported Murks to a Motorola 68000 microprocessor, the port then dubbed MicroMurks. They built their own MC68000 microcomputer from scratch. MicroMurks II, represented by Dieter Steinwender, participated in the WMCCC 1983 in Budapest. External links Alexander Reinefeld's personal homepage. 1957 births Living people Computer chess people Place of birth missing (living people)
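The following is a minimal, hedged sketch of the NegaScout idea from the article above: search the first move with a full alpha–beta window, probe the remaining moves with a null window, and re-search only when a probe fails high. The Node structure and the evaluate function are hypothetical placeholders for a game-tree interface, not taken from any published implementation.

```c
#include <stdio.h>

/* Minimal sketch of NegaScout (principal variation search).  The tree
 * representation below is a made-up stand-in, not from any real engine. */
typedef struct Node {
    int value;                 /* static evaluation used at the leaves */
    int child_count;
    struct Node **children;
} Node;

static int evaluate(const Node *n) { return n->value; }

int negascout(const Node *n, int depth, int alpha, int beta) {
    if (depth == 0 || n->child_count == 0)
        return evaluate(n);                        /* leaf: value for the side to move */
    int b = beta;                                  /* full window only for the first child */
    for (int i = 0; i < n->child_count; i++) {
        int t = -negascout(n->children[i], depth - 1, -b, -alpha);
        if (t > alpha && t < beta && i > 0)        /* null-window probe failed high: */
            t = -negascout(n->children[i], depth - 1, -beta, -alpha);  /* re-search */
        if (t > alpha)
            alpha = t;
        if (alpha >= beta)
            return alpha;                          /* beta cut-off */
        b = alpha + 1;                             /* null window for the remaining children */
    }
    return alpha;
}

int main(void) {
    /* Tiny two-ply tree: a root with two leaf children. */
    Node l1 = {3, 0, NULL}, l2 = {5, 0, NULL};
    Node *kids[] = {&l1, &l2};
    Node root = {0, 2, kids};
    printf("negascout value: %d\n", negascout(&root, 2, -1000, 1000));
    return 0;
}
```

On a real game tree, move ordering determines how often the costly re-search is needed, which is why NegaScout outperforms plain alpha–beta mainly on well-ordered trees.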
38134506
https://en.wikipedia.org/wiki/TradeCard
TradeCard
TradeCard, Inc. was an American software company. Its main product, also called TradeCard, was a SaaS collaboration product that was designed to allow companies to manage their extended supply chains including tracking movement of goods and payments. TradeCard software helped to improve visibility, cash flow and margins for over 10,000 retailers and brands, factories and suppliers, and service providers (financial institutions, logistics service providers, customs brokers and agents) operating in 78 countries. On January 7, 2013, TradeCard and GT Nexus announced plans to undergo a merger of equals, creating a global supply-chain management company that would employ about 1,000 people and serve about 20,000 businesses in industries including manufacturing, retail and pharmaceuticals. The combined company rebranded itself as GT Nexus. History TradeCard was founded in 1999 by Kurt Cavano as a privately owned firm. In 2003, Warburg Pincus led three funding rounds, with TradeCard closing $10 million. In 2010, Deloitte cited TradeCard for its entrepreneurial and disruptive cloud technology enterprise resource planning solution that provides new IT architectures designed to address unmet needs of enterprises. In 2011, TradeCard's revenue grew by 36% over the previous year, and the company claimed on its website that it handled $25 billion in sourcing volume on its platform, by 10,000 organizations and 45,000 unique users. In 2012, founder and CEO Kurt Cavano transitioned to the Chairman role and Sean Feeney was appointed CEO. TradeCard was headquartered in New York City, with offices in San Francisco, Amsterdam, Hong Kong, Shenzhen, Shanghai, Taipei, Seoul, Colombo and Ho Chi Minh City. Clients TradeCard provided global supply chain and financial supply chain products to retail companies, factories and suppliers, and service providers (financial institutions, logistics service providers, customs brokers and agents). Clients include retailers and brands such as Coach, Inc. Levi Strauss & Co., Columbia Sportswear, Guess, Rite Aid, and Perry Ellis International. Awards 2012 Best Platform Connecting Buyers, Suppliers and Financial Institutions by Global Finance 2012 Supply and Demand Chain 100 2012 Pros to Know by Supply and Demand Chain 2011 Top Innovator by Apparel Magazine 2011 Great Supply Chain Partner by SupplyChainBrain References External links Supply chain software companies American companies established in 1999 Companies based in New York City ERP software companies Software distribution As a service Software industry Cloud platforms ERP software Private equity portfolio companies Business software companies Warburg Pincus companies
488572
https://en.wikipedia.org/wiki/Sony%20Ericsson%20P900
Sony Ericsson P900
The Sony Ericsson P900 is a Symbian OS v7.0-based smartphone from Sony Ericsson. It was introduced in 2003 and is the successor of the Sony Ericsson P800; like the P800, the P900 uses the UIQ platform. Like other Symbian-based smartphones of its time, but unlike the Ericsson R380, the P900 is an open phone. This means that it is possible to develop and install third-party applications without restrictions. A UIQ 2.1 SDK based upon Symbian C++ is freely available from the Sony Ericsson developer website. Additionally, the P900 supports applications written in Java. Because of this openness, many third-party applications exist that can be used on the P900 and other UIQ phones (such as the Motorola A1000 and BenQ P30). Many are shareware and freeware. As the P900 uses UIQ version 2.1, it is backwards compatible with UIQ 2.0 as found in the P800. Applications made for the P800 will normally work on a P900 as well. Like the P800 and P910i, it has an ARM9 processor clocked at 156 MHz. The P900 can be used without the flip as well. This makes the phone more like a PDA, but still usable as a traditional phone. The P900 supports Memory Stick Duo cards (but not Memory Stick Pro Duo) up to 128 MB in size, as does the P800. However, it has been confirmed that this 128 MB limit is just a software restriction. The P900 was well received and is sometimes considered one of the best Symbian OS devices to have been released. An updated version of the P900, the Sony Ericsson P910i, was released in July 2004. It features a small QWERTY keyboard and enhanced software, but was reported to be less battery-efficient. The P910i has double the P900's memory (64 MB, versus the P900's 32 MB) and supports Memory Stick Pro Duo, allowing the phone up to 4 GB of storage on a single card. The P900 was the first Sony Ericsson product for which Research In Motion's BlackBerry wireless email service would be made available. Some of the specifications of the P900 are: Triband GSM – 900 / 1800 / 1900 MHz Dimensions – 115 × 57 × 24 mm Weight – 150 g (with flip), 140 g (without flip) Internal camera: VGA (resolution up to 640 × 480 pixels) Connectivity: Bluetooth, Infrared, USB dock GPRS (WAP 2.0) Messaging: SMS, EMS, MMS, POP3, IMAP, SMTP 'P905' P905 is the unofficial term for a Sony Ericsson P900 flashed with hacked Sony Ericsson P910a firmware. The P900 is flashed using a 'Fighter Kit'. This is usually done for four reasons: The P900 has a software restriction on the size of the memory card that can be used (128 MB). The P900 software does not support HTML email. The P900 software does not provide an option to alter the screen brightness. Flashing the firmware is cheaper than purchasing a new P910. Though this may seem entirely beneficial, there can be a drawback of not being able to connect to certain Bluetooth headsets (however, this is rare). Though the P905 may be identical to the P910 in software terms, they are not identical in hardware terms. References External links Web site of Sony Ericsson Mobile Communications A short review of the P900 by Dan Gillmor P900 and Linux, a collection of tips, tricks, hacks, software and Linux connectivity. Smartphones P900 Mobile phones introduced in 2003 Symbian devices Mobile phones with infrared transmitter
31953149
https://en.wikipedia.org/wiki/Opus%20%28audio%20format%29
Opus (audio format)
Opus is a lossy audio coding format developed by the Xiph.Org Foundation and standardized by the Internet Engineering Task Force, designed to efficiently code speech and general audio in a single format, while remaining low-latency enough for real-time interactive communication and low-complexity enough for low-end embedded processors. Opus replaces both Vorbis and Speex for new applications, and several blind listening tests have ranked it higher-quality than any other standard audio format at any given bitrate until transparency is reached, including MP3, AAC, and HE-AAC. Opus combines the speech-oriented LPC-based SILK algorithm and the lower-latency MDCT-based CELT algorithm, switching between or combining them as needed for maximal efficiency. Bitrate, audio bandwidth, complexity, and algorithm can all be adjusted seamlessly in each frame. Opus has the low algorithmic delay (26.5 ms by default) necessary for use as part of a real-time communication link, networked music performances, and live lip sync; by trading off quality or bitrate, the delay can be reduced down to 5 ms. Its delay is exceptionally low compared to competing codecs, which require well over 100 ms, yet Opus performs very competitively with these formats in terms of quality per bitrate. As an open format standardized through RFC 6716, a reference implementation called libopus is available under the New BSD License. The reference has both fixed-point and floating-point optimizations for low- and high-end devices, with SIMD optimizations on platforms that support them. All known software patents that cover Opus are licensed under royalty-free terms. Opus is widely used as the voice-over-IP (VoIP) codec in applications such as Discord, WhatsApp, and the PlayStation 4. Features Opus supports constant and variable bitrate encoding from 6 kbit/s to 510 kbit/s (or up to 256 kbit/s per channel for multi-channel tracks), frame sizes from 2.5 ms to 60 ms, and five sampling rates from 8 kHz (with 4 kHz bandwidth) to 48 kHz (with 20 kHz bandwidth, the human hearing range). An Opus stream can support up to 255 audio channels, and it allows channel coupling between channels in groups of two using mid-side coding. Opus has very short latency (26.5 ms using the default 20 ms frames and default application setting), which makes it suitable for real-time applications such as telephony, Voice over IP and videoconferencing; research by Xiph led to the CELT codec, which allows the highest quality while maintaining low delay. In any Opus stream, the bitrate, bandwidth, and delay can be continually varied without introducing any distortion or discontinuity; even mixing packets from different streams will cause a smooth change, rather than the distortion common in other codecs. Unlike Vorbis, Opus does not require large codebooks for each individual file, making it more efficient for short clips of audio and more resilient. As an open standard, the algorithms are openly documented, and a reference implementation (including the source code) is published. Broadcom and the Xiph.Org Foundation own software patents on some of the CELT algorithms, and Skype Technologies/Microsoft own some on the SILK algorithms; each offers a royalty-free perpetual license for use with Opus, reserving only the right to make use of their patents to defend against infringement suits of third parties. Qualcomm, Huawei, France Telecom, and Ericsson have claimed that their patents may apply, which Xiph's legal counsel denies, and none have pursued any legal action. 
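As a concrete illustration of the encoder flexibility described in the Features paragraph above, here is a minimal sketch using the reference libopus C API; the 48 kHz stereo configuration, 64 kbit/s bitrate and 20 ms frame are arbitrary example choices, and error handling is abbreviated.

```c
#include <opus.h>
#include <stdio.h>

/* Hedged sketch: encode one 20 ms frame of silence with the libopus
 * reference encoder.  Bitrate, complexity and frame size are example
 * values, not recommendations. */
int main(void) {
    int err = 0;
    OpusEncoder *enc = opus_encoder_create(48000, 2, OPUS_APPLICATION_AUDIO, &err);
    if (err != OPUS_OK) return 1;

    opus_encoder_ctl(enc, OPUS_SET_BITRATE(64000));    /* within the 6-510 kbit/s range */
    opus_encoder_ctl(enc, OPUS_SET_VBR(1));            /* variable bitrate */
    opus_encoder_ctl(enc, OPUS_SET_COMPLEXITY(10));

    opus_int16 pcm[960 * 2] = {0};                     /* 20 ms, 48 kHz, stereo, interleaved */
    unsigned char packet[4000];
    opus_int32 len = opus_encode(enc, pcm, 960, packet, sizeof(packet));
    if (len < 0) { opus_encoder_destroy(enc); return 1; }

    printf("encoded 20 ms frame into %d bytes\n", (int)len);
    opus_encoder_destroy(enc);
    return 0;
}
```

With libopus installed, something like `cc example.c $(pkg-config --cflags --libs opus)` should build it; the same opus_encoder_ctl calls can be repeated between frames, matching the per-frame adjustability of bitrate and complexity described above.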
The Opus license automatically and retroactively terminates for any entity that attempts to file a patent suit. The Opus format is based on a combination of the full-bandwidth CELT format and the speech-oriented SILK format, both heavily modified: CELT is based on the modified discrete cosine transform (MDCT) that most music codecs use, using CELP techniques in the frequency domain for better prediction, while SILK uses linear predictive coding (LPC) and an optional Long-Term Prediction filter to model speech. In Opus, both were modified to support more frame sizes, as well as further algorithmic improvements and integration, such as using CELT's range encoder for both types. To minimize overhead at low bitrates when latency is less pressing, SILK supports packing multiple 20 ms frames together, sharing context and headers; SILK also allows Low Bit-Rate Redundancy (LBRR) frames, allowing low-quality packet loss recovery. CELT includes both spectral replication and noise generation, similar to AAC's SBR and PNS, and can further save bits by filtering out all harmonics of tonal sounds entirely, then replicating them in the decoder. Better tone detection is an ongoing project to improve quality. The format has three different modes: speech, hybrid, and CELT. When compressing speech, SILK is used for audio frequencies up to 8 kHz. If wider bandwidth is desired, a hybrid mode uses CELT to encode the frequency range above 8 kHz. The third mode is pure-CELT, designed for general audio. SILK is inherently VBR and cannot hit a bitrate target, while CELT can always be encoded to any specific number of bytes, enabling hybrid and CELT mode when CBR is required. SILK supports frame sizes of 10, 20, 40 and 60 ms. CELT supports frame sizes of 2.5, 5, 10 and 20 ms. Thus, hybrid mode only supports frame sizes of 10 and 20 ms; frames shorter than 10 ms will always use CELT mode. A typical Opus packet contains a single frame, but packets of up to 120 ms are produced by combining multiple frames per packet. Opus can transparently switch between modes, frame sizes, bandwidths, and channel counts on a per-packet basis, although specific applications may choose to limit this. The reference implementation is written in C and compiles on hardware architectures with or without a floating-point unit, although floating-point is currently required for audio bandwidth detection (dynamic switching between SILK, CELT, and hybrid encoding) and most speed optimizations. Containers Opus packets are not self-delimiting, but are designed to be used inside a container of some sort which supplies the decoder with each packet's length. Opus was originally specified for encapsulation in Ogg containers, specified as audio/ogg; codecs=opus, and for Ogg Opus files the .opus filename extension is recommended. Opus streams are also supported in Matroska, WebM, MPEG-TS, and MP4. Alternatively, each Opus packet may be wrapped in a network packet which supplies the packet length. Opus packets may be sent over an ordered datagram protocol such as RTP. An optional self-delimited packet format is defined in an appendix to the specification. This uses one or two additional bytes per packet to encode the packet length, allowing packets to be concatenated without encapsulation. Bandwidth and sampling rate Opus allows five audio bandwidths during encoding, from narrowband (4 kHz audio bandwidth at an 8 kHz sampling rate) through medium-band, wideband and super-wideband up to fullband (20 kHz audio bandwidth at a 48 kHz sampling rate). Opus compression does not depend on the input sample rate; timestamps are measured in 48 kHz units even if the full bandwidth is not used. 
Likewise, the output sample rate may be freely chosen. For example, audio can be input at 16 kHz yet be set to encode only narrowband audio. History Opus was proposed for the standardization of a new audio format at the IETF, which was eventually accepted and granted by the codec working group. It is based on two initially separate standard proposals from the Xiph.Org Foundation and Skype Technologies S.A. (now Microsoft). Its main developers are Jean-Marc Valin (Xiph.Org, Octasic, Mozilla Corporation), Koen Vos (Skype), and Timothy B. Terriberry (Xiph.Org, Mozilla Corporation). Among others, Juin-Hwey (Raymond) Chen (Broadcom), Gregory Maxwell (Xiph.Org, Wikimedia), and Christopher Montgomery (Xiph.Org) were also involved. The development of the CELT part of the format goes back to thoughts on a successor for Vorbis under the working name Ghost. As a newer speech codec from the Xiph.Org Foundation, Opus replaces Xiph's older speech codec Speex, an earlier project of Jean-Marc Valin. CELT has been worked on since November 2007. The SILK part has been under development at Skype since January 2007 as the successor of their SVOPC, an internal project to make the company independent from third-party codecs like iSAC and iLBC and respective license payments. In March 2009, Skype suggested the development and standardization of a wideband audio format within the IETF. Nearly a year passed with much debate on the formation of an appropriate working group. Representatives of several companies which were taking part in the standardization of patent-encumbered competing format, including Polycom and Ericsson—the creators and licensors of G.719—as well as France Télécom, Huawei and the Orange Labs (department of France Télécom), which were involved in the creation of G.718, stated objections against the start of the standardization process for a royalty-free format. (Some of the opponents would later claim patent rights that Xiph dismissed; see above.) The working group finally formed in February 2010, and even the corresponding Study Group 16 from the ITU-T pledged to support its work. In July 2010, a prototype of a hybrid format was presented that combined the two proposed format candidates SILK and CELT. In September 2010, Opus was submitted to the IETF as proposal for standardization. For a short time the format went under the name of Harmony before it got its present name in October 2010. At the beginning of February 2011, the bitstream format was tentatively frozen, subject to last changes. Near the end of July 2011, Jean-Marc Valin was hired by the Mozilla Corporation to continue working on Opus. Finalization (1.0) In November 2011, the working group issued the last call for changes on the bitstream format. The bitstream has been frozen since January 8, 2012. On July 2, 2012, Opus was approved by the IETF for standardization. The reference software entered release candidate state on August 8, 2012. The final specification was released as RFC 6716 on September 10, 2012. and versions 1.0 and 1.0.1 of the reference implementation libopus were released the day after. On July 11, 2013, libopus 1.0.3 brought bug fixes and a new Surround sound API that improves channel allocation and quality, especially for LFE. 
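Because the bitstream frozen in RFC 6716 is self-describing at the packet level, the mode, bandwidth, frame duration and channel count signalled in each packet's TOC byte (described earlier) can be read back with the packet-inspection helpers in libopus. A hedged sketch; the two-byte packet below is a synthetic example rather than data from a real stream.

```c
#include <opus.h>
#include <stdio.h>

/* Hedged sketch: inspect the self-describing header information of a single
 * Opus packet.  In practice the bytes would come from an Ogg, Matroska/WebM,
 * MP4 or RTP payload, since Opus packets do not carry their own length. */
int main(void) {
    const unsigned char packet[] = { 0x78, 0x00 };   /* illustrative only */
    opus_int32 len = sizeof(packet);

    int bw     = opus_packet_get_bandwidth(packet);          /* OPUS_BANDWIDTH_* code */
    int chans  = opus_packet_get_nb_channels(packet);
    int frames = opus_packet_get_nb_frames(packet, len);
    int spf    = opus_packet_get_samples_per_frame(packet, 48000);

    printf("bandwidth code %d, %d channel(s), %d frame(s), %.1f ms per frame\n",
           bw, chans, frames, spf / 48.0);
    return 0;
}
```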
1.1 On December 5, 2013, libopus 1.1 was released, incorporating overall speed improvements and significant encoder quality improvements: Tonality estimation boosts bitrate and quality for previously problematic samples, like harpsichords; automated speech/music detection improves quality in mixed audio; mid-side stereo reduces the bitrate needs of many songs; band precision boosting for improved transients; and DC rejection below 3 Hz. Two new VBR modes were added: unconstrained for more consistent quality, and temporal VBR that boosts louder frames and generally improves quality. libopus 1.1.1 was released on November 26, 2015, and 1.1.2 on January 12, 2016, both adding speed optimizations and bug fixes. July 15, 2016 saw the release of version 1.1.3 and includes bug fixes, optimizations, documentation updates and experimental Ambisonics work. 1.2 libopus 1.2 Beta was released on May 24, 2017. libopus 1.2 was released on June 20, 2017. Improvements brought in 1.2 allow it to create fullband music at bit rates as low as 32 kbit/s, and wideband speech at just 12 kbit/s. libopus 1.2 includes optional support for the decoder specification changes made in drafts of RFC 8251, improving the quality of output from such low-rate streams. 1.3 libopus 1.3 was released on October 18, 2018. The Opus 1.3 major release again brings quality improvements, new features, and bug fixes. Changes since 1.2.x include: Improvements to voice activity detection (VAD) and speech/music classification using a recurrent neural network (RNN) Support for ambisonics coding using channel mapping families 2 and 3 Improvements to stereo speech coding at low bitrate Using wideband speech encoding down to 9 kb/s (mediumband is no longer used) Making it possible to use SILK down to bitrates around 5 kb/s Minor quality improvement on tones Enabling the spec fixes in RFC 8251 by default Security/hardening improvements Notable bug fixes include: Fixes to the CELT PLC Bandwidth detection fixes 1.3.1 libopus 1.3.1 was released on April 12, 2019. This Opus 1.3.1 minor release fixes an issue with the analysis on files with digital silence (all zeros), especially on x87 builds (mostly affects 32-bit builds). It also includes two new features: A new OPUS_GET_IN_DTX query to know if the encoder is in DTX mode (last frame was either a comfort noise frame or not encoded at all) A new (and still experimental) CMake-based build system that is eventually meant to replace the VS2015 build system (the autotools one will stay) Quality comparison and low-latency performance Opus performs well at both low and high bit rates. In listening tests around 64 kbit/s, Opus shows superior quality compared to HE-AAC codecs, which were previously dominant due to their use of the patented spectral band replication (SBR) technology. In listening tests around 96 kbit/s, Opus shows slightly superior quality compared to AAC and significantly better quality compared to Vorbis and MP3. Opus has very low algorithmic delay, a necessity for use as part of a low-audio-latency communication link, which can permit natural conversation, networked music performances, or lip sync at live events. 
Total algorithmic delay for an audio format is the sum of delays that must be incurred in the encoder and the decoder of a live audio stream regardless of processing speed and transmission speed, such as buffering audio samples into blocks or frames, allowing for window overlap and possibly allowing for noise-shaping look-ahead in a decoder and any other forms of look-ahead, or for an MP3 encoder, the use of bit reservoir. Total one-way latency below 150 ms is the preferred target of most VoIP systems, to enable natural conversation with turn-taking little affected by delay. Musicians typically feel in-time with up to around 30 ms audio latency, roughly in accord with the fusion time of the Haas effect, though matching playback delay of each user's own instrument to the round-trip latency can also help. It is suggested for lip sync that around 45–100 ms audio latency may be acceptable. Opus permits trading-off reduced quality or increased bitrate to achieve an even smaller algorithmic delay (5.0 ms minimum). While the reference implementation's default Opus frame is 20.0 ms long, the SILK layer requires a further 5.0 ms lookahead plus 1.5 ms for resampling, giving a default delay of 26.5 ms. When the CELT layer is active, it requires 2.5 ms lookahead for window overlap to which a matching delay of 4.0 ms is added by default to synchronize with the SILK layer. If the encoder is instantiated in the special restricted low delay mode, the 4.0 ms matching delay is removed and the SILK layer is disabled, permitting the minimal algorithmic delay of 5.0 ms. Support The format and algorithms are openly documented and the reference implementation is published as free software. Xiph's reference implementation is called libopus and a package called opus-tools provides command-line encoder and decoder utilities. It is published under the terms of a BSD-like license. It is written in C and can be compiled for hardware architectures with or without a floating-point unit. The accompanying diagnostic tool opusinfo reports detailed technical information about Opus files, including information on the standard compliance of the bitstream format. It is based on ogginfo from the vorbis-tools and therefore — unlike the encoder and decoder — is available under the terms of version 2 of the GPL. Implementations contains a complete source code for the reference implementation written in C. RFC contains errata. The FFmpeg project has encoder and decoder implementations not derived from the reference library. The libopus reference library has been ported to both C# and Java as part of a project called Concentus. These ports sacrifice performance for the sake of being easily integrated into cross-platform applications. Software Digital Radio Mondiale – a digital radio format for AM frequencies – can broadcast and receive Opus audio (albeit not recognised in official standard) using Dream software-defined radio. The Wikimedia Foundation sponsored a free and open source online JavaScript Opus encoder for browsers supporting the required HTML5 features. Since 2016, WhatsApp has been using Opus as its audio file format. Signal switched from Speex to Opus audio codec for better audio quality in the beginning of 2017. Operating system support Most end-user software relies on multimedia frameworks provided by the operating system. Native Opus codec support is implemented in most major multimedia frameworks for Unix-like operating systems, including GStreamer, FFmpeg, and Libav libraries. 
Google added native support for Opus audio playback in Android 5.0 "Lollipop". However, it was limited to Opus audio encapsulated in Matroska containers, such as .mkv and .webm files. Android 7.0 "Nougat" introduced support for Opus audio encapsulated in .ogg containers. Android 10 finally added native support for .opus extensions. Due to the addition of WebRTC support in Apple's WebKit rendering engine, macOS High Sierra and iOS 11 come with native playback support for Opus audio encapsulated in Core Audio Format containers. On Windows 10, version 1607, Microsoft provided native support for Opus audio encapsulated in Matroska and WebM files. On version 1709, support for Opus audio encapsulated in .ogg containers was made available through a pre-installed add-on called Web Media Extensions. On Windows 10 version 1903, native support for the .opus container was added. On Windows 8.1 and older, third-party decoders, such as LAV Filters, are available to provide support for the format. Media player support While support in multimedia frameworks automatically enables Opus support in software which is built on top of such frameworks, several application developers have made additional efforts to support the Opus audio format in their software. Such support was added to AIMP, Amarok, cmus, Music Player Daemon, foobar2000, Mpxplay, MusicBee, SMplayer, VLC media player, Winamp and Xmplay audio players; Icecast, Airtime (software) audio streaming software; and Asunder audio CD ripper, CDBurnerXP CD burner, FFmpeg, Libav and MediaCoder media encoding tools. Streaming Icecast radio trials have been live since September 2012 and January 2013. SteamOS uses Opus or Vorbis for streaming audio. Browser support Opus support is mandatory for WebRTC implementations. Opus is supported in Mozilla Firefox, Chromium and Google Chrome, Blink-based Opera, as well as all browsers for Unix-like systems relying on GStreamer for multimedia format support. Although Internet Explorer will not provide Opus playback natively, support for the format is built into the Edge browser, along with VP9, for full WebM support. Safari supports Opus as of iOS 11 and macOS High Sierra. VoIP support Due to its abilities, Opus gained early interest from voice-over-IP (VoIP) software vendors. Several SIP clients, including Acrobits Softphone, CSipSimple (via an additional plug-in), Empathy (via GStreamer), Jitsi, Tuenti, Line2 (currently only on iOS), Linphone, Phoner and PhonerLite, SFLphone and Telephone, support Opus, as do the Mumble, Discord and TeamSpeak 3 voice chat software. TrueConf supports Opus in its VoIP products. Asterisk lacked built-in Opus support for legal reasons, but a third-party patch was available for download and official support via a binary blob was added in September 2016. Tox P2P videoconferencing software uses Opus exclusively. The Classified-ads distributed messaging app sends raw Opus frames inside a TLS socket in its VoIP implementation. Opus is widely used as the voice codec in WhatsApp, which has over 1.5 billion users worldwide. WhatsApp uses Opus at 8–16 kHz sampling rates, with the Real-time Transport Protocol (RTP). The PlayStation 4 video game console also uses the CELT/Opus codec for its PlayStation Network system party chat. It is also used in the Zoom videoconferencing app. 
Hardware Since version 3.13, Rockbox enables Opus playback on supported portable media players, including some products from the iPod series by Apple, devices made by iriver, Archos and Sandisk, and on Android devices using "Rockbox as an Application". All recent Grandstream IP phones support Opus audio both for encoding and decoding. OBihai OBi1062, OBi1032 and OBi1022 IP phones all support Opus. Recent BlueSound wireless speakers support Opus playback. Devices running Hiby OS, like the Hiby R3, are capable of decoding Opus files natively. Many broadcast IP codecs include Opus such as those manufactured by Comrex, GatesAir and Tieline. Notes References Citations Sources This article contains quotations from the Opus Codec website, which is available under the Creative Commons Attribution 3.0 (CC BY 3.0) license. External links Opus on Hydrogenaudio Knowledgebase See also Comparison of audio coding formats Streaming media xHE-AAC Speech codecs Free audio codecs Lossy compression algorithms Xiph.Org projects Software using the BSD license Open formats
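Returning to the delay figures discussed earlier in this article (26.5 ms by default, about 5 ms at minimum), the sketch below shows one hedged way to request the restricted low-delay mode from libopus and query the encoder's lookahead; the exact lookahead value reported may vary between library versions.

```c
#include <opus.h>
#include <stdio.h>

/* Hedged sketch of the restricted low-delay (CELT-only) mode: a 2.5 ms frame
 * plus the lookahead reported by OPUS_GET_LOOKAHEAD should land near the
 * roughly 5 ms minimal algorithmic delay cited in the article. */
int main(void) {
    int err = 0;
    OpusEncoder *enc =
        opus_encoder_create(48000, 1, OPUS_APPLICATION_RESTRICTED_LOWDELAY, &err);
    if (err != OPUS_OK) return 1;

    opus_int32 lookahead = 0;
    opus_encoder_ctl(enc, OPUS_GET_LOOKAHEAD(&lookahead));

    const int frame_size = 120;                     /* 2.5 ms at 48 kHz */
    printf("frame %.1f ms + lookahead %.2f ms = ~%.2f ms algorithmic delay\n",
           frame_size / 48.0, lookahead / 48.0,
           frame_size / 48.0 + lookahead / 48.0);

    opus_encoder_destroy(enc);
    return 0;
}
```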
615948
https://en.wikipedia.org/wiki/James%20H.%20Ellis
James H. Ellis
James Henry Ellis (25 September 1924 – 25 November 1997) was a British engineer and cryptographer. In 1970, while working at the Government Communications Headquarters (GCHQ) in Cheltenham, he conceived of the possibility of "non-secret encryption", more commonly termed public-key cryptography. Early life, education and career Ellis was born in Australia, although he was conceived in Britain, and grew up in Britain. He almost died at birth, and it was thought that he might be mentally retarded. He became an orphan who lived with his grandparents in London's East End. He showed a gift for mathematics and physics at a grammar school in Leyton, and gained a degree in physics. He then worked at the Post Office Research Station at Dollis Hill. In 1952, Ellis joined the Government Communications Headquarters (GCHQ) in Eastcote, west London. In 1965, he moved to Cheltenham to join the newly formed Communications-Electronics Security Group (CESG), an arm of GCHQ. In 1949, Ellis married Brenda, an artist and designer, and they had four children but she never knew anything about his work. Invention of non-secret encryption Ellis first proposed his scheme for "non-secret encryption" in 1970, in a (then) secret GCHQ internal report "The Possibility of Secure Non-Secret Digital Encryption". Ellis said that the idea first occurred to him after reading a paper from World War II by someone at Bell Labs describing the scheme named Project C43, a way to protect voice communications by the receiver adding (and then later subtracting) random noise (possibly this 1944 paper or the 1945 paper co-authored by Claude Shannon). He realised that 'noise' could be applied mathematically but was unable to devise a way to implement the idea. Shortly after joining GCHQ in September 1973, after studying mathematics at Cambridge University, Clifford Cocks was told of Ellis' proof and that no one had been able to figure out a way to implement it. He went home, thought about it, and returned with the basic idea for what has become known as the RSA asymmetric key encryption algorithm. Because any new and potentially beneficial/harmful technique developed by GCHQ is by definition classified information, the discovery was kept secret. Not long thereafter, Cocks' friend and fellow mathematician, Malcolm Williamson, now also working at GCHQ, after being told of Cocks' and Ellis' work, thought about the problem of key distribution and developed what has since become known as Diffie–Hellman key exchange. Again, this discovery was classified information and it was therefore kept secret. When, a few years later, Diffie and Hellman published their 1976 paper, and shortly after that Rivest, Shamir and Adleman announced their algorithm, Cocks, Ellis and Williamson suggested that GCHQ announce that they had previously developed both. GCHQ decided against publication at the time. At this point, only GCHQ and the National Security Agency (NSA) in the USA knew about the work of Ellis, Cocks and Williamson. Whitfield Diffie heard a rumour, probably from the NSA, and travelled to see James Ellis. The two men talked about a range of subjects until, at the end, Diffie asked Ellis "Tell me how you invented public-key cryptography". After a long pause, Ellis replied "Well, I don't know how much I should say. Let me just say that you people made much more of it than we did." 
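The key-distribution problem Williamson worked on is the one solved by what is now called Diffie–Hellman key exchange: two parties derive the same secret from values exchanged in the clear. A toy sketch with deliberately tiny numbers follows; real systems use very large parameters and vetted cryptographic libraries, and the specific numbers here are purely illustrative.

```c
#include <stdio.h>
#include <stdint.h>

/* Toy illustration of the key-exchange idea: both parties compute the same
 * shared secret from public values.  Parameters are tiny and for
 * illustration only. */
static uint64_t powmod(uint64_t base, uint64_t exp, uint64_t mod) {
    uint64_t result = 1;
    base %= mod;
    while (exp > 0) {
        if (exp & 1) result = (result * base) % mod;
        base = (base * base) % mod;
        exp >>= 1;
    }
    return result;
}

int main(void) {
    const uint64_t p = 23, g = 5;        /* public modulus and generator */
    const uint64_t a = 6, b = 15;        /* private exponents of the two parties */
    uint64_t A = powmod(g, a, p);        /* sent in the clear */
    uint64_t B = powmod(g, b, p);        /* sent in the clear */
    printf("shared secret: %llu == %llu\n",
           (unsigned long long)powmod(B, a, p),
           (unsigned long long)powmod(A, b, p));
    return 0;
}
```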
On 18 December 1997, Clifford Cocks delivered a public talk which contained a brief history of GCHQ's contribution so that Ellis, Cocks and Williamson received some acknowledgment after nearly three decades of secrecy. James Ellis died on 25 November 1997, a month before the public announcement was made. In March 2016, the director of GCHQ made a speech at MIT re-emphasising GCHQ's early contribution to public-key cryptography and in particular the contributions of Ellis, Cocks and Williamson. References External links Ellis, J.H., The possibility of secure non-secret digital encryption, CSEG Report 3006, January 1970. Ellis, J.H., The possibility of secure non-secret analogue encryption, CSEG Report 3007, May 1970. Alumni of Imperial College London GCHQ cryptographers History of computing in the United Kingdom Modern cryptographers People from Leytonstone Public-key cryptographers 1924 births 1997 deaths Engineers from London
57182853
https://en.wikipedia.org/wiki/Rudy%20Mills
Rudy Mills
Rudolph "Rudy" Methaian Mills is a reggae musician known for his releases during the rock steady era in the 1960s. He was discovered by producer Derrick Harriott who released his hit song A Long Story. Versions of the song were later used by other artists including Bongo Herman & Bingy Bunny sampling it in an instrumental rendition. Mills released singles with Island Records, and his music has been featured on over 20 compilations, including releases from Rhino Entertainment, Trojan Records, Jamaican Gold, Heartbeat, BMG, and Universal Music TV. Mills was in a band called Progressions. Mills' song "John Jones" was a hit among the skinhead subculture in England. It was released on Trojan Records / B & C Records label Big Shot and was one of their hits. Mills song "A Place Called Happiness" was the B side. John Jones was also released on the LP titled Tighten Up. "John Jones" was used on the soundtrack to the British comedy series Plebs and was released on its soundtrack. In 2019, Mills released the single "Lonely". References External links Rudy Mills discography via 45cat Reggae singers Living people Year of birth missing (living people) Place of birth missing (living people) Island Records artists Rhino Records artists Trojan Records artists
55668212
https://en.wikipedia.org/wiki/Joseph%20Weisbecker
Joseph Weisbecker
Joseph A. Weisbecker (September 4, 1932 – November 15, 1990) was an early microprocessor and microcomputer researcher, as well as a gifted writer and designer of toys and games. He was a recipient of the David Sarnoff award for outstanding technical achievement, recipient of IEEE Computer magazine's "Best Paper" award, as well as several RCA lab awards for his work. His designs include the RCA 1800 and 1802 processors, the 1861 "Pixie" graphics chip, the RCA Microtutor, the COSMAC ELF, RCA Studio II, and COSMAC VIP computers. His daughter Joyce Weisbecker took to programming his prototypes, becoming the first female video game designer in the process, using his language called CHIP-8. Early career Professionally, Weisbecker began working with digital logic and computer systems in 1951. It was also his hobby, however, and even his early work is marked by designs that are intended for educational or hobbyist use. These include a hobby tic-tac-toe computer built from relays in 1951, grade school educational aids built using lights and switches in 1955, and the Think-a-Dot, an inexpensive game to teach basic computer concepts in 1964. As a staff engineer at RCA, he performed advanced development research on LSI circuits as well as development of new product lines based on those circuits and other RCA products. Microprocessors In 1970 and 1971, Weisbecker developed a new 8 bit architecture computer system. This work preceded the release of the 4004 by competitor Intel. He built a demonstration home computer powered by the 1802 called FRED (Flexible Recreational and Educational Device) that utilized cassette tape for storage and a television for display. Subsequent to the success of the 4004, RCA released Weisbecker's work as the COSMAC 1801R and 1801U using its CMOS process in 1975. In 1976 the two 1801 ICs were integrated into a single chip, the 1802. In the time between 1971 and the production release of the 1800 series processor, Weisbecker developed a range of inexpensive application circuits for use with the 1800s, including light guns, card readers, and cassette interfaces. Several of these circuits were used in a demonstration model microprocessor-based electronic game system which anticipated home video games. The commercial promise of this system gave RCA the motivation they required to produce the 1800 series processors. Weisbecker designed the 1861 PIXIE graphics processor in 1975 as a minimal-cost simple video output for microcomputer systems. In a single chip, it provided all the functions necessary for a bit-mapped graphic display. Small systems During this same time (1975), Weisbecker developed an educational "development board" or "trainer" style single board computer, the RCA Microtutor, to teach basic computer concepts and programming. He also designed the production form of the home video game system, which became the RCA Studio II. In 1976 Popular Electronics published Weisbecker' design for the COSMAC ELF, a close relative to the Microtutor designed to be built at home by a hobbyist with no special computer resources. In 1977 further articles added 1861-based video to the Elf, similar to the Studio II, as well as additional memory and rudimentary operating systems. In 1976, RCA released the COSMAC VIP, which not only had the features of an expanded Elf, but for which Weisbecker had created the CHIP-8 programming language. CHIP-8 is a very small high level language designed for easy keyboard and video interaction. 
It is still in use on many systems, notably the Z80-based TI-83 calculators. Due to its simplicity it can be implemented on almost any platform, and because it is useful for teaching programming and binary numbers it is also found further afield on modern computers; a brief opcode-decoding sketch appears below. He later developed color graphic chips for use in a more advanced video game with expansion capabilities, as well as a line of color graphic terminals. Products using these included the Studio III. Writings Along with the hardware and software development of the systems he designed, Joe wrote detailed manuals, use guides, and tutorials for each. He had a commitment to fun and inexpensive computer systems. Titles such as "Fun and Games with COSMAC" (IEEE Electro #77, April 1977) and "An Easy Programming System" (Byte, December 1978) demonstrate this, as do the contents of his many papers, manuals, and articles. Legacy The COSMAC ELF continues to be a popular educational microcomputer construction project, and several newer designs have been based on it. The design principles of the 1802 (a large, general-purpose register file and a limited set of instructions that execute in few cycles) presaged the RISC design philosophy. The 1802 has been called "the grandfather of RISC." The 1802's load mode is unique, and along with RCA's radiation-hardened silicon-on-sapphire process was instrumental in the selection of the 1802 processor for several space probes and space-based instruments. Some of these processors are still working in their third decade in space. The 1802 is unique among first-generation processors in that it is still in production today. Its closest rival in this respect is the 8080A, the successor to the 8080 and 8008 designs from Intel. Like Chuck Peddle's, Weisbecker's IC designs exemplify the concept of doing a lot with very little. His designs are still studied by IC designers today for their unique approaches to solving design problems with elegance and simplicity. They are also studied with respect to design longevity, as systems based on the 1802 have been run and supported in production for over 30 years in a range of applications. References Further reading External links Joe Weisbecker Video Game Collection at Hagley Museum and Library 1932 births 1990 deaths American electrical engineers Commodore people Computer hardware engineers RCA people 20th-century American engineers
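To give a flavour of how small CHIP-8 is, every instruction is a single 16-bit opcode dispatched on its top nibble. The sketch below decodes a handful of the commonly documented opcodes; it is an illustration of the instruction format, not a complete interpreter.

```c
#include <stdio.h>
#include <stdint.h>

/* Minimal sketch of a CHIP-8 style dispatcher: each instruction is one
 * 16-bit word decoded by its top nibble.  Only a few representative
 * opcodes are handled; a full interpreter has roughly 35. */
static uint8_t  V[16];       /* data registers V0..VF */
static uint16_t I;           /* index register */
static uint16_t PC = 0x200;  /* programs conventionally load at 0x200 */

static void step(uint16_t op) {
    uint8_t x  = (op >> 8) & 0x0F;
    uint8_t nn = op & 0xFF;
    PC += 2;
    switch (op >> 12) {
    case 0x1: PC = op & 0x0FFF; break;   /* 1NNN: jump */
    case 0x6: V[x] = nn;        break;   /* 6XNN: Vx = NN */
    case 0x7: V[x] += nn;       break;   /* 7XNN: Vx += NN (no carry flag) */
    case 0xA: I = op & 0x0FFF;  break;   /* ANNN: I = NNN */
    default:  printf("opcode %04X not handled in this sketch\n", op); break;
    }
}

int main(void) {
    step(0x600A);   /* V0 = 0x0A */
    step(0x7005);   /* V0 += 5   */
    step(0xA2F0);   /* I = 0x2F0 */
    printf("V0=%u I=0x%03X PC=0x%03X\n",
           (unsigned)V[0], (unsigned)I, (unsigned)PC);
    return 0;
}
```

A full interpreter adds the display, timers, keypad and the remaining opcodes, which is why CHIP-8 implementations fit comfortably on very small machines.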
12706069
https://en.wikipedia.org/wiki/XMind
XMind
XMind is a mind mapping and brainstorming software, developed by XMind Ltd. In addition to the management elements, the software can be used to capture ideas, clarify thinking, manage complex information, and promote team collaboration. As of April 2013, XMind was selected as the most popular mind mapping software on Lifehacker. It supports mind maps, fishbone diagrams, tree diagrams, organization charts, spreadsheets, etc. Normally, it is used for knowledge management, meeting minutes, task management, and GTD. Meanwhile, XMind can read FreeMind and MindManager files, and save to Evernote. For XMind Pro/Zen, it can export the mind maps into Microsoft Word, PowerPoint, Excel, PDF, FreeMind and Mindjet MindManager documents. Editions XMind can create mind maps to visualize information, facilitate communication and manage projects. There are 3 different editions: XMind Pro, XMind: Zen, XMind for iOS. Versions Awards XMIND 2008 won the "Best Commercial RCP Application" award at EclipseCon 2008 XMIND 3 won "The Best Project for Academia" award at SourceForge.net Community Choice Awards XMIND was picked by PCWorld for inclusion in Productivity Software: Best of 2010 XMind 2013 was picked as "the Most Popular Mind Mapping Software" on Lifehacker XMind won "Red Herring Asia Top 100" XMind was rated as "The Best Brainstorming and Mind-Mapping Tech Tool" on lifehack Eclipse application XMind 3 is based on Eclipse Rich Client Platform 3.4 for its shell and Eclipse Graphical Editing Framework for its core editors. It depends on Java Runtime Environment 5.0 and later. File format XMind 3 saves content in an XMIND Workbook file format. The .xmind format suffix is used, whereas XMIND 2008 used the .xmap suffix. An XMIND workbook may contain more than one sheet, as in spreadsheet software. Each sheet may contain multiple topics, including one central topic, multiple main topics and multiple floating topics. Each sheet contains one mind map or Fishbone Chart or Spreadsheet Chart. The .xmind file format implementing XMind Workbooks consists of a ZIP compressed archive containing an XML document for contents, an XML document for styles, a .png image file for thumbnails, and some directories for attachments. The file format is open and based on some principles of OpenDocument. See also Mind map Brainstorming List of concept- and mind-mapping software Tony Buzan Fishbone diagram List of Eclipse-based software References External links Concept- and mind-mapping software programmed in Java Project management software Proprietary commercial software for Linux
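Since an .xmind workbook is described above as an ordinary ZIP archive, its contents document can be read with any ZIP library. A hedged sketch using libzip follows; the entry name content.xml and the file name example.xmind are assumptions made for illustration, not taken from XMind documentation.

```c
#include <stdio.h>
#include <zip.h>

/* Hedged sketch: open an XMind workbook as the ZIP archive it is and print
 * the beginning of its contents XML.  Entry and file names are assumed. */
int main(void) {
    int err = 0;
    zip_t *zf = zip_open("example.xmind", ZIP_RDONLY, &err);
    if (!zf) { fprintf(stderr, "cannot open archive (error %d)\n", err); return 1; }

    zip_file_t *entry = zip_fopen(zf, "content.xml", 0);
    if (!entry) { fprintf(stderr, "no content.xml entry found\n"); zip_close(zf); return 1; }

    char buf[512];
    zip_int64_t n = zip_fread(entry, buf, sizeof(buf) - 1);
    if (n > 0) {
        buf[n] = '\0';
        printf("%s\n", buf);                 /* first bytes of the contents XML */
    }
    zip_fclose(entry);
    zip_close(zf);
    return 0;
}
```

With libzip installed, something like `cc example.c $(pkg-config --cflags --libs libzip)` should build it.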
16027532
https://en.wikipedia.org/wiki/Corporate%20profiling
Corporate profiling
Corporate profiling is a process that delivers an in-depth blueprint of an organization's structure, technology, people, processes, and upstream and downstream customers, together with the relationships among them. Using this process, organizations are able to identify where interlinked relationships occur. Profiling also identifies any common or causal factors between the corporate, business and information technology components, and, more significantly, it elucidates hidden or not-so-obvious factors that would otherwise be overlooked. For comprehensive profiling to be undertaken, there must be a common objective shared by the three business components (corporate, business and IT) and a level of executive support and cohesion that will drive tri-directional communication channels across them. Information technology The process of corporate profiling ensures that all factors are considered, thereby significantly reducing the likelihood of information technology project failure. Corporate profiling is the first step in undertaking an IT implementation. Organizational structure
2466160
https://en.wikipedia.org/wiki/Drew%20Major
Drew Major
Drew Major (born June 17, 1956) is a computer scientist and entrepreneur. He is best known for his role as one of the principal engineers of the Novell NetWare operating system from early in Novell's history. He currently resides in Orem, Utah with his wife, Mary, and their four sons. Major received a Bachelor of Science degree from Brigham Young University in 1980, and graduated with honors in mathematics and computer science. He was born in California but has lived most of his life in Utah. SuperSet Software SuperSet Software was a group founded by friends and former Eyring Research Institute (ERI) co-workers Drew Major, Dale Neibaur and Kyle Powell, later joined by Mark Hurst. Their work was based on classwork that they started in October 1981 at Brigham Young University, Provo, Utah, USA, and upon previous work at the Eyring Research Institute with the Xerox Network Systems (XNS) protocol, which led to the development of the Novell IPX and SPX networking protocols and the NetWare operating system. In 1983, Ray Noorda took over leadership of Novell and engaged the SuperSet group to work on networking products. The team was originally assigned to create a CP/M disk sharing system to help network the CP/M hardware that Novell was selling at the time. Under Ray Noorda's leadership, the group developed a successful file sharing system for the newly introduced IBM-compatible PC. The group also wrote a text-mode game called Snipes and used it to test the new network and demonstrate its capabilities. Novell Major joined Novell in 1983, and with his partners Kyle Powell, Dale Neibaur, and Mark Hurst began working on enabling PCs to share files and other resources via a local area network (LAN). Major was the lead architect and developer of the NetWare operating system for over 15 years. Major left Novell in 2003. Move Networks After leaving Novell, Major co-founded video networking company Move Networks, Inc. The company began to experience financial problems in 2010 after failing to deliver on key technologies, which resulted in some of its larger customers abandoning its technology. The company was subsequently acquired by EchoStar, Inc. for $45 million in January 2011. External links References 1956 births Living people American Latter Day Saints Brigham Young University alumni Novell people
33238006
https://en.wikipedia.org/wiki/Tizen
Tizen
Tizen is a Linux-based mobile operating system backed by the Linux Foundation and developed and used primarily by Samsung Electronics. The project was originally conceived as an HTML5-based platform for mobile devices to succeed MeeGo. Samsung merged its previous Linux-based OS effort, Bada, into Tizen, and has since used it primarily on platforms such as wearable devices and smart TVs. Much of Tizen is open source software, although the software development kit contains proprietary components owned by Samsung, and portions of the OS are licensed under the Flora License, a derivative of the Apache License 2.0 that grants a patent license only to "Tizen certified platforms". In May 2021, Google announced that Samsung would partner with the company on integrating Tizen features with its Android-derived Wear OS, and committed to using it on future wearables. History The project has its roots in Moblin, a mobile Linux platform launched by Intel in July 2007; by April 2009 the operating system had been updated to version 2.0, with a core based on Fedora. That same month, Intel turned Moblin over to the Linux Foundation for future development. The operating system was eventually merged with Nokia's Maemo, a Debian-based Linux distribution, into MeeGo, which was developed mainly by Nokia, Intel and the Linux Foundation. In 2011, after Nokia abandoned the project, the Linux Foundation initiated the Tizen project as a successor to MeeGo, with MeeGo's main backer Intel joined by Samsung Electronics, as well as Access Co., NEC Casio, NTT DoCoMo, Panasonic Mobile, SK Telecom, Telefónica, and Vodafone as commercial partners. Tizen would be designed to use HTML5 apps and target mobile and embedded platforms such as netbooks, smartphones, tablets, smart TVs, and in-car entertainment systems. U.S. carrier Sprint Corporation (which was a backer of MeeGo) joined the Tizen Association in May 2012. On September 16, 2012, Automotive Grade Linux announced its intent to use Tizen as the basis of its reference distribution. In January 2013, Samsung announced its intent to release multiple Tizen-based phones that year. In February 2013, Samsung merged its Bada operating system into Tizen. In October 2013, the first Tizen tablet was shipped by Systena. The tablet was part of a development kit exclusive to Japan. In 2014, Samsung released the Gear 2 smartwatch, which used a Tizen-based operating system rather than Android. On May 14, 2014, it was announced that Tizen would ship with Qt. This project was abandoned in January 2017. On February 21, 2016, Samsung announced the Samsung Connect Auto, a connected car solution offering diagnostic, Wi-Fi, and other car-connected services. The device plugs directly into the OBD-II port underneath the steering wheel. On November 16, 2016, Samsung said it would be collaborating with Microsoft to bring .NET Core support to Tizen. According to Strategy Analytics research, approximately 21% of the smart TVs sold in 2018 run on the Tizen platform, making it the most popular smart TV platform. On May 19, 2021, during Google I/O, Google announced that Samsung had agreed to work on integrating features of Tizen with the next version of Wear OS, and that it had committed to using Wear OS for its future wearable products. Samsung will continue to use Tizen for its smart TVs. 
Releases Tizen 1.0: April 30, 2012 Tizen 2.0: February 18, 2013 Tizen 2.1: May 18, 2013 Tizen 2.2: July 22, 2013 Tizen 2.2.1: November 9, 2013 Tizen 2.3: February 9, 2015 Tizen 2.3.1: September 3, 2015 Tizen 2.3.1 Rev1: November 13, 2015 Tizen 2.3.2: September 3, 2016 Tizen 2.3.2 Patch: December 23, 2016 Tizen 2.4: October 30, 2015 Tizen 2.4 Rev1: December 1, 2015 Tizen 2.4 Rev2: December 23, 2015 Tizen 2.4 Rev3: February 5, 2016 Tizen 2.4 Rev4: March 4, 2016 Tizen 2.4 Rev5: April 4, 2016 Tizen 2.4 Rev6: May 19, 2016 Tizen 2.4 Rev7: June 30, 2016 Tizen 2.4 Rev8: August 2, 2016 Tizen 3.0: January 18, 2017 Tizen IVI 3.0 (In-Vehicle Infotainment): April 22, 2014 Tizen 3.0 Milestones (M1): September 17, 2015 Tizen 3.0 Public M2: January 18, 2017 Tizen 3.0 Public M3: July 5, 2017 Tizen 3.0 Public M4: November 30, 2017 Tizen 4.0: May 31, 2017 Tizen 4.0 Public M1: May 31, 2017 Tizen 4.0 Public M2: November 1, 2017 Tizen 4.0 Public M3: August 31, 2018 Tizen 5.0: May 31, 2018 Tizen 5.0 Public M1: May 31, 2018 Tizen 5.0 Public M2: October 30, 2018 Tizen 5.5: May 31, 2019 Tizen 5.5 Public M1: May 31, 2019 Tizen 5.5 Public M2: October 30, 2019 Tizen 5.5 Public M3: August 27, 2020 Tizen 6.0: May 31, 2020 Tizen 6.0 Public M1: May 31, 2020 Tizen 6.0 Public M2: October 27, 2020 Tizen 6.5: May 31, 2021 Tizen 6.5 Public M1: May 31, 2021 Tizen 6.5 Public M2: October 31, 2021 Compatible devices Smartwatch Samsung Galaxy Gear Samsung Gear S Samsung Gear S2 Samsung Gear S3 Samsung Gear 2 Samsung Gear Fit 2 Samsung Gear Fit 2 Pro Samsung Gear Sport Samsung Galaxy Watch Samsung Galaxy Watch Active Samsung Galaxy Watch Active 2 Samsung Galaxy Watch 3 Camera Samsung NX200 Samsung NX300 Samsung NX1 Smartphone Samsung Z Samsung Z1 Samsung Z2 Samsung Z3 Samsung Z4 Television Samsung Smart TVs since 2015 Appliances Family Hub 3.0 Refrigerator LED Wall controllers SBB-SNOWJ3U Controversies On April 3, 2017, Vice reported on its "Motherboard" website that Amihai Neiderman, an Israeli security expert, has found more than 40 zero-day vulnerabilities in Tizen's code, allowing hackers to remotely access a wide variety of current Samsung products running Tizen, such as Smart TVs and mobile phones. After the article was published, Samsung, whom Neiderman tried to contact months before, reached out to him to inquire about his allegations. See also Comparison of mobile operating systems KaiOS Sailfish OS References External links Comprehensive list of Tizen devices detailed and incomplete list of devices that run Tizen 2012 software ARM operating systems Embedded operating systems Embedded Linux distributions Free mobile software Intel software Linux Foundation projects Mobile Linux Mobile operating systems Samsung Electronics Smartphones Smart TV Tablet operating systems Samsung software ARM Linux distributions IA-32 Linux distributions RPM-based Linux distributions South Korean brands Linux distributions
18996620
https://en.wikipedia.org/wiki/Opera%20%28web%20browser%29
Opera (web browser)
Opera is a multi-platform, Chromium-based web browser developed by its namesake company, Opera. It distinguishes itself from other browsers through its user interface and other features. Opera was initially released on April 10, 1995, making it one of the oldest desktop web browsers still actively developed today. It was commercial software for the first ten years and had its own proprietary layout engine, Presto. In 2013, Opera switched from the Presto engine to Chromium. The web browser can be used on Microsoft Windows, Android, iOS, macOS, and Linux operating systems. There are also mobile versions called Opera Mobile and Opera Mini. Additionally, Opera users have access to a news app based on an AI platform, Opera News. The company released a gaming-oriented version of the browser called Opera GX in 2019. History In 1994, Jon Stephenson von Tetzchner and Geir Ivarsøy started developing the Opera web browser while working at Telenor, a Norwegian telecommunications company. In 1995, they founded Opera Software AS. Opera was initially released on April 10, 1995, and was first publicly released in 1996 with version 2.10, which ran on Microsoft Windows 95. Opera began development of its first browser for mobile device platforms in 1998. Opera 4.0, released in 2000, included a new cross-platform core that facilitated the creation of editions of Opera for multiple operating systems and platforms. Up to this point, Opera was trialware and had to be purchased after the trial period ended. Version 5.0 (released in 2000) saw the end of this requirement. Instead, Opera became ad-sponsored, displaying advertisements to users who had not paid for it. Later versions of Opera gave the user the choice of seeing banner ads or targeted text advertisements from Google. With version 8.5 (released in 2005) the advertisements were completely removed and the primary financial support for the browser came through revenue from Google (which is by contract Opera's default search engine). Among the new features introduced in version 9.1 (released in 2006) was fraud protection using technology from GeoTrust, a digital certificate provider, and PhishTank, an organization that tracks known phishing web sites. This feature was further expanded in version 9.5, when GeoTrust was replaced with Netcraft, and malware protection from Haute Secure was added. In 2006, Opera Software ASA released the Internet Channel and the Nintendo DS Browser for Nintendo's Wii and DS gaming systems. A new JavaScript engine called Carakan, named after the Javanese alphabet, was introduced with version 10.50. According to Opera Software, Carakan made Opera 10.50 more than seven times faster in SunSpider than Opera 10.10. On December 16, 2010, Opera 11 was released, featuring extensions, tab stacking (where dragging one tab over another allows creating a group of tabs), visual mouse gestures and changes to the address bar. Opera 12 was released on June 14, 2012. On February 12, 2013, Opera Software announced that it would drop its own Presto layout engine in favour of WebKit as implemented by Google's Chrome browser, using code from the Chromium project. Opera Software also planned to contribute code to WebKit. On April 3, 2013, Google announced that it would fork components from WebKit to form a new layout engine known as Blink. The same day, Opera Software confirmed that it would follow Google in implementing the Blink layout engine. 
On May 28, 2013, a beta release of Opera 15 was made available, the first version of which was based on the Chromium project. Many distinctive Opera features of the previous versions were dropped, and Opera Mail was separated into a standalone application derived from Opera 12. Acquisition by Chinese consortium In 2016, the company changed ownership when a group of Chinese investors purchased the web browser, consumer business, and brand of Opera Software ASA. On 18 July 2016, Opera Software ASA announced it had sold its browser, privacy and performance apps, and the Opera brand to Golden Brick Capital Private Equity Fund I Limited Partnership, a consortium of Chinese investors. In January 2017, the source code of Opera 12.15, one of the last few versions that was still based on the Presto layout engine, was leaked. To demonstrate how radically different a browser could look, Opera Neon, dubbed a "concept browser," was released in January 2017. PC World compared it to demo models that automakers and hardware vendors release to show their visions of the future. Instead of a Speed Dial (also explained in the following chapter "Features"), it displays the frequently accessed websites in resemblance to a desktop with computer icons scattered all over it in an artistic formation. Features Opera has originated features later adopted by other web browsers, including: Speed Dial, pop-up blocking, re-opening recently closed pages, private browsing, and tabbed browsing. Additional features include a built-in screenshot tool called Snapshot which also includes an image-markup tool, built-in ad blockers and tracking blockers. Built-in messengers Opera’s desktop browser includes access to social media messaging apps WhatsApp, Telegram, Facebook Messenger, Twitter, Instagram, and VKontakte. Usability and accessibility Opera includes a bookmarks bar and a download manager. It also has "Speed Dial" which allows the user to add an unlimited number of pages shown in thumbnail form in a page displayed when a new tab is opened. Opera was one of the first browsers to support Cascading Style Sheets (CSS) in 1998. Opera Turbo, a feature that compresses requested web pages (except HTTPS pages) before sending them to the users, is no longer available on the desktop browser. Opera Turbo is available in Opera Mini, the mobile browser. Privacy and security One security feature is the option to delete private data, such as HTTP cookies, browsing history, items in cache and passwords with the click of a button. When visiting a site, Opera displays a security badge in the address bar which shows details about the website, including security certificates. Opera's fraud and malware protection warns the user about suspicious web pages and is enabled by default. It checks the requested page against several databases of known phishing and malware websites, called blacklists. In 2016, a free virtual private network (VPN) service was implemented in the browser. Opera said that this would allow encrypted access to websites otherwise blocked, and provide security on public WiFi networks. It was later determined that the browser VPN operated as a web proxy rather than a VPN, meaning that it only secured connections made by the browser and not by any other apps on the computer. Crypto wallet support In 2018, a built-in cryptocurrency wallet to the Opera Web Browser was released, announcing that they would be the first browser with a built-in Crypto Wallet. 
On December 13, 2018, Opera released a video showing many decentralized applications like Cryptokitties running on the Android version of the Opera Web Browser. In March 2020, Opera updated its Android browser to access crypto domains, making it the first browser to be able to support a domain name system (DNS) which is not part of the traditional DNS directly without the need of a plugin or add-on. This was through a collaboration with a San Francisco based startup, Unstoppable Domains. Other versions Opera GX Opera GX is a gaming-oriented counterpart of Opera. The browser was announced and released in early access for Windows on June 11, 2019, during E3 2019. The macOS version was released in December of the same year. Opera GX adds features geared towards gamers on top of the regular Opera browser. The browser allows users to limit network, CPU, and memory usage to preserve system resources. It also adds integrations with other apps such as Twitch, Discord, Twitter, and Instagram. The browser also has a built-in page called the GX Corner, which collates gaming-related releases, deals, and news articles. On May 20, 2021, Opera released a mobile version of Opera GX in beta for iOS and Android. Development stages Opera Software uses a release cycle consisting of three "streams," corresponding to phases of development, that can be downloaded and installed independently of each other: "developer," "beta," and "stable." New features are first introduced in the developer build, then, depending on user feedback, may progress to the beta version and eventually be released. The developer stream allows early testing of new features, mainly targeting developers, extension creators, and early adopters. Opera developer is not intended for everyday browsing as it is unstable and is prone to failure or crashing, but it enables advanced users to try out new features that are still under development, without affecting their normal installation of the browser. New versions of the browser are released frequently, generally a few times a week. The beta stream, formerly known as "Opera Next," is a feature complete package, allowing stability and quality to mature before the final release. A new version is released every couple of weeks. Both streams can be installed alongside the official release without interference. Each has a different icon to help the user distinguish between the variants. Market adoption Integrations In 2005, Adobe Systems integrated Opera's rendering engine, Presto, into its Adobe Creative Suite applications. Opera technology was employed in Adobe GoLive, Adobe Photoshop, Adobe Dreamweaver, and other components of the Adobe Creative Suite. Opera's layout engine is also found in Virtual Mechanics SiteSpinner Pro. The Internet Channel is a version of the Opera 9 web browser for use on the Nintendo Wii created by Opera Software and Nintendo. Opera Software is also implemented in the Nintendo DS Browser for Nintendo's handheld systems. Opera is one of the top 5 browsers used around the world. As of April 2021, Opera's offerings had over 320 million active users. Reception The Opera browser has been listed as a “tried and tested direct alternative to Chrome.” It scores close to Chrome on the HTML5test, which scores browsers’ compatibility with different web standards. Versions with the Presto layout engine have been positively reviewed, although they have been criticized for website compatibility issues. 
Because of this issue, Opera 8.01 and higher had included workarounds to help certain popular but problematic web sites display properly. Versions with the Blink layout engine have been criticized by some users for missing features such as UI customization, and for abandoning Opera Software's own Presto layout engine. Despite that, versions with the Blink layout engine have been noted for being fast and stable, for handling the latest web standards and for having a better website compatibility and a modern-style user interface. See also Opera browser platform variants: Opera Mini: a browser for tablets and telephones Opera Mobile: a browser for tablets and telephones Related other browsers: Otter Browser: an open-source browser that recreates some aspects of the classic Opera Vivaldi: a freeware browser created by former Opera Software employees Related topics: Comparison of browser synchronizers History of the web browser List of pop-up blocking software List of web browsers Timeline of web browsers References External links C++ software Cross-platform web browsers Embedded Linux Freeware Java device platform OS/2 web browsers MacOS web browsers Pocket PC software Portable software POSIX web browsers Proprietary cross-platform software Proprietary freeware for Linux Science and technology in Norway Software based on WebKit Software companies of Norway Telenor Windows web browsers 1994 software 1995 software Computer-related introductions in 1995 BSD software
159886
https://en.wikipedia.org/wiki/Network%20Time%20Protocol
Network Time Protocol
The Network Time Protocol (NTP) is a networking protocol for clock synchronization between computer systems over packet-switched, variable-latency data networks. In operation since before 1985, NTP is one of the oldest Internet protocols in current use. NTP was designed by David L. Mills of the University of Delaware. NTP is intended to synchronize all participating computers to within a few milliseconds of Coordinated Universal Time (UTC). It uses the intersection algorithm, a modified version of Marzullo's algorithm, to select accurate time servers and is designed to mitigate the effects of variable network latency. NTP can usually maintain time to within tens of milliseconds over the public Internet, and can achieve better than one millisecond accuracy in local area networks under ideal conditions. Asymmetric routes and network congestion can cause errors of 100 ms or more. The protocol is usually described in terms of a client–server model, but can as easily be used in peer-to-peer relationships where both peers consider the other to be a potential time source. Implementations send and receive timestamps using the User Datagram Protocol (UDP) on port number 123. They can also use broadcasting or multicasting, where clients passively listen to time updates after an initial round-trip calibrating exchange. NTP supplies a warning of any impending leap second adjustment, but no information about local time zones or daylight saving time is transmitted. The current protocol is version 4 (NTPv4), which is a proposed standard as documented in . It is backward compatible with version 3, specified in . Network Time Security (NTS), a secure version of NTP with TLS and AEAD is a proposed standard and documented in . History In 1979, network time synchronization technology was used in what was possibly the first public demonstration of Internet services running over a trans-Atlantic satellite network, at the National Computer Conference in New York. The technology was later described in the 1981 Internet Engineering Note (IEN) 173 and a public protocol was developed from it that was documented in . The technology was first deployed in a local area network as part of the Hello routing protocol and implemented in the Fuzzball router, an experimental operating system used in network prototyping, where it ran for many years. Other related network tools were available both then and now. They include the Daytime and Time protocols for recording the time of events, as well as the ICMP Timestamp messages and IP Timestamp option (). More complete synchronization systems, although lacking NTP's data analysis and clock disciplining algorithms, include the Unix daemon timed, which uses an election algorithm to appoint a server for all the clients; and the Digital Time Synchronization Service (DTSS), which uses a hierarchy of servers similar to the NTP stratum model. In 1985, NTP version 0 (NTPv0) was implemented in both Fuzzball and Unix, and the NTP packet header and round-trip delay and offset calculations, which have persisted into NTPv4, were documented in . Despite the relatively slow computers and networks available at the time, accuracy of better than 100 milliseconds was usually obtained on Atlantic spanning links, with accuracy of tens of milliseconds on Ethernet networks. In 1988, a much more complete specification of the NTPv1 protocol, with associated algorithms, was published in . 
It drew on the experimental results and clock filter algorithm documented in and was the first version to describe the client–server and peer-to-peer modes. In 1991, the NTPv1 architecture, protocol and algorithms were brought to the attention of a wider engineering community with the publication of an article by David L. Mills in the IEEE Transactions on Communications. In 1989, was published defining NTPv2 by means of a state machine, with pseudocode to describe its operation. It introduced a management protocol and cryptographic authentication scheme which have both survived into NTPv4, along with the bulk of the algorithm. However the design of NTPv2 was criticized for lacking formal correctness by the DTSS community, and the clock selection procedure was modified to incorporate Marzullo's algorithm for NTPv3 onwards. In 1992, defined NTPv3. The RFC included an analysis of all sources of error, from the reference clock down to the final client, which enabled the calculation of a metric that helps choose the best server where several candidates appear to disagree. Broadcast mode was introduced. In subsequent years, as new features were added and algorithm improvements were made, it became apparent that a new protocol version was required. In 2010, was published containing a proposed specification for NTPv4. The protocol has significantly progressed since then, and , an updated RFC has yet to be published. Following the retirement of Mills from the University of Delaware, the reference implementation is currently maintained as an open source project led by Harlan Stenn. Clock strata NTP uses a hierarchical, semi-layered system of time sources. Each level of this hierarchy is termed a stratum and is assigned a number starting with zero for the reference clock at the top. A server synchronized to a stratum n server runs at stratum n + 1. The number represents the distance from the reference clock and is used to prevent cyclical dependencies in the hierarchy. Stratum is not always an indication of quality or reliability; it is common to find stratum 3 time sources that are higher quality than other stratum 2 time sources. A brief description of strata 0, 1, 2 and 3 is provided below. Stratum 0 These are high-precision timekeeping devices such as atomic clocks, GNSS (including GPS) or other radio clocks. They generate a very accurate pulse per second signal that triggers an interrupt and timestamp on a connected computer. Stratum 0 devices are also known as reference clocks. NTP servers cannot advertise themselves as stratum 0. A stratum field set to 0 in NTP packet indicates an unspecified stratum. Stratum 1 These are computers whose system time is synchronized to within a few microseconds of their attached stratum 0 devices. Stratum 1 servers may peer with other stratum 1 servers for sanity check and backup. They are also referred to as primary time servers. Stratum 2 These are computers that are synchronized over a network to stratum 1 servers. Often a stratum 2 computer queries several stratum 1 servers. Stratum 2 computers may also peer with other stratum 2 computers to provide more stable and robust time for all devices in the peer group. Stratum 3 These are computers that are synchronized to stratum 2 servers. They employ the same algorithms for peering and data sampling as stratum 2, and can themselves act as servers for stratum 4 computers, and so on. The upper limit for stratum is 15; stratum 16 is used to indicate that a device is unsynchronized. 
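The stratum rule described above is simple enough to capture in a few lines. The following is a minimal, illustrative Python sketch (not taken from any NTP implementation) of how a host's advertised stratum follows from the stratum of the server it synchronizes to, using the conventions given in the text: 0 in a packet means unspecified, 15 is the highest usable stratum, and 16 means unsynchronized.

```python
# Minimal sketch of the stratum bookkeeping described above (illustrative only;
# constants follow the text: 0 = unspecified in a packet, 1 = primary server,
# 15 = highest usable stratum, 16 = unsynchronized).

MAX_STRATUM = 15
UNSYNCHRONIZED = 16

def client_stratum(upstream_stratum: int) -> int:
    """Stratum a host advertises after synchronizing to an upstream server."""
    if upstream_stratum == 0:          # 0 in a packet means "unspecified"
        return UNSYNCHRONIZED
    if upstream_stratum >= MAX_STRATUM:
        return UNSYNCHRONIZED          # beyond stratum 15 the chain is unusable
    return upstream_stratum + 1        # one step further from the reference clock

if __name__ == "__main__":
    for s in (1, 2, 3, 15, 0):
        print(f"upstream stratum {s:2d} -> client advertises {client_stratum(s)}")
```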
The NTP algorithms on each computer interact to construct a Bellman-Ford shortest-path spanning tree, to minimize the accumulated round-trip delay to the stratum 1 servers for all the clients. In addition to stratum, the protocol is able to identify the synchronization source for each server in terms of a reference identifier (refid). Timestamps The 64-bit timestamps used by NTP consist of a 32-bit part for seconds and a 32-bit part for the fractional second, giving a time scale that rolls over every 2^32 seconds (about 136 years) and a theoretical resolution of 2^-32 seconds (about 233 picoseconds). NTP uses an epoch of January 1, 1900. Therefore, the first rollover occurs on February 7, 2036. NTPv4 introduces a 128-bit date format: 64 bits for the seconds and 64 bits for the fractional second. The most significant 32 bits of this format are the Era Number, which resolves rollover ambiguity in most cases. According to Mills, "The 64-bit value for the fraction is enough to resolve the amount of time it takes a photon to pass an electron at the speed of light. The 64-bit second value is enough to provide unambiguous time representation until the universe goes dim." Clock synchronization algorithm A typical NTP client regularly polls one or more NTP servers. The client must compute its time offset and round-trip delay. The time offset θ is the positive or negative difference in absolute time between the two clocks (positive when the server clock is ahead of the client clock). It is defined by θ = ((t1 - t0) + (t2 - t3)) / 2, and the round-trip delay δ by δ = (t3 - t0) - (t2 - t1), where t0 is the client's timestamp of the request packet transmission, t1 is the server's timestamp of the request packet reception, t2 is the server's timestamp of the response packet transmission and t3 is the client's timestamp of the response packet reception. To derive the expression for the offset, note that, assuming each direction carries half the round-trip delay, for the request packet t1 = t0 + θ + δ/2, and for the response packet t3 = t2 - θ + δ/2. Solving for θ yields the definition of the time offset. The values for θ and δ are passed through filters and subjected to statistical analysis. Outliers are discarded and an estimate of time offset is derived from the best three remaining candidates. The clock frequency is then adjusted to reduce the offset gradually, creating a feedback loop. Accurate synchronization is achieved when both the incoming and outgoing routes between the client and the server have symmetrical nominal delay. If the routes do not have a common nominal delay, a systematic bias exists of half the difference between the forward and backward travel times. Software implementations Reference implementation The NTP reference implementation, along with the protocol, has been continuously developed for over 20 years. Backwards compatibility has been maintained as new features have been added. It contains several sensitive algorithms, especially to discipline the clock, that can misbehave when synchronized to servers that use different algorithms. The software has been ported to almost every computing platform, including personal computers. It runs as a daemon called ntpd under Unix or as a service under Windows. Reference clocks are supported and their offsets are filtered and analysed in the same way as remote servers, although they are usually polled more frequently. An audit of this implementation in 2017 found numerous potential security issues. SNTP Simple Network Time Protocol (SNTP) is a less complex implementation of NTP, using the same protocol but without requiring the storage of state over extended periods of time. 
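To make the exchange above concrete, the following Python sketch sends a single SNTP-style (mode 3) client request and computes θ and δ from the four timestamps. It is a minimal illustration, not a full NTP client: there is no filtering, authentication, poll management, or clock discipline, and the server name is just an example pool host rather than a recommendation.

```python
# Minimal SNTP (mode 3) query sketch illustrating the timestamp exchange and the
# offset/delay formulas above. Not a full NTP client: no filtering, authentication,
# poll management, or clock discipline. The server name is an example pool host.
import socket
import struct
import time

NTP_TO_UNIX_EPOCH = 2208988800          # seconds between 1900-01-01 and 1970-01-01

def ntp_to_unix(seconds: int, fraction: int) -> float:
    return seconds - NTP_TO_UNIX_EPOCH + fraction / 2**32

def sntp_query(server: str = "pool.ntp.org", port: int = 123, timeout: float = 5.0):
    # 48-byte request: LI = 0, version = 4, mode = 3 (client) packed into the first byte.
    request = bytes([0x23]) + bytes(47)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        t0 = time.time()                 # client transmit time
        sock.sendto(request, (server, port))
        data, _ = sock.recvfrom(48)
        t3 = time.time()                 # client receive time
    words = struct.unpack("!12I", data)  # twelve 32-bit big-endian words
    t1 = ntp_to_unix(words[8], words[9])     # server receive timestamp
    t2 = ntp_to_unix(words[10], words[11])   # server transmit timestamp
    offset = ((t1 - t0) + (t2 - t3)) / 2     # θ
    delay = (t3 - t0) - (t2 - t1)            # δ
    return offset, delay

if __name__ == "__main__":
    theta, delta = sntp_query()
    print(f"offset θ = {theta:+.6f} s, round-trip delay δ = {delta:.6f} s")
```

Run against a nearby server, a sketch like this typically reports an offset of a few milliseconds and a delay that reflects the network round trip.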
SNTP is used in some embedded systems and in applications where full NTP capability is not required. Windows Time All Microsoft Windows versions since Windows 2000 include the Windows Time service (W32Time), which can synchronize the computer clock to an NTP server. W32Time was originally implemented to support the Kerberos version 5 authentication protocol, which required time to be within 5 minutes of the correct value to prevent replay attacks. The version in Windows 2000 and Windows XP only implements SNTP, and violates several aspects of the NTP version 3 standard. Beginning with Windows Server 2003 and Windows Vista, W32Time became compatible with a significant subset of NTPv3. Microsoft states that W32Time cannot reliably maintain time synchronization with one-second accuracy. If higher accuracy is desired, Microsoft recommends using a newer version of Windows or a different NTP implementation. Beginning with Windows 10 version 1607 and Windows Server 2016, W32Time can be configured to reach time accuracy of 1 s, 50 ms or 1 ms under certain specified operating conditions. OpenNTPD In 2004, Henning Brauer presented OpenNTPD, an NTP implementation with a focus on security and a privilege-separated design. Whilst it is aimed more closely at the simpler generic needs of OpenBSD users, it also includes some protocol security improvements while still being compatible with existing NTP servers. A portable version is available in Linux package repositories. Ntimed A new NTP client, ntimed, was started by Poul-Henning Kamp in 2014 and abandoned in 2015. The new implementation was sponsored by the Linux Foundation as a replacement for the reference implementation, as it was determined to be easier to write a new implementation from scratch than to reduce the size of the reference implementation. Although it has not been officially released, ntimed can synchronize clocks reliably. NTPsec NTPsec is a fork of the reference implementation that has been systematically security-hardened. The fork point was in June 2015 and was in response to a series of compromises in 2014. The first production release shipped in October 2017. Between removal of unsafe features, removal of support for obsolete hardware, and removal of support for obsolete Unix variants, NTPsec has been able to pare away 75% of the original codebase, making the remainder easier to audit. A 2017 audit of the code showed eight security issues, including two that were not present in the original reference implementation, but NTPsec did not suffer from eight other issues that remained in the reference implementation. chrony chrony is installed by default in Red Hat distributions and is available in the Ubuntu repositories. It is aimed at ordinary computers, which are unstable, go into sleep mode or have an intermittent connection to the Internet. It is also designed for virtual machines, a much more unstable environment. It is characterized by low resource consumption (cost) and supports Precision Time Protocol hardware for greater timestamp precision. It has two main components: chronyd, a daemon that is executed when the computer starts, and chronyc, a command-line interface for its configuration. It has been evaluated as very safe, with just a few incidents; its advantage is the versatility of its code, written from scratch to avoid unnecessary complexity. Support for Network Time Security (NTS) was added in version 4.0. 
chrony is available under the GNU General Public License version 2; it was created by Richard Curnow in 1997 and is currently maintained by Miroslav Lichvar. Leap seconds On the day of a leap second event, ntpd receives notification from either a configuration file, an attached reference clock, or a remote server. Although the NTP clock is actually halted during the event, because of the requirement that time must appear to be strictly increasing, any processes that query the system time cause it to increase by a tiny amount, preserving the order of events. If a negative leap second should ever become necessary, it would be deleted with the sequence 23:59:58, 00:00:00, skipping 23:59:59. An alternative implementation, called leap smearing, consists of introducing the leap second incrementally over a period of 24 hours, from noon to noon in UTC time. This implementation is used by Google (both internally and on its public NTP servers) and by Amazon Web Services. Security concerns Only a few security problems have been identified in the reference implementation of the NTP codebase, but those that appeared in 2009 were cause for significant concern. The protocol has been undergoing revision and review throughout its history. The codebase for the reference implementation has undergone security audits from several sources for several years. A stack buffer overflow exploit was discovered and patched in 2014. Apple was concerned enough about this vulnerability that it used its auto-update capability for the first time. Some implementation errors are basic, such as a missing return statement in a routine, which can lead to unlimited access to systems that are running some versions of NTP in the root daemon. Systems that do not use the root daemon, such as those derived from Berkeley Software Distribution (BSD), are not subject to this flaw. A 2017 security audit of three NTP implementations, conducted on behalf of the Linux Foundation's Core Infrastructure Initiative, suggested that both NTP and NTPsec were more problematic than chrony from a security standpoint. NTP servers can be susceptible to man-in-the-middle attacks unless packets are cryptographically signed for authentication. The computational overhead involved can make this impractical on busy servers, particularly during denial-of-service attacks. NTP message spoofing from a man-in-the-middle attack can be used to alter clocks on client computers and allow a number of attacks based on bypassing of cryptographic key expiration. Some of the services affected by fake NTP messages identified are TLS, DNSSEC, various caching schemes (such as DNS cache), Border Gateway Protocol (BGP), Bitcoin and a number of persistent login schemes. NTP has been used in distributed denial-of-service attacks. A small query is sent to an NTP server with the return IP address spoofed to be the target address. Similar to the DNS amplification attack, the server responds with a much larger reply, allowing an attacker to substantially increase the amount of data being sent to the target. To avoid participating in an attack, NTP server software can be upgraded or servers can be configured to ignore external queries. To improve NTP security, a secure version called Network Time Security (NTS) was developed and is currently supported by several time servers. 
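As a rough illustration of the leap smearing technique described above (spreading the extra second linearly over the 24 hours from noon to noon UTC around the leap event), the following Python sketch computes how much of the leap second has been absorbed at a given moment. It is an assumption-laden sketch of the general idea, not the exact algorithm used by Google or Amazon.

```python
# Sketch of a linear leap smear over the 24 hours from noon to noon UTC around a
# leap-second insertion, as described above. Illustrative only; not the exact
# algorithm used by any particular provider.
from datetime import datetime, timedelta, timezone

def smear_offset(now: datetime, leap_midnight: datetime, leap: int = 1) -> float:
    """Fraction of the leap second absorbed by the smear at time `now`, in seconds."""
    start = leap_midnight - timedelta(hours=12)   # noon before the leap second
    end = leap_midnight + timedelta(hours=12)     # noon after the leap second
    if now <= start:
        return 0.0
    if now >= end:
        return float(leap)                        # full second absorbed
    fraction = (now - start) / (end - start)      # linear ramp from 0 to 1
    return leap * fraction

if __name__ == "__main__":
    # The leap second inserted at the end of 2016-12-31 is used as the example event.
    leap_midnight = datetime(2017, 1, 1, 0, 0, 0, tzinfo=timezone.utc)
    probe = datetime(2016, 12, 31, 18, 0, 0, tzinfo=timezone.utc)
    print(f"smear at {probe.isoformat()}: {smear_offset(probe, leap_midnight):.6f} s")
```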
See also Allan variance Clock network International Atomic Time IRIG timecode NITZ NTP pool Ntpdate Notes References Further reading External links Official Stratum One Time Servers list IETF NTP working group Microsft Windows accurate time guide and more Time and NTP paper NTP Survey 2005 Current NIST leap seconds file compatible with ntpd Application layer protocols Internet Standards Network time-related software
62854708
https://en.wikipedia.org/wiki/010%20Editor
010 Editor
010 Editor is a commercial hex editor and text editor for Microsoft Windows, Linux and macOS. Typically 010 Editor is used to edit text files, binary files, hard drives, processes, tagged data (e.g. XML, HTML), source code (e.g. C++, PHP, JavaScript), shell scripts (e.g. Bash, batch files), log files, etc. A large variety of binary data formats can be edited through the use of Binary Templates. The software uses a tabbed document interface for displaying text and binary files. Full search and replace with regular expressions is supported along with comparisons, histograms, checksum/hash algorithms, and column mode editing. Different character encodings including ASCII, Unicode, and UTF-8 are supported including conversions between encodings. The software is scriptable using a language similar to ANSI C. Originally created in 2003 by Graeme Sweet, 010 Editor was designed to fix problems in large multibeam bathymetry datasets used in ocean visualization. The software was designed around the idea of Binary Templates. A text editor was added in 2008. 010 Editor is available as Trialware and can be run for free for 30 days. After 30 days a license must be purchased to continue using the software. Binary Templates A Binary Template is a text file containing a series of structs similar to ANSI C. The main difference between ANSI C is that structs in Binary Templates may contain control statements such as if, for or while. When 010 Editor executes a Binary Template on a binary data file, each variable defined in the Binary Template is mapped to a set of bytes in the binary file and added to a hierarchical tree structure. The tree structure can then be used to view and edit data in the binary file in an easier fashion than using the raw hex bytes. Binary Templates typically have a '.bt' extension. 010 Editor has an online repository of Binary Templates containing over 80 formats. When a binary file is opened in 010 Editor and a Binary Template exists for the file, the software can automatically download and install the Template. Templates can also be added to the repository or updated directly from the software. Technology Data files in 010 Editor are stored as a series of blocks, where each block can either point to a block of data somewhere on disk or in memory. When a large section of data from a binary file is copied to another binary file, a new block pointer is inserted into the file but the actual data is not copied. This scheme allows partial loading of files from disk and is also used to provide unlimited undo and redo. Currently when large text blocks are opened or copied the data is scanned for linefeeds, meaning there may be a delay before editing can resume. 010 Editor uses the Qt library to provide multi-platform support. Features Edit text files, hex files, processes, physical and logical drives Multiple files shown as draggable tabs which can be organized in tab groups Large file support (50 GB+ for text files, 8 Exabytes for hex files) Find and Replace with various data types and regular expressions Find and Replace across multiples files Unlimited undo and redo Column Mode Editing Supports 30 different character encodings (e.g. 
ASCII, ANSI, Unicode, UTF-8) plus custom encodings and conversions ASCII, Unix, Mac and Unicode linefeed support including visualizing whitespace Comparisons and histograms Inspector for interpreting bytes as different data types Scriptable using a language similar to ANSI C Scripts can be shared online and downloaded using an integrated online repository Syntax highlighters can be created, shared and downloaded through the online repository Bookmarks can be created using different data types Edit NTFS, FAT, exFAT, and HFS drives using templates Checksum/Hash algorithms including CRC-16, CRC-32, Adler32, MD2, MD4, MD5, RIPEMD160, SHA-1, SHA-256, SHA-512, TIGER Import or export hex data in Intel Hex Format, Motorola S-Records, Hex Text, C/C++/Java Code, Base64, Uuencoding, RTF, or HTML Arithmetic and bitwise operations on hex data Printing with header, footer and margin control Integrated debugger for finding problems with Binary Templates and scripts Portable version for running from USB drives Dark and light themes See also Hex editor Comparison of hex editors Text editor List of text editors Comparison of text editors References External links Introduction to Binary Templates Binary Templates Repository Hex editors Windows text editors MacOS text editors Linux text editors Programming tools for Windows Data recovery software
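To give a flavour of what the Binary Templates described above do, declaring a C-like structure that the editor maps onto the raw bytes of a file, here is a rough Python analogue using the standard struct module. The header layout is hypothetical, invented purely for illustration; real templates are written in 010 Editor's own C-like template language rather than Python.

```python
# Rough Python analogue of a Binary Template: declare a fixed layout and map it
# onto raw bytes. The header layout below is hypothetical and for illustration
# only; real 010 Editor templates use its C-like template language.
import struct
from collections import namedtuple

# Hypothetical file header: 4-byte magic, uint16 version, uint16 flags, uint32 count
HEADER_FORMAT = "<4sHHI"              # little-endian fields, as a template might declare
Header = namedtuple("Header", "magic version flags record_count")

def parse_header(data: bytes) -> Header:
    """Map the first bytes of a file onto the declared structure."""
    size = struct.calcsize(HEADER_FORMAT)
    return Header(*struct.unpack(HEADER_FORMAT, data[:size]))

if __name__ == "__main__":
    sample = struct.pack(HEADER_FORMAT, b"DEMO", 3, 0x0001, 42)
    print(parse_header(sample))
```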
3585323
https://en.wikipedia.org/wiki/Paul%20Hackett%20%28American%20football%29
Paul Hackett (American football)
Paul Roger Hackett (born July 5, 1947) is a former American football coach. He served as head football coach of University of Pittsburgh from 1989 to 1992 and at the University of Southern California (USC) from 1998 to 2000. Hackett was quarterbacks coach or offensive coordinator for the San Francisco 49ers, Dallas Cowboys, Kansas City Chiefs, Cleveland Browns, New York Jets, Tampa Bay Buccaneers, and Oakland Raiders. Hackett began his college coaching career at his alma mater, the University of California, Davis, in 1969, assisting the freshmen in the first year and then directing them to a 13–0 mark over the next two seasons under College Football Hall of Fame coach Jim Sochor. He then was an assistant at University of California, Berkeley for four years (1972–1975), the first season as a graduate assistant, the next as the receivers coach and the final two as the quarterbacks coach. Then, at age 29, he moved to USC for five years (1976–1980) as an assistant coach under John Robinson. Hackett then began in the NFL as offensive coordinator for the Cleveland Browns (1981–82), followed by a stint as quarterbacks/receivers coach for the San Francisco 49ers (1983–85)—during which he coached Joe Montana in the 1984 Super Bowl victory—and as offensive coordinator for the Dallas Cowboys (1986–1988). From 1989 to 1992 Hackett was the head football coach at the University of Pittsburgh. He replaced Mike Gottfried, whom he had served as offensive coordinator, just prior to the 1989 John Hancock Bowl, which resulted in a Pittsburgh victory over Texas A&M. Hackett then moved back to the NFL as offensive coordinator for the Kansas City Chiefs from 1993 to 1997. He was instrumental in acquiring his quarterback from the 49ers, Joe Montana, to play for the Chiefs from 1993 to 1994. The Chiefs made the playoffs four of five seasons, ranking fifth in offense in his last year. Hackett moved back to college football as head coach at USC from 1998 until 2000, prior to Pete Carroll taking over. During the first season he guided the Trojans to the 1998 Sun Bowl, losing in a major upset to TCU. Hackett's final two years at the school were difficult, as the fans and alumni base turned against him. His 1999 and 2000 Trojans football teams were the first USC teams to have consecutive non-winning seasons since 1960 and 1961. The 2000 team was tied for last place in the Pacific-10 Conference. His winning percentage as USC coach was .514, compared to the school's then all-time win percentage of .691. USC fired Hackett on November 27, 2000; to do so, it spent $800,000 to buy out the remaining two years of his five-year, $3.5-million contract. Hackett felt he was clearly not given enough time to rebuild and develop his recruits, such as Carson Palmer. "In two years, I expect to see this team explode," he said. He was proved correct; by 2002, the Trojans were ranked fourth in the country with a team built primarily around Hackett's recruits. After leaving USC, Hackett again returned to the NFL, serving as the offensive coordinator for the New York Jets from 2001 to 2004. He was then the Tampa Bay quarterbacks coach from 2005 to 2007. From 2008 to 2010, Hackett worked as the quarterback coach for the Oakland Raiders, after which he retired from coaching. Hackett is married and has two sons, David and Nathaniel. Nathaniel Hackett has followed in his father's footsteps by coaching quarterbacks at the college and professional levels before being hired to lead the Denver Broncos in 2022. 
Head coaching record *Hackett only coached the John Hancock Bowl, replacing Mike Gottfried.**Final game of season coached by Sal Sunseri References External links Tampa Bay Buccaneers bio (2007) 1947 births Living people American football quarterbacks Cleveland Browns coaches California Golden Bears football coaches Dallas Cowboys coaches Kansas City Chiefs coaches New York Jets coaches Pittsburgh Panthers football coaches San Francisco 49ers coaches Tampa Bay Buccaneers coaches UC Davis Aggies football coaches UC Davis Aggies football players USC Trojans football coaches National Football League offensive coordinators Sportspeople from Burlington, Vermont
5645470
https://en.wikipedia.org/wiki/History%20of%20cartography
History of cartography
The history of cartography traces the development of cartography, or mapmaking technology, in human history. Maps have been one of the most important human inventions for millennia, allowing humans to explain and navigate their way through the world. The earliest surviving maps include cave paintings and etchings on tusk and stone, followed by extensive maps produced by ancient Babylon, Greece and Rome, China, and India. In their most simple form maps are two dimensional constructs, however since the age of Classical Greece maps have also been projected onto a three-dimensional sphere known as a globe. The Mercator Projection, developed by Flemish geographer Gerardus Mercator, was widely used as the standard two-dimensional projection of the earth for world maps until the late 20th century, when more accurate projections were formulated. Mercator was also the first to use and popularise the concept of the atlas as a collection of maps. Modern methods of transportation, the use of surveillance aircraft, and more recently the availability of satellite imagery have made documentation of many areas possible that were previously inaccessible. Free online services such as Google Earth have made accurate maps of the world more accessible than ever before. Etymology The English term cartography is modern, borrowed from the French cartographie in the 1840s, itself based on Middle Latin carta "map". Earliest known maps Candidates for the oldest surviving map include: A map-like representation of a mountain, river, valleys and routes around Pavlov in the Czech Republic, carved on a mammoth tusk, has been dated to 25,000 BC,. An Aboriginal Australian cylcon that may be as much as 20,000 BC years old is thought to depict the Darling River. The map etched on a mammoth bone at Mezhyrich is c.15,000 years old. Dots dating to 14,500 BC found on the walls of the Lascaux caves map out part of the night sky, including the three bright stars Vega, Deneb, and Altair (the Summer Triangle asterism), as well as the Pleiades star cluster. The Cuevas de El Castillo in Spain contain a dot map of the Corona Borealis constellation dating from 12,000 BC. A polished chunk of sandstone from a cave in Spanish Navarre, dated to 14,000 BC, may represent simple visual elements that may have aided in recognizing landscape features, such as hills or dwellings, superimposed on animal etchings. Alternatively, it may also represent a spiritual landscape, or simple incisings. Another ancient picture that resembles a map was created in the late 7th millennium BC in Çatalhöyük, Anatolia, modern Turkey. This wall painting may represent a plan of this Neolithic village; however, recent scholarship has questioned the identification of this painting as a map. Ancient Near East Maps in Ancient Babylonia were made by using accurate surveying techniques. For example, a 7.6 × 6.8 cm clay tablet found in 1930 at Ga-Sur, near contemporary Kirkuk, shows a map of a river valley between two hills. Cuneiform inscriptions label the features on the map, including a plot of land described as 354 iku (12 hectares) that was owned by a person called Azala. Most scholars date the tablet to the 25th to 24th century BC; Leo Bagrow dissents with a date of 7000 BC. Hills are shown by overlapping semicircles, rivers by lines, and cities by circles. The map also is marked to show the cardinal directions. An engraved map from the Kassite period (14th–12th centuries BC) of Babylonian history shows walls and buildings in the holy city of Nippur. 
In contrast, the Babylonian World Map, the earliest surviving map of the world (c. 600 BC), is a symbolic, not a literal representation. It deliberately omits peoples such as the Persians and Egyptians, who were well known to the Babylonians. The area shown is depicted as a circular shape surrounded by water, which fits the religious image of the world in which the Babylonians believed. Phoenician sailors made major advances in seafaring and exploration. It is recorded that the first circumnavigation of Africa was possibly undertaken by Phoenician explorers employed by Egyptian pharaoh Necho II c. 610–595 BC. In The Histories, written 431–425 BC, Herodotus cast doubt on a report of the Sun observed shining from the north. He stated that the phenomenon was observed by Phoenician explorers during their circumnavigation of Africa (The Histories, 4.42) who claimed to have had the Sun on their right when circumnavigating in a clockwise direction. To modern historians, these details confirm the truth of the Phoenicians' report, and even suggest the possibility that the Phoenicians knew about the spherical Earth model. However, nothing certain about their knowledge of geography and navigation has survived. The historian Dmitri Panchenko theorizes that it was the Phoenician circumnavigation of Africa that inspired the theory of a spherical Earth by the 5th century BC. Ancient Greece In reviewing the literature of early geography and early conceptions of the earth, all sources lead to Homer, who is considered by many (Strabo, Kish, and Dilke) as the founding father of Geography. Regardless of the doubts about Homer's existence, one thing is certain: he never was a mapmaker. The depiction of the Earth conceived by Homer, which was accepted by the early Greeks, represents a circular flat disk surrounded by a constantly moving stream of Ocean, an idea which would be suggested by the appearance of the horizon as it is seen from a mountaintop or from a seacoast. Homer's knowledge of the Earth was very limited. He and his Greek contemporaries knew very little of the Earth beyond Egypt as far south as the Libyan desert, the south-west coast of Asia Minor, and the northern boundary of the Greek homeland. Furthermore, the coast of the Black Sea was only known through myths and legends that circulated during his time. In his poems there is no mention of Europe and Asia as geographical concepts. That is why the big part of Homer's world that is portrayed on this interpretive map represents lands that border on the Aegean Sea. It is worth noting that even though Greeks believed that they were in the middle of the earth, they also thought that the edges of the world's disk were inhabited by savage, monstrous barbarians and strange animals and monsters; Homer's Odyssey mentions a great many of them. Additional statements about ancient geography may be found in Hesiod's poems, probably written during the 8th century BC. Through the lyrics of Works and Days and Theogony he shows to his contemporaries some definite geographical knowledge. He introduces the names of such rivers as Nile, Ister (Danube), the shores of the Bosporus, and the Euxine (Black Sea), the coast of Gaul, the island of Sicily, and a few other regions and rivers. His advanced geographical knowledge not only had predated Greek colonial expansions, but also was used in the earliest Greek world maps, produced by Greek mapmakers such as Anaximander and Hecataeus of Miletus, and Ptolemy using both observations by explorers and a mathematical approach. 
Early steps in the development of intellectual thought in ancient Greece belonged to Ionians from their well-known city of Miletus in Asia Minor. Miletus was placed favourably to absorb aspects of Babylonian knowledge and to profit from the expanding commerce of the Mediterranean. The earliest ancient Greek who is said to have constructed a map of the world is Anaximander of Miletus (c. 611–546 BC), pupil of Thales. He believed that the earth was a cylindrical form, like a stone pillar and suspended in space. The inhabited part of his world was circular, disk-shaped, and presumably located on the upper surface of the cylinder. Anaximander was the first ancient Greek to draw a map of the known world. It is for this reason that he is considered by many to be the first mapmaker. A scarcity of archaeological and written evidence prevents us from giving any assessment of his map. What we may presume is that he portrayed land and sea in a map form. Unfortunately, any definite geographical knowledge that he included in his map is lost as well. Although the map has not survived, Hecataeus of Miletus (550–475 BC) produced another map fifty years later that he claimed was an improved version of the map of his illustrious predecessor. Hecatæus's map describes the earth as a circular plate with an encircling Ocean and Greece in the centre of the world. This was a very popular contemporary Greek worldview, derived originally from the Homeric poems. Also, similar to many other early maps in antiquity his map has no scale. As units of measurements, this map used "days of sailing" on the sea and "days of marching" on dry land. The purpose of this map was to accompany Hecatæus's geographical work that was called Periodos Ges, or Journey Round the World. Periodos Ges was divided into two books, "Europe" and "Asia", with the latter including Libya, the name of which was an ancient term for all of known Africa. The work follows the assumption of the author that the world was divided into two continents, Asia and Europe. He depicts the line between the Pillars of Hercules through the Bosporus, and the Don River as a boundary between the two. Hecatæus is the first known writer who thought that the Caspian flows into the circumference ocean—an idea that persisted long into the Hellenic period. He was particularly informative on the Black Sea, adding many geographic places that already were known to Greeks through the colonization process. To the north of the Danube, according to Hecatæus, were the Rhipæan (gusty) Mountains, beyond which lived the Hyperboreans—peoples of the far north. Hecatæus depicted the origin of the Nile River at the southern circumference ocean. His view of the Nile seems to have been that it came from the southern circumference ocean. This assumption helped Hecatæus solve the mystery of the annual flooding of the Nile. He believed that the waves of the ocean were a primary cause of this occurrence. It is worth mentioning that a similar map based upon one designed by Hecataeus was intended to aid political decision-making. According to Herodotus, it was engraved upon a bronze tablet and was carried to Sparta by Aristagoras during the revolt of the Ionian cities against Persian rule from 499 to 494 BC. Anaximenes of Miletus (6th century BC), who studied under Anaximander, rejected the views of his teacher regarding the shape of the earth and instead, he visualized the earth as a rectangular form supported by compressed air. Pythagoras of Samos (c. 
560–480 BC) speculated about the notion of a spherical earth with a central fire at its core. He is sometimes incorrectly credited with the introduction of a model that divides a spherical earth into five zones: one hot, two temperate, and two cold—northern and southern. This idea, known as the zonal theory of climate, is more likely to have originated at the time of Aristotle. Scylax, a sailor, made a record of his Mediterranean voyages in c. 515 BC. This is the earliest known set of Greek periploi, or sailing instructions, which became the basis for many future mapmakers, especially in the medieval period. The way in which the geographical knowledge of the Greeks advanced from the previous assumptions of the Earth's shape was through Herodotus and his conceptual view of the world. This map also did not survive and many have speculated that it was never produced. A possible reconstruction of his map is displayed below. Herodotus traveled very extensively, collecting information and documenting his findings in his books on Europe, Asia, and Libya. He also combined his knowledge with what he learned from the people he met. Herodotus wrote his Histories in the mid-5th century BC. Although his work was dedicated to the story of long struggle of the Greeks with the Persian Empire, Herodotus also included everything he knew about the geography, history, and peoples of the world. Thus, his work provides a detailed picture of the known world of the 5th century BC. Herodotus rejected the prevailing view of most 5th-century BC maps that the earth is a circular plate surrounded by Ocean. In his work he describes the earth as an irregular shape with oceans surrounding only Asia and Africa. He introduces names such as the Atlantic Sea, and the Erythrean Sea (which translates as the Red Sea). He also divided the world into three continents: Europe, Asia, and Africa. He depicted the boundary of Europe as the line from the Pillars of Hercules through the Bosphorus and the area between the Caspian Sea and the Indus River. He regarded the Nile as the boundary between Asia and Africa. He speculated that the extent of Europe was much greater than was assumed at the time and left Europe's shape to be determined by future research. In the case of Africa, he believed that, except for the small stretch of land in the vicinity of Suez, the continent was in fact surrounded by water. However, he definitely disagreed with his predecessors and contemporaries about its presumed circular shape. He based his theory on the story of Pharaoh Necho II, the ruler of Egypt between 609 and 594 BC, who had sent Phoenicians to circumnavigate Africa. Apparently, it took them three years, but they certainly did prove his idea. He speculated that the Nile River started as far west as the Ister River (Danube) in Europe and cut Africa through the middle. He was the first writer to assume that the Caspian Sea was separated from other seas and he recognised northern Scythia as one of the coldest inhabited lands in the world. Similar to his predecessors, Herodotus also made mistakes. He accepted a clear distinction between the civilized Greeks in the centre of the earth and the barbarians on the world's edges. In his Histories we can see very clearly that he believed that the world became stranger and stranger when one traveled away from Greece, until one reached the ends of the earth, where humans behaved as savages. 
While various previous Greek philosophers presumed the earth to be spherical, Aristotle (384–322 BC) is credited with proving the Earth's sphericity. His arguments may be summarized as follows: The lunar eclipse is always circular Ships seem to sink as they move away from view and pass the horizon Some stars can be seen only from certain parts of the Earth. Hellenistic Mediterranean A vital contribution to mapping the reality of the world came with a scientific estimate of the circumference of the earth. This event has been described as the first scientific attempt to give geographical studies a mathematical basis. The man credited for this achievement was Eratosthenes (275–195 BC), a Greek scholar who lived in Hellenistic North Africa. As described by George Sarton, historian of science, "there was among them [Eratosthenes's contemporaries] a man of genius but as he was working in a new field they were too stupid to recognize him". His work, including On the Measurement of the Earth and Geographica, has only survived in the writings of later philosophers such as Cleomedes and Strabo. He was a devoted geographer who set out to reform and perfect the map of the world. Eratosthenes argued that accurate mapping, even if in two dimensions only, depends upon the establishment of accurate linear measurements. He was the first to calculate the Earth's circumference (within 0.5 percent accuracy). His great achievement in the field of cartography was the use of a new technique of charting with meridians, his imaginary north–south lines, and parallels, his imaginary west–east lines. These axis lines were placed over the map of the earth with their origin in the city of Rhodes and divided the world into sectors. Then, Eratosthenes used these earth partitions to reference places on the map. He also divided Earth into five climatic regions which was proposed at least as early as the late sixth or early fifth century BC by Parmenides: a torrid zone across the middle, two frigid zones at extreme north and south, and two temperate bands in between. He was likely also the first person to use the word "geography". Roman Empire Pomponius Mela Pomponius Mela is unique among ancient geographers in that, after dividing the earth into five zones, of which two only were habitable, he asserts the existence of antichthones, inhabiting the southern temperate zone inaccessible to the folk of the northern temperate regions from the unbearable heat of the intervening torrid belt. On the divisions and boundaries of Europe, Asia and Africa, he repeats Eratosthenes; like all classical geographers from Alexander the Great (except Ptolemy) he regards the Caspian Sea as an inlet of the Northern Ocean, corresponding to the Persian Gulf and the Red Sea on the south. Marinus of Tyre Marinus of Tyre was a Hellenized Phoenician geographer and cartographer.<ref>“Notes on Ancient Times in Malaya” Roland Braddell. Journal of the Malayan Branch of the Royal Asiatic Society, Vol. 23, No. 3 (153) 1947 (1950), p. 9</ref> He founded mathematical geography and provided the underpinnings of Ptolemy's influential Geographia. Marinus's geographical treatise is lost and known only from Ptolemy's remarks. He introduced improvements to the construction of maps and developed a system of nautical charts. His chief legacy is that he first assigned to each place a proper latitude and longitude. His zero meridian ran through the westernmost land known to him, the Isles of the Blessed around the location of the Canary or Cape Verde Islands. 
He used the parallel of Rhodes for measurements of latitude. Ptolemy mentions several revisions of Marinus's geographical work, which is often dated to AD 114, although this is uncertain. Marinus estimated a length of 180,000 stadia for the equator, roughly corresponding to a circumference of the Earth of 33,300 km, about 17% less than the actual value. He also carefully studied the works of his predecessors and the diaries of travelers. His maps were the first in the Roman Empire to show China. He also invented the equirectangular projection, which is still used in map creation today. A few of Marinus's opinions are reported by Ptolemy. Marinus was of the opinion that the World Ocean was separated into an eastern and a western part by the continents of Europe, Asia and Africa. He thought that the inhabited world stretched in latitude from Thule (Norway) to Agisymba (around the Tropic of Capricorn) and in longitude from the Isles of the Blessed (around the Canaries) to Shera (China). Marinus also coined the term Antarctic, referring to the opposite of the Arctic Circle.
Ptolemy
Ptolemy (90–168), a Hellenized Egyptian (George Sarton (1936), "The Unity and Diversity of the Mediterranean World", Osiris 2, pp. 406–463 [429]), thought that, with the aid of astronomy and mathematics, the earth could be mapped very accurately. Ptolemy revolutionized the depiction of the spherical earth on a map by using perspective projection, and suggested precise methods for fixing the position of geographic features on its surface using a coordinate system with parallels of latitude and meridians of longitude. Ptolemy's eight-volume atlas Geographia is a prototype of modern mapping and GIS. It included an index of place-names, with the latitude and longitude of each place to guide the search, scale, conventional signs with legends, and the practice of orienting maps so that north is at the top and east to the right of the map—an almost universal custom today. Yet for all his important innovations, Ptolemy was not infallible. His most important error was a miscalculation of the circumference of the earth. He believed that Eurasia covered 180° of the globe, which convinced Christopher Columbus to sail across the Atlantic to look for a simpler and faster way to travel to India. Had Columbus known that the true figure was much greater, it is conceivable that he would never have set out on his momentous voyage.
Tabula Peutingeriana
In 2007, the Tabula Peutingeriana, a 12th-century replica of a 5th-century road map, was placed on the UNESCO Memory of the World Register and displayed to the public for the first time. Although the scroll is well preserved and believed to be an accurate copy of an authentic original, it is on media that is now so delicate that it must be protected at all times from exposure to daylight.
China
The earliest known maps to have survived in China date to the 4th century BC. In 1986, seven ancient Chinese maps were found in an archeological excavation of a Qin State tomb in what is now Fangmatan, in the vicinity of Tianshui City, Gansu province. Before this find, the earliest extant maps that were known came from the Mawangdui Han tomb excavation in 1973, which found three maps on silk dated to the 2nd century BC in the early Han Dynasty. The 4th-century BC maps from the State of Qin were drawn with black ink on wooden blocks.
These blocks fortunately survived in soaking conditions due to underground water that had seeped into the tomb; the quality of the wood had much to do with their survival. After two years of slow-drying techniques, the maps were fully restored. The territories shown in the seven Qin maps overlap one another. The maps display tributary river systems of the Jialing River in Sichuan province, in a total measured area of 107 by 68 km. The maps feature rectangular symbols encasing character names for the locations of administrative counties. Rivers and roads are displayed with similar line symbols; this makes interpreting the map somewhat difficult, although the labels of rivers placed in order of stream flow are helpful to modern-day cartographers. These maps also feature locations where different types of timber can be gathered, while two of the maps give the distances to the timber sites. In light of this, these maps are perhaps the oldest economic maps in the world, since they predate Strabo's economic maps.
In addition to the seven maps on wooden blocks found at Tomb 1 of Fangmatan, a fragment of a paper map was found on the chest of the occupant of Tomb 5 of Fangmatan in 1986. This tomb is dated to the early Western Han, so the map dates to the early 2nd century BC. The map shows topographic features such as mountains, waterways and roads, and is thought to cover the area of the preceding Qin Kingdom.
Earliest geographical writing
The earliest known Chinese geographical writing dates back to the 5th century BC, during the beginning of the Warring States period (481–221 BC). This was the Yu Gong or Tribute of Yu chapter of the Shu Jing or Book of Documents. The book describes the traditional nine provinces, their kinds of soil, their characteristic products and economic goods, their tributary goods, their trades and vocations, their state revenues and agricultural systems, and the various rivers and lakes listed and placed accordingly. The nine provinces in the time of this geographical work were very small in size compared to their modern Chinese counterparts. The Yu Gong's descriptions pertain to the areas of the Yellow River and the lower valleys of the Yangzi, with the plain between them and the Shandong Peninsula; to the west, the northernmost parts of the Wei River and the Han River were known (along with the southern parts of modern-day Shanxi province).
Earliest known reference to a map (圖 tú)
The oldest reference to a map in China comes from the 3rd century BC. It concerns an event of 227 BC in which Crown Prince Dan of Yan had his assassin Jing Ke visit the court of the ruler of the State of Qin, who would become the first leader to unify China, Qin Shi Huang (r. 221–210 BC). Jing Ke was to present the ruler of Qin with a district map painted on a silk scroll, rolled up and held in a case where he hid his assassin's dagger. Handing over the map of the designated territory was the first diplomatic act in submitting that district to Qin rule. Jing then tried and failed to kill him. From then on, maps were frequently mentioned in Chinese sources.
Han Dynasty
The three Han Dynasty maps found at Mawangdui differ from the earlier Qin State maps. While the Qin maps place the cardinal direction of north at the top of the map, the Han maps are oriented with south at the top.
The Han maps are also more complex, since they cover a much larger area, employ a large number of well-designed map symbols, and include additional information on local military sites and the local population. The Han maps also note measured distances between certain places, but a formal graduated scale and rectangular grid system for maps would not be used—or at least described in full—until the 3rd century (see Pei Xiu below). Among the three maps found at Mawangdui was a small map representing the tomb area where it was found, a larger topographical map showing the Han's borders along the subordinate Kingdom of Changsha and the Nanyue kingdom (of northern Vietnam and parts of modern Guangdong and Guangxi), and a map which marks the positions of Han military garrisons that were employed in an attack against Nanyue in 181 BC.
An early text that mentioned maps was the Rites of Zhou. Although attributed to the era of the Zhou Dynasty, its first recorded appearance was in the libraries of Prince Liu De (c. 130 BC), and it was compiled and commented on by Liu Xin in the 1st century AD. It outlined the use of maps that were made for governmental provinces and districts, principalities and frontier boundaries, and even pinpointed locations of ores and minerals for mining facilities. Upon the investiture of three of his sons as feudal princes in 117 BC, Emperor Wu of Han had maps of the entire empire submitted to him. From the 1st century AD onwards, official Chinese historical texts contained a geographical section (Diliji 地理纪), which was often an enormous compilation of changes in place-names and local administrative divisions controlled by the ruling dynasty, descriptions of mountain ranges, river systems, taxable products, etc. From the 5th-century BC Shu Jing forward, Chinese geographical writing provided more concrete information and fewer legendary elements. An example can be seen in the 4th chapter of the Huainanzi (Book of the Master of Huainan), compiled under the editorship of Prince Liu An in 139 BC during the Han Dynasty (202 BC–220 AD). The chapter gave general descriptions of topography in a systematic fashion, with visual aids provided by maps (di tu) produced through the efforts of Liu An and his associate Zuo Wu. Chang Chu's Hua Yang Guo Chi (Historical Geography of Szechuan) of 347 described not only rivers, trade routes, and various tribes, but also mentioned a 'Ba Jun Tu Jing' ('Map of Szechuan'), which had been made much earlier, in 150. Local mapmaking, such as that of Sichuan mentioned above, became a widespread tradition of Chinese geographical works by the 6th century, as noted in the bibliography of the Sui Shu. It was during this time, the period of the Southern and Northern Dynasties, that Liang Dynasty (502–557) cartographers also began carving maps into stone steles (alongside the maps already drawn and painted on paper and silk).
Pei Xiu, the 'Ptolemy of China'
In the year 267, Pei Xiu (224–271) was appointed as the Minister of Works by Emperor Wu of Jin, the first emperor of the Jin Dynasty. Pei is best known for his work in cartography. Although map making and use of the grid existed in China before him, he was the first to mention a plotted geometrical grid and graduated scale displayed on the surface of maps to gain greater accuracy in the estimated distance between different locations. Pei outlined six principles that should be observed when creating maps, two of which included the rectangular grid and the graduated scale for measuring distance.
Western historians compare him to the Greek Ptolemy for his contributions to cartography. However, Howard Nelson states that, although the accounts of earlier cartographic works by the inventor and official Zhang Heng (78–139) are somewhat vague and sketchy, there is ample written evidence that Pei Xiu derived the use of the rectangular grid reference from the maps of Zhang Heng. Later Chinese ideas about the quality of maps made during the Han Dynasty and before stem from the assessment given by Pei Xiu. Pei Xiu noted that the extant Han maps at his disposal were of little use, since they featured too many inaccuracies and exaggerations in measured distance between locations. However, the Qin State maps and the Mawangdui maps of the Han era were far superior in quality to those examined by Pei Xiu. It was not until the 20th century that Pei Xiu's 3rd-century assessment of earlier maps' dismal quality would be overturned and disproven. The Qin and Han maps did have a degree of accuracy in scale and pinpointed location, but the major improvement in Pei Xiu's work and that of his contemporaries was expressing topographical elevation on maps.
Sui Dynasty
In the year 605, during the Sui Dynasty (581–618), the Commercial Commissioner Pei Ju (547–627) created a famous geometrically gridded map. In 610 Emperor Yang of Sui ordered government officials from throughout the empire to document in gazetteers the customs, products, and geographical features of their local areas and provinces, providing descriptive writing and drawing them all onto separate maps, which would be sent to the imperial secretariat in the capital city.
Tang Dynasty
The Tang Dynasty (618–907) also had its fair share of cartographers, including the works of Xu Jingzong in 658, Wang Mingyuan in 661, and Wang Zhongsi in 747. Arguably the greatest geographer and cartographer of the Tang period was Jia Dan (730–805), whom Emperor Dezong of Tang entrusted in 785 with completing a map of China together with its recently lost inland colonies in Central Asia; the massive and detailed work, completed in 801, was called the Hai Nei Hua Yi Tu (Map of both Chinese and Barbarian Peoples within the (Four) Seas). The map was enormous in its dimensions, mapped out on a grid scale in which each grid interval equaled 100 li (a traditional Chinese unit of distance, roughly comparable to the mile or kilometer). Jia Dan is also known for having described the Persian Gulf region in great detail, along with the lighthouses that were erected at the mouth of the Persian Gulf by the medieval Iranians in the Abbasid period.
Song Dynasty
During the Song Dynasty (960–1279), Emperor Taizu of Song ordered Lu Duosun in 971 to update and 're-write all the Tu Jing in the world', a seemingly daunting task for one individual, who was sent throughout the provinces to collect texts and as much data as possible. With the aid of Song Zhun, the massive work was completed in 1010, with some 1566 chapters. The project was later recorded in the Song Shi historical text. Like the earlier Liang Dynasty stone-stele maps (mentioned above), there were large and intricately carved stone-stele maps of the Song period. One example is the squared stone-stele map carved by an anonymous artist in 1137, which follows a grid scale of 100 li for each grid square. What is truly remarkable about this map is the incredibly precise detail of coastal outlines and river systems in China (refer to Needham's Volume 3, Plate LXXXI for an image).
The map shows 500 settlements and a dozen rivers in China, and extends as far as Korea and India. On the reverse, a copy of a more ancient map uses grid coordinates at a scale of 1:1,500,000 and shows the coastline of China with great accuracy. The famous 11th-century scientist and polymath statesman Shen Kuo (1031–1095) was also a geographer and cartographer. His largest atlas included twenty-three maps of China and foreign regions that were drawn at a uniform scale of 1:900,000. Shen also created a three-dimensional raised-relief map using sawdust, wood, beeswax, and wheat paste to represent the topography and specific locations of a frontier region for the imperial court. Shen Kuo's contemporary, Su Song (1020–1101), was a cartographer who created detailed maps in order to resolve a territorial border dispute between the Song Dynasty and the Liao Dynasty.
Yuan Dynasty (Mongol Empire)
In the Mongol Empire, Mongol scholars, together with Persian and Chinese cartographers and other foreign colleagues, created maps, geographical compendia, and travel accounts. Rashid-al-Din Hamadani's geographical compendium, the "Suvar al-aqalim", constituted volume four of the Collected Chronicles of the Ilkhanate in Persia. His work describes the borders of the seven climes of the Old World, as well as rivers, major cities, places, climate, and Mongol yams (relay stations). The Great Khan Khubilai's ambassador and minister, Bolad, assisted Rashid's work concerning the Mongols and Mongolia. Thanks to the Pax Mongolica, easterners and westerners within the Mongol dominions were able to gain access to one another's geographical materials. The Mongols required the nations they conquered to send geographical maps to the Mongol headquarters. One medieval Persian work written in northwest Iran helps to clarify the historical geography of Mongolia, where Genghis Khan was born and united the Mongol and Turkic nomads, as recorded in native sources, especially the Secret History of the Mongols. Maps of relay stations, called "yam", and of strategic points existed in the Yuan Dynasty. Mongol cartography was enriched by the traditions of ancient China and Iran, which were now under Mongol rule. Because the Yuan court often requested that the western Mongol khanates send their maps, the Yuan Dynasty was able to publish a map describing the whole Mongol world in c. 1330, called the "Hsi-pei pi ti-li tu". The map covers the Mongol dominions, including 30 cities in Iran such as Ispahan and the Ilkhanid capital Soltaniyeh, and Russia (as "Orash"), as well as their neighbors, e.g. Egypt and Syria.
Ming Dynasty
The Da Ming hunyi tu map, dating from about 1390, is in multicolour. The horizontal scale is 1:820,000 and the vertical scale is 1:1,060,000. In 1579, Luo Hongxian published the Guang Yutu atlas, including more than 40 maps, a grid system, and a systematic way of representing major landmarks such as mountains, rivers, roads and borders. The Guang Yutu incorporates the discoveries of the naval explorer Zheng He's 15th-century voyages along the coasts of China, Southeast Asia, India and Africa. The Mao Kun map, published in 1628, is thought to be based on a strip map dating to the voyages of Zheng He.
Qing Dynasty
From the 16th and 17th centuries, several examples survive of maps focused on cultural information.
Gridlines are not used on either Yu Shi's Gujin xingsheng zhi tu (1555) or Zhang Huang's Tushu bian (1613); instead, illustrations and annotations show mythical places, exotic foreign peoples, administrative changes and the deeds of historic and legendary heroes. Also in the 17th century, an edition of a possible Tang Dynasty map shows clear topographical contour lines. Although topographic features had been part of maps in China for centuries, the Fujian county official Ye Chunji (1532–1595) was the first to base county maps on on-site topographical surveying and observations. The Korean-made Kangnido is based on two Chinese maps and describes the Old World.
Modern (PRC)
After the 1949 revolution, the Institute of Geography under the aegis of the Chinese Academy of Sciences became responsible for official cartography and emulated the Soviet model of geography throughout the 1950s. With its emphasis on fieldwork, sound knowledge of the physical environment and the interrelation between physical and economic geography, the Russian influence counterbalanced the many pre-liberation Western-trained Chinese geography specialists, who were more interested in the historical and cultural aspects of cartography. As a consequence, China's main geographical journal, the Dili Xuebao (地理学报), featured many articles by Soviet geographers. As Soviet influence waned in the 1960s, geographic activity continued as part of the process of modernisation until it came to a stop in 1967 with the Cultural Revolution.
India
Indian cartographic traditions covered the locations of the Pole Star and other constellations of practical use. These charts may have been in use by the beginning of the Common Era for purposes of navigation. Detailed maps of considerable length describing the locations of settlements, sea shores, rivers, and mountains were also made. The 8th-century scholar Bhavabhuti conceived paintings which indicated geographical regions. The Italian scholar Francesco Lorenzo Pullè reproduced a number of ancient Indian maps in his magnum opus La Cartografia Antica dell'India. Of these maps, two have been reproduced using a manuscript of Lokaprakasa, originally compiled by the polymath Ksemendra (Kashmir, 11th century), as a source. The other manuscript, used as a source by Pullè, is titled Samgrahani. The early volumes of the Encyclopædia Britannica also described cartographic charts made by the Dravidian people of India. Maps from the Ain-e-Akbari, a Mughal document detailing India's history and traditions, contain references to locations indicated in earlier Indian cartographic traditions. Another map, describing the kingdom of Nepal, four feet in length and about two and a half feet in breadth, was presented to Warren Hastings. In this map the mountains were elevated above the surface, and several geographical elements were indicated in different colors.
Islamic cartographic schools
Arab and Persian cartography
In the Middle Ages, Muslim scholars continued and advanced the mapmaking traditions of earlier cultures. Most used Ptolemy's methods, but they also took advantage of what explorers and merchants learned in their travels across the Muslim world, from Spain to India to Africa, and beyond, in trade relationships with China and Russia. An important influence in the development of cartography was the patronage of the Abbasid caliph al-Ma'mun, who reigned from 813 to 833. He commissioned several geographers to remeasure the distance on earth that corresponds to one degree of celestial meridian.
Thus his patronage resulted in the refinement of the definition of the mile used by the Arabs (mīl in Arabic) in comparison to the stadion used by the Greeks. These efforts also enabled Muslims to calculate the circumference of the earth. Al-Ma'mun also commanded the production of a large map of the world, which has not survived, though it is known that its map projection type was based on Marinus of Tyre rather than Ptolemy. Also in the 9th century, the Persian mathematician and geographer Habash al-Hasib al-Marwazi employed spherical trigonometry and map projection methods in order to convert polar coordinates to a different coordinate system centred on a specific point on the sphere, in this case the Qibla, the direction to Mecca. Abū Rayhān Bīrūnī (973–1048) later developed ideas which are seen as an anticipation of the polar coordinate system. Around 1025, he described a polar equi-azimuthal equidistant projection of the celestial sphere. However, this type of projection had been used in ancient Egyptian star-maps and was not to be fully developed until the 15th and 16th centuries.
In the early 10th century, Abū Zayd al-Balkhī, originally from Balkh, founded the "Balkhī school" of terrestrial mapping in Baghdad. The geographers of this school also wrote extensively of the peoples, products, and customs of areas in the Muslim world, with little interest in the non-Muslim realms. The "Balkhī school", which included geographers such as Estakhri, al-Muqaddasi and Ibn Hawqal, produced world atlases, each one featuring a world map and twenty regional maps. Suhrāb, a late 10th-century Muslim geographer, accompanied a book of geographical coordinates with instructions for making a rectangular world map with equirectangular projection, or cylindrical equidistant projection (a modern formulation of this projection is given below). The earliest surviving rectangular coordinate map is dated to the 13th century and is attributed to Hamdallah al-Mustaqfi al-Qazwini, who based it on the work of Suhrāb. The orthogonal parallel lines were separated by one-degree intervals, and the map was limited to Southwest Asia and Central Asia. The earliest surviving world maps based on a rectangular coordinate grid are attributed to al-Mustawfi in the 14th or 15th century (who used intervals of ten degrees for the lines), and to Hafiz-i Abru (died 1430). Ibn Battuta (1304–1368?) wrote "Rihlah" (Travels) based on three decades of journeys, covering more than 120,000 km through northern Africa, southern Europe, and much of Asia.
Regional cartography
Islamic regional cartography is usually categorized into three groups: that produced by the "Balkhī school", the type devised by Muhammad al-Idrisi, and the type uniquely found in the Book of Curiosities. The maps of the Balkhī school were defined by political, not longitudinal, boundaries and covered only the Muslim world. In these maps the distances between various "stops" (cities or rivers) were equalized. The only shapes used in the designs were verticals, horizontals, 90-degree angles, and arcs of circles; unnecessary geographical details were eliminated. This approach is similar to that used in subway maps, most notably the London Underground Tube map designed in 1931 by Harry Beck. Al-Idrīsī defined his maps differently. He considered the extent of the known world to be 160° in longitude, and divided the region into ten parts, each 16° wide. In terms of latitude, he portioned the known world into seven 'climes', determined by the length of the longest day. In his maps, many dominant geographical features can be found.
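The rectangular (cylindrical equidistant) construction that such instructions describe can be stated concisely in modern notation; this is an illustrative reconstruction, not a rendering of Suhrāb's or Marinus's own text. A point of longitude \lambda and latitude \varphi is plotted at

    x = R\,(\lambda - \lambda_0)\cos\varphi_1, \qquad y = R\,\varphi,

where \lambda_0 fixes the central meridian and \varphi_1 is the standard parallel along which distances are true (the parallel of Rhodes in the usage attributed to Marinus). With \varphi_1 = 0 the graticule becomes a grid of equal squares, the plate carrée that still serves as a simple default in digital mapping.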
Book on the appearance of the Earth
Muhammad ibn Mūsā al-Khwārizmī's treatise known as the "Book on the appearance of the Earth" was completed in 833. It is a revised and completed version of Ptolemy's Geography, consisting of a list of 2402 coordinates of cities and other geographical features following a general introduction. Al-Khwārizmī, Al-Ma'mun's most famous geographer, corrected Ptolemy's gross overestimate of the length of the Mediterranean Sea (from the Canary Islands to the eastern shores of the Mediterranean); Ptolemy overestimated it at 63 degrees of longitude, while al-Khwarizmi almost correctly estimated it at nearly 50 degrees of longitude. Al-Ma'mun's geographers "also depicted the Atlantic and Indian Oceans as open bodies of water, not land-locked seas as Ptolemy had done." Al-Khwarizmi thus set the Prime Meridian of the Old World at the eastern shore of the Mediterranean, 10–13 degrees to the east of Alexandria (the prime meridian previously set by Ptolemy) and 70 degrees to the west of Baghdad. Most medieval Muslim geographers continued to use al-Khwarizmi's prime meridian. Other prime meridians were set by Abū Muhammad al-Hasan al-Hamdānī and Habash al-Hasib al-Marwazi at Ujjain, a centre of Indian astronomy, and by another anonymous writer at Basra.
Al-Biruni
Abu Rayhan al-Biruni (973–1048) gave an estimate of 6,339.6 km for the Earth's radius, which is only 17.15 km less than the modern value of 6,356.7523142 km (the WGS84 polar radius "b"). In contrast to his predecessors, who measured the Earth's circumference by sighting the Sun simultaneously from two different locations, Al-Biruni developed a new method using trigonometric calculations based on the angle between a plain and a mountain top, which yielded more accurate measurements of the Earth's circumference and made it possible for it to be measured by a single person from a single location. The motivation for Al-Biruni's method was to avoid "walking across hot, dusty deserts", and the idea came to him when he was on top of a tall mountain in India (present-day Pind Dadan Khan, Pakistan). From the top of the mountain, he sighted the dip angle which, along with the mountain's height (which he calculated beforehand), he applied to the law of sines formula (a modern restatement of the geometry is given below). This was the earliest known use of the dip angle and the earliest practical use of the law of sines. Around 1025, Al-Biruni was the first to describe a polar equi-azimuthal equidistant projection of the celestial sphere. In his Codex Masudicus (1037), Al-Biruni theorized the existence of a landmass along the vast ocean between Asia and Europe, or what is today known as the Americas. He deduced its existence on the basis of his accurate estimations of the Earth's circumference and Afro-Eurasia's size, which he found spanned only two-fifths of the Earth's circumference, and his discovery of the concept of specific gravity, from which he deduced that the geological processes that gave rise to Eurasia must also have given rise to lands in the vast ocean between Asia and Europe. He also theorized that the landmass must be inhabited by human beings, which he deduced from his knowledge of humans inhabiting the broad north–south band stretching from Russia to South India and Sub-Saharan Africa, theorizing that the landmass would most likely lie along the same band. He was the first to predict "the existence of land to the east and west of Eurasia, which later on was discovered to be America and Japan".
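In modern notation the geometry behind the dip-angle measurement reduces to a single relation; this is a reconstruction for illustration, not Al-Biruni's own formulation, which proceeded through the law of sines. The line of sight from the summit to the horizon is tangent to the sphere, so for a mountain of height h and a measured dip angle \alpha,

    \cos\alpha = \frac{R}{R+h}, \qquad\text{hence}\qquad R = \frac{h\cos\alpha}{1-\cos\alpha}.

Because the dip angle is very small (on the order of half a degree for a mountain a few hundred metres high), even a modest error in measuring \alpha changes R considerably, which makes the closeness of Al-Biruni's figure to the modern value all the more striking.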
Tabula Rogeriana
The Arab geographer Muhammad al-Idrisi produced his medieval atlas, the Tabula Rogeriana or The Recreation for Him Who Wishes to Travel Through the Countries, in 1154. He incorporated the knowledge of Africa, the Indian Ocean and the Far East gathered by Arab merchants and explorers with the information inherited from the classical geographers to create the most accurate map of the world in pre-modern times. With funding from Roger II of Sicily (1097–1154), al-Idrisi drew on the knowledge collected at the University of Cordoba and paid draftsmen to make journeys and map their routes. The book describes the earth as a sphere with a stated circumference, but maps it in 70 rectangular sections. Notable features include the correct dual sources of the Nile, the coast of Ghana and mentions of Norway. Climate zones were a chief organizational principle. A second and shortened copy from 1192, called the Garden of Joys, is known by scholars as the Little Idrisi. The historian S. P. Scott commented favourably on the work of al-Idrisi. Al-Idrisi's atlas, originally called the Nuzhat in Arabic, served as a major tool for Italian, Dutch and French mapmakers from the 16th century to the 18th century.
Piri Reis map of the Ottoman Empire
The Ottoman cartographer Piri Reis published navigational maps in his Kitab-ı Bahriye. The work includes an atlas of charts for small segments of the Mediterranean, accompanied by sailing instructions covering the sea. In the second version of the work, he included a map of the Americas. The Piri Reis map, drawn in 1513, is one of the oldest surviving maps to show the Americas.
Polynesian stick charts
The Polynesian peoples who explored and settled the Pacific islands in the first two millennia AD used maps to navigate across large distances. A surviving map from the Marshall Islands uses sticks tied in a grid with palm strips representing wave and wind patterns, with shells attached to show the location of islands. Other maps were created as needed using temporary arrangements of stones or shells.
Medieval Europe
Medieval maps and the Mappa Mundi
Medieval maps of the world in Europe were mainly symbolic in form, along the lines of the much earlier Babylonian World Map. Known as Mappa Mundi (cloths or charts of the world), these maps were circular or symmetrical cosmological diagrams representing the Earth's single land mass as disk-shaped and surrounded by ocean.
Italian cartography and the birth of portolan charts
Roger Bacon's investigations of map projections and the appearance of portolano and then portolan charts for plying the European trade routes were rare innovations of the period. The Majorcan school is contrasted with the contemporary Italian cartography school. The Carta Pisana portolan chart, made at the end of the 13th century (1275–1300), is the oldest surviving nautical chart (that is, not simply a map but a document showing accurate navigational directions).
Majorcan cartographic school and the "normal" portolan chart
The Majorcan cartographic school was a predominantly Jewish cooperation of cartographers, cosmographers and navigational instrument-makers active in Majorca from the late 13th century to the 15th century. With their multicultural heritage, the Majorcan cartographic school experimented with and developed unique cartographic techniques, mostly dealing with the Mediterranean, as can be seen in the Catalan Atlas. The Majorcan school was (co-)responsible for the invention (c. 1300) of the "Normal Portolan chart".
For its time it was a superior, detailed nautical model chart, gridded by compass lines.
Era of modern cartography
Iberian cartography in the Age of Exploration
In the Renaissance, with the renewed interest in classical works, maps became more like surveys once again, while European exploration of the Americas and the subsequent effort to control and divide those lands revived interest in scientific mapping methods. Peter Whitfield, the author of several books on the history of maps, credits European mapmaking as a factor in the global spread of western power: "Men in Seville, Amsterdam or London had access to knowledge of America, Brazil, or India, while the native peoples knew only their own immediate environment" (Whitfield). Jordan Branch and his advisor, Steven Weber, propose that the power of large kingdoms and nation states of later history is an inadvertent byproduct of 15th-century advances in map-making technologies.
During the 15th and 16th centuries, the Iberian powers (the Kingdom of Castile and the Kingdom of Portugal) were at the vanguard of European overseas exploration and of mapping the coasts of the Americas, Africa, and Asia, in what came to be known as the Age of Discovery (also known as the Age of Exploration). Spain and Portugal were magnets for talent, science and technology from the Italian city-states. Portugal's methodical expeditions started in 1419 along West Africa's coast under the sponsorship of Prince Henry the Navigator, with Bartolomeu Dias reaching the Cape of Good Hope and entering the Indian Ocean in 1488. Ten years later, in 1498, Vasco da Gama led the first fleet around Africa to India, arriving in Calicut and starting a maritime route from Portugal to India. Soon after Pedro Álvares Cabral reached Brazil (1500), explorations proceeded to Southeast Asia, and Portugal sent the first direct European maritime trade and diplomatic missions to Ming China and to Japan (1542).
In 1492, a Spanish expedition headed by the Genoese explorer Christopher Columbus sailed west to find a new trade route to the Far East but inadvertently found the Americas. Columbus's first two voyages (1492–93) reached the Bahamas and various Caribbean islands, including Hispaniola, Puerto Rico and Cuba. The Spanish cartographer and explorer Juan de la Cosa sailed with Columbus. He created the first known cartographic representations showing both the Americas. The post-1492 era is known as the period of the Columbian Exchange, a dramatically widespread exchange of animals, plants, culture, human populations (including slaves), communicable disease, and ideas between the American and Afro-Eurasian hemispheres following the voyages of Christopher Columbus to the Americas.
The Magellan-Elcano circumnavigation was the first known voyage around the world in human history. It was a Spanish expedition that sailed from Seville in 1519 under the command of the Portuguese navigator Ferdinand Magellan in search of a maritime path from the Americas to East Asia across the Pacific Ocean. Following Magellan's death in Mactan (Philippines) in 1521, Juan Sebastián Elcano took command of the expedition, sailing to Borneo, the Spice Islands and back to Spain across the Indian Ocean, round the Cape of Good Hope and north along the west coast of Africa. They arrived in Spain three years after they left, in 1522.
Portuguese cartographer Pedro Reinel made the oldest known signed Portuguese nautical chart.
1492: Cartographer Jorge de Aguiar made the oldest known signed and dated Portuguese nautical chart.
1537: Much of the Portuguese mathematician and cosmographer Pedro Nunes' work related to navigation. He was the first to understand why a ship maintaining a steady course would not travel along a great circle, the shortest path between two points on Earth, but would instead follow a spiral course, called a loxodrome. These lines, also called rhumb lines, maintain a fixed angle with the meridians. In other words, loxodromic curves are directly related to the construction of the Nunes connection, also called the navigator connection. In his Treatise in Defense of the Marine Chart (1537), Nunes argued that a nautical chart should have its parallels and meridians shown as straight lines. Yet he was unsure how to solve the problems that this caused, a situation that lasted until Mercator developed the projection bearing his name. The Mercator projection remains in use today.
First maps of the Americas
1500: The Spanish cartographer and explorer Juan de la Cosa created the first known cartographic representations showing both the Americas as well as Africa and Eurasia.
1502: An unknown Portuguese cartographer made the Cantino planisphere, the first nautical chart to implicitly represent latitudes.
1504: Portuguese cartographer Pedro Reinel made the oldest known nautical chart with a scale of latitudes.
1519: Portuguese cartographers Lopo Homem, Pedro Reinel and Jorge Reinel made the group of maps known today as the Miller Atlas or Lopo Homem – Reinéis Atlas.
1530: Alonzo de Santa Cruz, Spanish cartographer, produced the first map of magnetic variations from true north. He believed it would be of use in finding the correct longitude. Santa Cruz also designed new nautical instruments and was interested in navigational methods.
Padrón Real of the Spanish Empire
The Spanish House of Trade, founded in 1504 in Seville, had a large contingent of cartographers as Spain's overseas empire expanded. The master map, or Padrón Real, was mandated by the Spanish monarch in 1508 and updated subsequently as more information became available with each ship returning to Seville. Diogo Ribeiro, a Portuguese cartographer working for Spain, made what is considered the first scientific world map: the 1527 Padrón Real. The layout of the map (Mapamundi) is strongly influenced by the information obtained during the Magellan-Elcano trip around the world. Diogo's map delineates very precisely the coasts of Central and South America. The map shows, for the first time, the real extension of the Pacific Ocean. It also shows, for the first time, the North American coast as a continuous one (probably influenced by Esteban Gómez's exploration in 1525). It also shows the demarcation of the Treaty of Tordesillas. Two prominent cosmographers (as mapmakers were then known) of the House of Trade were Alonso de Santa Cruz and Juan López de Velasco, who directed mapmaking under Philip II without ever going to the New World. Their maps were based on information they received from returning navigators. Because their techniques rested on repeatable principles that underpin mapmaking, they could be employed anywhere. Philip II sought extensive information about his overseas empire, both in written textual form and in the production of maps.
German cartography
15th century: The German monk Nicolaus Germanus wrote a pioneering Cosmographia. He added the first new maps to Ptolemy's Geographia. Germanus invented the Donis map projection, in which parallels of latitude are made equidistant but meridians converge toward the poles.
1492: German merchant Martin Behaim (1459–1507) made the oldest surviving terrestrial globe, but it lacked the Americas.
1507: German cartographer Martin Waldseemüller's world map (the Waldseemüller map) was the first to use the term America for the Western continents (after the explorer Amerigo Vespucci).
1603: German Johann Bayer's star atlas (Uranometria) was published in Augsburg and was the first atlas to cover the entire celestial sphere.
Netherlandish (Dutch and Flemish) schools
Notable representatives of the Netherlandish school of cartography and geography (1500s–1600s) include: Franciscus Monachus, Gemma Frisius, Gaspard van der Heyden, Gerard Mercator, Abraham Ortelius, Christophe Plantin, Lucas Waghenaer, Jacob van Deventer, Willebrord Snell, Hessel Gerritsz, Petrus Plancius, Jodocus Hondius, Henricus Hondius II, Hendrik Hondius I, Willem Blaeu, Joan Blaeu, Johannes Janssonius, Andreas Cellarius, Gerard de Jode, Cornelis de Jode, Claes Visscher, Nicolaes Visscher I, Nicolaes Visscher II, and Frederik de Wit. Leuven, Antwerp, and Amsterdam were the main centres of the Netherlandish school of cartography in its golden age (the 16th and 17th centuries, approximately 1570–1670s). The Golden Age of Dutch cartography (also known as the Golden Age of Netherlandish cartography), which was inaugurated in the Southern Netherlands (current Belgium; mainly in Leuven and Antwerp) by Mercator and Ortelius, found its fullest expression during the seventeenth century with the production of monumental multi-volume world atlases in the Dutch Republic (mainly in Amsterdam) by competing mapmakers and publishing houses such as Lucas Waghenaer, Joan Blaeu, Jan Janssonius, Claes Janszoon Visscher, and Frederik de Wit.
Southern Netherlands
Gerardus Mercator was a German-Netherlandish cartographer and geographer with a vast output of wall maps, bound maps, globes and scientific instruments, but his greatest legacy was the mathematical projection he devised for his 1569 world map. The Mercator projection is an example of a cylindrical projection in which the meridians are straight and perpendicular to the parallels. As a result, the map has a constant width and the parallels are stretched east–west as the poles are approached. Mercator's insight was to stretch the separation of the parallels in a way which exactly compensates for their increasing length, thus preserving the shapes of small regions, albeit at the expense of global distortion. Such a conformal map projection necessarily transforms rhumb lines, sailing courses of a constant bearing, into straight lines on the map, thus greatly facilitating navigation. That this was Mercator's intention is clear from the title: Nova et Aucta Orbis Terrae Descriptio ad Usum Navigantium Emendate Accommodata, which translates as "New and more complete representation of the terrestrial globe properly adapted for use in navigation". Although the projection's adoption was slow, by the end of the seventeenth century it was in use for naval charts throughout the world and remains so to the present day. Its later adoption as the all-purpose world map was an unfortunate step. Mercator spent the last thirty years of his life working on a vast project, the Cosmographia, a description of the whole universe including the creation and a description of the topography, history and institutions of all countries. The word atlas makes its first appearance in the title of the final volume: "Atlas sive cosmographicae meditationes de fabrica mundi et fabricati figura".
This translates as Atlas OR cosmographical meditations upon the creation of the universe, and the universe as created, thus providing Mercator's definition of the term atlas. These volumes devote slightly less than one half of their pages to maps: Mercator did not use the term solely to describe a bound collection of maps. His choice of title was motivated by his respect for Atlas, "King of Mauretania".
Abraham Ortelius is generally recognized as the creator of the world's first modern atlas, the Theatrum Orbis Terrarum (Theatre of the World). Ortelius's Theatrum Orbis Terrarum (1570) is considered the first true atlas in the modern sense: a collection of uniform map sheets and sustaining text bound to form a book for which copper printing plates were specifically engraved. It is sometimes referred to as the summary of sixteenth-century cartography (Harwood, Jeremy (2006). To the Ends of the Earth: 100 Maps that Changed the World, p. 83; Goffart, Walter (2003). Historical Atlases: The First Three Hundred Years, 1570–1870, p. 1).
Northern Netherlands
Triangulation had first emerged as a map-making method in the mid-sixteenth century when Gemma Frisius set out the idea in his Libellus de locorum describendorum ratione (Booklet concerning a way of describing places) (Stachurski, Richard (2009). Longitude by Wire: Finding North America, p. 10; Bagrow, Leo (2010). History of Cartography, p. 159; Bellos, Alex (2014). The Grapes of Math: How Life Reflects Numbers and Numbers Reflect Life, p. 74). The Dutch cartographer Jacob van Deventer was among the first to make systematic use of triangulation, the technique whose theory was described by Gemma Frisius in his 1533 book. The modern systematic use of triangulation networks stems from the work of the Dutch mathematician Willebrord Snell (born Willebrord Snel van Royen), who in 1615 surveyed the distance from Alkmaar to Bergen op Zoom, approximately 70 miles (110 kilometres), using a chain of quadrangles containing 33 triangles in all (Harwood, Jeremy (2006). To the Ends of the Earth: 100 Maps that Changed the World, p. 107). The two towns were separated by one degree on the meridian, so from his measurement he was able to calculate a value for the circumference of the earth – a feat celebrated in the title of his book Eratosthenes Batavus (The Dutch Eratosthenes), published in 1617. Snell's methods were taken up by Jean Picard, who in 1669–70 surveyed one degree of latitude along the Paris Meridian using a chain of thirteen triangles stretching north from Paris to the clocktower of Sourdon, near Amiens.
The first printed atlas of nautical charts (De Spieghel der Zeevaerdt or The Mirror of Navigation / The Mariner's Mirror) was produced by Lucas Janszoon Waghenaer in Leiden in 1584. This atlas was the first attempt to systematically codify nautical maps. This chart-book combined an atlas of nautical charts and sailing directions with instructions for navigation on the western and north-western coastal waters of Europe. It was the first of its kind in the history of maritime cartography, and was an immediate success. The English translation of Waghenaer's work was published in 1588 and became so popular that any volume of sea charts soon became known as a "waggoner" (an atlas book of engraved nautical charts with accompanying printed sailing directions), the Anglicized form of Waghenaer's surname (Kirby, David; Hinkkanen, Merja-Liisa (2000). The Baltic and the North Seas, pp. 61–62; Harwood, Jeremy (2006). To the Ends of the Earth: 100 Maps that Changed the World, p. 88; Thrower, Norman J. W. (2008). Maps and Civilization: Cartography in Culture and Society, Third Edition, p. 84).
The constellations around the South Pole were not observable from north of the equator by the ancient Babylonians, Greeks, Chinese, Indians, or Arabs. During the Age of Exploration, expeditions to the southern hemisphere began to result in the addition of new constellations. The modern constellations in this region were defined notably by the Dutch navigators Pieter Dirkszoon Keyser and Frederick de Houtman (Dekker, Elly (1987). "On the Dispersal of Knowledge of the Southern Celestial Sky", Der Globusfreund, 35–37, pp. 211–30), who in 1595 traveled together to the East Indies (the first Dutch expedition to Indonesia). These 12 newly created Dutch southern constellations (including Apus, Chamaeleon, Dorado, Grus, Hydrus, Indus, Musca, Pavo, Phoenix, Triangulum Australe, Tucana and Volans) first appeared on a 35-cm diameter celestial globe published in 1597/1598 in Amsterdam by the Dutch cartographers Petrus Plancius and Jodocus Hondius. The first depiction of these constellations in a celestial atlas was in Johann Bayer's Uranometria of 1603. In 1660, the German-born Dutch cartographer Andreas Cellarius' star atlas (Harmonia Macrocosmica) was published by Johannes Janssonius in Amsterdam.
The Dutch dominated commercial (corporate) cartography during the seventeenth century through publicly traded companies (such as the Dutch East India Company and the Dutch West India Company) and competing privately held map-making houses and firms. In the book Capitalism and Cartography in the Dutch Golden Age (University of Chicago Press, 2015), Elizabeth A. Sutton explores the previously neglected history of corporate (commercial) cartography during the Dutch Golden Age, from ca. 1600 to 1650. Maps were used as propaganda tools for both the Dutch East India Company (VOC) and the Dutch West India Company (WIC) in order to encourage the commodification of land and an overall capitalist agenda.
In the long run, the competition between the map-making firms of Blaeu and Janssonius resulted in the publication of an 'Atlas Maior' or 'Major Atlas'. In 1662 the Latin edition of Joan Blaeu's Atlas Maior appeared in eleven volumes and with approximately 600 maps. In the years to come, French and Dutch editions followed in twelve and nine volumes respectively. Judging purely from the number of maps in the Atlas Maior, Blaeu had outdone his rival Johannes Janssonius. From a commercial point of view, too, it was a huge success. Due in part to its superior typography, Blaeu's Atlas Maior soon became a status symbol for rich citizens. Costing 350 guilders for a non-coloured and 450 guilders for a coloured version, the atlas was the most precious book of the 17th century. However, the Atlas Maior was also a turning point: after that time the leading role of Dutch cartography (and Netherlandish cartography in general) was over. Janssonius died in 1664, while a great fire in 1672 destroyed one of Blaeu's print shops; part of the copperplates went up in flames in that fire. Fairly soon afterwards, in 1673, Joan Blaeu died. The almost 2,000 copperplates of Janssonius and Blaeu found their way to other publishers.
French cartography
The historian David Buisseret has traced the roots of the flourishing of cartography in the 16th and 17th centuries in Europe.
He noted five distinct reasons: 1) admiration of antiquity, especially the rediscovery of Ptolemy, considered to be the first geographer; 2) increasing reliance on measurement and quantification as a result of the scientific revolution; 3) refinements in the visual arts, such as the discovery of perspective, that allowed for better representation of spatial entities; 4) the development of estate property; and 5) the importance of mapping to nation-building. The reign of Louis XIV is generally considered to represent the beginning of cartography as a science in France. The evolution of cartography during the transition between the 17th and 18th centuries involved advancements on a technical level as well as on a representative level. According to Marco Petrella, the map developed "from a tool used to affirm the administrative borders of the reign and its features…into a tool which was necessary to intervene in territory and thus establish control of it." Because unification of the kingdom necessitated well-kept records of land and tax bases, Louis XIV and members of the royal court pushed the development and progression of the sciences, especially cartography. Louis XIV established the Académie des Sciences in 1666, with the express purpose of improving cartography and sailing charts. It was held that the gaps in geographical and navigational knowledge could be addressed through the further exploration and study of astronomy and geodesy. Colbert also attracted many foreign scientists to the Académie des Sciences to support the pursuit of scientific knowledge. Under the auspices of the Sun King and Jean-Baptiste Colbert, members of the Académie des Sciences made many breakthrough discoveries within the realm of cartography in order to ensure the accuracy of their work. Among the more prominent work done at the Académie was that of Giovanni Domenico Cassini, who perfected a method of determining longitude by observing the movements of Jupiter's satellites. Cassini, with the aid and support of the mathematician Jean Picard, developed a system for uniting provincial topographical information into a comprehensive map of the country through a network of surveyed triangles. This established a practice that was eventually adopted by all nations in their projects to map the areas under their dominion. For their method of triangulation, Picard and Cassini used the meridian arc of Paris-Amiens as their starting point.
Jean-Baptiste Colbert, the secretary of home affairs and a prominent member of Louis XIV's royal court, set out to develop the resource base of the nation and to develop a system of infrastructure that could restore the French economy. He wanted to generate income to cover the high expenses incurred by Louis XIV. What Colbert lacked in his pursuit of the development of the economy was a map of the entire country. France, like all other countries of Europe, operated on local knowledge. Within France, there were local systems of measuring weight and taxes; a uniform notion of land surveying did not exist. The advancements made by the members of the Académie des Sciences proved instrumental as a tool to aid reform within the nation. Cartography was an important element in two major reforms undertaken by Colbert: the reform of the royal forest, a project undertaken beginning in 1661, and naval reform, initiated in 1664.
In 1663–1664 Colbert tried to collect information from the provinces in order to accurately assess the income within the kingdom, information necessary for economic and tax reform. Colbert asked the provincial representatives of the king, the intendants, to gather existing maps of territory within the provinces and check them for accuracy. If they were found not to be accurate, the Royal Geographer, Nicolas Sanson, was to edit them, basing his information on the reports prepared by the intendants. The operation did not succeed because the Académie des Sciences did not believe it had a strong enough basis in cartographic methodology. The importance of cartography to the mechanisms of the state, however, continued to grow.
Paris as the center of cartography
The seventeenth century marked the emergence of France as the center of the map trade in Europe, with much of the production and distribution of maps taking place in the capital, Paris. In conjunction with the support of scientific development, the royal court encouraged the work of artists and artisans. This royal patronage attracted artists to Paris. As a result, many mapmakers, such as Nicolas Sanson and Alexis-Hubert Jaillot, moved to the national capital from the peripheries of the provinces. Many of the agents of cartography, including those involved in the creation, production and distribution of maps in Paris, came to live in the same section of the capital city. Booksellers congregated on the rue St-Jacques along the left bank of the Seine, while engravers and cartographers lived along the quai de l'Horloge on the Île de la Cité. Regulations governing the book trade informed the location of the booksellers' shops. These regulations included that each bookseller-printer was to have one shop, which had to be located in the University quarter or on the quai de l'Horloge. These restrictions enabled the authorities to inspect businesses more easily and to enforce other regulations: printers needed to register the number of presses they owned, and any books printed had to be registered and approved by the royal court before sale. Opticians were also located in the same quarter. Their tools – squares, rules, compasses and dividers – were essential to the practice of cartography. Many of the cartographers who worked in Paris never set foot outside the city; they did not gather firsthand knowledge for their maps. They were known as cabinet geographers. An example of a cartographer who relied on other sources was Jean-Baptiste Bourguignon d'Anville, who compiled his information from ancient and modern, verbal and pictorial, published and even unpublished sources.
Dieppe school of cartographers
The Dieppe maps are a series of world maps produced in Dieppe, France, in the 1540s, 1550s and 1560s. They are large hand-produced maps, commissioned for wealthy and royal patrons, including Henry II of France and Henry VIII of England. The Dieppe school of cartographers included Pierre Desceliers, John Rotz, Guillaume Le Testu, Guillaume Brouscon and Nicolas Desliens.
Nicolas-Louis de Lacaille and the charting of far southern skies
First modern topographic map of France
In the 1670s the astronomer Giovanni Domenico Cassini began work on the first modern topographic map of France. It was completed in 1789 or 1793 by his grandson Cassini de Thury.
18th-century developments
The Vertical Perspective projection was first used by the German map publisher Matthias Seutter in 1740. He placed his observer at ~12,750 km distance. This is the type of projection used today by Google Earth.
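A sketch of the underlying geometry, stated in modern terms rather than in Seutter's: for a viewpoint at height H above a sphere of radius R, write P = 1 + H/R. A surface point at angular distance c from the point directly beneath the observer then projects onto the tangent plane at radial distance

    r = \frac{R\,(P-1)\sin c}{P - \cos c},

and only the cap with \cos c > 1/P is visible. Reading the quoted ~12,750 km as the observer's height above the surface (an assumption; the figure could also be taken from the Earth's centre) gives P \approx 3, so somewhat less than a full hemisphere is shown, which is what produces the familiar "globe seen from space" appearance of such maps.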
The changes in the use of military maps were also part of the modern military revolution, which changed the need for information as the scale of conflict increased. This created a need for maps to help with "... consistency, regularity and uniformity in military conflict." The final form of the equidistant conic projection was constructed by the French astronomer Joseph-Nicolas Delisle in 1745. The Swiss mathematician Johann Heinrich Lambert invented several hemispheric map projections; in 1772 he created the Lambert conformal conic and the Lambert azimuthal equal-area projections. The Albers equal-area conic projection, which features no distortion along its standard parallels, was invented by Heinrich Albers in 1805. In 1715 Herman Moll published the Beaver Map, one of the most famous early maps of North America, which he copied from a 1698 work by Nicolas de Fer. In 1763–1767 Captain James Cook mapped Newfoundland. In 1777 Colonel Joseph Frederick Wallet DesBarres created a monumental four-volume atlas of North America, the Atlantic Neptune. In the United States in the 18th and 19th centuries, explorers mapped trails and army engineers surveyed government lands. Two agencies were established to provide more detailed, large-scale mapping: the U.S. Geological Survey and the United States Coast and Geodetic Survey (now the National Geodetic Survey under the National Oceanic and Atmospheric Administration).
19th-century developments
During his travels in Spanish America (1799–1804), Alexander von Humboldt created the most accurate map of New Spain (now Mexico) to date. Published as part of his Essai politique sur le royaume de la Nouvelle-Espagne (1811) (Political Essay on the Kingdom of New Spain), Humboldt's Carte du Mexique (1804) was based on existing maps of Mexico, but with Humboldt's careful attention to latitude and longitude. Landing at the Pacific coast port of Acapulco in 1803, Humboldt did not leave the port area for Mexico City until he had produced a map of the port; when leaving he drew a map of the east coast port of Veracruz, as well as a map of the central plateau of Mexico. Given royal authorization from the Spanish crown for his trip, crown officials in Mexico were eager to aid Humboldt's research. He had access to José Antonio de Alzate y Ramírez's Mapa del Arzobispado de México (1768), which he deemed "very bad", as well as the seventeenth-century map of greater Mexico City by the savant Don Carlos de Sigüenza y Góngora. John Disturnell, a businessman and publisher of guidebooks and maps, published the Mapa de los Estados Unidos de Méjico, based on the 1822 map by U.S. cartographer Henry Schenck Tanner; it was used in the negotiations between the U.S. and Mexico for the Treaty of Guadalupe Hidalgo (1848), following the Mexican–American War. This map has been described as showing U.S. Manifest Destiny; a copy of the map was offered for sale in 2016 for $65,000. Map making at that time was important for both Mexico and the United States. The Greenwich prime meridian became the international standard reference for cartographers in 1884.
20th-century developments
During the 20th century, maps became more abundant due to improvements in printing and photography that made production cheaper and easier. Airplanes made it possible to photograph large areas at a time. The two-point equidistant projection was first drawn up by Hans Maurer in 1919. In this projection the distance from any point on the map to either of the two regulating points is accurate.
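The defining property can be made concrete with a short construction, given here as a modern restatement rather than as Maurer's own presentation. Choose two control points A and B, place them in the plane separated by their true great-circle distance c, and for any other point compute its true distances a and b to A and B; the point is then plotted where the circle of radius a about A meets the circle of radius b about B:

    x = \frac{a^{2}-b^{2}+c^{2}}{2c}, \qquad y = \pm\sqrt{a^{2}-x^{2}},

with the sign of y chosen according to the side of the line AB on which the point lies. Distances to the two regulating points are exact by construction; all other distances are distorted.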
The loximuthal projection was constructed by Karl Siemon in 1935 and refined by Waldo Tobler in 1966. Since the mid-1990s, the use of computers in map making has helped to store, sort, and arrange data for mapping in order to create map projections. Contemporary developments Software development Map-making now relies heavily on computer software to develop and provide a variety of services, a trend that began at the end of the previous century. These services include self-location, browser-based searches for places, businesses and products, and area and distance calculation. At present, computer-based mapping is dominated by big companies that offer their services to a worldwide public, such as Google Maps, Apple Maps, Bing Maps, National Geographic Maps, ESRI Geographic Information System (GIS), CartoDB, Mapbox, Waze, etc. Many other state-based, regional and smaller initiatives and companies also offer their services. The list of online map services is long and growing every day. Historical map collections Recent developments also include the integration of ancient maps and modern scholarly research, combined with modern computer software, to produce period-by-period historical maps. Initiatives such as Euratlas History Maps (which covers the whole of Europe from the year 1 AD to the present), Centennia Historical Atlas (which covers Europe from the year 1000 AD to the present), Geacron and many others work in what is called historical cartography. These maps show the evolution of countries, provinces and cities, wars and battles, the history of border changes, etc. Today historical cartography is thriving, and the specialization of map services is ever growing. New map projections are still being developed, and university map collections, such as the Perry–Castañeda Library Map Collection at the University of Texas, offer better and more diverse maps and map tools every day, making available to their students and the broader public ancient maps that in the past were difficult to find. The David Rumsey Historical Map Collection is now a world-renowned initiative. Self-publishing tools and collaborative mapping Never before have so many "edit-yourself" map tools and software packages been available to non-specialists. Map blogs and self-publishing are common. In 2004, Steve Coast created OpenStreetMap, a collaborative project to create a free editable map of the world. The creation and growth of OpenStreetMap has been motivated by restrictions on use or availability of map information across much of the world, and the advent of inexpensive portable satellite navigation devices. Organizations In 1921, the International Hydrographic Organization (IHO) was set up, and it constitutes the authority on hydrographic surveying and nautical charting. The current defining document is the special publication S-23, Limits of Oceans and Seas, 3rd edition, 1953. The second edition dated back to 1937, and the first to 1928. A fourth edition draft was published in 1986, but so far several naming disputes (such as the one over the Sea of Japan) have prevented its ratification. History of cartography's technological changes (more at Cartography § Technological changes) In cartography, technology has continually changed in order to meet the demands of new generations of mapmakers and map users. The first maps were manually constructed with brushes and parchment and therefore varied in quality and were limited in distribution. 
The advent of the compass, printing press, telescope, sextant, quadrant and vernier allowed for the creation of far more accurate maps and the ability to make accurate reproductions. Professor Steven Weber of the University of California, Berkeley, has advanced the hypothesis that the concept of the "nation state" is an inadvertent byproduct of 15th-century advances in map-making technologies. Advances in photochemical technology, such as the lithographic and photochemical processes, have allowed for the creation of maps that have fine details, do not distort in shape and resist moisture and wear. This also eliminated the need for engraving, which further shortened the time needed to make and reproduce maps. In the mid-to-late 20th century, advances in electronic technology led to a further revolution in cartography. Specifically, computer hardware devices such as computer screens, plotters, printers, scanners (remote and document) and analytic stereo plotters, along with visualization, image processing, spatial analysis and database software, have democratized and greatly expanded the making of maps, particularly with their ability to produce maps that show slightly different features without engraving a new printing plate. See also digital raster graphic and History of web mapping. Aerial photography and satellite imagery have provided high-accuracy, high-throughput methods for mapping physical features over large areas, such as coastlines, roads, buildings, and topography. See also Related histories Notes References External links Imago Mundi journal of History of Cartography The History of Cartography journal published by the University of Chicago Press Euratlas Historical Maps History maps from year zero AD The History of Cartography Project at the University of Wisconsin, a comprehensive research project in the history of maps and mapping Three volumes of The History of Cartography are available free in PDF format The history of cartography at the School of Mathematics and Statistics, University of St. Andrews, Scotland Mapping History – a learning resource from the British Library Modern Medieval Map Myths: The Flat World, Ancient Sea-Kings, and Dragons Concise Bibliography of the History of Cartography, Newberry Library Newberry Library Cartographic Catalog: map catalog and bibliography of the history of cartography American Geographical Society Library Digital Map Collection David Rumsey Historical map collection licensed under a Creative Commons License See Maps for more links to historical maps; however, most of the largest sites are listed at the sites linked below. Eratosthenes Map of the Earth, and Measuring of its Circumference at Convergence Ancient World Maps A listing of over 5000 websites describing holdings of manuscripts, archives, rare books, historical photographs, and other primary sources for the research scholar Historical Atlas in Persuasive Cartography, The PJ Mode Collection, Cornell University Library Old Maps Online cartography
80601
https://en.wikipedia.org/wiki/Chryses%20of%20Troy
Chryses of Troy
In Greek mythology, Chryses (Greek: Χρύσης Khrúsēs, meaning "golden") was a Trojan priest of Apollo at Chryse, near the city of Troy. Family According to a tradition mentioned by Eustathius of Thessalonica, Chryses and Briseus (father of Briseis) were brothers, sons of a man named Ardys (otherwise unknown). Mythology During the Trojan War (prior to the actions described in Homer's Iliad), Agamemnon took Chryses' daughter Chryseis (Astynome) from Moesia as a war prize. When Chryses attempted to ransom her, Agamemnon refused to return her. Chryses prayed to Apollo, and he, in order to defend the honor of his priest, sent a plague sweeping through the Greek armies. Agamemnon was forced to give Chryseis back in order to end it. The significance of Agamemnon's actions lies not in his kidnapping Chryseis (such abductions were commonplace in ancient Greece), but in his refusal to release her upon her father's request. References Sources Bibliotheca, Hyginus, R. Scott Smith, and Stephen M. Trzaskoma. Apollodorus' Library and Hyginus' Fabulae: two handbooks of Greek mythology. Cambridge: Hackett, 2007. Trojans
30560539
https://en.wikipedia.org/wiki/EastWest%20Institute
EastWest Institute
The EastWest Institute (EWI), originally known as the Institute for East-West Security Studies and officially the Institute for EastWest Studies, Inc., was an international not-for-profit, non-partisan think tank focusing on international conflict resolution through a variety of means, including track 2 diplomacy and track 1.5 diplomacy (conducted with the direct involvement of official actors), hosting international conferences, and authoring publications on international security issues. The organization employed networks in political, military, and business establishments in the United States, Europe, and the former Soviet Union. EWI was founded by John Edwin Mroz and Ira D. Wallach in 1980 as an independent, global organization that promoted peace by creating trusted settings for candid, global discourse among leaders to tackle intractable security and stability challenges. Mroz served as president and CEO of the institute for 34 years until his death in 2014. EWI had a long-standing track record of convening dialogue and backchannel diplomacy to develop sustainable solutions for major political, economic and security issues. The organization's initial success was rooted in the Cold War era; in fact, EWI hosted the first-ever military-to-military dialogue between NATO and Warsaw Pact countries. From its roots as a European-American initiative to bridge the divisions between Europe and Eurasia, Mroz built the institute into one of the world's pre-eminent non-governmental change-agent institutions. After four decades of distinctive service, the organization discontinued operations effective January 31, 2021. This decision was taken at the conclusion of a four-month strategic assessment in light of increasing challenges resulting from the global pandemic and related financial challenges facing many nonprofit organizations. EWI's initiatives focused on a number of different areas, including cybersecurity, preventive diplomacy, strategic trust-building (which encompassed Russia-United States relations and China-United States relations), Economic Security, and Regional Security (focusing on specific areas such as Southwest Asia). History The Institute for East-West Security Studies was founded in 1980, when then-CEO John Edwin Mroz and Ira D. Wallach set out to study means of addressing areas of political dispute across the Iron Curtain. In 1984, EWI hosted the first track 2 military-to-military discussions between NATO and Warsaw Pact countries. These talks, focusing heavily on the establishment of confidence-building measures (CBMs) between the two parties, ultimately resulted in an agreement requiring each side to alert the other of troop movements. After the fall of the Berlin Wall and the eruption of conflicts in Southeastern Europe, EWI worked to foster economic stability in the region, encouraging cross-border cooperation and training leaders for democratic states. In the 2000s, EWI's operations expanded geographically to China, Southwest Asia and the Middle East, focusing on issues like cybersecurity, economic security, and countering violent extremism. Since 2008, EWI has partnered with the China Association for International Friendly Contact to organize forums, termed the U.S.-China Sanya Initiative, between retired People's Liberation Army officers and retired U.S. military personnel. The Sanya Initiative is supported by the China-United States Exchange Foundation (CUSEF), a Hong Kong-based nonprofit established by billionaire Tung Chee-hwa. 
In May 2009, EWI released its Joint Threat Assessment on Iran, produced by senior U.S. and Russian experts convened by the institute. The assessment, which concluded that the planned missile defense system would not protect against an Iranian nuclear threat, helped inform the Obama administration's decision to scrap the ballistic missile defense plan proposed by the Bush administration and replace it with a plan of its own. In 2016, the Institute helped set up an information portal which allows operators of critical infrastructure to share security information internationally. Initiatives Strategic Trust-building EWI's Strategic Trust-building Initiative includes its work with Russia, China, and the United States. Through its work with Russia, EWI has sought to "build a sustainable relationship of trust between Russia, its G-8 partners, and the world's new rising powers." This program was responsible for establishing the 2009 Joint Threat Assessment on Iran. The China program, which was initiated in 2006, seeks to foster China's integration into the international sphere as a productive partner. An example of EWI's China work is the establishment of annual three-party talks between Republican Party, Democratic Party, and Chinese Communist Party leaders. The Strategic Trust-building Initiative also incorporates EWI's work on weapons of mass destruction issues. The WMD program, which began in 2006, aims to reduce political obstacles to the elimination of the threat of nuclear weapons. EWI organized a series of events and meetings in 2007 and 2008 to address stalled arms discussions in the international community. Regional Security This program addresses specific regional problems requiring the attention of the international community. Current issues include security and stability in Afghanistan and Southwest Asia, and Euro-Atlantic security. Regional Security is directly involved with the Parliamentarians Network for Conflict Prevention, a network founded by EWI in October 2008 that has since grown to include 150 parliamentarians from more than 50 countries. Members of the network work to translate ideas into policy as well as advocate for a greater allocation of resources for preventive action. In 2010, EWI created the Amu Darya Basin Network, which links experts, researchers and policy makers from Central Asia, Afghanistan and Europe to create a place for key stakeholders to discuss trans-boundary water issues, forge agreements and share knowledge. The Amu Darya Basin Network highlights the need for local ownership and input in the management of shared waters, and for engagement in the region in more concrete ways. In addition, the regional security program initiated the Abu Dhabi Process, a set of meetings focused on regional cooperation between Afghanistan and Pakistan. The process emphasized that there is no military solution to the conflict in Afghanistan and has seen meetings take place in Abu Dhabi, Kabul and Islamabad, among other places. 
An example of this work is the annual Worldwide Security Conference, first held in 2003, which assembles experts from governments, the private sector, NGOs, and academia to explore issues such as countering violent extremism, securing infrastructure, and energy security. The Worldwide Cybersecurity Initiative is part of the ESI. It aims to reduce vulnerabilities in governmental and private cybersecurity policies by developing consensus proposals for new agreements and policy reform. The institute's chief method of achieving this goal has been the hosting of the Worldwide Cybersecurity Summit, an annual meeting of governmental and corporate actors in the field, first held in May 2010 in Dallas, Texas, which established policy recommendations for securing international cyber infrastructure. Publications References External links Foreign policy and strategy think tanks International political organizations International security NATO relations Think tanks established in 1980 International organizations based in the United States
5180882
https://en.wikipedia.org/wiki/Supply%20management%20%28procurement%29
Supply management (procurement)
The term supply management, also called procurement, describes the methods and processes of modern corporate or institutional buying. This may involve purchasing supplies for internal use (referred to as indirect goods and services), purchasing raw materials for consumption during the manufacturing process, or purchasing goods for inventory to be resold as products in the distribution and retail process. In many organizations, acquisition or buying of services is called contracting, while that of goods is called purchasing or procurement. The supply management function of an organization is responsible for various aspects of these acquisitions: Working with business leaders who have identified a business need or requirement to identify, source, contract, and procure the needed good or service from qualified suppliers Managing supplier performance Implementing technologies, processes, policies, and procedures to support the purchasing process (Supplier Relationship Management). The supplier relationship management process: a process for providing the structure for how relationships with suppliers will be developed and maintained. Economic theories of supply and demand Supply management is generally regarded as a systematic business process that includes more functions than traditional buying, such as coordinating inbound and internal pre-production logistics and managing inventory. Supply management deals primarily with the oversight and management of materials and services inputs, management of the suppliers who provide those inputs, and support of the process of acquiring those inputs. The performance of supply management departments and supply management professionals is commonly measured in terms of the amount of money saved for the organization. However, managing risk is another aspect of supply management, since the quality goods and services an organization needs for its survival and growth may not be available at the time they are required. Groups and certifications Numerous professional organizations have formed to address the need for higher levels of supply management skill and expertise. One of the largest of these is the Institute for Supply Management, a United States not-for-profit association that includes more than 40,000 members. It is affiliated with the International Federation of Purchasing and Supply Management, a union of local and national purchasing associations with approximately 200,000 members. For companies seeking to fulfill diversity supplier spend commitments, the National Minority Supplier Development Council, with 37 affiliated regional councils, was established in 1972 to assist in promoting supplier development of Asian, Black, Hispanic and Native American-owned businesses, and providing management training and capacity-building to minority business enterprises and corporate program staff. Many certification programs are relevant to the supply management profession. Some are offered through non-profit associations, such as the Certified Professional in Supply Management (CPSM) through the Institute for Supply Management. There are also for-profit companies that offer certification programs, such as the Next Level Purchasing Association, which offers the Senior Professional in Supply Management® (SPSM®) Certification. Supply management is different from supply chain management, though it can be considered a component of supply chain management. 
Conversely, where the supply management function is established as a C-level strategic effort, supply chain management is but one component of an overall strategic supply management approach. Supply management is a complementary discipline that encompasses the alignment of organizations, processes, and systems for strategic sourcing, contract management, supplier management and spend analysis, to continuously improve global supply for best-value performance in support of the strategic objectives of the business. Supply management software Supply management software comprises all of the different solutions which automate the source-to-settle process, including Spend Analysis, eSourcing, Contracts, Supply Base Management, eProcurement, eCatalogs (for Supplier Enablement), and Accounts Payable or ePayables solutions. Software that helps automate the management of complex services like business travel and temporary labor is also included in this software segment. One report that focuses on a subset of the space is a Gartner research report: "Sourcing applications provide a systematic and scalable means for organizations to manage the full sourcing process, including finalizing purchase specifications, selecting suppliers and negotiating prices....Most sourcing solution vendors bundle spend analysis, contract management and supplier performance management tools into their suites." The Gartner report summarizes, "Best-of-breed providers with suites delivered via software as a service dominate the strategic sourcing application market, while ERP companies with integrated offerings are gaining traction by providing tactical sourcing support." Gartner estimates the sourcing software market at close to a half-billion dollars in 2007 with an annual growth rate of 5%. According to Gartner, the research firm, leading providers of supply and contract management software include SAP, Ariba, Zycus, GEP Worldwide, BravoSolution, Ivalua, Inc., AECsoft, Rosslyn Analytics and Emptoris. Supply management is one of the processes included in procure-to-pay, source-to-pay, and similar ERP software implementations. References External links Supply Management magazine online Procurement
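The "spend analysis" bundled into these suites is, at its core, an aggregation of purchasing records by supplier and category. Purely as an illustration (the records, field names and figures below are hypothetical and are not drawn from any of the products named above), a minimal Python sketch of the kind of roll-up such tools automate:

from collections import defaultdict

# Hypothetical purchase-order records; real spend-analysis tools pull these
# from ERP or accounts-payable systems.
purchase_orders = [
    {"supplier": "Acme Metals", "category": "raw materials", "amount": 120000.0},
    {"supplier": "Acme Metals", "category": "raw materials", "amount": 45000.0},
    {"supplier": "Globex Travel", "category": "business travel", "amount": 18000.0},
    {"supplier": "Initech Staffing", "category": "temporary labor", "amount": 60000.0},
]

def spend_by(records, field):
    # Total spend per value of the given field (e.g. supplier or category),
    # ranked highest first, the usual starting point for sourcing decisions.
    totals = defaultdict(float)
    for record in records:
        totals[record[field]] += record["amount"]
    return sorted(totals.items(), key=lambda item: item[1], reverse=True)

print(spend_by(purchase_orders, "supplier"))
print(spend_by(purchase_orders, "category"))

Commercial spend-analysis products add data cleansing, supplier name normalization and category taxonomies on top of this kind of basic aggregation.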
22693274
https://en.wikipedia.org/wiki/Oklahoma%20Christian%20University
Oklahoma Christian University
Oklahoma Christian University (OC) is a private Christian university in Oklahoma City, Oklahoma. It was founded in 1950 by members of the Churches of Christ. History Oklahoma Christian University was originally named Central Christian College, founded in 1950 by members of the Churches of Christ. It opened as a two-year college with 97 students in Bartlesville on the former estate of Henry Vernon "H.V." Foster, a prominent oil businessman. L.R. Wilson was the college's first president, having founded Florida Christian College four years before. Harold Fletcher, who became an OC emeritus professor of music, was the first faculty member hired for the new college. James O. Baird became the school's second president in 1954. Soon after, plans were made to move the campus to Oklahoma City. Groundbreaking occurred on the far north edge of Oklahoma City in 1957 and the university was relocated in 1958. It was renamed Oklahoma Christian College in 1959 and began offering the bachelor's degree, with its first senior class graduating in 1962. Full accreditation was obtained from the North Central Association of Colleges and Schools in 1965. In the 1990s, the school restructured its academic departments into separate colleges and the name of the institution was changed initially to Oklahoma Christian University of Science and Arts before being truncated to "Oklahoma Christian University". In 1981, OC became the sponsor of The Christian Chronicle. In 2014, OC began the Ethos spiritual development program, in which students can earn "kudos" for attending any of the 26 small chapels that foster community, discipleship, and servanthood. Technology In August 2001, OC became one of the few college campuses in the US, and the only one in the state, to provide a campus-wide wireless Internet service and a personal laptop computer to every full-time student. In 2008, Oklahoma Christian University began providing Apple's MacBook to all full-time students and faculty. Included with each MacBook was the choice of an iPhone or an iPod touch. Beginning with the fall 2010 semester, students also had the option of choosing an iPad for an additional charge. OC now provides information technology support for a "Bring Your Own Device" model. In 2013, OC's mobile computing program was honored as an Apple Distinguished Program. Academics All bachelor's degrees at OC require the completion of at least 126 semester hours. Not less than 30 hours must be earned in courses numbered 3000 or above, including at least 10 hours in the major field. Bachelor's degrees require completion of a core curriculum of 60 semester hours consisting of "Basic Skills" (14 hours), Bible (16 hours), "Basic Perspectives" (27 hours) and a 3-hour senior philosophy seminar. The university offers an honors program for highly motivated and skilled students. Honors program participants must have a high school GPA of 3.5 or higher, a minimum score of 28 on the ACT or 1250 on the SAT, and evidence of writing skills, and must be selected by interview. Through its Office of International Studies, OC offers semester-long study programs in Europe, based in the university's Das Millicanhaus in Vienna, Austria. OC also has shorter study abroad options in Asia and Honduras, plus additional options through the Council for Christian Colleges and Universities (CCCU). Faculty OC employs 94 full-time faculty members, more than 70 percent of whom hold a terminal degree in their respective fields. The undergraduate student-to-faculty ratio is 13-to-1. 
83 percent of classes contain fewer than 30 students. Presidents L. R. Wilson – 1950–1954 James O. Baird – 1954–1974 J. Terry Johnson – 1974–1996 Kevin Jacobs – 1996–2001 Alfred Branch – 2001–2002 Mike O'Neal – 2002–2012 John deSteiguer – 2012–present Athletics Oklahoma Christian competes in the Lone Star Conference. Prior to this, in 2012, Oklahoma Christian joined the NCAA Division II Heartland Conference as part of its candidacy for full membership in NCAA Division II. OC also joined the National Christian College Athletic Association in 2012. The Eagles and Lady Eagles field varsity teams in baseball, softball, men's and women's basketball, men's and women's cross country, men's and women's golf, men's and women's soccer, men's and women's swimming, men's and women's track and field and women's volleyball. Campus Oklahoma Christian University is located west of U.S. Interstate 35 just south of the north Oklahoma City suburb of Edmond. While it is widely believed to be inside Edmond city limits, the campus is actually in Oklahoma City. The campus is bounded by East Memorial Road to the south, Smiling Hills Boulevard to the north, S. Boulevard/N. Eastern Avenue to the west, and Benson Road and N. Bryant Road to the east. The main entrance to the campus is on Memorial Road and is marked by a large pond with a fountain. The campus contains more than 30 major buildings, with the majority built in an International and Mid-Century modern-influenced architectural style, unified through the use of red brick with light-colored stone ornamentations. The main entrance leads directly to the center of the campus. Prominently located in this area is the Williams-Branch Center for Biblical Studies (1987), which contains Scott Chapel. Directly north of Scott Chapel is the Mabee Learning Center (1966), which houses the Tom & Ada Beam Library, the Honors Program and the Department of Language and Literature. The Beam Library is where Safe at Home chapel meets. Located between the Williams-Branch Center and the library's front entrance is the Thelma Gaylord Forum (1987), a heavily landscaped public space and amphitheatre intended as a relaxing study area and a site for outdoor performances and events. East of the Mabee Learning Center are four of OC's earliest buildings, dating from 1959. Benson Hall housed the business office for many years, but returned to its original use as the main administrative building in 2013. Cogswell-Alexander Hall contains the registrar's office and information technology offices. Gaylord Hall is the site of the admissions and financial aid offices. Vose Hall contains science laboratories and classrooms. These buildings center around the university's original quadrangle and fountain. North of the original quadrangle is the Davisson American Heritage (DAH) Building (1970), which houses the Department of History and Political Science, the Department of Psychology and Family Studies and the School of Education. North of DAH are the Noble Science Wing (2011) and Herold Science Hall, site of OC's student undergraduate research program, and the Prince Engineering Center (1988), the location of OC's School of Engineering and its ABET-accredited mechanical, electrical and computer engineering programs. Located east of the main entrance is the 1,268-seat Baugh Auditorium, the main campus venue for performances and convocations. 
McIntosh Conservatory, an open meeting and performance space, links Baugh Auditorium with the Garvey Center (1978), consisting of Mabee Hall and Kresge Hall. Contained within the buildings are the Mabee Communications Center and the Fletcher Center for Music. Included in these areas are classrooms, offices and studios for OC's Communications and Music departments. Also contained within this complex is the 275-seat Judd Theatre, designed for thrust or proscenium theatre productions, and the 190-seat Adams Recital Hall, an elegant and traditional space for solo and small group music performances. East of Baugh Auditorium is the Harvey Business Center (1980), housing the School of Business Administration and OC's Information Technology Services. Also in this area of campus is the building originally designed for "Enterprise Square USA", an interactive museum dedicated to the promotion of American citizenship and free enterprise which operated from 1982 to 2002. OC's alumni and advancement offices currently operate out of this facility. The areas on the west side of the campus are largely devoted to student residences and recreation. The Gaylord University Center (1976/1997) contains the cafeteria, a snack bar, bookstore, health center, recreation areas and the Student Life and Student Government Association offices. North of the Gaylord University Center is the Payne Athletic Center (1970), site of a campus fitness facility, Olympic-size swimming pool, the Physical Education and Athletics Department offices and the "Eagles' Nest" gymnasium – OC's home court for basketball competitions. In 2007, The Oklahoman named the Eagles' Nest as one of the top-100 athletic venues in state history. Some of the newest additions to the campus lie between these buildings and the dormitories to the west. Lawson Commons, an outdoor mall area, contains McGraw Pavilion, a covered outdoor event space, and the Freede Centennial Tower, a clock tower that stands as a focal point on campus and commemorates the 2007 Oklahoma state centennial. In October 2009, the campus received a gift of more than 1,300 trees in five varieties, planted across the campus through a partnership between the Tree Bank Foundation and the Apache Foundation. In 2013, OC opened the Boker-Wedel Eagle Trail, a 5 km path around the campus. The side-by-side asphalt and crushed granite running paths span a distance of 3.1 miles around the campus and have lighting, landscaping and security phones. The trail connects with the growing Edmond running trails system and will eventually connect with the Oklahoma City running trails system. In April 2016, the university unveiled Hartman Place, a scripture garden and waterfall to be used as a place of devotion and reflection. One of the features of Hartman Place is a space designated for students to write, using chalk on slate, remembrances of loved ones they have lost. OC provides almost 1,800 on-campus living spaces in 11 residence halls and nine apartment complexes. Dormitories are located on the western end of the campus. Apartment complexes, available to upperclass and married students, are located across Benson Road on the east end of campus. The northernmost portions of the campus contain outdoor venues for soccer, softball (Tom Heath Field at Lawson Plaza), track and field (Vaughn Track), baseball (Dobson Field) and intramural sports. OC policies The university is guided by six "defining values": Faith, Scholarship, Integrity, Stewardship, Liberty and Leadership. 
OC retains a commitment to traditional biblical principles as expressed through the "Oklahoma Christian Covenant", which emphasizes that the "values and behavior of this Christian community are derived from the Bible". The covenant is described by the university as: Attendance at OC is open to all students, regardless of religious affiliation, who agree to abide by the ideals of the covenant. Full-time faculty and staff are required to be active members of a church of Christ. Attendance at daily chapel services (with a set number of allowed absences) is mandatory for all full-time students. OC has an exemption from Title IX regulations prohibiting discrimination based on gender identity or sexual orientation. OC's policies have a wide base of support, with trustees who work at organizations including Bank of America, OU College of Medicine, Kaiser-Francis Oil Company, and BancFirst. Cascade College OC operated Cascade College, a branch campus in Portland, Oregon, from 1994 until it closed in May 2009. Like OC, Cascade's full-time faculty and the majority of its students were members of Churches of Christ. In 1992, the Oklahoma Christian University Board of Trustees assumed the operation of the former Columbia Christian College after it suffered serious financial difficulties and lost accreditation. A year after Columbia closed, the new branch campus opened in 1994 as Cascade College. The North Central Association agreed that the accreditation of Oklahoma Christian, Oklahoma City, could extend to Cascade if close ties and supervision were maintained. In October 2008, the OC Board of Trustees announced that Cascade College would close after the spring 2009 semester. Dr. Bill Goad was the last president of Cascade and is now OC's executive vice president. Notable alumni Cliff Aldridge – former Republican member of the Oklahoma State Senate Jim Beaver – film and television actor, co-star of Deadwood and Supernatural Andrew K. Benton – seventh president of Pepperdine University Dan Branch (1980) – former member of the Texas House of Representatives from the Dallas area Sherri Coale (1987) – head coach, University of Oklahoma women's basketball Patrice Douglas (1983) – former member of the Oklahoma Corporation Commission Joe Clifford Faust (1980) – science fiction author and freelance writer Allison Garrett (1984), VP for academic affairs (2007–2012); current president at Emporia State University Rhein Gibson – professional golfer and Guinness World Record holder Roderick Green (2002) – paralympic athlete Molefi Kete Asante (under his birth name, Arthur Lee Smith, Jr.; 1964) – scholar of African studies and African American studies at Temple University; founder of the first Ph.D. program in African-American studies Greg Lee – actor, host of PBS series Where in the World is Carmen Sandiego? and voice of Mayor/Principal Bob White on Doug. Roy Ratcliff (1970) – Christian minister, ministered to Jeffrey Dahmer Tess Teague (2012) – former Oklahoma State Representative Sam Winterbotham (1999) – head coach, University of Tennessee men's tennis References External links Oklahoma Christian Athletics website Universities and colleges in Oklahoma City Universities and colleges affiliated with the Churches of Christ Private universities and colleges in Oklahoma Educational institutions established in 1950 1950 establishments in Oklahoma Council for Christian Colleges and Universities
11189005
https://en.wikipedia.org/wiki/Super-server
Super-server
A super-server, sometimes called a service dispatcher, is a type of daemon generally run on Unix-like systems. Usage A super-server starts other servers when needed, normally with access to them checked by a TCP wrapper. It uses very few resources when idle. This can be ideal for workstations used for local web development, client/server development or low-traffic daemons with occasional usage (such as ident and SSH). Performance The creation of an operating system process embodying the sub-daemon is deferred until an incoming connection for the sub-daemon arrives. This results in a delay to the handling of the connection (in comparison to a connection handled by an already-running process). Whether this delay is incurred repeatedly for every incoming connection depends on the design of the particular sub-daemon; simple daemons usually require that a separate sub-daemon instance (i.e. a distinct, separate operating system process) be started for each and every incoming connection. Such a request-per-process design is more straightforward to implement, but for some workloads the extra CPU and memory overhead of starting multiple operating system processes may be undesirable. Alternatively, a single sub-daemon operating system process can be designed to handle multiple connections, allowing performance similar to a "stand-alone" server (except for the one-off delay for the first connection to the sub-daemon). Implementations inetd launchd systemd ucspi-tcp xinetd References Internet Protocol based network software Servers (computing)
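To make the request-per-process model described in the Super-server article above concrete, here is a minimal sketch in Python. The service table, port numbers and commands are hypothetical, and a real super-server such as inetd or xinetd adds configuration parsing, access control (TCP wrappers) and privilege handling on top of this.

import selectors
import signal
import socket
import subprocess

# Hypothetical service table: TCP port -> command to run per connection,
# playing the role of a tiny inetd.conf. The entries are illustrative only.
SERVICES = {
    7777: ["/bin/cat"],             # trivial echo-like service
    7778: ["/usr/bin/env", "date"], # writes the date and exits
}

# Let the kernel reap exited child services so no zombie processes accumulate.
signal.signal(signal.SIGCHLD, signal.SIG_IGN)

selector = selectors.DefaultSelector()
for port, command in SERVICES.items():
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("127.0.0.1", port))
    listener.listen()
    selector.register(listener, selectors.EVENT_READ, data=command)

while True:
    for key, _ in selector.select():
        conn, _addr = key.fileobj.accept()
        # Request-per-process: start a fresh sub-daemon for this connection,
        # wired to the socket through its standard input and output.
        subprocess.Popen(key.data, stdin=conn.fileno(), stdout=conn.fileno())
        conn.close()  # the child keeps its own duplicated descriptors

The single long-running super-server process holds one listening socket per configured service and spawns a child only when a connection actually arrives, which is why an idle super-server consumes so few resources.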
18540320
https://en.wikipedia.org/wiki/Sheffield%20Software%20Engineering%20Observatory
Sheffield Software Engineering Observatory
The Sheffield Software Engineering Observatory (Observatory) was founded in 2005 with an EPSRC grant at the University of Sheffield. The Observatory is a multi-disciplinary collaboration between the Department of Computer Science and the Institute of Work Psychology at the University of Sheffield. Overview Its aim is to understand the processes that make for good software engineering practice, and how these need to combine human and technical factors. The Software Engineering Observatory is an empirical software engineering research facility where researchers can use a variety of methodologies to study software developers working on real industrial projects. The software developers are students, both undergraduate and postgraduate, and up to 20 group projects were undertaken each year. Thus, researchers can investigate how software developers work in teams, deal with industrial clients and handle the plethora of problems that arise in group projects with tight time-scales. A key feature is that the Observatory allows multiple teams to work on identical projects concurrently in competition with each other, which allows comparisons to be made of different software development processes. The Observatory enables researchers to gather data that are relevant to many of the key issues in contemporary software engineering, which will be of interest to both academics and practitioners. The implications of the results so far are that effective software managers must not just understand the technical aspects of the work that their staff are doing, but must also understand their staff as individuals and how they can best work together in teams. Research areas The Observatory's research agenda includes: Assessing, through controlled experiments, the relative merits of software development methods and methodologies in terms of both the quality of output and the well-being of the developers. Devising empirically based models of the processes that developers are observed to use. Identifying the factors that make for good team-based software development, including leadership, the personality, skill, gender and ethnic mix of teams, and how task conflict can contribute, constructively, to enhanced performance. Investigating the relative importance of (a) the methodology adopted by the team and the degree of fidelity to it, (b) the individual participant's motivation and knowledge, and (c) team processes in accounting for variability in the performance of the group. The data from these experiments will be made available to bona fide researchers in empirical software engineering. History The Observatory was founded in 2005; however, prior to that a number of PhD students had designed experiments and collected data on the software engineering process. These were all based on the pioneering taught courses devised at the University of Sheffield. References External links Software Observatory homepage epiGenesys - a University of Sheffield company University of Sheffield Software engineering organizations
84966
https://en.wikipedia.org/wiki/Turnus
Turnus
Turnus was the legendary King of the Rutuli in Roman history, and the chief antagonist of the hero Aeneas in Virgil's Aeneid. According to the Aeneid, Turnus is the son of Daunus and the nymph Venilia and is brother of the nymph Juturna. Historical tradition While there is a limited amount of information in historical sources about Turnus, some key details about Turnus and the Rutuli differ significantly from the account in the Aeneid. The only source predating the Aeneid is Marcus Porcius Cato's Origines. Turnus is also mentioned by Livy in his Ab Urbe Condita and by Dionysius of Halicarnassus in his Rômaïkê Archaiologia ("Roman Antiquities"), both of which come later than the Aeneid. Turnus is mentioned in the Book of Jasher, along with Angeas of Africa. In all of these historical sources, Turnus' heritage is unclear. Dionysius calls him Tyrrhenus, which means "Etruscan", while other sources suggest a Greek ancestry. In all of these sources, Turnus and his Rutulians are settled in Italy prior to the arrival of the Trojans and are involved in the clash between the Latins and the Trojans, but there is a great deal of discrepancy in details. It appears that Virgil drew on a variety of historical sources for the background of Turnus in the Aeneid. Virgil's Aeneid Prior to Aeneas' arrival in Italy, Turnus was the primary potential suitor of Lavinia, the only daughter of Latinus, King of the Latin people. Upon Aeneas' arrival, however, Lavinia is promised to the Trojan prince. Juno, determined to prolong the suffering of the Trojans, prompts Turnus to demand a war with the new arrivals. King Latinus is greatly displeased with Turnus, but steps down and allows the war to commence. During the war between the Latins and the Trojans (the latter joined by several allies, including King Evander's Arcadians), Turnus proves himself to be brave but hot-headed. In Book IX, he nearly takes the fortress of the Trojans after defeating many opponents, but soon gets into trouble and is only saved from death by Juno. In Book X, Turnus slays the son of Evander, the young prince Pallas. As he gloats over the killing, he takes Pallas' sword belt as a spoil of war and puts it on. Enraged, Aeneas seeks out the Rutulian king with the full intent of killing him. Virgil marks the death of Pallas by mentioning the inevitable downfall of Turnus. To prevent his death at the hands of Aeneas, Juno conjures a ghost apparition of Aeneas, luring Turnus onto a ship and to his safety. Turnus takes great offense at this action, questioning his worth and even contemplating suicide. In Book XII, Aeneas and Turnus duel to the death; Aeneas gains the upper hand amidst a noticeably Iliad-esque chase sequence (Aeneas pursues Turnus ten times round, between the walls of Latium and the lines of men, much as in the duel between Achilles and Hector), wounding Turnus in the thigh. Turnus begs Aeneas either to spare him or to give his body back to his people. Aeneas considers sparing him, but upon seeing the belt of Pallas on Turnus, he is consumed by rage and finishes him off. The last line of the poem describes Turnus' unhappy passage into the Underworld. Turnus' supporters include his sister, the minor river and fountain deity Juturna; Latinus's wife, Amata; the deposed king of the Etruscans, Mezentius; and Queen Camilla of the Volsci, allies in Turnus' fight against Aeneas, the Trojans, and their allies. 
In later literature In the Middle English poem Sir Gawain and the Green Knight, the unknown poet cites, as a parallel to Brutus of Troy's founding of Britain, that of an unidentified "Ticius" to Tuscany. Although some scholars have tried to argue that "Ticius" is derived from Titus Tatius, Otis Chapman has proposed that "Ticius" is a scribal error for what the poet intended to read as Turnus. On top of manuscript stylometric evidence, Chapman notes that in a passage in Ranulf Higden's Polychronicon, Turnus is also named as King of Tuscany. This suggests that legends in the age after Virgil came to identify Turnus "as a legendary figure like Aeneas, Romulus, "Langeberde", and Brutus". In Book IX of John Milton's Paradise Lost, the story of Turnus and Lavinia is mentioned in relation to God's anger at Adam and Eve. Interpretation Turnus can be seen as a "new Achilles," due to his Greek ancestry and his fierceness. According to Barry Powell, he may also represent Mark Antony or local peoples who must submit to Rome's empire. Powell adds that in the dispute between Turnus and Aeneas, Turnus may have the moral upper hand, having been set to marry Lavinia first. However, Turnus must be stopped since he is running counter to the force of destiny. References External links Roman mythology Characters in the Aeneid Characters in works by Geoffrey of Monmouth
27336594
https://en.wikipedia.org/wiki/Rockchip
Rockchip
Rockchip (Fuzhou Rockchip Electronics Co., Ltd.) is a Chinese fabless semiconductor company based in Fuzhou, Fujian province. Rockchip has provided SoC products for tablets and PCs, streaming-media TV boxes, AI audio and vision devices, and IoT hardware since it was founded in 2001. It has offices in Shanghai, Beijing, Shenzhen, Hangzhou and Hong Kong. It designs system on a chip (SoC) products, using the ARM architecture licensed from ARM Holdings for the majority of its projects. Rockchip has been ranked among the top 50 fabless IC suppliers worldwide. The company has established cooperation with Google, Microsoft and Intel. On 27 May 2014, Intel announced an agreement with Rockchip to adopt the Intel architecture for entry-level tablets. Rockchip is a supplier of SoCs to Chinese white-box tablet manufacturers as well as to OEMs such as Asus, HP, Samsung and Toshiba. Products Featured Products The RK3399 is Rockchip's flagship SoC, with dual Cortex-A72 and quad Cortex-A53 CPU cores and a Mali-T860MP4 GPU, providing high computing and multimedia performance and rich interfaces and peripherals. Its software supports multiple APIs, including OpenGL ES 3.2, Vulkan 1.0, OpenCL 1.1/1.2 and OpenVX 1.0, and its AI interfaces support TensorFlow Lite and the Android NN API. RK3399 Linux source code and hardware documents are available on GitHub and the company's open-source wiki. The RK3566 is a successor to the RK3288 and outperforms it significantly, with a quad-core Arm Cortex-A55 CPU and an Arm Mali-G52 GPU. Boards based on it are expected to be on sale in early 2021 from manufacturers like Pine64. The RK3288 is a high-performance IoT platform with a quad-core Cortex-A17 CPU, a Mali-T760MP4 GPU, 4K video decoding and 4K display output. It is used in products across various industries, including vending machines, commercial displays, medical equipment, gaming, intelligent POS, interactive printers, robots and industrial computers. RK3288 Linux source code and hardware documents are available on GitHub and the company's open-source wiki. The RK3326 and PX30 were announced in 2018 and are designed for smart AI solutions. The PX30 is a variant of the RK3326 targeting the IoT market and supporting dual VOP. Both use Arm's newer-generation Cortex-A35 CPU and Mali-G31 GPU. The RK3308 is another chipset targeting smart AI solutions. It is an entry-level chipset aimed at mainstream devices. The chip has multiple audio input interfaces and greater energy efficiency, featuring embedded VAD (voice activity detection). The announcement of the RV1108 marked Rockchip's move into AI and computer vision. With an embedded CEVA DSP, the RV1108 powers smart cameras including 360° video cameras, IP cameras, drones, car camcorders, sports DVs and VR devices. It has also been deployed for new retail and intelligent marketing applications with integrated algorithms. Early Products RK26xx series - Released 2006. RK27xx series - Rockchip was first known for its RK27xx series, which was very efficient at MP3/MP4 decoding and was integrated in many low-cost personal media player (PMP) products. RK28xx series The RK2806 was targeted at PMPs. The RK2808A is an ARM926EJ-S derivative. Along with the ARM core, a DSP coprocessor is included. The native clock speed is 560 MHz. ARM rates the performance of the ARM926EJ-S at 1.1 DMIPS/MHz; the performance of the Rockchip 2808 when executing ARM instructions is therefore 660 DMIPS, roughly 26% of the speed of Apple's A4 processor. The DSP coprocessor can support the real-time decoding of 720p video files at bitrates of up to 2.5 Mbit/s. This chip was the core of many Android and Windows Mobile-based mobile internet devices. 
The RK2816 was targeted at PMP devices, and MIDs. It has the same specifications as the RK2806 but also includes HDMI output, Android support, and up to 720p hardware video acceleration. RK29xx series The Rockchip RK291x is a family of SoCs based on the ARM Cortex-A8 CPU core. They were presented for the first time at CES 2011. The RK292x are single core SoCs based on ARM Cortex-A9 and were first introduced in 2012. The RK2918 was the first chip to decode Google WebM VP8 in hardware. It uses a dynamically configurable companion core to process various codecs. It encodes and decodes H.264 at 1080p, and can decode many standard video formats including Xvid, H.263, AVS, MPEG4, RV, and WMV. It includes a Vivante GC800 GPU that is compatible with OpenGL ES 2.0 and OpenVG. The RK2918 is compatible with Android Froyo (2.2), Gingerbread (2.3), HoneyComb (3.x) and Ice Cream Sandwich (4.0). Unofficial support for Ubuntu and other Linux flavours exists. As of 2013, it was targeted at E-readers. The RK2906 is basically a cost-reduced version of the RK2918, also targeted at E-readers as of 2013. The Rockchip RK2926 and RK2928 feature a single core ARM Cortex A9 running at a speed up to 1.0 GHz. It replaces the Vivante GC800 GPU of the older RK291x series with an ARM Mali-400 GPU. As of 2013, the RK2926 was targeted at tablets, while the RK2928 was targeted at tablets and Android TV dongles and boxes. The RK3066 is a high performance dual-core ARM Cortex-A9 mobile processor similar to the Samsung Exynos 4 Dual Core chip. In terms of performance, the RK3066 is between the Samsung Exynos 4210 and the Samsung Exynos 4212. As of 2013, it was targeted at tablets and Android TV dongles and boxes. It has been a popular choice for both tablets and other devices since 2012. The RK3068 is a version of the RK3066 specifically targeted at Android TV dongles and boxes. Its package is much smaller than the RK3066. The RK3028 is a low-cost dual-core ARM Cortex-A9-based processor clocked at 1.0 GHz with ARM Mali-400 GPU. It is pin-compatible with the RK2928. It is used in a few kids tablets and low-cost Android HDMI TV dongles. The RK3026 is an updated ultra-low-end dual-core ARM Cortex-A9-based tablet processor clocked at 1.0 GHz with ARM Mali-400 MP2 GPU. Manufactured at 40 nm, it is pin-compatible with the RK2926. It features 1080p H.264 video encoding and 1080p decoding in multiple formats. Supporting Android 4.4, it has been adopted for low-end tablets in 2014. The RK3036 is a low-cost dual-core ARM Cortex-A7-based processor released in Q4 2014 for smart set-top boxes with support for H.265 video decoding. RK31xx series The RK3188 was the first product in the RK31xx series, announced for production in the 2nd quarter of 2013. The RK3188 features a quad-core ARM Cortex-A9 clocked up to 1.6 GHz frequency. It is targeted at tablets and Android TV dongles and boxes, and has been a popular choice for both tablets and other devices requiring good performance. 28 nm HKMG process at GlobalFoundries Quad-core ARM Cortex-A9, up to 1.6 GHz 512 KB L2 cache Mali-400 MP4 GPU, up to 600 MHz (typically 533 MHz) supporting OpenGL ES 1.1/2.0, Open G 1.1 High performance dedicated 2D processor DDR3, DDR3L, LPDDR2 support Dual-panel display up to 2048x1536 resolution The RK3188T is a lower-clocked version of the RK3188, with the CPU cores running at a maximum speed of 1.4 GHz instead of 1.6 GHz. The Mali-400MP4 GPU is also clocked at a lower speed. 
As of early 2014, many devices advertised as using an RK3188 with a maximum clock speed of 1.6 GHz actually have an RK3188T with clock speed limited to 1.4 GHz. Operating system ROMs specifically made for the RK3188 may not work correctly with an RK3188T. The RK3168, first shown in April 2013, is a dual-core Cortex-A9-based CPU, also manufactured using the 28 nm process. It is targeted at low-end tablets. The chip has seen only limited use as of May 2014. The RK3126 is an entry-level tablet processor introduced in Q4 2014. Manufactured using a 40 nm process, it features a quad-core Cortex-A7 CPU up to 1.3 GHz and a Mali-400 MP2 GPU. It is pin-compatible with RK3026 and RK2926. 40 nm process Quad-core ARM Cortex-A7, up to 1.3 GHz Mali-400 MP2 GPU High performance dedicated 2D processor DDR3, DDR3L memory interface 1080p multi-format video decoding and 1080p video encoding for H.264 The RK3128 is a higher-end variant of RK3126, also to be introduced in Q4 2014, that features more integrated external interfaces, including CVBS, HDMI, Ethernet MAC, S/PDIF, Audio DAC, and USB. It targets more fully featured tablets and set-top boxes. RK32xx series Rockchip announced the RK3288 for production in the second quarter of 2014. Recent information suggests that the chip uses a quad-core ARM Cortex-A17 CPU, although the cores are technically ARM Cortex-A12, which as of October 1, 2014, ARM decided to also refer to as Cortex-A17 because the latest production version of the Cortex-A12 performs at a level similar to the Cortex-A17. 28 nm HKMG process. Quad-core ARM Cortex-A17, up to 1.8 GHz Quad-core ARM Mali-T760 MP4 (also incorrectly called Mali-T764) GPU clocked at 600 MHz supporting OpenGL ES 1.1/2.0/3.0/3.1, OpenCL 1.1, Renderscript, Direct3D 11.1 High performance dedicated 2D processor 1080p video encoding for H.264 and VP8, MVC 4K H.264 and 10-bit H.265 video decode, 1080p multi-video decode Supports 4Kx2K H.265 resolution Dual-channel DDR3, DDR3L, LPDDR2, LPDDR3 Up to 3840x2160 display output, HDMI 2.0 RK3288 controversy Early reports, including from Rockchip itself, first suggested in summer 2013 that the RK3288 was originally designed using a quad-core ARM Cortex-A12 configuration. Rockchip's primary foundry partner GlobalFoundries announced a partnership with ARM to optimize the ARM Cortex-A12 for its 28 nm-SLP process. This is the same process used for earlier Rockchip chips such as the RK3188, and matches the choice of Cortex-A12 cores in the design of the RK3288. In January 2014, official marketing materials listed the CPU cores as ARM Cortex-A17. At the CES electronics show in January 2014, the CPU specification was apparently corrected to ARM Cortex-A12 instead of Cortex-A17 on one of the panels of the company's show booth. However, since then, official specifications from Rockchip's website and marketing materials, as well as specifications used by device manufacturers, have continued to describe the CPU as a quad-core ARM Cortex-A17. Recent testing of early RK3288-based TV boxes (August/September 2014) provided evidence that the RK3288 technically contains Cortex-A12 cores, since the "ARM 0xc0d" CPU architecture reported by CPU-Z for Android is the reference for Cortex-A12, while the original Cortex-A17 is referred to as "ARM 0xc0e". 
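The part numbers quoted above come from the ARM main ID register, which the Linux kernel exposes per core as a "CPU part" field in /proc/cpuinfo. As an illustration only (the helper function and its small lookup table are hypothetical and not part of CPU-Z or any Rockchip software), a minimal Python sketch of reading those values on an ARM Linux device:

import re

# Part numbers referred to in the text: 0xc0d identifies Cortex-A12 silicon,
# 0xc0e the original Cortex-A17; 0xc09 (Cortex-A9) is included for older chips.
KNOWN_PARTS = {"0xc09": "Cortex-A9", "0xc0d": "Cortex-A12", "0xc0e": "Cortex-A17"}

def arm_cpu_parts(path="/proc/cpuinfo"):
    # Collect the distinct 'CPU part' values the kernel reports for the cores.
    parts = set()
    with open(path) as cpuinfo:
        for line in cpuinfo:
            match = re.match(r"CPU part\s*:\s*(0x[0-9a-f]+)", line)
            if match:
                parts.add(match.group(1))
    return {part: KNOWN_PARTS.get(part, "unknown") for part in parts}

print(arm_cpu_parts())

On an early RK3288 device this would be expected to report 0xc0d, consistent with the CPU-Z results described above.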
However, on the ARM community website, ARM clarified the situation on October 1, 2014, saying that Cortex-A12, for which Rockchip is one of the few known customers, will be called Cortex-A17 from now on, and that all references to Cortex-A12 have been removed from ARM's website. ARM explained that the latest production revision of Cortex-A12 now performs close to the level of Cortex-A17 because the improvements of the Cortex-A17 now also have been applied to the latest version of Cortex-A12. In this way, Rockchip now gets the official blessing from ARM for listing the cores inside the RK3288 as Cortex-A17. The first Android TV stick based on RK3288 was launched in November 2014 ("ZERO Devices Z5C Thinko"). RK33xx series Rockchip announced RK3368, the first member of the RK33xx family, at the CES show in January 2015. The RK3368 is a SoC targeting tablets and media boxes featuring a 64-bit octa-core Cortex-A53 CPU and an OpenGL ES 3.1-class GPU. 64bits Octa-Core Cortex-A53, up to 1.5 GHz High-performance PowerVR SGX6110 GPU with support for OpenGL 3.1 and OpenGL ES 3.0 4Kx2K H.264/H.265 real-time video playback HDMI 2.0 with 4Kx2K @ 60 fps display output The RK3399, also known as OP1 announced by ARM at Mobile World Congress in February 2016, features six 64 bit CPUs, including 2 Cortex-A72 and 4 Cortex-A53. The RK3399 is used for the development of the open source Panfrost driver for ARM Mali GPU Midgard series. Consumer devices include Asus Chromebook Flip C101PA-DB02, Asus Chromebook Tablet CT100, Samsung Chromebook Plus, and Pine64 Pinebook Pro. SBCs include 96Boards RK1808, Boardcon EM3399, Firefly RK3399, Khadas Edge, Lenovo Leez LP710, NanoPi M4B, Rock Pi 4, Pine64 RockPro64, Orange Pi 4, and Zidoo M9. SOMs include BeiQi RK3399Pro AIoT (Compatible 96boards), Boardcon PICO3399 SO-DIMM, and Geniatech SOM3399 RK3399 (Compatible 96boards). The RK3399Pro is a version of the RK3399 that includes a 2.4 TOPS NPU. SBCs include Rock Pi N10, Toybrick RK3399Pro, and VMARC RK3399Pro SoM Ficus2 Evaluation Board. SOM example is VMARC RK3399Pro SoM. RK35xx series The RK3566 is expected to be available in Q2 2020, with the following specifications: CPU – Quad-core Arm Cortex-A55 @ 1.8 GHz GPU – Arm Mali-G52 2EE NPU – 0.5 TOPS with support for INT8/ INT16 Multi-Media 8M ISP 2.0 with 3F HDR (Line-based/Frame-based/DCG) Support MIPI-CSI2,4-lane 1080p60 H.265, H.264 encoding 4K H.264/H.265/VP9 60fps video decoder DVP interface with BT.656/BT.1120 Memory – 32-bit DDR3L/LPDDR3/DDR4/LPDDR4/LPDDR4X Storage – eMMC 4.51, NAND Flash, SFC NOR flash, SATA 3.0, SD card via SDIO Display Support Dual Display MIPI-DSI/RGB interface LVDS/eDP/DP HDMI 2.0 Audio – 2x 8-ch I2S, 2x 2-ch I2S, PDM, TDM, SPDIF Networking –  2x RGMII interfaces (Gigabit Ethernet) with TSO (TCP segmentation offload ) network acceleration USB – USB 2.0 OTG and USB 2.0 host; USB3.0 HOST Other peripherals PCIe 3x SDIO 3.0 interface for Wi-Fi and SD card 6x I2C, 10x UART, 4x SPI, 8x PWM, 2xCAN interface RK3566-based SBC example is Pine64 Quartz64. RK3568-based SBC example is Firefly Station P2, and SOM example is Core-3568J AI Core Board. The RK3588 succeeds the RK3399Pro as flagship SoC. It's expected to be available in Q3/Q4 2020. 
CPU – 4x Cortex-A76 and 4x Cortex-A55 cores in DynamIQ configuration
GPU – Arm "Natt" GPU
NPU 2.0 (Neural Processing Unit)
Multimedia – 8K video decoding support, 4K encoding support
Display – 4K video output, dual-display support
Process – 8 nm LP

Open-source commitment

Rockchip provides open-source software on GitHub and maintains a wiki Linux SDK website that offers free downloads of SoC hardware documents and software development resources, as well as information on third-party development kits. The chipsets covered are the RK3399, RK3288, RK3328 and RK3036.

Markets and competition

In the market for tablet SoCs, Rockchip faces competition from Allwinner Technology, MediaTek, Intel, Actions Semiconductor, Spreadtrum, Leadcore Technology, Samsung Semiconductor, Qualcomm, Broadcom, VIA Technologies and Amlogic. After establishing a position early in the developing Chinese tablet SoC market, in 2012 it faced a challenge from Allwinner. In 2012, Rockchip shipped 10.5 million tablet processors, compared to 27.5 million for Allwinner. However, for Q3 2013, Rockchip was forecast to ship 6 million tablet-use application processors in China, compared to 7 million for Allwinner, which mainly shipped single-core products. Rockchip was reported to be the number-one supplier of tablet-use application processors in China in Q4 2013, Q1 2014 and Q2 2014. Chinese SoC suppliers that do not have cellular baseband technology are at a disadvantage compared to companies such as MediaTek that also supply the smartphone market, as white-box tablet makers increasingly add phone or cellular data functionality to their products. Intel Corporation made investments into the tablet processor market and was heavily subsidizing its entry into the low-cost tablet market as of 2014.

Cooperation with Intel

In May 2014, Intel announced an agreement with Rockchip under which the two companies will jointly deliver an Intel-branded mobile SoC platform. The quad-core platform will be based on an Intel Atom processor core integrated with Intel's 3G modem technology and is expected to be available in the first half of 2015. Both Intel and Rockchip will sell the new part to OEMs and ODMs, primarily into each company's existing customer base.

As of October 2014, Rockchip was already offering Intel's XMM 6321 for low-end smartphones. It consists of two chips: a dual-core application processor (with either Intel or ARM Cortex-A5 processor cores) with an integrated modem (XG632), and an integrated RF chip (AG620) that originates from the cellular chip division of Infineon Technologies, which Intel had acquired. The application processor may also originate from Infineon or Intel. Rockchip had not previously targeted the smartphone market in a material way.
List of Rockchip SoCs

ARMv7-A processors

ARMv8-A processors

Tablet processors with integrated modem

See also
List of Rockchip products
List of Qualcomm Snapdragon processors
Samsung Exynos
Rockchip RK3288 Chromebook
List of applications of ARM cores
ARM Cortex-A53
Allwinner Technology
Amlogic
Actions Semiconductor
Leadcore Technology
MediaTek
Nufront
Spreadtrum

References

External links
Rockchip Wiki Linux SDK
Github Rockchip-linux Website
Fuzhou Rockchip Electronics Company website
Rockchip Korea Company website
RK3288 SoC specification, 22 February 2014
RK3368 SoC specification, 19 April 2015

Fabless semiconductor companies
Semiconductor companies of China
ARM architecture
Embedded microprocessors
System on a chip
Companies based in Fuzhou
Computer companies established in 2001
Privately held companies of China
Chinese brands
Microprocessors made in China
Chinese companies established in 2001
Electronics companies established in 2001
34271636
https://en.wikipedia.org/wiki/Game-Maker
Game-Maker
Game-Maker (aka RSD Game-Maker) is an MS-DOS-based suite of game design tools, accompanied by demonstration games, produced between 1991 and 1995 by the Amherst, New Hampshire-based Recreational Software Designs and sold through direct mail in the US by KD Software. Game-Maker was also sold under various names by licensed distributors in the UK, Korea, and other territories, including Captain GameMaker (Screen Entertainment, UK) and Create Your Own Games With GameMaker! (Microforum, Canada). Game-Maker is notable as one of the first complete game design packages for DOS-based PCs, for its fully mouse-driven graphical interface, and for its early support for VGA graphics, Sound Blaster sound, and full-screen four-way scrolling.

Primary distribution for Game-Maker was through advertisements in the back of PC and game magazines such as Computer Gaming World and VideoGames & Computer Entertainment. At release Game-Maker was priced at $89, and shipped on 5.25" diskette with seven or eight demonstration or tutorial games. Later releases were less expensive, and shipped on CD-ROM with dozens of sample games and a large selection of extra tools and resources.

After some consultation with the user base, on July 12, 2014 original coder Andy Stone released the Game-Maker 3.0 source code on GitHub, under the MIT license.

Construction

Game-Maker consists of a text-mode wrapper, tying together a collection of WYSIWYG design tools. The tools produce proprietary resources that are compiled together and parsed with RSD's custom XFERPLAY game engine. The design tools include:
Palette Designer - for designing and editing custom 256-color .PAL palette files (for sprites, color #255 is clear)
Block Designer - for designing 20x20 pixel .BBL background tiles and .CBL/.MBL animation frames for characters and monsters
Character Maker - for animating and sequencing .CHR character sprites
Monster Maker - for animating and sequencing .MON "monster" (i.e., non-player) sprites
Map Maker - for designing 100x100 tile .MAP files (10 screens tall; 6-1/4 screens across)
Graphics Image Reader - for importing visuals from .GIF files, produced with external painting programs
Sound Designer - for designing PC speaker .SND files, assigning Sound Blaster .VOC samples, and formatting .CMF music files
Integrator - for compiling and organizing resources together into a playable .GAM file

Game-Maker involves no scripting language; all design tools use a mouse-driven 320x200 VGA display, with a shared logic and visual theme. Users draw background tiles pixel by pixel in an enlarged window, and can pull tiles from the palette to arrange in a "sandbox" area. A further menu allows users to set physical properties—solidity, gravity, animation, various counter values—for each block. The user draws maps by pulling blocks from the palette and painting with them using simple paintbrush, line, shape, and fill tools.

Characters can have up to 15 keyboard commands, plus idle, death, and injury animations. They can hold an inventory and money, earn score, gain and lose hit points and lives, and track several counters—often used for keys and similar functions. Monsters have simple animations and movements, and can also change behavior in response to the player.

Playable games can be exported complete with a portable version of the XFERPLAY engine, sound drivers, and configuration files. All games record high scores and (in later versions) attract mode replays. All games also feature instant save and load, and support standard PC joysticks.
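As a rough illustration of the block and map model described above, the sketch below represents 20x20-pixel blocks with their per-block physical properties and the 100x100-block map that the Map Maker paints. The class and field names are invented for this example and do not reflect Game-Maker's actual .BBL or .MAP binary formats.

```python
# Illustrative data model only -- not Game-Maker's real file layout.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

TILE_SIZE = 20                 # pixels per block edge, as described above
MAP_WIDTH = MAP_HEIGHT = 100   # blocks per map axis (2000x2000 pixels total)

@dataclass
class Block:
    pixels: List[List[int]]    # 20x20 grid of 256-color palette indices
    solid: bool = False        # blocks character movement
    gravity: bool = False      # pulls the character downward
    animated: bool = False     # cycles through animation frames
    counter: int = 0           # generic counter value (keys, coins, ...)

@dataclass
class GameMap:
    blocks: Dict[int, Block] = field(default_factory=dict)   # block id -> Block
    grid: List[List[int]] = field(
        default_factory=lambda: [[0] * MAP_WIDTH for _ in range(MAP_HEIGHT)]
    )

    def paint(self, x: int, y: int, block_id: int) -> None:
        """Place a block id at map coordinates, like the Map Maker's brush."""
        self.grid[y][x] = block_id

    def pixel_size(self) -> Tuple[int, int]:
        """Total map size in pixels: (2000, 2000)."""
        return (MAP_WIDTH * TILE_SIZE, MAP_HEIGHT * TILE_SIZE)
```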
In later versions of the software, games also can incorporate several formats, including ASCII text data, CompuServe .GIF files, and Autodesk Animator .FLI animations, into multimedia presentations during menus and between levels. Although Game-Maker includes no tools for developing these files, the formats are standardized enough to allow the user a choice of standalone utilities. In addition, image data produced with outside programs such as Deluxe Paint is easily imported and split into background tiles or sprites.

Game engine

Through RSD's proprietary XFERPLAY engine, all Game-Maker games run in 256-color full-screen VGA, at an eccentric 312x196 resolution (switching to the more standard 320x200 for menu screens). Game-Maker games are also distinguished by their eccentric 20x20 tile and sprite size (as opposed to the more standard 8x8 or 16x16 dimensions), populating a standard 100x100 tile (2000x2000 pixel) map size. Transition between scenes is achieved through a slow fade to or from black.

All games share a common interface, with a menu screen offering six options: Play, Read Instructions, Read Storyline, See Credits, See Highest Scores, and Quit. Pressing F2 brings up an inventory screen, while F5 and F6 bring up save and load screens. Although most of these menus can be customized with .GIF backgrounds, their basic layout, labeling, and content are constant across all games. All games track player score, and display a high score table upon the game's end (whether through completion or failure). Later versions of Game-Maker allow multimedia sequences between levels, including .GIF images, .FLI animations, and ASCII text files.

The engine allows one player at a time, with the screen automatically scrolling in any of the four cardinal directions when the character comes within 1/3 screen width or height of the screen's edge. All Game-Maker games lack an on-screen display (of hit points, score, lives, etc.), though much of this information can be tracked in the inventory screen.

History

Game-Maker developed from a series of modification tools for a top-down competitive maze game called Labyrinth, designed by Andrew Stone in January 1991. Although the engine is different, Labyrinth shared code and file formats with the later XFERPLAY engine, and graphical resources with several later first-party games. Whereas Labyrinth grew out of Andrew's interest in NetHack and Piers Anthony novels, one of Andrew's first goals was to expand his tools and engine to permit side-scrolling action-adventure games. "In fact, making something like Metroid was sort of the bar I set myself for version 1.0. Which is why I added the secret passage features, and gravity, early on."

In July 1991 Andrew and his father G. Oliver Stone incorporated Recreational Software Designs to pursue Game-Maker as a business venture—with Oliver as president and Andrew as CEO. Through Oliver's business acumen RSD made deals with KD Software and GameLynk to distribute Game-Maker and host its online community. Through 1992–1994 RSD placed a series of full-sized ads (and some smaller sizes) in major computer magazines, and in 1994 they sub-leased a booth at the Consumer Electronics Show in Chicago. At the time of Game-Maker's release the software was revolutionary both in concept and technology; although there were earlier game creation systems, Game-Maker was the first general-purpose graphical GCS for the dominant DOS/Windows-based PC.
Throughout the design process Andrew was adamant that Game-Maker's tools remain entirely visual, involving absolutely no programming from the end user. Its engine also supported full-screen four-way VGA scrolling, and later full-screen double-buffered redraws, well before these were the standard. Several updates followed over the next three years, adding Sound Blaster support, improving the design interface, and refining the game engine—yet many features kept being pushed back. Although his brother Oliver Jr. spent a summer on the project, and wrote the code for the sound and Monster editor, Andrew handled the bulk of the coding and updates — a task that, thanks to the lack of standardized drivers or libraries at that time, became all-encompassing and difficult to maintain. Over the software's lifetime Andrew found himself so "waylaid by video driver and [engine] problems" that he was unable to focus as much as he wanted on adding and refining features.

By the mid-1990s the advent of 3D video cards and the introduction of Windows 95 meant that to keep up with the marketplace Game-Maker would need great changes both in concept and in coding. Furthermore the continued lack of standardization meant a large investment in coding ever more complicated drivers and libraries—work that would be thrown away as soon as standards were established. Despite plans for a radical professional-quality update, RSD ceased support for Game-Maker around 1995.

In a 2011 interview Andrew mused about Game-Maker, stating that by his own principles he was surprised he hadn't released the source code years earlier. Later, on July 1, 2014, Andrew posted to the Game-Maker Facebook page, asking for community input on releasing the code. On July 12 he posted the Game-Maker 3.0 source to GitHub, under the MIT license, suggesting that although people were free to use the code how they liked, "if there is interest in preserving the old games you guys made then porting Game-Maker to modern OSes is the first step."

Release history

Game-Maker 1.0: Includes one 1.44 MB microfloppy disk containing the full set of RSD tools plus the games Sample, Terrain, Houses, Animation, Pipemare, Nebula, and Penguin Pete. Also included, beginning in version 1.04, is a separate diskette containing the GameLynk game Barracuda: Secret Mission 1. All 1.x iterations of Game-Maker include a square-bound 75-page user manual and several leaflets about the use of the software. Later versions (1.04, 1.05) also include leaflets explaining recent changes and updating the user manual.
Game-Maker 2.0: Includes both 1.2 MB floppy and 1.44 MB microfloppy disks containing the full set of RSD tools plus the games Tutor (a replacement for Animation), Sample, Terrain, Houses, Pipemare, Nebula, and Penguin Pete. Both versions 2.0 and 2.02 include a square-bound 94-page user manual and several leaflets about the use of the software. The latter version also includes a leaflet explaining recent changes and updating the user manual.
Game-Maker 3.0, floppy: A three-microfloppy (1.44 MB) package contains the full set of RSD tools, the in-house developed games Tutor, Sample, and Nebula, and three licensed games developed by the independent designer A-J Games: Zark, The Patchwork Heart, and Peach the Lobster. Both packages of version 3.0 include a square-bound 104-page user manual and several leaflets about the use of the software.
Game-Maker 3.0, CD-ROM: This package includes the contents of the floppy package, plus the first-party games Pipemare, Penguin Pete, Houses, and Terrain; the A-J Games productions Glubada Pond, Crullo: Adventures of a Donut, Cireneg's Rings, and Linear Volume; two games by Sheldon Chase of KD Software, Woman Warrior and the Outer Limits and Woman Warrior and the Attack from Below; and the GameLynk game Barracuda: Secret Mission 1. In addition, the CD-ROM includes a large collection of images, sounds, music, animations, and gameware elements, and a Shareware directory holding demo versions of fourteen games by various independent designers.
Create Your Own Games With GameMaker!: In 1995, the Canadian company Microforum rebranded and repackaged the CD-ROM version of Game-Maker 3.0 for release to a worldwide market. This version includes a spiral-bound user manual. The disc contents are the same as the original RSD release.

Game distribution

During Game-Maker's lifetime, users could distribute their games through the GameLynk (aka Night Owl, later Frontline) BBS in Kennebunkport, Maine, or through the Game-Maker Exchange program — an infrequent mailing to registered users, compiling submitted games onto a floppy disk with occasional commentary from RSD president G. Oliver Stone. Many user-generated games also wound up on public bulletin boards, and thereby found wide distribution and eventual salvation on shovelware CD-ROMs.

RSD's initial terms of use were rather restrictive, as laid out in a pamphlet titled "Distributing Your GAME-MAKER Games" and dated May 9, 1993. The pamphlet details standalone games, promotional games, and shareware and BBS distribution. For standalone games (which is to say, games that are meant as an end unto themselves), RSD asks a royalty of $500 for the first 200 games sold or distributed, then a small fee for each subsequent copy; the higher the number, the smaller the fee. For promotional software (distributed as part of a promotional kit), RSD asks $1000 for the first 1000 copies and then smaller fees for every copy up to 25,000. Beyond that, RSD asks no additional charge. Shareware and BBS distribution is a curious case: although RSD prohibits free distribution, the license does allow a loophole for shareware so long as the author requests the user to pay a minimum registration or license fee of $5.00, then makes a quarterly payment of 10% of all collected fees. These restrictions were rarely enforced; as a June 15, 1993 pamphlet titled "Distributing Games" suggests, freeware games were common and tolerated despite the license agreement.

Open format

Despite the limitations on distribution, Game-Maker's design format is notoriously open. From its outset Game-Maker was designed as a collaborative tool, with the intent that users not only trade design tips but pick apart and freely sample from each other's work. A series of full-page magazine ads, run in the early 1990s, spends nearly as many words selling Game-Maker as a modification tool, along the lines of Galoob's Game Genie accessory, as it does describing the software's design features, promising that users can "modify and enhance Game-Maker games". "Is a game too easy? Increase the speed. Too boring? Add danger, sounds and monsters. Too plain? Dress up the graphics, add animation. Too short? Add new levels."
This "remix" philosophy stems partly from the Stones' own collaborative family dynamic, and — as with the insistence on an entirely visual, code-free interface — partly from concern about overwhelming the end user. "[W]e realized that it would be pretty hard for a ten to twelve-year-old to do it all himself so there were practical considerations." To that end Game-Maker games are distributed as an unprotected bundle of resource files, both specialized (i.e., Game-Maker's unique graphic and animation formats) and common (including CompuServe .GIF, Creative .VOC, Autodesk .FLI, and ASCII text files), making it a simple task to identify and edit most Game-Maker games. The decision was a defiant one on the part of programmer G. Andrew Stone, who argued that any user concerned about protecting, rather than sharing, his work should take on that burden himself. As it happens, one of the earliest games distributed with Game-Maker was GameLynk's Barracuda: Secret Mission 1, a user-derived project that is most distinguished by its presentation, whereby its file structure is hidden by LHarc compression and the portable Deluxe Paint Animation player is tacked onto the Game-Maker executable to provide intro and exit animations.

Limitations

Through its history, several aspects of Game-Maker's engine, design interface, and feature set have drawn scrutiny from its user base. One of Game-Maker's more notorious qualities is its exclusive use of Creative's proprietary .VOC and .CMF sound and music formats, and its absence of integrated design tools for those formats (or recommendations as to external tools), leaving users to work out their own solutions — or often not. The use of .CMF was a last-minute decision; Andy had been working on a .MOD-style tracker format, but development was indefinitely delayed. As a temporary measure his brother Ollie plugged in code provided by Creative Labs.

Other common frustrations include a lack of multi-key mapping for character behaviors, such as pressing Z + a directional arrow to jump in the direction pressed (a problem stemming from a lack of standardized keyboard electrical layouts at that time); the extreme simplicity of monster behaviors (partially due to a desire to eliminate programming from the design tools); a lack of persistent flags for game events (partially due to memory constraints); and the lack of on-screen displays for health, lives, and other counters (due to Andrew's emphasis on full-screen rendering).

Monsters are a particular point of contention. Compared to characters, monsters have only limited interaction with their environments. For instance, monsters are not affected by gravity or other physics—and have no contextual AI to speak of, aside from a limited awareness of the character. Monsters also lack variable counters, such as hit points. Instead each monster (including NPCs, character shots, and some kinds of power-up) has a fixed "power level" between 0 and 255, and a collision between unequal monsters is resolved by destroying the weaker monster. The engine therefore does not lend itself to graduated damage (i.e., sword 1 doing twice the damage of sword 2). Rather, collisions are all binary; either a weapon works, or it doesn't.

Workarounds

For advanced users, many of the engine's limitations have workarounds. One can approximate gravity's effect on a monster by defining a heavy diagonal path; the monster will move horizontally until it reaches a ledge, at which point it will fall until it hits the ground again.
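The binary power-level rule described above, and the chained-monster workaround covered in the next paragraph, can be sketched in a few lines. This is an illustrative reconstruction rather than Game-Maker's engine code; the names are invented, and the handling of two monsters with equal power is an assumption, since the text only describes collisions between unequal monsters.

```python
# Illustrative sketch of the fixed power-level collision rule; not engine code.
from dataclasses import dataclass

@dataclass
class Monster:
    name: str
    power: int           # fixed "power level" between 0 and 255
    alive: bool = True

def resolve_collision(a: Monster, b: Monster) -> None:
    """Binary collision rule: the weaker of two unequal monsters is destroyed."""
    if a.power > b.power:
        b.alive = False
    elif b.power > a.power:
        a.alive = False
    # Equal power levels: not described in the text; assumed to leave both intact.

# Chained-monster workaround: a "boss" that appears to take three hits is really
# three stacked monsters, each destroyed in turn by a stronger shot.
shot = Monster("player shot", power=100)
boss_stages = [Monster(f"boss stage {i}", power=50) for i in range(3)]
for stage in boss_stages:
    resolve_collision(shot, stage)    # each collision removes one "stage"
```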
In a similar workaround, although monsters lack hit counters, the user can create chains of identical (or successively injured-looking) monsters to approximate the same effect. In later years users have found ways to subvert or play along with the system's properties to achieve effects, mechanisms, and even genres unaccounted for in the engine's basic features—including extensive in-engine cutscenes, boss sequences, AM2-style sprite scalers, RPG-style battles, parallax scrolling, shooting galleries, and destructible terrain.

Influence

As one of the first complete game design suites for IBM-based PCs, and the only one devoted to action games during the early '90s Shareware boom, Game-Maker "anticipated the thriving indie game community we have today with countless game engines, web sites and indie game companies." Several of its users went on to achieve later note in indie or commercial game development, such as renowned Seiklus author cly5m, Slender: The Eight Pages designer Mark Hadley, Liight programmer Roland Ludlam, Warhammer Online background artist Justin Meisse, and Bionic Commando associate producer James W. Morris.

Some games produced with RSD's tools, such as Jeremy LaMar's Blinky series, have become cult favorites. Others, like A-J's Quest, Die Blarney!, and Matt Bell's Paper Airplane, reached a wide circulation during the 1990s Shareware boom, appearing on many CD compilations. Game-Maker seems also to have made an impression in the Benelux, with references in various academic papers, coverage in the largest game magazine in the region, and dissection by the local demoscene.

Notable games

A-J's Quest (A-J Games, 1992) — A widely distributed side-scrolling platformer that was also incorporated into an early slideshow demo of Game-Maker.
Barracuda (GameLynk, 1992) — An action simulation game involving deep-sea diving. The shareware episode, distributed on a separate floppy disk with early versions of Game-Maker, incorporates external functions, such as LHarc auto-compression and the portable Deluxe Paint Animation player, into the game's presentation.
Blinky 2 (Jeremy LaMar, 1994) — A top-down action game inspired by The Legend of Zelda: A Link to the Past. Through its distribution on AOL Kids, Blinky 2 and its sequel achieved a small cult status.
Blinky 3 (Jeremy LaMar, 1995) — A side-scrolling platformer featuring multiple characters and a branching level structure. Of the two distributed Blinky games, Blinky 3 has received the bulk of attention.
Nebula (RSD, 1991) — Game-Maker programmer Andy Stone's Metroid-influenced action-adventure platformer, which was always the first to demonstrate any new features of the software.
Parsec Man 3D (Mark Hadley, 1994) — A minimalist free-floating/platform shooter by Slender: The Eight Pages designer Mark Hadley, which uses red/cyan anaglyphic 3D glasses to add both atmosphere and functional design. Parsec Man was also distributed on the Game-Maker 3.0 CD-ROM.
Paper Airplane (Matt Bell, 1993) — A side-scrolling strategic action game with puzzle-solving elements; perhaps the most widely distributed Game-Maker game.
Peach the Lobster (A-J Games, 1994) — A Sonic the Hedgehog-influenced side-scrolling platformer that was incorporated into a late-era slideshow demo of Game-Maker.
Pipemare (RSD, 1991) — G. Oliver Stone's top-down action maze game, which provides much of the iconography for the Game-Maker software and packaging.
Sample (RSD, 1991) — Joan Stone's simple 3/4-view adventure game that formed the basis for dozens of user-created games.
References External links The Game-Maker Archive Wiki RSD Game-Maker Facebook group Game-Maker source code project at GitHub Game-Maker demo at archive.org RSD Game-Maker at Game Creation Tools Classification RSD Game-Maker subreddit Lost Media Wiki page on RSD Game-Maker 1991 software DOS software Formerly proprietary software Free game engines Free software Software using the MIT license Video game development software Video game engines Windows software
35299199
https://en.wikipedia.org/wiki/Kmscon
Kmscon
Kmscon is a virtual console that runs in userspace and intends to replace the Linux console, a terminal built into the Linux kernel. Kmscon uses the KMS driver for its output, is multiseat-capable, and supports internationalized keyboard input and UTF-8 terminal output. The input support is implemented using the X keyboard extension (XKB). Development of Kmscon stopped in March 2015. There was a successor project called systemd-consoled, but this project was also later dropped, in July 2015.

Features

Kmscon supports printing the full set of Unicode glyphs and, unlike the Linux console, is not limited by a console encoding. While the only hard dependency is udev, kmscon can optionally be compiled to use Mesa for hardware acceleration of the console, and the pango library for improved font rendering. The adoption of XKB for input allows kmscon to accept the full range of keyboard layouts available to the X.Org Server and Wayland compositors, and makes it possible to use the same layout both in the graphical environment and in the terminal.

Multiseat support

The VT system in the Linux kernel dates to 1993 and does not implement out-of-the-box multiseat support. It supports up to 63 VTs, but only one VT can be active at any given time. This necessitates additional steps to configure multiseat support. kmscon/systemd-consoled enable multiseat out of the box.

If one seat's display server is running on VT 7 and another seat's display server is running on VT 8, then only one of these two seats can be used at a time. To use the other seat, a VT switch must be initiated. To make all seats usable at the same time, there are a few options:
Associate all display servers with the same VT: any user can switch VTs, and in that case all users switch to the new VT. This makes VT switching (and thus fast user switching) impractical. (X.Org Server command-line option -sharevts.)
Don't associate any display server with a VT: fast user switching is impossible in this case. Text-based console logins are only possible if an input and display device are reserved for this purpose.
Associate only one of the display servers with a VT: the other display servers can't do VT switching, but the display server associated with a VT can. VT switching on that one seat won't affect the other seats. This is the approach favored and assumed by systemd. (Command-line option vt7 for user 1 and -novtswitch for all other users.)

Development

In 2011, Jesse Barnes wrote in his blog about a possible userspace DRM-based implementation of the virtual terminal, which would dissolve the need for the Linux framebuffer and virtual terminal (VT) subsystems in the Linux kernel. Motivated by this blog post, David Herrmann implemented the basic functionality of a virtual terminal. In October 2013, the terminal emulator state machine library, a state machine for DEC VT100–VT520 compatible terminal emulators, was split out of kmscon and made available separately. It was accompanied by wlterm, an example Wayland terminal emulator.

See also
Comparison of terminal emulators
List of terminal emulators

References

Free system software
Freedesktop.org
Linux kernel-related software
Software using the ISC license
Technical communication tools
Terminal emulators
Wayland (display server protocol)
68181794
https://en.wikipedia.org/wiki/List%20of%20grandfather%20clauses
List of grandfather clauses
A grandfather clause (or grandfather policy or grandfathering) is a provision in which an old rule continues to apply to some existing situations while a new rule will apply to all future cases. Those exempt from the new rule are said to have grandfather rights or acquired rights, or to have been grandfathered in. Frequently, the exemption is limited; it may extend for a set time, or it may be lost under certain circumstances. For example, a grandfathered power plant might be exempt from new, more restrictive pollution laws, but the exception may be revoked and the new rules would apply if the plant were expanded. Often, such a provision is used as a compromise or out of practicality, to allow new rules to be enacted without upsetting a well-established logistical or political situation. This extends the idea of a rule not being retroactively applied. List of examples Technology Sirius XM Satellite Radio lifetime subscriptions are no longer being offered to new subscribers. Sirius XM Satellite Radio still honors lifetime subscriptions to people who have bought them in the past, under certain conditions. When an app store closes, it means that no new software or applications can be bought or downloaded. However, anyone who had bought software prior to the store shutdown can continue to re-download it indefinitely. Examples include the DSi store, Wii Shop Channel, and the Playstation store for PSP. Tablet computers (and similar devices) with screen sizes below eight inches which ran Windows Phone 8 or Windows Phone 8.1 featured a traditional Windows desktop with legacy app support, alongside the ability to run the newer Metro-style apps. However, newer devices with sub-8-inch screens that ran the successor Windows 10 Mobile operating system had no legacy app capability, and, as such, could only run apps downloaded from the Windows Store. Most (if not all) owners of the older Windows 8 devices of that screen size class who upgrade to Windows 10 can still retain the old-style desktop and legacy app support, despite this upgrade. The Windows operating system has had several exceptions in the past. When Windows NT 4.0 was ending extended support between 2004 and 2006, anyone using the OS no longer got security updates and were encouraged to upgrade to a newer OS or PC. However, Microsoft allowed some companies and businesses to pay for additional support to give them time to migrate to newer operating systems like Windows 2000 or Windows XP. Paid support for NT 4.0 ended on December 31, 2006. When extended support for Windows 7 ended on January 14, 2020, anyone using the OS no longer got security updates and were encouraged to upgrade to a newer OS or PC. However, Microsoft allowed companies and businesses that had the Professional and Enterprise volume licensed editions of Windows 7 to pay for additional support to give them time to migrate to newer operating systems like Windows 8.1 or Windows 10. Paid support for Windows 7 will end on January 10, 2023. When support for Windows 10 version 1607 ended on April 10, 2018, anyone using Windows 10 1607 no longer got security updates and were encouraged to upgrade to a newer version of Windows 10 like version 1809. However, Microsoft allowed Intel Clover Trail devices (which were no longer supported on newer versions of Windows 10) to still receive security updates until January 10, 2023. 
When support for Windows 10 version 1703 ended on October 9, 2018, anyone using Windows 10 1703 no longer got security updates and were encouraged to upgrade to a newer version of Windows 10, such as version 1809. However, Microsoft allowed Surface Hub devices (which were no longer supported on newer versions of Windows 10) to still receive security updates for a longer period of time. Initially, the extended support for Surface Hub devices running Windows 10 1703 was to end on October 14, 2025. However, on May 15, 2018, Microsoft moved the extended support date to March 9, 2021 after it announced that the second-generation Surface Hub 2S would be released in 2019. Surface Hub devices running Windows 10 1703 are unsupported as of March 9, 2021. Science Some parts of bacterial taxonomy are "locked" in a problematic state due to prevailing use in medical and industrial communities. A more proper name accepted by the microbiologist might not be used at all. Law Many jurisdictions prohibit ex post facto laws, and grandfather clauses can be used to prevent a law from having retroactive effects. For example: In the UK, the offence of indecent assault is still charged in respect of crimes committed before the offence was abolished and replaced with sexual assault (among others) by the Sexual Offences Act 2003. Section 1 of Article Two of the United States Constitution appears on a cursory reading to stipulate that presidential candidates must be natural born citizens of the United States. However, there is a further category of persons eligible for that office: those who were citizens of the United States at the time of the adoption of that constitution. Without that provision, it would have required a strained reading to construe that all actual presidents born in the colonial era were born in the United States, because the United States did not exist prior to July 4, 1776, the date on which the Declaration of Independence was adopted. Technically, there are some arguments that state anyone who was a United States citizen as of May 7, 1992 (the adoption of this present constitution, as defined) would be eligible. Many acts requiring registration to practice a particular profession incorporated transition or "grandfather sections" allowing those who had already practised for a specified time (often three or four years) to be registered under the act even if they did not have the training or qualifications required for new applications for registration. Examples are the Nurses Registration Act 1901 in New Zealand and the Nurses Registration Act 1919 in England and Wales. In 1949, standards were passed requiring certain fire safety improvements in schools. However, older schools, such as the Our Lady of the Angels School in Illinois, were not required to be retrofitted to meet the requirements, leading to the deadly Our Lady of the Angels School fire in which 92 students and 3 teachers died. In 1951, the United States ratified the Twenty-second Amendment to the United States Constitution, preventing presidents from running for more than two full terms (or one full term, if they had served more than two years of another person's term). The text of the amendment specifically excluded the sitting president from its provisions, thus making Harry Truman eligible to run for president in 1952—and, theoretically, for every subsequent presidential election thereafter—even though he had served a full term and almost four years of a previous president's term. 
Truman was highly unpopular and lost the New Hampshire primary by nearly 55% to 44%. Eighteen days later the president announced he would not seek a second full term. In the 1980s, as states in America were increasing the permitted age of drinking to 21 years, many people who were under 21 but of legal drinking age before the change were still permitted to purchase and drink alcoholic beverages. Similar conditions applied when New Jersey and certain counties in New York raised tobacco purchase ages from 18 to 19 years in the early 2000s. In 2012, Macau increased the permitted age of entering casinos to 21. However, casino employees between the ages of 18 and 21 before the change were still permitted to enter their places of employment. This category was exhausted by the end of 2015. During the Federal Assault Weapons Ban, certain firearms made before the ban's enactment were legal to own. Automatic weapons that were manufactured and registered before the Firearm Owners Protection Act (enacted May 19, 1986) may legally be transferred to civilians. According to the Interstate Highway Act, private businesses are not allowed at rest areas along interstates. However, private businesses that began operations before January 1, 1960, were allowed to continue operation indefinitely. Michigan law MCL 287.1101–1123 forbade ownership or acquisition of large and dangerous exotic carnivores as pets. But animals already owned as pets at the time of enactment were grandfathered in, and permitted to be kept. The Federal Communications Commission (FCC) stated that, as of March 1, 2007, all televisions must be equipped with digital tuners, but stores that had TV sets with analog tuners only could continue to sell analog-tuner TV sets. In 1967, the FCC prohibited companies from owning both a radio and a television station in the same marketing area, but those already owned before the ruling were permanently grandfathered. For example, ABC already owned WABC-TV, 77 WABC and WABC-FM (now WPLJ), and so could continue to own all three stations after the law was passed. But then-current broadcasting companies that had a radio station in a city could not acquire an adjacent television station, and companies that owned a television station in a city could not acquire adjacent radio stations. In 1996, the law was overturned. Companies can now own up to eight radio stations and two television stations in a market, provided that they do not receive more than 33% of that market's advertising revenues. In 1984 Mississippi passed a law changing its official mode of capital punishment from the gas chamber to lethal injection. Under the new law, anyone sentenced after July 1, 1984, was to be executed by lethal injection; those condemned before that date were "grandfathered" into the gas chamber. Therefore, three more convicted murderers would die in the chamber—Edward Earl Johnson and Connie Ray Evans in 1987, and Leo Edwards in 1989. In 1998, the Mississippi Legislature changed the execution law to allow all death row inmates to be executed by lethal injection. In 1965, the Canadian government under Prime Minister Lester B. Pearson passed legislation that required senators to retire when they reached the age of 75. However, senators appointed before the legislation was passed were exempted from the mandatory retirement rule. 
When Quebec enacted the Charter of the French Language in 1977, making French the province's official language, the famous Montreal kosher-style deli Schwartz's was allowed to keep its name with the apostrophe, a feature not used in French in the possessive form. During Canada's federal Redistribution, a grandfather clause ensures that no province can have fewer seats after Redistribution than it did in 1985. In the early 2000s the Houston Police Department mandated to police academy graduating classes that tenured officers are required to carry a sidearm chambered in .40 S&W; tenured police officers prior to the mandate were grandfathered in where they still carried their existing sidearms. In 2013, Tennessee enacted a law requiring that products labeled as "Tennessee whiskey" be produced in the state, meet the legal definition of bourbon whiskey, and also use the Lincoln County Process. The law specifically allowed Benjamin Prichard's Tennessee Whiskey, which does not use the Lincoln County Process, to continue to be labeled as such. In 2014, Kentucky radically simplified its classification of cities, with the previous system of six population-based classes being replaced by a two-class system based solely on the type of government, effective January 1, 2015. In the old classification system, many cities had special privileges (notably in alcoholic beverage control, taxing powers, certain labor laws, and the ability to operate its own school system) based on their class; the new legislation contained elaborate provisions to ensure that no city lost a privilege due to the reclassification. Standards compliance Strict building codes to withstand frequent seismic activity were implemented in Japan in 1981. These codes applied only to new buildings, and existing buildings were not required to upgrade to meet the codes. One result of this was that during the great Kobe earthquake, many of the pre-1981 buildings were destroyed or written off, whereas most buildings built post-1981, in accordance with the new building codes, withstood the earthquake without structural damage. Wigwag-style railroad crossing signals were deemed inadequate in 1949 and new installations were banned in the United States. Existing wigwag signals were allowed to remain and 65 years later, there are still about 40 wigwag signals in use on railroads in the United States. The UK's national rail infrastructure management company Network Rail requires new locomotives and rolling stock to pass tests for electromagnetic compatibility (EMC) to ensure that they do not interfere with signalling equipment. Some old diesel locomotives, which have been in service for many years without causing such interference, are exempted from EMC tests and are said to have acquired grandfather rights. The Steel Electric-class ferryboats used by Washington State Ferries were in violation of several U.S. Coast Guard regulations, but because they were built in 1927, before the enactment of the regulations, they were allowed to sail. Those ferries were decommissioned in 2008. Tolled highways that existed before the Interstate Highway System are exempt from Interstate standards despite being designated as Interstate highways. Many such toll roads (particularly the Pennsylvania Turnpike) remain as such. However, tolled highways built since the Interstate system, such as the tolled section of PA Route 60 and PA Turnpike 576, must be built or upgraded to Interstate standards before receiving Interstate designation. 
Both highways are to be part of the Interstate system, with PA 60 now I-376 and PA Turnpike 576 to become I-576 in the near future. As well, U.S. Interstate Highway standards mandate a minimum 11-foot median; however, highways built before those standards have been grandfathered into the system. The Kansas Turnpike is the most notable example, as it has been retrofitted with a Jersey barrier along its entire 236-mile (380-km) length. The earliest Ontario dual highways do not meet current standards; however, it would be prohibitively expensive to immediately rebuild them all to updated guidelines, unless a reconstruction is warranted by safety concerns and traffic levels. As a result, substandard sections of freeways such as low overpasses and short acceleration/deceleration lanes are often retrofitted with guard rails, warning signage, lower speed limits, or lighting. The United States Federal Communications Commission has required all radio stations licensed in the United States since the 1930s to have four-letter call signs starting with a W (for stations east of the Mississippi River) or a K (for stations west of the Mississippi River). But stations with three-letter call signs and stations west of the Mississippi River starting with a W and east of the Mississippi River starting with a K—such as WRR in Dallas and WHB in Kansas City, plus KQV and KYW in Pennsylvania, all licensed before the 1930s—have been permitted to keep their call signs. In the western United States, KOA in Denver, KGA in Spokane, KEX in Portland, and KHJ and KFI in Los Angeles, among many others, have been permitted to keep their original or reassigned three-letter call signs. In addition, a new or existing station may adopt a three-letter call set if they have a sister radio or TV station in that market with those calls (examples include WJZ-FM Baltimore and WGY-FM Albany, New York). (Note that stations licensed in Louisiana and Minnesota, the two states with significant territory on both sides of the Mississippi, are allowed to use call signs starting with either W or K, regardless of their location with respect to the river.) In aviation, grandfather rights refers to the control that airlines exert over "slots" (that is, times allotted for access to runways). While the trend in airport management has been to reassert control over these slots, many airlines are able to retain their traditional rights based on current licences. In the UK, until 1992, holders of ordinary car driving licences were allowed to drive buses of any size, provided that the use was not commercial and that there was no element of "hire or reward" in the vehicles' use; in other words, no one was paying to be carried. The law was changed in 1992 so that all drivers of large buses had to hold a PCV (PSV) licence, but anyone who had driven large buses could apply for grandfather rights to carry on doing so. Some MOT test standards in the UK do not apply to vehicles first registered prior to the implementation of the legislation that introduced them. For example, vehicles first registered prior to January 1, 1973, are exempt from the requirement to use retro-reflective yellow/white vehicle registration plates and vehicles first registered prior to January 1, 1965, are exempt from seat belt standards/legislation unless they have been retrospectively fitted. In some U.S. states, the inspection/maintenance (I/M) programs for motor vehicle emission testing have a rolling chassis exemption, e.g. 
a motor vehicle model 25 years old or more is exempted from emission tests.

Sports

In 1920, when Major League Baseball introduced the prohibition of the spitball, the league recognized that some professional pitchers had nearly built their careers on using the spitball. The league made an exception for 17 named players, who were permitted to throw spitballs for the rest of their careers. Burleigh Grimes threw the last legal spitball in 1934.

In 1958, Major League Baseball passed a rule that required any new fields built after that point to meet minimum distances from home plate to the fences in left field, right field, and center field (Rule 1.04, Note (a)). This rule was passed to avoid situations like the Polo Grounds (which had extremely short distances down the right- and left-field lines) and the Los Angeles Coliseum (which had a very short left-field line). However, older ballparks that were built before 1958, such as Fenway Park and Wrigley Field, were grandfathered in and allowed to keep their original dimensions. Since the opening of Baltimore's Camden Yards (1992), the "minimum distance" rule has been ignored.

Beginning in the 1979–80 season, the National Hockey League required all players to wear helmets. Nevertheless, if a player had signed his first professional contract before this ruling, he was allowed to play without a helmet if he so desired. Craig MacTavish was the last player to do so, playing without a helmet up until his retirement in 1997. Other notable players include Guy Lafleur and Rod Langway, who retired in 1991 and 1993, respectively. A similar rule was passed for NHL officials for the 1988–89 season; any official who started his career before the ruling could also go helmet-less if he so desired. Kerry Fraser was the last referee who was not required to wear a helmet, until the ratification of the new NHL Officials Association collective bargaining agreement on March 21, 2006, which required all remaining helmet-less officials to wear one. The NHL created a similar rule in 2013 requiring visors for players with fewer than 25 games' experience.

Major League Baseball rule 1.16 requires players who were not in the major leagues before 1983 to wear a batting helmet with at least one earflap. The last player to wear a flapless helmet was the Florida Marlins' Tim Raines in 2002 (career began in 1979). The last player eligible to do so was Julio Franco in 2007 (career began in 1982), although he opted to use a flapped helmet.

The NFL outlawed the one-bar facemask for the 2004 season but allowed existing users to continue to wear it (even though by that time the mask had mostly fallen out of favor, save for a handful of kickers and punters). Scott Player was the last player to wear the one-bar facemask, in 2007.

For many decades, American League (AL) umpires working behind home plate used large, balloon-style chest protectors worn outside the shirt or coat, while their counterparts in the National League wore chest protectors inside the shirt or coat, more akin to those worn by catchers. In 1977, the AL ruled that all umpires entering the league that year and in the future had to wear the inside protector, although umpires already in the league who were using the outside protector could continue to do so. The last umpire to regularly wear the outside protector was Jerry Neudecker, who retired after the 1985 season. (Since 2000, Major League Baseball has used the same umpire crews for both leagues.)
In 1991, the Baseball Hall of Fame added a rule that anyone banned from MLB, living or deceased, is immediately declared ineligible for induction into the Baseball Hall of Fame. However, anyone who was already enshrined in the Baseball Hall of Fame when they were banned would continue to be enshrined. The rule has allowed players like Willie Mays, Mickey Mantle, and Roberto Alomar to remain enshrined in the Hall of Fame despite being banned from Major League Baseball after their enshrinement (although Mays and Mantle were later reinstated by Major League Baseball).

The National Football League (NFL) currently prohibits corporate ownership of teams. Ownership groups can have no more than 24 members, and at least one partner must hold a 30% ownership stake. The league has exempted the Green Bay Packers from this rule; the team has been owned by a publicly owned, nonprofit corporation since 1923, decades before the league's current ownership rules were put in place in the 1980s. Similarly, in German association football, the Deutsche Fußball Liga (DFL), which operates the Bundesliga and 2. Bundesliga, prohibits corporate ownership of more than 49% of a team, as well as rebranding (see 50+1 rule). The Royal Dutch Football Association has a similar rule with regard to the Eredivisie. However, Bayer Leverkusen and PSV Eindhoven may keep their ownership and names because they were founded in the first decades of the twentieth century by Bayer and Philips as company sports clubs. Another Bundesliga side, VfL Wolfsburg, is allowed to remain under the ownership of Volkswagen, as it was founded in 1945 as a club for VW workers.

Five schools that are members of NCAA Division III, a classification whose members are generally not allowed to offer athletic scholarships, are specifically allowed to award scholarships in one or two sports, with at most one for each sex. Each of these schools had a men's team that participated in the NCAA University Division, the predecessor to today's Division I, before the NCAA adopted its current three-division setup in 1973. (The NCAA did not award national championships in women's sports until 1980–81 in Division II and Division III, and 1981–82 in Division I.) Three other schools were formerly grandfathered, but have either moved their Division I sports to Division III or discontinued them entirely.

In 2006, NASCAR passed a rule that required teams to field no more than four cars. Since Roush Racing had five cars, it could continue to field five cars until the end of 2009.

Jackie Robinson's #42 shirt, which he wore when he broke Major League Baseball's 20th-century color line and throughout his Hall of Fame career with the Brooklyn Dodgers, is the subject of two such grandfather clauses. In 1997, MLB prohibited all teams from issuing #42 in the future; current players wearing #42 were allowed to continue to do so. The New York Yankees' closer Mariano Rivera was the last active player to be grandfathered in, wearing #42 until he retired after the 2013 season. However, since 2009, all uniformed personnel (players, managers, coaches, umpires) are required to wear #42 (without names) on Jackie Robinson Day. In 2014, Robinson's alma mater of UCLA, where he played four sports from 1939 to 1941, retired #42 across its entire athletic program. (The men's basketball team had previously retired the number for Walt Hazzard.) Three athletes who were wearing the number at the time (in women's soccer, softball, and football) were allowed to continue wearing it for the rest of their UCLA careers.
The NFL introduced a numbering system for the 1973 season, requiring players to be numbered by position. Players who played in the NFL in 1972 and earlier were allowed to keep their old numbers if their number was outside of their range for their position, although New York Giants linebacker Brad Van Pelt wore number 10 despite entering the league in 1973. (Linebackers had to be numbered in the 50s at the time; since 1984 they may now wear numbers in the 50s or 90s. Van Pelt got away with it because he was the team's backup kicker his rookie season.) The last player to be covered by the grandfather clause was Julius Adams, a defensive end (1971–1985, 1987) for the New England Patriots, who wore number 85 through the 1985 season. He wore a different number during a brief return two years later. Similarly, the NFL also banned the use of the numbers 0 and 00 (both treated a single number) for uniforms around the same time, but players Jim Otto and Ken Burrough used the number throughout the 1970s. The National Hot Rod Association is enforcing a grandfather clause banning energy drink sponsors from entering the sport if they were not sponsoring cars as of April 24, 2008, pursuant to the five-year extension of its sponsorship with Coca-Cola, which is changing the title sponsorship from Powerade to Full Throttle Energy Drink. Even though tobacco advertising in car racing was banned, the Marlboro cigarette brand, owned by British American Tobacco in Canada and Philip Morris International elsewhere, is grandfathered in to sponsoring a car in the F1 series on the agreement that the name is not shown in places that banned it. NASCAR allows some grandfathered sponsorships by energy drink brands in the top-level Monster Energy Cup Series, while rules prohibit new sponsors in that category. Similar policies with regard to telecommunications companies were in effect when the series was sponsored by Sprint. Additionally, some insurance company sponsorships were grandfathered in when the second-level series now known as the Xfinity Series was sponsored by Nationwide Insurance. In 2013, the Professional Bull Riders made it mandatory that all contestants at their events who were born on or after October 15, 1994, ride with helmets. Those born before that date were grandfathered in and permitted to ride with their cowboy hats if so desired. In August 2014, the Baseball Hall of Fame and the Baseball Writers' Association of America (BBWAA) announced changes to the Hall of Fame balloting process effective with the election for the Hall's induction class of . The most significant change was reducing the time frame of eligibility for recently retired players from 15 years to 10. Three players on the 2015 BBWAA ballot who had appeared on more than 10 previous ballots—Don Mattingly, Lee Smith, and Alan Trammell—were exempted from this change, and remained eligible for 15 years (provided they received enough votes to stay on the ballot). In November 2015, Little League Baseball changed its age determination date from April 30 to August 31—a calendar date that falls after the completion of all of the organization's World Series tournaments—effective with the 2018 season. The rule was written so that players born between May 1 and August 31, 2005, who would otherwise have been denied their 12-year-old season in the flagship Little League division, would be counted as 12-year-olds in the 2018 season. 
In December 2016, French Rugby Federation (FFR) president Bernard Laporte announced that all future members of France national teams in rugby union and rugby sevens would be required to hold French passports. At the time, the eligibility rules of World Rugby, the sport's international governing body, required only three years' residency for national team eligibility and did not require citizenship. Players who had represented France prior to the FFR policy change remain eligible for national team selection.

In December 2021, upon abandoning the 2022 World Junior Ice Hockey Championships, the International Ice Hockey Federation (IIHF) announced that it intended to conclude the event later in the year, subsequently scheduled for mid-August 2022. Following the original restart plans, the IIHF announced that players who had participated in the event at the time of its abandonment, but who exceeded the age limit of 20 years in the interim, would remain eligible to participate in the restarted tournament.

References

Law-related lists
Legal terminology
51295363
https://en.wikipedia.org/wiki/Threat%20actor
Threat actor
A threat actor or malicious actor is a person or a group of people that takes part in actions intended to cause harm in the cyber realm, including to computers, devices, systems, or networks. The term is typically used to describe individuals or groups that perform malicious acts against a person or an organization of any type or size. Threat actors engage in cyber-related offenses to exploit open vulnerabilities and disrupt operations. Threat actors have different educational backgrounds, skills, and resources, and the frequency and classification of cyber attacks change rapidly. The background of threat actors helps dictate whom they target, how they attack, and what information they seek. There are a number of types of threat actors, including cyber criminals, nation-state actors, ideologues, thrill seekers/trolls, insiders, and competitors. These threat actors all have distinct motivations, techniques, targets, and uses of stolen data.

Background
The development of cyberspace has brought both advantages and disadvantages to society. While cyberspace has helped further technological innovation, it has also brought various forms of cyber crime. Since the dawn of cyberspace, individual, group, and nation-state threat actors have engaged in cyber-related offenses to exploit victim vulnerabilities. There are a number of threat actor categories with different motives and targets.

Cyber criminals
Cyber criminals have two main objectives. First, they want to infiltrate a system to access valuable data or items. Second, they want to ensure that they avoid legal consequences after infiltrating a system. Cyber criminals can be broken down into three sub-groups: mass scammers/automated hackers, criminal infrastructure providers, and big game hunters.

Mass scammers and automated hackers are cyber criminals who attack systems for monetary gain. These threat actors use tools to infect organizations' computer systems and then seek payment from victims in exchange for letting them retrieve their data. Criminal infrastructure providers are a group of threat actors that use tools to infect an organization's computer system and then sell access to that infrastructure to outside organizations so they can exploit it. Typically, victims of criminal infrastructure providers are unaware that their system has been infected. Big game hunters are another sub-group of cyber criminals that aim to attack a single, high-value target. Big game hunters spend extra time learning about their target, including its system architecture and the other technologies it uses. Victims can be targeted by email, by phone, or through social engineering.

Nation-state threat actors
Nation-state threat actors aim to gain intelligence of national interest. Nation-state actors can be interested in a number of sectors, including nuclear, financial, and technology information. There are two ways nations use nation-state actors. First, some nations make use of their own governmental intelligence agencies. Second, some nations work with organizations that specialize in cyber crime. States that use outside groups can be tracked; however, states might not necessarily take accountability for acts conducted by the outside group. Nation-state actors can attack both other nations and outside organizations, including private companies and non-governmental organizations. They typically aim to bolster their nation-state's counterintelligence strategy.
Nation-state attacks can include strategic sabotage and attacks on critical infrastructure. Nation-states are considered an incredibly large group of threat actors in the cyber realm.

Ideologues (hacktivists and terrorists)
Threat actors considered ideologues include two groups of attackers: hacktivists and terrorists. The two can be grouped together because their goals are similar and, unlike other types of threat actors, they are typically not motivated by financial incentives; however, they differ in how and why they commit cyber crimes. Hacktivism is a term that was coined in the early days of the World Wide Web and is derived from a combination of two words: hacking and activism. Hacktivists typically are individuals or entities ready to commit cyber crimes to further their own beliefs and ideologies. Many hacktivists are anti-capitalist or anti-corporate idealists, and their attacks are inspired by related political and social issues. Terrorists, by contrast, are individuals or groups that aim to cause terror in order to achieve their goals. The main difference between the two is their end goal: hacktivists are willing to break security laws to spread their message, while terrorists aim to cause terror.

Thrill seekers and trolls
A thrill seeker is a type of threat actor that attacks a system for the sole purpose of experimentation. Thrill seekers are interested in learning how computer systems and networks operate and in seeing how much data they can access within a system. While they do not aim to cause major damage, they can cause problems for an organization's systems. Over time, thrill seekers have evolved into modern trolls. Like thrill seekers, a troll is a person or group that attacks a system for recreation; unlike thrill seekers, trolls aim to cause malice, and modern trolls spread misinformation and cause harm.

Insiders and competitors
Insiders are threat actors operating within an organization, whether employees who sell network information to adversaries or disgruntled employees who retaliate because they feel they have been treated unfairly. Insider attacks can be challenging to prevent; however, with a structured logging and analysis plan in place, insider threat actors can be detected after a successful attack. Business competitors are another threat actor that can harm organizations, since competitors can gain access to secrets that are normally kept secure. Organizations can try to gain a stronger knowledge of business intelligence to protect themselves against competitor threat actors.

Identified threat actors
Internet Research Agency

Organizations that identify threat actors

Government organizations

United States (US) - National Institute of Standards and Technology (NIST)
The National Institute of Standards and Technology (NIST) is a government agency that works on issues dealing with cyber security at the national level. NIST has written reports on cyber security guidelines, including guidelines on conducting risk assessments. NIST typically classifies cyber threat actors as national governments, terrorists, organized crime groups, hacktivists, and hackers.

European Union (EU) - The European Union Agency for Cybersecurity (ENISA)
The European Union Agency for Cybersecurity (ENISA) is a European Union agency tasked with improving cyber security capabilities.
ENISA provides both research and assistance to information security experts within the EU. The organization published a cyber threat report up until 2019. The goal of the report is to identify incidents that have been published and attribute those attacks to the most likely threat actor. The latest report identifies nation-states, cyber criminals, hacktivists, cyber terrorists, and thrill seekers.

United Nations (UN)
The United Nations General Assembly (UNGA) has also been working to bring awareness to issues in cyber security. The UNGA published a report in 2019 regarding developments in the field of information and telecommunications in the context of international security. This report identified the following threat actors: nation-states, cyber criminals, hacktivists, terrorist groups, thrill seekers, and insiders.

Canada - Canadian Centre for Cyber Security (CCCS)
Canada defines threat actors as states, groups, or individuals who aim to cause harm by exploiting a vulnerability with malicious intent. A threat actor must be trying to gain access to information systems in order to access or alter data, devices, systems, or networks.

Japan - National Center of Incident Readiness and Strategy (NISC)
The Japanese government's National Center of Incident Readiness and Strategy (NISC) was established in 2015 to create a "free, fair and secure cyberspace" in Japan. The NISC created a cybersecurity strategy in 2018 that identifies nation-states and cybercrime as some of the key threats. It also indicates that terrorist use of cyberspace needs to be monitored and understood.

Russia - Security Council of the Russian Federation
The Security Council of the Russian Federation published a cyber security strategy doctrine in 2016. This strategy highlights the following threat actors as risks to cyber security: nation-state actors, cyber criminals, and terrorists.

Non-government organizations

CrowdStrike
CrowdStrike is a cybersecurity technology and antivirus company that publishes an annual threat report. Its 2021 Global Threat Report identifies nation-states and cybercriminals as two major threats to cyber security.

FireEye
FireEye is a cybersecurity firm involved in detecting and preventing cyber attacks. It publishes an annual report on detected threat trends, containing results from its customers' sensor systems. Its threat report lists state-sponsored actors, cyber criminals, and insiders as current threats.

McAfee
McAfee is an American global computer security software company. The company publishes a quarterly threat report that identifies key issues in cybersecurity. The October 2021 threat report outlines cybercriminals as one of the biggest threats in the field.

Verizon
Verizon is an American multinational telecommunications company that has provided a threat report based on past customer incidents. It asks the following when defining threat actors: "Who is behind the event? This could be the external “bad guy” who launches a phishing campaign or an employee who leaves sensitive documents in their seat back pocket". It outlines nation-state actors and cybercriminals as two types of threat actors in its report.

Techniques

Phishing
Phishing is one method that threat actors use to obtain sensitive data, including usernames, passwords, credit card information, and Social Security numbers. Phishing attacks typically occur when a threat actor sends a message designed to trick a victim into revealing sensitive information to the threat actor or into deploying malicious software on the victim's system.
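As a rough illustration of how such a lure can be spotted, the sketch below (a hypothetical example added here for clarity, not drawn from any report cited in this article) applies one common heuristic: comparing the domain that a link's visible text claims with the domain its underlying URL actually targets. The function name and sample domains are invented for illustration.

```python
from urllib.parse import urlparse

# Hypothetical illustration: flag a link whose visible text claims one
# domain while the underlying href points to a different one -- a common
# lure in credential-harvesting phishing messages.
def suspicious_link(display_text: str, href: str) -> bool:
    # Display text often lacks a scheme ("www.example-bank.com"),
    # so add one before parsing; the href normally has its own.
    shown = display_text if "://" in display_text else "https://" + display_text
    shown_host = (urlparse(shown).hostname or "").lower()
    actual_host = (urlparse(href).hostname or "").lower()
    # A mismatch suggests the message is disguising where the victim
    # will really be sent.
    return shown_host != actual_host

if __name__ == "__main__":
    # Illustrative data only; a real filter combines many more signals.
    print(suspicious_link("www.example-bank.com",
                          "http://login.attacker.example/collect"))  # True
    print(suspicious_link("www.example-bank.com",
                          "https://www.example-bank.com/login"))     # False
```

Real phishing detection combines many such signals, such as sender reputation, urgency wording, and attachment types, rather than relying on any single heuristic like this one.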
Cross-site scripting
Cross-site scripting is a type of security vulnerability that arises when a threat actor injects a client-side script into an otherwise safe and trusted web application. The injected script then runs on a victim's system, allowing the threat actor to access sensitive data.

SQL injection
SQL injection is a code injection technique used by threat actors to attack data-driven applications. By injecting malicious SQL statements, threat actors can extract, alter, or delete a victim's information.

Denial-of-service attacks
A denial-of-service attack (DoS attack) is a cyber attack in which a threat actor seeks to make a machine or network resource unavailable to its intended users by temporarily or indefinitely disrupting the services of a network host. Threat actors conduct a DoS attack by overwhelming a network with false requests in order to disrupt operations.

References

Safety analysis
Hacker groups
22290
https://en.wikipedia.org/wiki/Open-source%20license
Open-source license
An open-source license is a type of license for computer software and other products that allows the source code, blueprint or design to be used, modified and/or shared under defined terms and conditions. This allows end users and commercial companies to review and modify the source code, blueprint or design for their own customization, curiosity or troubleshooting needs. Open-source licensed software is mostly available free of charge, though this does not necessarily have to be the case. Licenses that permit only non-commercial redistribution, or modification of the source code for personal use only, are generally not considered open-source licenses. However, open-source licenses may have some restrictions, particularly regarding the expression of respect to the origin of software, such as a requirement to preserve the name of the authors and a copyright statement within the code, or a requirement to redistribute the licensed software only under the same license (as in a copyleft license).

There has been debate about whether open-source licenses, which permit anyone holding a copy of the software to use, transfer, and modify it, are supported by adequate consideration and should therefore be viewed by the courts as legally enforceable contracts. While some academics have argued that open-source licenses are not contracts because there is no consideration, others have argued that the significant societal value of such licenses, which promote software development and improvement by facilitating access to source code, provides adequate consideration.

One popular set of open-source software licenses comprises those approved by the Open Source Initiative (OSI) based on its Open Source Definition (OSD). Open-source licenses dictate the terms and conditions that come with the use of open-source software (OSS) and serve as a legal agreement between author and user: authors make OSS available for free, but with certain requirements the user must follow. Generally, open-source license terms take effect upon distribution of the software; a party that only uses an open-source component in an internal tool, for example, will usually not be bound by requirements that apply only upon distribution.

Comparisons
The Free Software Foundation (FSF) has related but distinct criteria for evaluating whether or not a license qualifies software as free software. Most free software licenses are also considered open-source software licenses. In the same way, the Debian project has its own criteria, the Debian Free Software Guidelines, on which the Open Source Definition is based. In the interpretation of the FSF, open-source license criteria focus on the availability of the source code and the ability to modify and share it, while free software licenses focus on the user's freedom to use the program, to modify it, and to share it. Source-available licenses ensure source code availability, but do not necessarily meet the user-freedom criteria required to be classified as free software or open-source software.

Public domain
Around 2004, lawyer Lawrence Rosen argued in the essay "Why the public domain isn't a license" that software could not truly be waived into the public domain, and thus a public-domain dedication cannot be interpreted as a very permissive open-source license, a position which faced opposition from Daniel J. Bernstein and others.
In 2012, the dispute was finally resolved when Rosen accepted CC0 as an open-source license, admitting that, contrary to his previous claims and as supported by Ninth Circuit decisions, copyright can be waived away.

See also
Beerware
Comparison of free and open-source software licenses
Free-software license
Jacobsen v. Katzer – a ruling which states that legal copyrights can have $0 value and thereby supports all licenses, both commercial and open-source
List of free-content licenses
Multi-licensing
Open-source model
Open-source software
Proprietary software
Software license

References

External links
The Open Source Initiative
An online version of Lawrence Rosen's book Open Source Licensing: Software Freedom and Intellectual Property Law.
Understanding Open Source Software – by Red Hat's Mark Webbink, Esq., an overview of copyright and open source.

Terms of service
Free culture movement
1752944
https://en.wikipedia.org/wiki/List%20of%20EN%20standards
List of EN standards
European Standards (abbreviated EN, from the German name ("European Norm")) are technical standards drafted and maintained by CEN (European Committee for Standardization), CENELEC (European Committee for Electrotechnical Standardization) and ETSI (European Telecommunications Standards Institute). EN 1–999 EN 1: Flued oil stoves with vaporizing burners EN 2: Classification of fires EN 3: Portable fire extinguishers EN 14: Dimensions of bed blankets EN 19: Industrial valves – Marking of metallic valves EN 20: Wood preservatives. EN 26: Gas-fired instantaneous water heaters for the production of domestic hot water EN 40-1: Lighting columns - Part 1: Definitions and terms EN 40-2: Lighting columns - Part 2: General requirements and dimensions EN 40-3-1: Lighting columns - Part 3-1: Design and verification - Specification for characteristic loads EN 40-3-2: Lighting columns - Part 3-2: Design and verification - Verification by testing EN 40-3-3: Lighting columns - Part 3-3: Design and verification - Verification by calculation EN 40-4: Lighting columns - Part 4: Requirements for reinforced and prestressed concrete lighting columns EN 40-5: Lighting columns - Part 5: Requirements for steel lighting columns EN 40-6: Lighting columns - Part 6: Requirements for aluminium lighting columns EN 40-7: Lighting columns - Part 7: Requirements for fibre reinforced polymer composite lighting columns EN 54: Fire detection and fire alarm systems EN 71: Safety of toys EN 81: Safety of lifts EN 115: Safety of escalators & Moving walks EN 166: Personal eye protection. Specifications EN 196: Methods for testing cement (10 parts) EN 197-1: Cement – Part 1 : Composition, specifications and conformity criteria for common cements EN 197-2: Cement – Part 2 : Conformity evaluation EN 206-1: Concrete – Part 1: Specification, performance, production and conformity EN 207: Classification and specifications of filters and eye protection against laser EN 208: Classification of eye protection filters for laser alignment EN 228: Specifications for automotive petrol EN 250: Respiratory equipment. Open-circuit self-contained compressed air diving apparatus. Requirements, testing, marking EN 280: Mobile elevating work platforms. Design calculations, Stability criteria, Construction, Safety Examinations and tests EN 287-1: "Qualification test of welders — Fusion welding — Part 1: Steels" (2011) EN 294: Safety of machinery; safety distances to prevent danger zones from being reached by the upper limbs EN 298: Automatic gas burner control systems for gas burners and gas burning appliances with or without fans EN 301 549: European standard for digital accessibility EN 336: Structural timber — Sizes, permitted deviations EN 338: Structural timber — Strength classes EN 341: Personal protective equipment against falls from a height. Descender devices EN 352-2: Revised 2002 standards on hearing protectors. Safety requirements and testing, generally about earplugs. EN 353-1: Personal protective equipment against falls from a height. Guided type fall arresters including a rigid anchor EN 353-2: Personal protective equipment against falls from a height. Guided type fall arresters including a flexible anchor line EN 354: Personal protective equipment against falls from a height. Lanyards EN 355: Personal protective equipment against falls from a height. Energy absorbers EN 358: Personal protective equipment for work positioning and prevention of falls from a height. 
Belts for work positioning and restraint and work positioning lanyards EN 360: Personal protective equipment against falls from a height. Retractable type fall arresters EN 361: Personal protective equipment against falls from a height. Full body harnesses EN 362: Personal protective equipment against falls from a height. Connectors EN 363: Personal protective equipment against falls from a height. Fall arrest systems EN 374: Protective gloves against chemicals and micro-organisms EN 381: Protective clothing for chainsaw users, e.g. trousers, jackets, gloves, boots/gaiters. EN 386: Glued laminated timber - Performance requirements and minimum production requirements EN 388: Protective gloves against mechanical risks EN 390: Glued laminated timber. Sizes. Permissible deviations EN 391: Glued laminated timber - Delamination tests of glue lines EN 392: Glued laminated timber - Shear test of glue lines EN 397: Specification for industrial safety helmets EN 403: Respiratory protective devices for self-rescue. Filtering devices with hood for escape from fire. Requirements, testing, marking. EN 408: Structural timber and glued laminated timber — Determination of some physical and mechanical properties EN 417: Non-refillable metallic cartridges for liquefied petroleum gases EN 420: Protective gloves. General requirements and test methods EN 438: Decorative high-pressure laminates (HPL) sheets based on thermosetting resins. EN 440: "Welding consumables wire electrodes and deposits for gas shielded metal arc welding of non-alloy and fine grain steel - Classification" (1994) EN 450: Fly ash for concrete - Definitions, requirements and quality control EN 474: Earth-moving Machinery. Safety. General Requirements EN 518: Structural timber. Grading. Requirements for visual strength grading standards (replaced by EN 14081-1) EN 519: Structural timber. Grading. Requirements for machine strength graded timber and grading machines (Replaced by EN 14081-1) EN 567: Mountaineering equipment. Rope clamps. Safety requirements and test methods EN 590: Specification for automotive diesel EN 694: Fire-fighting hoses. Semi-rigid hoses for fixed systems EN 716: Children's cots and folding cots for domestic use EN 795: Protection against falls from a height. Anchor devices. Requirements and testing EN 805: Water supply. Requirements for systems and components outside buildings EN 813: Personal protective equipment for prevention of falls from a height. Sit harnesses EN 837: Pressure connections EN 840: Mobile waste containers. EN 877: Cast iron pipes and fittings, their joints and accessories for the evacuation of water from buildings. Requirements, test methods and quality assurance EN 926-1: Paragliding equipment - Paragliders - Part 1: Requirements and test methods for structural strength EN 926-2: Paragliding equipment - Paragliders - Part 2: Requirements and test methods for classifying flight safety characteristics EN 933-1: Test for geometrical properties of aggregates - Part 1: determination of particle size distribution - Sieving method EN 934-2: Admixtures for concrete, mortar and grout - Part 2: concrete admixtures - Definitions and requirements EN 980: Symbols for use in the labeling of medical devices EN 1000–1989 EN 1010-1: Safety of machinery. Safety requirements for the design and construction of printing and paper converting machines. Common requirements EN 1010-2: Safety of machinery. Safety requirements for the design and construction of printing and paper converting machines. 
Printing and varnishing machines including pre-press machinery EN 1010-3: Safety of machinery. Safety requirements for the design and construction of printing and paper converting machines. Cutting machines EN 1010-4: Safety of machinery. Safety requirements for the design and construction of printing and paper converting machines. Bookbinding, paper converting and finishing machines EN 1010-5: Safety of machinery - Safety requirements for the design and construction of printing and paper converting machines. Machines for the production of corrugated board EN 1069: Water slides of 2 m height and more EN 1078: Helmets for pedal cyclists and for users of skateboards and roller skates EN 1090: Execution of steel structures and aluminium structures (3 parts) EN 1092: Flanges and their joints. Circular flanges for pipes, valves, fittings and accessories, PN designated EN 1168: Precast concrete products - Hollow core slabs EN 1176-1: Playground equipment. General safety requirements and test methods EN 1177: Impact absorbing playground surfacing. Safety requirements and test methods EN 1325:2014: Value Management. Vocabulary. Terms and definitions EN 1325-1:1997 Value management, value analysis, functional analysis vocabulary. Value analysis and functional analysis (withdrawn, replaced by EN 1325:2014) EN 1337: Structural bearings EN 1399: Resilient floor coverings. Determination of resistance to stubbed and burning cigarettes EN 1401: Plastics piping systems for non-pressure underground drainage and sewerage - Unplasticized poly(vinyl chloride) (PVC-U) EN 1466: Child use and care articles - Carry cots and stands - Safety requirements and test methods EN 1496: Personal fall protection equipment. Rescue lifting devices EN 1679: Reciprocating internal combustion engines: Compression ignition engines EN 1809: Diving accessories. Buoyancy compensators. Functional and safety requirements, test methods. EN 1815: Resilient and textile floor coverings. Assessment of static electrical propensity EN 1846: Firefighting and rescue service vehicles EN 1888-1: Child use and care articles - Wheeled child conveyances (Part 1: Pushchairs and prams -- Up to 15kg) EN 1888-2: Child use and care articles - Wheeled child conveyances (Part 2: Pushchairs for children above 15 kg up to 22 kg) EN 1891: Personal protective equipment for the prevention of falls from a height. Low stretch kernmantel ropes EN 1972: Diving equipment - Snorkels - Requirements and test methods EN 1990–1999 (Eurocodes) EN 1990: (Eurocode 0) Basis of structural design EN 1991: (Eurocode 1) Actions on structures EN 1992: (Eurocode 2) Design of concrete structures EN 1993: (Eurocode 3) Design of steel structures EN 1994: (Eurocode 4) Design of composite steel and concrete structures EN 1995: (Eurocode 5) Design of timber structures EN 1996: (Eurocode 6) Design of masonry structures EN 1997: (Eurocode 7) Geotechnical design EN 1998: (Eurocode 8) Design of structures for earthquake resistance EN 1999: (Eurocode 9) Design of aluminium structures EN 10000–10999 This range includes almost exclusively CEN Standards related to iron and steel. EN 10002: Metallic Materials - Tensile Testing EN 10002-1: Method of Test at Ambient Temperature EN 10002-5: Method of testing at elevated temperatures EN 10024: Hot rolled taper flange I sections. 
Tolerances on shape and dimensions EN 10025: Hot rolled products of structural steels EN 10025-1: Part 1: General technical delivery conditions EN 10025-2: Part 2: Technical delivery conditions for non-alloy structural steels EN 10025-3: Part 3: Technical delivery conditions for normalized/normalized rolled weldable fine grain structural steels EN 10025-4: Part 4: Technical delivery conditions for thermomechanical rolled weldable fine grain structural steels EN 10025-5: Part 5: Technical delivery conditions for structural steels with improved atmospheric corrosion resistance EN 10025-6:Part 6: Technical delivery conditions for flat products of high yield strength structural steels in the quenched and tempered condition EN 10027: Designation systems for steel. EN 10088-1 Stainless steels - Part 1: List of stainless steels EN 10088-2 Stainless steels - Part 2: Technical delivery conditions for sheet/plate and strip of corrosion resisting steels for general purposes EN 10088-3 Stainless steels - Part 3: Stainless steels - Technical delivery conditions for semi-finished products, bars, rods, wire, sections and bright products of corrosion resisting steels for general purposes EN 10088-4 Stainless steels - Part 4: Technical delivery conditions for sheet/plate and strip of corrosion resisting steels for construction purposes; EN 10149: Hot rolled flat products made of high yield strength steels for cold forming. EN 10149-1: General technical delivery conditions EN 10149-2: Technical delivery conditions for thermomechanically rolled steels EN 10149-3: Technical delivery conditions for normalized or normalized rolled steels EN 10204: Metallic products - Types of inspection documents EN 10216: Seamless steel tubes for pressure purposes EN 10216-1: Part 1: Non-alloy steel tubes with specified room temperature properties EN 10216-2: Part 2: Non alloy and alloy steel tubes with specified elevated temperature properties EN 10216-3: Part 3: Alloy fine grain steel tubes EN 10216-4: Part 4: Non-alloy and alloy steel tubes with specified low temperature properties EN 10216-5: Part 5: Stainless steel tubes EN 10217: Welded steel tubes for pressure purposes EN 10217-1: Part 1: Non-alloy steel tubes with specified room temperature properties EN 10217-2: Part 2: Electric welded non-alloy and alloy steel tubes with specified elevated temperature properties EN 10217-3: Part 3: Alloy fine grain steel tubes EN 10217-4: Part 4: Electric welded non-alloy steel tubes with specified low temperature properties EN 10240: Internal and/or external protective coating for steel tubes - specification for hot dip galvanized coatings applied in automatic plants EN 10357: Austenitic, austenitic-ferritic and ferritic longitudinally welded stainless steel tubes for the food and chemical industry EN 10365: Hot rolled steel channels, I and H sections. Dimensions and masses EN 11000–49999 EN 12102: Air conditioners, liquid chilling packages, heat pumps and dehumidifiers with electrically driven compressors for space heating and cooling - Measurement of airborne noise - Determination of the sound power level EN 12103: Resilient floor coverings - Agglomerated cork underlays - specification EN 12104: Resilient floor coverings - Cork floor tiles - Specification EN 12105: Resilient floor coverings - Determination of moisture content of agglomerated composition cork EN 12199: Resilient floor coverings. 
Specifications for homogeneous and heterogeneous relief rubber floor coverings EN 12221: Changing units for domestic use EN 12246: Quality classification of timber used in pallets and packaging EN 12255: Wastewater treatment plants EN 12255-1: Part 1: General construction principles EN 12255-2: Part 2: Performance requirements of raw wastewater pumping installations EN 12255-3: Part 3: Preliminary treatment EN 12255-4: Part 4: Primary settlement EN 12255-5: Part 5: Lagooning processes EN 12255-6: Part 6: Activated sludge process EN 12255-7: Part 7: Biological fixed-film reactors EN 12255-8: Part 8: Sludge treatment and storage EN 12255-9: Part 9: Odour control and ventilation EN 12255-10: Part 10: Safety principles EN 12255-11: Part 11: General data required EN 12255-12: Part 12: Control and automation EN 12255-13: Part 13: Chemical treatment - Treatment of wastewater by precipitation/flocculation EN 12255-14: Part 14: Disinfection EN 12255-15: Part 15: Measurement of the oxygen transfer in clean water in aeration tanks of activated sludge plants EN 12255-16: Part 16: Physical (mechanical) filtration EN 12277: Mountaineering and climbing harnesses EN 12281: Printing and business paper. Requirements for copy paper. EN 12345: Welding. Multilingual terms for welded joints with illustrations EN 12492: Helmets for mountaineering EN 12566: Small wastewater treatment systems for up to 50 PT EN 12566-1: Part 1: Prefabricated septic tanks EN 12566-2: Part 2: Soil infiltration systems EN 12566-3: Part 3: Packaged and/or site assembled domestic wastewater treatment plants EN 12566-4: Part 4: Septic tanks assembled in situ from prefabricated kits EN 12566-5: Part 5: Pretreated Effluent Filtration systems EN 12566-6: Part 6: Prefabricated treatment units for septic tank effluent EN 12566-7: Part 7: Prefabricated tertiary treatment units EN 12572: Artificial climbing structures EN 12600: Classification of Resistance of Glazing to Impact EN 12663: Railway applications - Structural requirements of railway vehicle bodies EN 12797: Brazing – Destructive tests of brazed joints EN 12799: Brazing – Non-destructive examination brazed joints EN 12810: Facade scaffolds made of prefabricated parts EN 12811: Temporary works equipment EN 12841: Personal fall protection equipment. Rope access systems. Rope adjustment devices BS EN 12845:2015+A1:2019 Fixed firefighting systems. Automatic sprinkler systems. Design, installation and maintenance, as amended 2019 EN 12890: Patterns, pattern equipment and coreboxes for the production of sand molds and sand cores EN 12952: Water-Tube Boilers and Auxiliary Installations EN 12973: Value Management EN 12975-1: Thermal solar systems and components - Solar collectors EN 13000: Cranes - Mobile Cranes EN 13133: "Brazing - Brazer approval" (2000) EN 13145: Railway applications - Track - Wood sleepers and bearers EN 13146: Railway applications - Track - Test methods for fastening systems EN 13162:2013-03: Thermal insulation products for buildings - Factory made mineral wool (MW) products EN 13204: Double acting hydraulic rescue tools for fire and rescue service use. Safety and performance requirements EN 13300: quality and classification of (interior) wall paint EN 13309: Construction machinery - Electromagnetic compatibility of machines with internal power supply EN 13319: Diving accessories. Depth gauges and combined depth and time measuring devices. Functional and safety requirements, test methods. 
EN 13402: Size designation of clothes EN 13432: Compostable and biodegradable packaging EN 13445: Unfired pressure vessels EN 13480: Metallic industrial piping EN 13501: Fire classification of construction products and building elements EN 13537: Temperature ratings for sleeping bags EN 13594:2002: Protective gloves for professional motorcycle riders. Requirements and test methods EN 13595-1:2002: Protective clothing for professional motorcycle riders. Jackets, trousers and one piece or divided suits. General requirements EN 13595-2:2002: Protective clothing for professional motorcycle riders. Jackets, trousers and one piece or divided suits. Test method for determination of impact abrasion resistance EN 13595-3:2002: Protective clothing for professional motorcycle riders. Jackets, trousers and one piece or divided suits. Test method for determination of burst strength. EN 13595-4:2002: Protective clothing for professional motorcycle riders. Jackets, trousers and one piece or divided suits. Test methods for the determination of impact cut resistance EN 13612: Performance evaluation of in-vitro diagnostic devices EN 13634:2002: Protective footwear for professional motorcycle riders. Requirements and test methods EN 13640: Stability testing of in vitro diagnostic reagents EN 13757: Communication system for meters and remote reading of meters (Meter-Bus) EN 13940:2016: System of concepts to support continuity of care EN ISO 13982: Protective clothing for use against solid particulates — Part 1: Performance requirements for chemical protective clothing providing protection to the full body against airborne solid particulates (type 5 clothing) EN 14081-1: Timber structures - Strength graded structural timber with rectangular cross section - Part 1: General requirements EN 14081-2: Timber structures — Strength graded structural timber with rectangular cross section — Part 2 : Machine grading; additional requirements for initial type testing. EN 14081-3:Timber structures — Strength graded structural timber with rectangular cross section — Part 3: Machine grading; additional requirements for factory production control. EN 14081-4:Timber structures — Strength graded structural timber with rectangular cross section — Part 4: Machine grading; grading machine settings for machine controlled systems. EN 14214: The pure biodiesel standard EN 14225-1: Diving suits. Wet suits. Requirements and test methods. EN 14225-2: Diving suits. Dry suits. Requirements and test methods. EN 14511: Air conditioners, liquid chilling packages and heat pumps with electrically driven compressors for space heating and cooling EN 14904: Surfaces for sports areas. Indoor surfaces for multi-sports use. Specification EN 14988-1:2006: Children's High chairs. Part 1: Safety requirements EN 14988-2:2006: Children's High chairs. 
Part 2: Test methods EN 15251: Indoor environmental input parameters for design and assessment of energy performance of buildings- addressing indoor air quality, thermal environment, lighting and acoustics EN 15273: Railway applications - Gauges EN 15273-1: Part 1: General - Common rules for infrastructure and rolling stock EN 15273-2: Part 2: Rolling stock gauge EN 15273-3: Part 3: Structure gauges EN 15531: Service Interface for Real Time Information EN 15595: Railway applications - Braking - Wheel slip prevention equipment EN 15714: Industrial Valves - Actuators EN 15744: Film identification — Minimum set of metadata for cinematographic works EN 15838: Customer Contact Centres - Requirements for service provision EN 15883: Washer-disinfectors EN 15907: Film identification — Enhancing interoperability of metadata - Element sets and structures EN 16001: Energy management systems; withdrawn, replaced by ISO 50001 EN 16034: Pedestrian doorsets, industrial, commercial, garage doors and openable windows. Product standard, performance characteristics. Fire resisting and/or smoke control characteristics EN 16114: Management consultancy services EN 16228: Drilling and foundation equipment – Safety EN 16228-1: Common requirements EN 16228-2: Mobile drill rigs for civil and geotechnical engineering, quarrying and mining EN 16228-3: Horizontal directional drilling equipment (HDD) EN 16228-4: Foundation equipment EN 16228-5: Diaphragm walling equipment EN 16228-6: Jetting, grouting and injection equipment EN 16228-7: Interchangeable auxiliary equipment EN 16247: Energy audits EN 16804: Diving equipment - Diving open heel fins - Requirements and test methods EN 16805: Diving equipment - Diving mask - Requirements and test methods EN 16931-1: Electronic Invoicing - Semantic data model of the core elements of an electronic invoice EN 20345: Personal Protective Equipment – Safety Footwear EN 28601: Data elements and interchange formats; information interchange; representation of dates and times EN 45502-1: Active implantable medical devices - Part 1: General requirements for safety, marking and information to be provided by the manufacturer EN 45545: Railway applications. Fire protection on railway vehicles. EN 45545-1: Railway applications. Fire protection on railway vehicles. General EN 45545-2: Railway applications. Fire protection on railway vehicles. Requirements for fire behaviour of materials and components EN 45545-3: Railway applications. Fire protection on railway vehicles. Fire resistance requirements for fire barriers EN 45545-4: Railway applications. Fire protection on railway vehicles. Fire safety requirements for rolling stock design EN 45545-5: Railway applications. Fire protection on railway vehicles. Fire safety requirements for electrical equipment including that of trolley buses, track guided buses and magnetic levitation vehicles EN 45545-6: Railway applications. Fire protection on railway vehicles. Fire control and management systems EN 45545-7: Railway applications. Fire protection on railway vehicles. 
Fire safety requirements for flammable liquid and flammable gas installations EN 45554: General methods for the assessment of the ability to repair, reuse and upgrade energy-related products EN 50000–59999 (CEN specific, non-IEC electrical standards) EN 50022: 35 mm snap-on top-hat mounting rails for low-voltage switchgear (DIN rail) EN 50075: Europlug EN 50090: Home and Building Electronic Systems (KNX/EIB) EN 50102: Degrees of protection provided by enclosures for electrical equipment against external mechanical impacts EN 50119: Railway applications - Fixed installations: Electric traction overhead contact lines for railways EN 50121: Railway applications - Electromagnetic compatibility EN 50121-1: Railway applications - Electromagnetic compatibility Part 1: General EN 50121-2: Railway applications - Electromagnetic compatibility - Part 2 : emission of the whole railway system to the outside world EN 50121-3-1: Railway applications - Electromagnetic compatibility - Part 3-1 : rolling stock - Train and complete vehicle EN 50121-3-2: Railway applications - Electromagnetic compatibility - Part 3-2 : rolling stock - Apparatus EN 50121-4: Railway applications - Electromagnetic compatibility - Part 4 : emission and immunity of the signaling and telecommunications apparatus EN 50121-5: Railway applications - Electromagnetic compatibility - Part 5: Emission and immunity of fixed power supply installations and apparatus EN 50122: Railway applications - Fixed installations EN 50122-1: Railway applications - Fixed installations - Electrical safety, earthing and the return circuit - Part 1: Protective provisions against electric shock EN 50122-2: Railway applications - Fixed installations - Part 2: Protective provisions against the effects of Stray currents caused by d.c. traction systems EN 50122-3: Railway applications - Fixed installations - Electrical safety, earthing and the return circuit - Part 3: Mutual Interaction of a.c. and d.c. traction systems EN 50123: Railway applications - Fixed installations - D.C. 
switchgear EN 50124: Railway applications - Insulation coordination EN 50125-1: Railway applications - Environmental conditions for equipment - Part 1: Rolling stock and on-board equipment EN 50125-2: Railway applications - Environmental conditions for equipment - Part 2: Fixed electrical installations EN 50125-3: Railway applications - Environmental conditions for equipment - Part 3: Equipment for signaling and telecommunications EN 50126: Railway applications - The specification and demonstration of reliability, availability, maintainability and safety (RAMS) EN 50128: Railway applications - Communication, signalling and processing systems - Software for railway control and protection systems EN 50129: Railway applications - Communication, signalling and processing systems – Safety related electronic systems for signalling EN 50130: Alarm systems - Electromagnetic compatibility and Environmental test methods EN 50131: Alarm systems - Intrusion and hold-up systems EN 50136: Alarm systems - Alarm transmission systems EN 50153: Railway applications - Rolling stock - Protective provisions relating to electrical hazards EN 50155: Railway applications - Electronic equipment used on rolling stock EN 50157: Domestic and Similar Electronic Equipment Interconnection Requirements (Part1 = AV.link) EN 50159: Railway applications - Communication, signaling and processing systems - Safety-related communication in transmission systems EN 50163: Railway applications - Supply voltages of traction systems EN 50178: Electronic equipment for use in power installations EN 50262: Metric cable glands EN 50267: Corrosive Gases EN 50272-1: Standards for Safety requirements for secondary batteries and battery installations - Part 1 General safety information EN 50272-2: Standards for Safety requirements for secondary batteries and battery installations - Part 2 Stationary batteries EN 50522: Earthing of power installations exceeding 1 kV a.c. EN 50308: Wind Turbines - Protective Measures - Requirements for design, operation and maintenance EN 50325: Industrial communications subsystem based on ISO 11898 (CAN) for controller-device interfaces EN 50412: Power line communication apparatus and systems used in low-voltage installations in the frequency range 1.6 MHz to 30 MHz EN 50436: Alcohol interlocks EN 50525: Low voltage energy cables; a merger of HD 21 and HD 22. EN 50571: Specifies the general requirements for central power supply systems for an independent energy supply to essential safety equipment. Covers systems permanently connected to a.c. supply voltages not exceeding 1,000V and that use batteries as the alternative power source EN 50581: documentation for the assessment of electrical and electronic products with respect to the RoHS EN 50600: Information technology - Data centre facilities and infrastructures EN 50657: Railways Applications - Rolling stock applications - Software on Board Rolling Stock EN 55014: Electromagnetic compatibility — Requirements for household appliances, electric tools and similar apparatus EN 55022: Information technology equipment. Radio disturbance characteristics. EN 55024: Information technology equipment. Immunity characteristics EN 55032: Electromagnetic Compatibility of Multimedia Equipment. Emission requirements. EN 55035: Electromagnetic Compatibility of Multimedia Equipment. Immunity requirements. EN 60000-69999 (CEN editions of IEC standards) EN 60065: Audio, Video and similar electronics apparatus - Safety requirements. 
EN 60950-1: Information technology equipment - Safety - Part1: General requirements EN 60950-21: Information technology equipment - Safety - Part21: Remote power feeding EN 60950-22: Information technology equipment - Safety - Part22: Equipment installed outdoors EN 60950-23: Information technology equipment - Safety - Part23: Large data storage equipment EN 61000-1-2: Electromagnetic compatibility (EMC). General. Methodology for the achievement of functional safety of electrical and electronic systems including equipment with regard to electromagnetic phenomena EN 61000-1-3: Electromagnetic compatibility (EMC). General. The effects of high-altitude EMP (HEMP) on civil equipment and systems EN 61000-1-4: Electromagnetic compatibility (EMC). General. Historical rationale for the limitation of power-frequency conducted harmonic current emissions from equipment, in the frequency range up to 2 kHz EN 61000-1-5: Electromagnetic compatibility (EMC). General. High power electromagnetic (HPEM) effects on civil systems EN 61000-1-6: Electromagnetic compatibility (EMC). General. Guide to the assessment of measurement uncertainty EN 61000-2-2: Electromagnetic compatibility (EMC). Environment. Compatibility levels for low-frequency conducted disturbances and signaling in public low-voltage power supply systems EN 61000-2-4: Electromagnetic compatibility (EMC). Environment. Compatibility levels in industrial plants for low-frequency conducted disturbances EN 61000-2-9: Electromagnetic compatibility (EMC). Environment. Description of HEMP environment. Radiated disturbance. Basic EMC publication EN 61000-2-10: Electromagnetic compatibility (EMC). Environment. Description of HEMP environment. Conducted disturbance EN 61000-2-12: Electromagnetic compatibility (EMC). Environment. Compatibility levels for low-frequency conducted disturbances and signaling in public medium-voltage power supply systems EN 61000-3-2: Electromagnetic compatibility (EMC). Limits. Limits for harmonic current emissions (equipment input current up to and including 16 A per phase) EN 61000-3-3: Electromagnetic compatibility (EMC). Limits. Limitation of voltage changes, voltage fluctuations and flicker in public low-voltage supply systems, for equipment with rated current ≤ 16 A per phase and not subject to conditional connection EN 61000-3-11: Electromagnetic compatibility (EMC). Limits. Limitation of voltage changes, voltage fluctuations and flicker in public low-voltage supply systems. Equipment with rated voltage current ≤ 75 A and subject to conditional connection EN 61000-3-12: Electromagnetic compatibility (EMC). Limits. EN 61000-4-1: Electromagnetic compatibility (EMC). Testing and measurement techniques. Overview of IEC 61000-4 series EN 61000-4-2: Electromagnetic compatibility (EMC). Testing and measurement techniques. Electrostatic discharge immunity test. Basic EMC publication EN 61000-4-3: Electromagnetic compatibility (EMC). Testing and measurement techniques. Radiated, radio-frequency, electromagnetic field immunity test EN 61000-4-4: Electromagnetic compatibility (EMC). Testing and measurement techniques. Electrical fast transient/burst immunity test EN 61000-4-5: Electromagnetic compatibility (EMC). Testing and measurement techniques. Surge immunity test EN 61000-4-6: Electromagnetic compatibility (EMC). Testing and measurement techniques. Immunity to conducted disturbances, induced by radio-frequency fields EN 61000-4-7: Electromagnetic compatibility (EMC). Testing and measurement techniques. 
General guide on harmonics and interharmonics measurements and instrumentation, for power supply systems and equipment connected thereto EN 61000-4-8: Electromagnetic compatibility (EMC). Testing and measurement techniques. Power frequency magnetic field immunity test. Basic EMC publication EN 61000-4-11: Electromagnetic compatibility (EMC). Testing and measurement techniques. Voltage dips, short interruptions and voltage variations immunity tests EN 61000-4-12: Electromagnetic compatibility (EMC). Testing and measurement techniques. Oscillatory waves immunity test. Basic EMC publication EN 61000-4-13: Electromagnetic compatibility (EMC). Testing and measurement techniques. Harmonics and interharmonics including mains signaling at a.c. power port, low frequency immunity tests EN 61000-4-14: Electromagnetic compatibility (EMC). Testing and measurement techniques. Voltage fluctuation immunity test for equipment with input current not exceeding 16 A per phase EN 61000-4-15: Electromagnetic compatibility (EMC). Testing and measurement techniques. Flickermeter. Functional and design specifications. Basic EMC publication EN 61000-4-16: Electromagnetic compatibility (EMC). Testing and measurement techniques. Test for immunity to conducted, common mode disturbances in the frequency range 0 Hz to 150 kHz EN 61000-4-17: Electromagnetic compatibility (EMC). Testing and measurement techniques. Ripple on d.c. input power port immunity test EN 61000-4-18: Electromagnetic compatibility (EMC). Testing and measurement techniques. Damped oscillatory wave immunity test EN 61000-4-19: Electromagnetic compatibility (EMC). Testing and measurement techniques. Test for immunity to conducted, differential mode disturbances and signaling in the frequency range 2 kHz to 150 kHz at a.c. power ports EN 61000-4-20: Electromagnetic compatibility (EMC). Testing and measurement techniques. Emission and immunity testing in transverse electromagnetic (TEM) waveguides EN 61000-4-21: Electromagnetic compatibility (EMC). Testing and measurement techniques. Reverberation chamber test methods EN 61000-4-22: Electromagnetic compatibility (EMC). Testing and measurement techniques. Radiated emission and immunity measurements in fully anechoic rooms (FARs) EN 61000-4-23: Electromagnetic compatibility (EMC). Testing and measurement techniques. Test methods for protective devices for HEMP and other radiated disturbances EN 61000-4-24: Electromagnetic compatibility (EMC). Testing and measurement techniques. Test methods for protective devices for HEMP conducted disturbance. Basic EMC publication EN 61000-4-25: Electromagnetic compatibility (EMC). Testing and measurement techniques. HEMP immunity test methods for equipment and systems EN 61000-4-27: Electromagnetic compatibility (EMC). Testing and measurement techniques. Unbalance, immunity test for equipment with input current not exceeding 16 A per phase EN 61000-4-28: Electromagnetic compatibility (EMC). Testing and measurement techniques. Variation of power frequency, immunity test for equipment with input current not exceeding 16 A per phase EN 61000-4-29: Electromagnetic compatibility (EMC). Testing and measurement techniques. Voltage dips, short interruptions and voltage variations on d.c.input power ports. Immunity tests. Basic EMC Publication. EN 61000-4-30: Electromagnetic compatibility (EMC). Testing and measurement techniques. Testing and measurement techniques. Power quality measurement methods EN 61000-4-34: Electromagnetic compatibility (EMC). 
Testing and measurement techniques. Voltage dips, short interruptions and voltage variations immunity tests for equipment with mains current more than 16 A per phase IEC 61000-5-1 Electromagnetic compatibility (EMC). Installation and mitigation guidelines. General considerations. Basic EMC publication EN 61000-5-5 Electromagnetic compatibility (EMC). Installation and mitigation guidelines. Specification of protective devices for HEMP conducted disturbance. Basic EMC publication EN 61000-5-7 Electromagnetic compatibility (EMC). Installation and mitigation guidelines. Degrees of protection against electromagnetic disturbances provided by enclosures (EM code) EN 61000-6-1 Electromagnetic compatibility (EMC). Generic standards. Immunity for residential, commercial and light-industrial environments EN 61000-6-2 Electromagnetic compatibility (EMC). Generic standards. Immunity for industrial environments EN 61000-6-3 Electromagnetic compatibility (EMC). Generic standards. Emission standard for residential, commercial and light-industrial environments EN 61000-6-4 Electromagnetic compatibility (EMC). Generic standards. Emission standard for industrial environments EN 62061 / IEC 62061 Safety of machinery: Functional safety of electrical, electronic and programmable electronic control systems EN 62353:2014 Medical electrical equipment. Recurrent test and test after repair of medical electrical equipment EN 62366 Medical devices - Application of usability engineering to medical devices

Moreover, many ISO and IEC standards have been accepted as European Standards (headlined as EN ISO xxxxx) and are valid in the European Economic Area.

See also
Institute for Reference Materials and Measurements (IRMM)
List of ASTM standards
List of DIN standards
List of ISO standards

References

External links
EN standards

European Standard
25590399
https://en.wikipedia.org/wiki/Constantinos%20Daskalakis
Constantinos Daskalakis
Constantinos Daskalakis (born 29 April 1981) is a Greek theoretical computer scientist. He is a professor at MIT's Electrical Engineering and Computer Science department and a member of the MIT Computer Science and Artificial Intelligence Laboratory. He was awarded the Rolf Nevanlinna Prize and the Grace Murray Hopper Award in 2018.

Early life and education
Daskalakis was born in Athens on 29 April 1981. His grandparents originated from Crete, where he summered as a child. He has a younger brother, Nikolaos. When Daskalakis was in third grade, his father bought an Amstrad CPC, and Daskalakis stayed up all night trying to learn how it worked. He attended Varvakeio High School and completed his undergraduate studies at the National Technical University of Athens, where in 2004 he received his Diploma in Electrical and Computer Engineering. He completed his undergraduate thesis, "On the Existence of Pure Nash Equilibria in Graphical Games with succinct description", under the supervision of Stathis Zachos. As an undergraduate, Daskalakis attained perfect scores in all but one of his classes, something which had not previously been achieved in the university's history. He continued his studies at the University of California, Berkeley, where he received his PhD in Electrical Engineering and Computer Science in 2008 under the supervision of Christos Papadimitriou. His thesis was awarded the 2008 ACM Doctoral Dissertation Award.

Research and career
After his PhD he spent a year as a postdoctoral researcher in Jennifer Chayes's group at Microsoft Research New England. Daskalakis works on the theory of computation and its interface with game theory, economics, probability theory, statistics and machine learning. He has resolved long-standing open problems about the computational complexity of the Nash equilibrium, the mathematical structure and computational complexity of multi-item auctions, and the behavior of machine-learning methods such as the expectation–maximization algorithm. He has obtained computationally and statistically efficient methods for statistical hypothesis testing and learning in high-dimensional settings, as well as results characterizing the structure and concentration properties of high-dimensional distributions.

Daskalakis co-authored The Complexity of Computing a Nash Equilibrium with his doctoral advisor Christos Papadimitriou and Paul W. Goldberg, for which they received the 2008 Kalai Game Theory and Computer Science Prize from the Game Theory Society for "the best paper at the interface of game theory and computer science", in particular "for its key conceptual and technical contributions", as well as the outstanding paper prize from the Society for Industrial and Applied Mathematics (SIAM). He was appointed a tenured professor at MIT in May 2015.

Awards and honors
Constantinos Daskalakis was awarded the 2008 ACM Doctoral Dissertation Award for advancing our understanding of behavior in complex networks of interacting individuals, such as those enabled and created by the Internet. His dissertation on the computational complexity of Nash equilibria provides a novel, algorithmic perspective on game theory and the concept of the Nash equilibrium. For this work Daskalakis was also awarded the 2008 Kalai Prize for outstanding articles at the interface of computer science and game theory, along with Christos Papadimitriou and Paul W. Goldberg.
In 2018, Daskalakis was awarded the Nevanlinna Prize for "transforming our understanding of the computational complexity of fundamental problems in markets, auctions, equilibria and other economic structures". He also received the Simons Foundation Investigator award in Theoretical Computer Science, an award designed for "outstanding scientists in their most productive years," who are "providing leadership to the field". References 1981 births Living people Greek computer scientists MIT School of Engineering faculty Nevanlinna Prize laureates Greek emigrants to the United States
679485
https://en.wikipedia.org/wiki/Source%20%28game%20engine%29
Source (game engine)
Source is a 3D game engine developed by Valve. It debuted as the successor to GoldSrc with Half-Life: Source in June 2004, followed by Counter-Strike: Source and Half-Life 2 later that year. Source does not have a concise version numbering scheme; instead, it was designed to receive constant incremental updates. The engine began to be phased out in the late 2010s, with Source 2 succeeding it. History Source distantly originates from the GoldSrc engine, itself a heavily modified version of John Carmack's Quake engine with some code from the Quake II engine. Carmack commented on his blog in 2004 that "there are still bits of early Quake code in Half-Life 2". Valve employee Erik Johnson explained the engine's nomenclature on the Valve Developer Community. Source was developed part-by-part from this fork onwards, slowly replacing GoldSrc in Valve's internal projects and, in part, explaining the reasons behind its unusually modular nature. Valve's development of Source since then has been a mixture of licensed middleware and in-house-developed code. Among others, Source uses Bink Video for video playback. Modularity and notable updates Source was created to evolve incrementally with new technology, as opposed to the backward compatibility-breaking "version jumps" of its competitors. Different systems within Source are represented by separate modules which can be updated independently. With Steam, Valve can distribute these updates automatically among its many users. In practice, however, there have been occasional breaks in this chain of compatibility. The releases of Half-Life 2: Episode One and The Orange Box both introduced new versions of the engine that could not be used to run older games or mods without the developers performing upgrades to code and, in some cases, content. In both cases, however, updating required markedly less work than moving to a new version of a competing engine. Source 2006 The Source 2006 branch was the term used for Valve's games using technology that culminated with the release of Half-Life 2: Episode One. HDR rendering and color correction were first implemented in 2005 in Day of Defeat: Source, which required the engine's shaders to be rewritten. The former, along with developer commentary tracks, was showcased in Half-Life 2: Lost Coast. Episode One introduced Phong shading and other smaller features. Image-based rendering technology had been in development for Half-Life 2, but was cut from the engine before its release. It was mentioned again by Gabe Newell in 2006 as a piece of technology he would like to add to Source to implement support for much larger scenes that are impossible with strictly polygonal objects. Source 2007 The Source 2007 branch represented a full upgrade of the Source engine for the release of The Orange Box. An artist-driven, threaded particle system replaced previously hard-coded effects for all of the games within it. An in-process tools framework was created to support it, which also supported the initial builds of Source Filmmaker. In addition, the facial animation system was made hardware-accelerated on modern video cards for "feature film and broadcast television" quality. The release of The Orange Box on multiple platforms allowed for a large code refactoring, which let the Source engine take advantage of multiple CPU cores. However, support on the PC was experimental and unstable until the release of Left 4 Dead. Multiprocessor support was later backported to Team Fortress 2 and Day of Defeat: Source.
Valve created the Xbox 360 release of The Orange Box in-house, and support for the console is fully integrated into the main engine codeline. It includes asset converters, cross-platform play and Xbox Live integration. Program code can be ported from PC to Xbox 360 simply by recompiling it. The PlayStation 3 release was outsourced to Electronic Arts, and was plagued with issues throughout the process. Gabe Newell cited these issues when criticizing the console during the release of The Orange Box. Left 4 Dead branch The Left 4 Dead branch is an overhaul of many aspects of the Source engine through the development of the Left 4 Dead series. Multiprocessor support was further expanded, allowing for features like split screen multiplayer, additional post-processing effects, event scripting with Squirrel, and the highly dynamic AI Director. The menu interface was re-implemented with a new layout designed to be more console-oriented. This branch later fueled the releases of Alien Swarm and Portal 2, the former released with source code outlining many of the changes made since the branch began. With Portal 2, in addition, Valve took the problem of porting to PlayStation 3 in-house and, in combination with Steamworks integration, created what it called "the best console version of the game". OS X, Linux, and Android support In April 2010, Valve released all of their major Source games on OS X, coinciding with the release of the Steam client on the same platform. Valve announced that all their future games would be released simultaneously for Windows and Mac. The first of Valve's games to support Linux was Team Fortress 2, with the port released in October 2012 along with the closed beta of the Linux version of Steam. Both the OS X and Linux ports of the engine take advantage of OpenGL and are powered by Simple DirectMedia Layer. During the process of porting, Valve rearranged most of the games released up to The Orange Box into separate, but parallel "singleplayer" and "multiplayer" branches. The game code to these branches was made public to mod developers in 2013, and they serve as the current stable release of Source designated for mods. Support for Valve's internal Steam Pipe distribution system as well as the Oculus Rift is included. In May 2014, Nvidia released ports of Portal and Half-Life 2 to its Tegra 4-based Android handheld game console, the Nvidia Shield. Tools and resources Source SDK Source SDK is the software development kit for the Source engine, and contains many of the tools used by Valve to develop assets for their games. It comes with several command-line programs designed for special functions within the asset pipeline, as well as a few GUI-based programs designed for handling more complex functions. Source SDK was launched as a free standalone toolset through Steam, and required a Source game to be purchased on the same account. Since the release of Left 4 Dead in late 2008, Valve has instead released "Authoring Tools" for individual games, which constitute the same programs adapted for each game's engine build. After Team Fortress 2 became free-to-play, Source SDK was effectively made open to all Steam users. When some Source games were updated to Source 2013, the older Source SDKs were phased out. The three applications mentioned below are now included in the install of each game. There are three applications packaged in the Source SDK: Hammer Editor, Model Viewer, and Face Poser.
The Model Viewer is a program that allows users to view models and can be used for a variety of different purposes, including development. Developers may use the program to view models and their corresponding animations, attachment points, bones, and so on. Face Poser is the tool used to access facial animations and choreography systems. This tool allows one to edit facial expressions, gestures and movements for characters, lip-sync speech, sequence expressions and other acting cues, and preview what the scene will look like in the game engine. Hammer Editor The Hammer Editor, the engine's official level editor, uses rendering and compiling tools included in the SDK to create maps using the binary space partitioning (BSP) method. The tool was originally a GoldSrc editor known as Worldcraft and was developed independently by Ben Morris before Valve acquired it. Level geometry is created with 3D polygons called brushes; each face can be assigned a texture, which also defines the properties of the surface, such as the sounds used for footsteps. Faces can also be converted into a displacement, allowing more natural shapes such as hills to be created. Scenery objects or complex geometry can be imported as separate 3D models from the game directory. These models can also be used as physics objects or interactive props. The editor also features an in-depth logic I/O system that can be used to create complex interactive elements. Signals to trigger different responses or change the state of an entity can be sent between entities such as buttons, NPCs, intangible trigger brushes, and map props. Source Dedicated Server The Source Dedicated Server (SRCDS) is a standalone launcher for the Source engine that runs multiplayer game sessions without requiring a client. It can be launched on Windows or Linux and can allow for custom levels and assets. Most third-party servers additionally run Metamod:Source and SourceMod, which together provide a framework on top of SRCDS for custom modification of gameplay on existing titles. Source Filmmaker The Source Filmmaker (SFM) is a video capture and editing application that works from within the Source engine. Developed by Valve, the tool was originally used to create movies for Day of Defeat: Source and Team Fortress 2. It was also used to create some trailers for Source Engine games. The software was released to the public in 2012. Destinations Workshop Tools In June 2016, Valve released the Destinations Workshop Tools, a set of free virtual reality (VR) creation tools running on the Source 2 SDK. Valve Developer Community In June 2005, Valve opened the Valve Developer Community (VDC) wiki. VDC replaced Valve's static Source SDK documentation with a full MediaWiki-powered community site; within a matter of days Valve reported that "the number of useful articles nearly doubled". These new articles covered the previously undocumented Counter-Strike: Source bot, Valve's non-player character AI, advice for mod teams on setting up source control, and other topics. Academic papers Valve staff have occasionally produced professional and/or academic papers for various events and publications, including SIGGRAPH, Game Developer Magazine and the Game Developers Conference, explaining various aspects of the Source engine's development. Games using Source Titanfall, Titanfall 2, and Apex Legends are not included because their engines, while originally based on the Source SDK, were extensively modified to the point that they are effectively different engines.
Source 2 Source 2, the successor to Source, was announced by Valve at the Game Developers Conference in March 2015. There, Valve stated that it would be free for developers to use, with support for the Vulkan graphics API, and that it would use a new in-house physics engine called Rubikon. In June 2015, Valve announced that Dota 2, originally made in the Source engine, would be ported over to Source 2 in an update called Dota 2 Reborn. Reborn was first released to the public as an opt-in beta update that same month before officially replacing the original client in September 2015, making it the first game to use the engine. Source 2 had succeeded the original engine by the late 2010s. See also First-person shooter engine List of Source engine mods Source 2 Notes References 2004 software Game engines for Linux Proprietary software Video game engines
274976
https://en.wikipedia.org/wiki/WPS%20Office
WPS Office
WPS Office (an acronym for Writer, Presentation and Spreadsheets, previously known as Kingsoft Office) is an office suite for Microsoft Windows, macOS, Linux, iOS, Android, and HarmonyOS developed by Zhuhai-based Chinese software developer Kingsoft. It also comes pre-installed on Fire tablets. WPS Office is made up of three primary components: WPS Writer, WPS Presentation, and WPS Spreadsheet. The personal basic version is free to use. A fully featured professional-grade version is also available for a subscription fee. WPS Office 2016 was released in 2016. As of 2019, the Linux version is developed and supported by a volunteer community rather than Kingsoft itself. By 2019, WPS Office had reached more than 310 million monthly active users. The product has had a long history of development in China under the name "WPS" and "WPS Office". For a time, Kingsoft branded the suite as "KSOffice" for the international market, but later returned to "WPS Office". Since WPS Office 2005, the user interface has been similar to that of Microsoft Office products, and it supports Microsoft document formats in addition to native Kingsoft formats. History Origins WPS Office originated in 1988 as Super-WPS文字处理系统 (Super-WPS Word Processing System, then known simply as WPS), a word processor that ran on DOS systems and was sold by the then Hong Kong-based Kingsun Computer Co., Ltd. It was the first Chinese-language word processor designed and developed for the mainland Chinese market. WPS was used from the late 1980s. Early history Faced with competition from Microsoft Office, Kingsoft chief software architect Pak Kwan Kau (求伯君) diverted 4 million Renminbi from his personal account to assist in the development of WPS 97 for Microsoft Windows. In 1997, WPS 97 was released. The next version, WPS 2000, was released two years later. Both products were developed for a 16-bit Windows platform, with the capability of running on 32-bit Windows platforms. In May 2001, Kingsoft launched a full office suite under the name WPS Office 2001, which contained a word processor together with spreadsheet and presentation applications. With WPS Office 2001, Kingsoft entered the office productivity market in the People's Republic of China. In 2002, WPS Office 2002 was released, adding an email client to the office suite. WPS Office 2002 aimed to maintain interface compatibility with established office products. In 2003, WPS Office 2003 was released. The Chinese government made Kingsoft office software the standard for various divisions of the government. The 2004 incarnation of the office suite, dubbed WPS Office Storm, was released in late 2004. It claimed to offer total backward compatibility with Microsoft Office file formats. Unlike previous versions, WPS Storm was based on OpenOffice.org and was the first WPS Office suite to support operating systems other than Microsoft Windows. Kingsoft collaborated with Intel and IBM to integrate its text-to-text and text-to-speech technology into WPS Office Storm. In late 2005, WPS Office 2005 was released with a revamped interface and a smaller file size. Besides the Professional edition, a free Simplified Chinese edition was offered for students and home users. A Wine-hosted edition was provided for Linux users of WPS Office Storm. In 2007, Kingsoft Office 2007 was released. This was the first version that tried to enter international markets, with support for the English and Japanese languages. The native Chinese-language version continued under the name WPS Office.
In 2009, Kingsoft Office 2009 was released. It had increased compatibility with Microsoft Office, including support for the newer 2007 file formats. In 2010, Kingsoft Office 2010 was released. In 2011, Kingsoft Office was granted funding from the Chinese government and received further orders from central ministries in China. Kingsoft Office Suite Free 2012 was released in 2011. Kingsoft Office Professional 2012 and Kingsoft Office Standard 2012 were released for sale in February 2012, in addition to Kingsoft Office for Android. The initial release for Android included standard word processor functions such as creating documents, spreadsheets, and presentations. On 28 March 2012, Kingsoft announced that WPS for Linux was under development. It is the third WPS Linux product, following WPS Storm and WPS 2005. It was developed from scratch, based on the Qt framework, and designed to be as compatible as possible with its Windows counterpart. The free and paid versions of Kingsoft Office 2013 were released on 4 June 2013. They consist of three programs: Writer, Spreadsheets, and Presentation, which are similar to Microsoft Word, Excel, and PowerPoint. WPS Office for Linux Alpha 18 Patch 1 was released on 11 June 2015. 2014-present On June 6, 2014, all Kingsoft Office products were renamed WPS Office. On December 16, 2014, WPS Office 2014 for Windows, build 9.1.0.4932, was released, with some features offered under a subscription model for a monthly charge of US$3. The free version provided basic features and supported Microsoft Office .doc, .xls, and .ppt file formats. Premium paid versions provided full compatibility for Microsoft Office files. Officially, only the paid 2014 version supported saving files in .docx, .xlsx, and .pptx formats, but in practice the free version also supported these formats (as had the 2013 free version). On June 21, 2016, WPS Office 2016 for Windows became generally available as freemium software, with no subscription needed for basic features. On 28 May 2017, Kingsoft tweeted that development of the Linux version had halted, but denied this a few days later, removed the tweet, and issued a further alpha version. Kingsoft also tweeted about making WPS Office for Linux open source towards the end of 2017, to allow the Linux community to step in and continue maintaining it, but later deleted this tweet too. WPS Office 2019 was released on May 6, 2019. It introduced new integration and personalized features as well as full support for the PDF format. Editions WPS Office has versions for multiple operating systems. It has editions for: Windows macOS Linux (Fedora, CentOS, OpenSUSE, Ubuntu, Mint, Knoppix) — originally supported both 32- and 64-bit systems; however, support for 32-bit systems ended in July 2019. Android iOS In addition to the above, WPS Office also has a web version. Versions and subscription model WPS Office 2016 is available in Free, Premium, and Professional versions, along with versions for Android and iOS. The free version provides basic features and supports Microsoft Office file formats. Some features, such as printing and mail-merge, can be temporarily accessed only after viewing an advertisement, which WPS Office refers to as sponsored access. The subscription-based, paid version, called WPS Office 2016 Premium, is available for US$9.99 per 3 months, and makes all features available without viewing advertisements. A lifetime license for WPS Office 2016 Professional can be purchased for $79.99.
File format According to an April 2017 review of WPS Office 2016 Free v10.2.0.5871 for Windows, the program opens and saves all Microsoft Office document formats (doc, docx, xls, xlsx, etc.), HTML, RTF, XML, and PDF. Text document formats: wps, wpt, doc, dot, docx, dotx, docm, dotm XML document formats: xml, htm, html, mht, mhtm, mhtml Spreadsheet document formats: et, ett, xls, xlsx, xlt, xltx, csv, xlsm, xltm, xlsb, ets Slideshow document formats: ppt, pot, pps, dps, dpt, pptx, potx, ppsx, pptm, potm, ppsm, dpss See also List of office suites Comparison of office suites Office Open XML software OpenDocument software Web desktop Notes and references External links Windows Update History 1988 software Android (operating system) software Chinese brands Computer-related introductions in 1988 IOS software Office suites Office suites for Linux Office suites for macOS Office suites for Windows Pascal (programming language) software Proprietary commercial software for Linux Proprietary software that uses Qt Software that uses Qt Windows word processors
53772623
https://en.wikipedia.org/wiki/Elena%20Ferrari
Elena Ferrari
Elena Ferrari is a Professor of Computer Science and Director of the STRICT Social Lab at the Università degli Studi dell’Insubria, Varese, Italy. Ferrari was named Fellow of the Institute of Electrical and Electronics Engineers (IEEE) in 2013 for contributions to security and privacy for data and applications. In 2018, she was named one of the “50 Most Influential Italian Women in Tech”. She was elected as an ACM Fellow in 2019 "for contributions to security and privacy of data and social network systems". Education Ferrari received her MS degree in Computer Science from the University of Milano (Italy) in 1992, and her PhD in Computer Science in 1998. Career Ferrari is a full professor of Computer Science at the Università degli Studi dell’Insubria, Varese, Italy. She was an assistant professor at the Department of Computer Science of the University of Milano (Italy) from 1998 until January 2001. She has served on the editorial board of ACM/IMS Transactions on Data Science (TDS), IEEE Internet Computing, and the Transactions on Data Privacy. She is an associate editor of the Springer journal Data Science and Engineering. Research Ferrari's main research interests are in cybersecurity, privacy, and trust, and she publishes mainly in the areas of security and privacy for big data and the Internet of Things (IoT); access control; machine learning for cybersecurity; risk analysis; blockchain; and secure social media. Fundamentally, Ferrari's research has investigated means for users to protect their privacy online, and solutions for users to practice better ownership of their data. Examples of her work include temporal role-based access control, enforcing access control in web-based social networks, web content filtering, and rule-based access control for social networks. Awards She has received several awards for her work: ACM Conference on Data and Application Security and Privacy (CODASPY) Research Award (2019) ACM SACMAT 10-Year Test-of-Time Award (2019) for her work "A semantic web based framework for social network access control" (2009) IEEE Computer Society Technical Achievement Award (2009) “for pioneering contributions to Secure Data Management.” IEEE Fellow (2012) ACM Fellow (2019) References Fellow Members of the IEEE Fellows of the Association for Computing Machinery Living people Year of birth missing (living people)
1241238
https://en.wikipedia.org/wiki/Pipeline%20%28software%29
Pipeline (software)
In software engineering, a pipeline consists of a chain of processing elements (processes, threads, coroutines, functions, etc.), arranged so that the output of each element is the input of the next; the name is by analogy to a physical pipeline. Usually some amount of buffering is provided between consecutive elements. The information that flows in these pipelines is often a stream of records, bytes, or bits, and the elements of a pipeline may be called filters; this is also called the pipes and filters design pattern. Connecting elements into a pipeline is analogous to function composition. Narrowly speaking, a pipeline is linear and one-directional, though sometimes the term is applied to more general flows. For example, a primarily one-directional pipeline may have some communication in the other direction, known as a return channel or backchannel, as in the lexer hack, or a pipeline may be fully bi-directional. Flows with one-directional tree and directed acyclic graph topologies behave similarly to (linear) pipelines – the lack of cycles makes them simple – and thus may be loosely referred to as "pipelines". Implementation Pipelines are often implemented in a multitasking OS, by launching all elements at the same time as processes, and automatically servicing the data read requests by each process with the data written by the upstream process – this can be called a multiprocessed pipeline. In this way, the CPU will be naturally switched among the processes by the scheduler so as to minimize its idle time. In other common models, elements are implemented as lightweight threads or as coroutines to reduce the OS overhead often involved with processes. Depending upon the OS, threads may be scheduled directly by the OS or by a thread manager. Coroutines are always scheduled by a coroutine manager of some form. Usually, read and write requests are blocking operations, which means that the execution of the source process, upon writing, is suspended until all data has been written to the destination process, and, likewise, the execution of the destination process, upon reading, is suspended until at least some of the requested data can be obtained from the source process. This cannot lead to a deadlock, where both processes would wait indefinitely for each other to respond, since at least one of the two processes will soon thereafter have its request serviced by the operating system, and continue to run. For performance, most operating systems implementing pipes use pipe buffers, which allow the source process to provide more data than the destination process is currently able or willing to receive. Under most Unices and Unix-like operating systems, a special command is also available which implements a pipe buffer of potentially much larger and configurable size, typically called "buffer". This command can be useful if the destination process is significantly slower than the source process, but it is nevertheless desirable for the source process to complete its task as soon as possible. For example, the source process might be a command which reads an audio track from a CD while the destination process compresses the waveform audio data to a format like MP3. In this case, buffering the entire track in a pipe buffer would allow the CD drive to spin down more quickly, and enable the user to remove the CD from the drive before the encoding process has finished. Such a buffer command can be implemented using system calls for reading and writing data.
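As a rough illustration of the multiprocessed pipelines described above, the following Python sketch (not drawn from any particular system) connects two ordinary Unix commands with a pipe via the standard subprocess module; the kernel supplies the pipe buffer and the blocking behaviour discussed in this section, and the specific commands (ls and wc) are chosen only for illustration, so the sketch assumes a Unix-like system:

    import subprocess

    # Producer: any command that writes to standard output would do.
    producer = subprocess.Popen(["ls", "-l"], stdout=subprocess.PIPE)

    # Consumer: reads its standard input directly from the producer's standard output.
    # The operating system suspends whichever process gets ahead of the other,
    # as described above for blocking reads and writes.
    consumer = subprocess.Popen(["wc", "-l"], stdin=producer.stdout,
                                stdout=subprocess.PIPE)

    # Close our copy of the producer's stdout so the producer sees a closed pipe
    # (and can receive SIGPIPE) if the consumer exits early.
    producer.stdout.close()

    output, _ = consumer.communicate()
    print(output.decode().strip())

A shell pipeline such as ls -l | wc -l expresses the same arrangement far more concisely; the sketch only makes the operating-system mechanics explicit.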
Wasteful busy waiting can be avoided by using facilities such as poll or select or multithreading. Some notable examples of pipeline software systems include: RaftLib – C/C++ Apache 2.0 License VM/CMS and z/OS CMS Pipelines is a port of the pipeline idea to VM/CMS and z/OS systems. It supports much more complex pipeline structures than Unix shells, with steps taking multiple input streams and producing multiple output streams. (Such functionality is supported by the Unix kernel, but few programs use it as it makes for complicated syntax and blocking modes, although some shells do support it via arbitrary file descriptor assignment). Traditional application programs on IBM mainframe operating systems have no standard input and output streams to allow redirection or piping. Instead of spawning processes with external programs, CMS Pipelines features a lightweight dispatcher to concurrently execute instances of built-in programs to run the pipeline. It provides more than 200 built-in programs that implement typical UNIX utilities and interface to devices and operating system services. In addition to the built-in programs, CMS Pipelines defines a framework to allow user-written REXX programs with input and output streams that can be used in the pipeline. Data on IBM mainframes typically resides in a record-oriented filesystem and connected I/O devices operate in record mode rather than stream mode. As a consequence, data in CMS Pipelines is handled in record mode. For text files, a record holds one line of text. In general, CMS Pipelines does not buffer the data but passes records of data in a lock-step fashion from one program to the next. This ensures a deterministic flow of data through a network of interconnected pipelines. Object pipelines Besides byte stream-based pipelines, there are also object pipelines. In an object pipeline, processing elements output objects instead of text. Windows PowerShell includes an internal object pipeline that transfers .NET objects between functions within the PowerShell runtime (a simple sketch of this idea appears below, after the discussion of pipelines in GUIs). Channels, found in the Limbo programming language, are another example of this metaphor. Pipelines in GUIs Graphical environments such as RISC OS and ROX Desktop also make use of pipelines. Rather than providing a save dialog box containing a file manager to let the user specify where a program should write data, RISC OS and ROX provide a save dialog box containing an icon (and a field to specify the name). The destination is specified by dragging and dropping the icon. The user can drop the icon anywhere an already-saved file could be dropped, including onto icons of other programs. If the icon is dropped onto a program's icon, it is loaded and the contents that would otherwise have been saved are passed in on the new program's standard input stream. For instance, a user browsing the World Wide Web might come across a .gz compressed image which they want to edit and re-upload. Using GUI pipelines, they could drag the link to their de-archiving program, drag the icon representing the extracted contents to their image editor, edit it, open the save as dialog, and drag its icon to their uploading software. Conceptually, this method could be used with a conventional save dialog box, but this would require the user's programs to have an obvious and easily accessible location in the filesystem that can be navigated to. In practice, this is often not the case, so GUI pipelines are rare.
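The object-pipeline idea mentioned above can be sketched in ordinary Python using generators as the processing elements; the record fields and stage names below are invented purely for illustration, and the point is only that whole objects (here, dictionaries) flow from one stage to the next rather than bytes of text:

    def read_records():
        # Source stage: emits structured objects instead of lines of text.
        yield {"name": "alpha.txt", "size": 120}
        yield {"name": "beta.log", "size": 4800}
        yield {"name": "gamma.txt", "size": 35}

    def only_large(records, minimum):
        # Filter stage: passes through only the objects meeting a condition.
        for record in records:
            if record["size"] >= minimum:
                yield record

    def add_kilobytes(records):
        # Transform stage: enriches each object with a derived field.
        for record in records:
            record["kb"] = record["size"] / 1024
            yield record

    # The stages compose like a pipeline: each consumes the objects produced
    # by the previous one, lazily and one at a time.
    for item in add_kilobytes(only_large(read_records(), minimum=100)):
        print(item)

This mirrors, in spirit, what an object pipeline such as PowerShell's does: downstream stages receive structured values they can inspect by field, rather than having to re-parse text.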
Other considerations The name "pipeline" comes from a rough analogy with physical plumbing in that a pipeline usually allows information to flow in only one direction, like water often flows in a pipe. Pipes and filters can be viewed as a form of functional programming, using byte streams as data objects; more specifically, they can be seen as a particular form of monad for I/O. The concept of a pipeline is also central to the Cocoon web development framework and to any XProc (the W3C standard) implementations, where it allows a source stream to be modified before eventual display. This pattern encourages the use of text streams as the input and output of programs. This reliance on text has to be accounted for when creating graphical shells for text programs. See also Anonymous pipe Component-based software engineering Flow-based programming GStreamer for a multimedia framework built on plugin pipelines Graphics pipeline Iteratees Named pipe, an operating system construct intermediate to anonymous pipe and file. Pipeline (computing) for other computer-related versions of the concept. Kahn process networks to extend the pipeline concept to a more generic directed graph structure Pipeline (Unix) for details specific to Unix Plumber – "intelligent pipes" developed as part of Plan 9 Producer–consumer problem – for implementation aspects of software pipelines Software design pattern Stream processing XML pipeline for processing of XML files Notes External links Pipeline Processing. Parallel Programming: Do you know Pipeline Parallelism? Software design patterns Programming paradigms Inter-process communication
65835477
https://en.wikipedia.org/wiki/Alla%20Sheffer
Alla Sheffer
Alla Sheffer is a Canadian researcher in computer graphics, geometric modeling, geometry processing, and mesh generation, particularly known for her research on mesh parameterization and angle-based flattening. She is currently a professor of computer science at the University of British Columbia. Education and career Sheffer was educated at the Hebrew University of Jerusalem, earning a bachelor's degree in mathematics and computer science in 1991, a master's degree in computer science in 1995, and a Ph.D. in computer science in 1999. Her dissertation, Geometric Modeling and Applied Computational Geometry, was supervised by Michel Bercovier. After postdoctoral research at the University of Illinois at Urbana–Champaign, she became an assistant professor at the Technion – Israel Institute of Technology in 2001. She moved to the University of British Columbia in 2003, and became a full professor there in 2013. Recognition The Canadian Human–Computer Communications Society gave Sheffer its Achievement Award in 2018, "for her numerous highly impactful contributions to the field of computer graphics research". In 2020, Sheffer was elected as a Fellow of the Royal Society of Canada and a member of the ACM SIGGRAPH Academy. In 2021, she was elected as a Fellow of IEEE. She was named a 2021 ACM Fellow "for contributions to geometry processing, mesh parameterization, and perception-driven shape analysis and modeling". References External links Home page Year of birth missing (living people) Living people Canadian computer scientists Canadian women computer scientists Israeli computer scientists Israeli women computer scientists Researchers in geometric algorithms Hebrew University of Jerusalem alumni Technion – Israel Institute of Technology faculty University of British Columbia faculty Fellows of the Royal Society of Canada Fellows of the Association for Computing Machinery
1228095
https://en.wikipedia.org/wiki/Group%20Policy
Group Policy
Group Policy is a feature of the Microsoft Windows NT family of operating systems (including Windows 7, Windows 8.1, Windows 10, Windows 11, and Windows Server 2003+) that controls the working environment of user accounts and computer accounts. Group Policy provides centralized management and configuration of operating systems, applications, and users' settings in an Active Directory environment. A set of Group Policy configurations is called a Group Policy Object (GPO). A version of Group Policy called Local Group Policy (LGPO or LocalGPO) allows Group Policy Object management without Active Directory on standalone computers. Active Directory servers disseminate group policies by listing them in their LDAP directory under objects of class groupPolicyContainer. These refer to fileserver paths (attribute gPCFileSysPath) that store the actual group policy objects, typically in an SMB share \\domain.com\SYSVOL shared by the Active Directory server. If a group policy has registry settings, the associated file share will have a file registry.pol with the registry settings that the client needs to apply. The Policy Editor (gpedit.msc) is not provided on Home versions of Windows XP/Vista/7/8/8.1/10/11. Operation Group Policies, in part, control what users can and cannot do on a computer system. For example, a Group Policy can be used to enforce a password complexity policy that prevents users from choosing an overly simple password. Other examples include allowing or preventing unidentified users on remote computers from connecting to a network share, and blocking or restricting access to certain folders. A set of such configurations is called a Group Policy Object (GPO). As part of Microsoft's IntelliMirror technologies, Group Policy aims to reduce the cost of supporting users. IntelliMirror technologies relate to the management of disconnected machines or roaming users and include roaming user profiles, folder redirection, and offline files. Enforcement To accomplish the goal of central management of a group of computers, machines should receive and enforce GPOs. A GPO that resides on a single machine only applies to that computer. To apply a GPO to a group of computers, Group Policy relies on Active Directory (or on third-party products like ZENworks Desktop Management) for distribution. Active Directory can distribute GPOs to computers which belong to a Windows domain. By default, Microsoft Windows refreshes its policy settings every 90 minutes with a random offset of up to 30 minutes. On domain controllers, Microsoft Windows does so every five minutes. During the refresh, it discovers, fetches and applies all GPOs that apply to the machine and to logged-on users. Some settings - such as those for automated software installation, drive mappings, startup scripts or logon scripts - only apply during startup or user logon. Since Windows XP, users can manually initiate a refresh of the group policy by using the gpupdate command from a command prompt. Group Policy Objects are processed in the following order (from top to bottom): Local - Any settings in the computer's local policy. Prior to Windows Vista, there was only one local group policy stored per computer. Windows Vista and later Windows versions allow individual group policies per user account. Site - Any Group Policies associated with the Active Directory site in which the computer resides. (An Active Directory site is a logical grouping of computers, intended to facilitate management of those computers based on their physical proximity.)
If multiple policies are linked to a site, they are processed in the order set by the administrator. Domain - Any Group Policies associated with the Windows domain in which the computer resides. If multiple policies are linked to a domain, they are processed in the order set by the administrator. Organizational Unit - Group policies assigned to the Active Directory organizational unit (OU) in which the computer or user is placed. (OUs are logical units that help in organizing and managing a group of users, computers or other Active Directory objects.) If multiple policies are linked to an OU, they are processed in the order set by the administrator. The resulting Group Policy settings applied to a given computer or user are known as the Resultant Set of Policy (RSoP). RSoP information may be displayed for both computers and users using the gpresult command. (A simplified sketch of this precedence order appears below, after the discussion of Group Policy preferences.) Inheritance A policy setting inside a hierarchical structure is ordinarily passed from parent to children, and from children to grandchildren, and so forth. This is termed inheritance. It can be blocked or enforced to control what policies are applied at each level. If a higher level administrator (enterprise administrator) creates a policy that has inheritance blocked by a lower level administrator (domain administrator), this policy will still be processed. Where a Group Policy Preference setting is configured and there is also an equivalent Group Policy setting configured, then the value of the Group Policy setting will take precedence. Filtering WMI filtering is the process of customizing the scope of the GPO by choosing a Windows Management Instrumentation (WMI) filter to apply. These filters allow administrators to apply the GPO only to, for example, computers of specific models, RAM, installed software, or anything available via WMI queries. Local Group Policy Local Group Policy (LGP, or LocalGPO) is a more basic version of Group Policy for standalone and non-domain computers, that has existed at least since Windows XP, and can be applied to domain computers. Prior to Windows Vista, LGP could enforce a Group Policy Object for a single local computer, but could not make policies for individual users or groups. From Windows Vista onward, LGP allows Local Group Policy management for individual users and groups as well, and also allows backup, importing and exporting of policies between standalone machines via "GPO Packs" – group policy containers which include the files needed to import the policy to the destination machine. Group Policy preferences Group Policy Preferences are a way for the administrator to set policies that are not mandatory, but optional for the user or computer. They are implemented as a set of group policy setting extensions previously known as PolicyMaker. Microsoft bought PolicyMaker and then integrated it with Windows Server 2008. Microsoft has since released a migration tool that allows users to migrate PolicyMaker items to Group Policy Preferences. Group Policy Preferences add a number of new configuration items. These items also have a number of additional targeting options that can be used to granularly control the application of these setting items. Group Policy Preferences are compatible with x86 and x64 versions of Windows XP, Windows Server 2003, and Windows Vista with the addition of the Client Side Extensions (also known as CSE). Client Side Extensions are now included in Windows Server 2008, Windows 7, and Windows Server 2008 R2.
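As a rough sketch of the precedence order referred to above (local, site, domain, then organizational unit), the following Python fragment merges hypothetical policy dictionaries so that a setting defined at a later, more specific level overrides the same setting defined earlier; the setting names and values are invented for illustration, and the sketch deliberately ignores inheritance blocking, enforced links and WMI filtering:

    # Hypothetical GPOs listed in processing order: Local, Site, Domain, OU.
    gpos_in_processing_order = [
        {"MinimumPasswordLength": 8},                        # Local
        {"ScreenSaverTimeout": 900},                         # Site
        {"MinimumPasswordLength": 12, "DisableUSB": False},  # Domain
        {"DisableUSB": True},                                # Organizational Unit
    ]

    def resultant_set_of_policy(gpos):
        # Naive illustration of how a Resultant Set of Policy could be computed:
        # later (more specific) levels simply override earlier ones.
        rsop = {}
        for gpo in gpos:
            rsop.update(gpo)
        return rsop

    print(resultant_set_of_policy(gpos_in_processing_order))
    # {'MinimumPasswordLength': 12, 'ScreenSaverTimeout': 900, 'DisableUSB': True}

Real processing is considerably more involved than this merge, but the sketch captures why an OU-linked setting wins over a conflicting domain- or site-level setting.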
Group Policy Management Console Originally, Group Policies were modified using the Group Policy Edit tool that was integrated with the Active Directory Users and Computers Microsoft Management Console (MMC) snap-in, but it was later split into a separate MMC snap-in called the Group Policy Management Console (GPMC). The GPMC is now a user component in Windows Server 2008 and Windows Server 2008 R2 and is provided as a download as part of the Remote Server Administration Tools for Windows Vista and Windows 7. Advanced Group Policy Management Microsoft has also released a tool to make changes to Group Policy called Advanced Group Policy Management (a.k.a. AGPM). This tool is available for any organization that has licensed the Microsoft Desktop Optimization Pack (a.k.a. MDOP). This advanced tool allows administrators to have a check-in/check-out process for modifying Group Policy Objects, track changes to Group Policy Objects, and implement approval workflows for changes to Group Policy Objects. AGPM consists of two parts - server and client. The server is a Windows Service that stores its Group Policy Objects in an archive located on the same computer or a network share. The client is a snap-in to the Group Policy Management Console, and connects to the AGPM server. Configuration of the client is performed via Group Policy. Security Group Policy settings are enforced voluntarily by the targeted applications. In many cases, this merely consists of disabling the user interface for a particular function. Alternatively, a malevolent user can modify or interfere with the application so that it cannot successfully read its Group Policy settings, thus enforcing potentially lower security defaults or even returning arbitrary values. Windows 8 enhancements Windows 8 introduced a new feature called Group Policy Update. This feature allows an administrator to force a group policy update on all computers with accounts in a particular Organizational Unit. This creates a scheduled task on the computer which runs the gpupdate command within 10 minutes, adjusted by a random offset to avoid overloading the domain controller. Group Policy Infrastructure Status was introduced, which can report when any Group Policy Objects are not replicated correctly amongst domain controllers. Group Policy Results Report also has a new feature that times the execution of individual components when doing a Group Policy Update. See also Administrative Templates Group Policy improvements in Windows Vista Workgroup Manager References Further reading External links Group Policy Team Blog Group Policy Settings Reference for Windows and Windows Server Force Gpupdate Active Directory Windows components Windows administration
57307464
https://en.wikipedia.org/wiki/Word%20processor
Word processor
A word processor (WP) is a device or computer program that provides for input, editing, formatting, and output of text, often with some additional features. Early word processors were stand-alone devices dedicated to the function, but current word processors are word processor programs running on general purpose computers. The functions of a word processor program fall somewhere between those of a simple text editor and a fully featured desktop publishing program. However, the distinctions between these three have changed over time and were unclear after 2010. Background Word processors did not develop out of computer technology. Rather, they evolved from mechanical machines and only later did they merge with the computer field. The history of word processing is the story of the gradual automation of the physical aspects of writing and editing, and then of the refinement of the technology to make it available to corporations and individuals. The term word processing appeared in American offices in the early 1970s, centered on the idea of streamlining the work of typists, but the meaning soon shifted toward the automation of the whole editing cycle. At first, the designers of word processing systems combined existing technologies with emerging ones to develop stand-alone equipment, creating a new business distinct from the emerging world of the personal computer. The concept of word processing arose from the more general data processing, which since the 1950s had been the application of computers to business administration. Through history, there have been three types of word processors: mechanical, electronic and software. Mechanical word processing The first word processing device (a "Machine for Transcribing Letters" that appears to have been similar to a typewriter) was patented by Henry Mill; the patent described a machine capable of "writing so clearly and accurately you could not distinguish it from a printing press". More than a century later, another patent was issued in the name of William Austin Burt for the typographer. In the late 19th century, Christopher Latham Sholes created the first recognizable typewriter, although it was a large device described as a "literary piano". The only "word processing" these mechanical systems could perform was to change where letters appeared on the page, to fill in spaces that were previously left on the page, or to skip over lines. It was not until decades later that the introduction of electricity and electronics into typewriters began to help the writer with the mechanical part. The term “word processing” (translated from the German word Textverarbeitung) itself was created in the 1950s by Ulrich Steinhilper, a German IBM typewriter sales executive. However, it did not make its appearance in the 1960s office management or computing literature, though many of the ideas, products, and technologies to which it would later be applied were already well known. But by 1971 the term was recognized by the New York Times as a business "buzz word". Word processing paralleled the more general "data processing", or the application of computers to business administration. Thus by 1972 discussion of word processing was common in publications devoted to business office management and technology, and by the mid-1970s the term would have been familiar to any office manager who consulted business periodicals. Electromechanical and electronic word processing By the late 1960s, IBM had developed the IBM MT/ST (Magnetic Tape/Selectric Typewriter).
This was a model of the IBM Selectric typewriter from earlier in that decade, but it came built into its own desk, integrated with magnetic tape recording and playback facilities along with controls and a bank of electrical relays. The MT/ST automated word wrap, but it had no screen. This device allowed a user to rewrite text that had been written on another tape, and it also allowed limited collaboration in the sense that a user could send the tape to another person to let them edit the document or make a copy. It was a revolution for the word processing industry. In 1969, the tapes were replaced by magnetic cards. These memory cards were inserted into an extra device that accompanied the MT/ST, able to read and record users' work. In the early 1970s, word processing began to slowly shift from glorified typewriters augmented with electronic features to become fully computer-based (although only with single-purpose hardware) with the development of several innovations. Just before the arrival of the personal computer (PC), IBM developed the floppy disk. In the early 1970s, the first word-processing systems appeared which allowed display and editing of documents on CRT screens. During this era, these early stand-alone word processing systems were designed, built, and marketed by several pioneering companies. Linolex Systems was founded in 1970 by James Lincoln and Robert Oleksiak. Linolex based its technology on microprocessors, floppy drives and software. It was a computer-based system for application in the word processing business, and it sold systems through its own sales force. With a base of installed systems in over 500 sites, Linolex Systems sold 3 million units in 1975 — a year before the Apple computer was released. At that time, the Lexitron Corporation also produced a series of dedicated word-processing microcomputers. Lexitron was the first to use a full-sized video display screen (CRT) in its models by 1978. Lexitron also used 5 inch floppy diskettes, which became the standard in the personal computer field. The program disk was inserted in one drive, and the system booted up. The data diskette was then put in the second drive. The operating system and the word processing program were combined in one file. Another of the early word processing adopters was Vydec, which in 1973 created the first modern text processor, the “Vydec Word Processing System”. It had multiple built-in functions, such as the ability to share content by diskette and print it. The Vydec Word Processing System sold for $12,000 at the time (about $60,000 adjusted for inflation). The Redactron Corporation (organized by Evelyn Berezin in 1969) designed and manufactured editing systems, including correcting/editing typewriters, cassette and card units, and eventually a word processor called the Data Secretary. The Burroughs Corporation acquired Redactron in 1976. A CRT-based system by Wang Laboratories became one of the most popular systems of the 1970s and early 1980s. The Wang system displayed text on a CRT screen, and incorporated virtually every fundamental characteristic of word processors as they are known today. While early computerized word processing systems were often expensive and hard to use (like the computer mainframes of the 1960s), the Wang system was a true office machine, affordable to organizations such as medium-sized law firms, and easily mastered and operated by secretarial staff. The phrase "word processor" rapidly came to refer to CRT-based machines similar to Wang's.
Numerous machines of this kind emerged, typically marketed by traditional office-equipment companies such as IBM, Lanier (re-badged AES Data machines), CPT, and NBI. All were specialized, dedicated, proprietary systems, with prices in the $10,000 range. Cheap general-purpose personal computers were still the domain of hobbyists. Japanese word processor devices In Japan, even though typewriters with the Japanese writing system had been widely used by businesses and governments, they were limited to specialists with special skills, owing to the wide variety of characters, until computer-based devices came onto the market. In 1977, Sharp showcased a prototype of a dedicated computer-based word processing device for the Japanese writing system at the Business Show in Tokyo. Toshiba released the first Japanese word processor, the JW-10, in February 1979. The price was 6,300,000 JPY, equivalent to US$45,000. It has been selected as one of the IEEE milestones. The Japanese writing system uses a large number of kanji (logographic Chinese characters) which require 2 bytes to store, so having one key for each symbol is infeasible. Japanese word processing became possible with the development of the Japanese input method (a sequence of keypresses, with visual feedback, which selects a character), now widely used in personal computers. Oki launched the OKI WORD EDITOR-200 in March 1979 with this kana-based keyboard input system. In 1980 several electronics and office equipment brands entered this rapidly growing market with more compact and affordable devices. While the average unit price in 1980 was 2,000,000 JPY (US$14,300), it dropped to 164,000 JPY (US$1,200) in 1985. Even after personal computers became widely available, Japanese word processors remained popular as they tended to be more portable (an "office computer" was initially too large to carry around), and they became necessities in business and academia, and even for private individuals, in the second half of the 1980s. The phrase "word processor" has been abbreviated as "Wa-pro" or "wapuro" in Japanese. Word processing software The final step in word processing came with the advent of the personal computer in the late 1970s and 1980s and with the subsequent creation of word processing software. Word processing software that would create much more complex and capable output was developed and prices began to fall, making it more accessible to the public. By the late 1970s, computerized word processors were still primarily used by employees composing documents for large and midsized businesses (e.g., law firms and newspapers). Within a few years, the falling prices of PCs made word processing available for the first time to all writers in the convenience of their homes. The first word processing program for personal computers (microcomputers) was Electric Pencil, from Michael Shrayer Software, which went on sale in December 1976. In 1978, WordStar appeared and, because of its many new features, soon dominated the market. However, WordStar was written for the early CP/M (Control Program–Micro) operating system, and by the time it was rewritten for the newer MS-DOS (Microsoft Disk Operating System), it was obsolete. WordPerfect and its competitor Microsoft Word replaced it as the main word processing programs during the MS-DOS era, although there were less successful programs such as XyWrite. Most early word processing software required users to memorize semi-mnemonic key combinations rather than pressing keys such as "copy" or "bold".
Moreover, CP/M lacked cursor keys; for example, WordStar used the E-S-D-X-centered "diamond" for cursor navigation. However, the price differences between dedicated word processors and general-purpose PCs, and the value added to the latter by software such as “killer app” spreadsheet applications, e.g. VisiCalc and Lotus 1-2-3, were so compelling that personal computers and word processing software became serious competition for the dedicated machines and soon dominated the market. Then, in the late 1980s, came innovations such as the advent of laser printers, a "typographic" approach to word processing (WYSIWYG, What You See Is What You Get) using bitmap displays with multiple fonts (pioneered by the Xerox Alto computer and Bravo word processing program), and graphical user interfaces such as “copy and paste” (another Xerox PARC innovation, with the Gypsy word processor). These were popularized by MacWrite on the Apple Macintosh in 1984, and Microsoft Word on the IBM PC in 1983. These were probably the first true WYSIWYG word processors to become known to many people. Of particular interest also is the standardization of TrueType fonts used in both Macintosh and Windows PCs. While the publishers of the operating systems provide TrueType typefaces, they are largely gathered from traditional typefaces converted by smaller font publishing houses to replicate standard fonts. Demand arose for new and interesting fonts, which could be found free of copyright restrictions or commissioned from font designers. The growing popularity of the Windows operating system in the 1990s later took Microsoft Word along with it. Originally called "Microsoft Multi-Tool Word", this program quickly became a synonym for “word processor”. See also List of word processors References Broad-concept articles
18248185
https://en.wikipedia.org/wiki/Spore%20%282008%20video%20game%29
Spore (2008 video game)
Spore is a 2008 life simulation real-time strategy God game developed by Maxis, published by Electronic Arts and designed by Will Wright, and was released for Microsoft Windows and Mac OS X. Covering many genres including action, real-time strategy, and role-playing games, Spore allows a player to control the development of a species from its beginnings as a microscopic organism, through development as an intelligent and social creature, to interstellar exploration as a spacefaring culture. It has drawn wide attention for its massive scope, and its use of open-ended gameplay and procedural generation. Throughout each stage, players are able to use various creators to produce content for their games. These are then automatically uploaded to the online Sporepedia and are accessible by other players for download. Spore was released after several delays to generally favorable reviews. Praise was given for the fact that the game allowed players to create customized creatures, vehicles, and buildings. However, Spore was criticized for its gameplay which was seen as shallow by many reviewers; GameSpot remarked: "Individual gameplay elements are extremely simple". Controversy surrounded Spore due to the inclusion of SecuROM, and its digital rights management software, which can potentially open the user's computer to security risks. Gameplay Spore allows the player to develop a species from a microscopic organism to its evolution into a complex creature, its emergence as a social, intelligent being, to its mastery of the planet, and then finally to its ascension into space, where it interacts with alien species across the galaxy. Throughout the game, the player's perspective and species change dramatically. The game is broken up into distinct "stages". The outcome of one phase affects the initial conditions and leveling facing the player in the next. Each phase exhibits a distinct style of play, and has been described by the developers as ten times more complicated than its preceding phase. Phases often feature optional missions; when the player completes a mission, they are granted a bonus, such as a new ability or money. If all of a player's creations are eliminated at some point, the species will respawn at its nearest colony or the beginning of the phase. Unlike many other Maxis games, Spore has a primary win condition, which is obtained by reaching a supermassive black hole placed at the center of the galaxy and receiving a "Staff of Life". However, the player may continue to play after any goal has been achieved. The first four phases of the game, if the player uses the editors only minimally, will take up to 15 hours to complete, but can take as little as one or two hours. Note that there is no time limit for any stage: the player may stay in a single stage as long as they wish, and progress to the next stage when ready. At the end of each phase, the player's actions cause their creature to be assigned a characteristic, or consequence trait. Each phase has three consequence traits, usually based on how aggressively or peacefully the phase was played. Characteristics determine how the creature will start the next phase and give it abilities that can be used later in the game. Stages Spore is a game that is separated into stages, each stage presenting a different type of experience with different goals to achieve. The five stages are the Cell Stage, the Creature Stage, the Tribal Stage, the Civilization Stage, and the Space Stage. 
Once the primary objective is completed, the player has the option to advance to the next stage, or continue playing the current stage. Cell Stage The Cell Stage (sometimes referred to as the tide pool, cellular, or microbial stage) is the very first stage in the game, and begins with a cinematic explanation of how the player's cell got onto the planet through the scientific concept of panspermia, with a meteor crashing into the ocean of a planet and breaking apart, revealing a single-celled organism. The player guides this simple microbe around in a 3D environment on a single 2D plane, reminiscent of Flow, where it must deal with fluid dynamics and predators, while eating meat chunks or plants. The player may choose whether the creature is a herbivore or a carnivore prior to starting the stage. The player can find "meteor bits" (apparently from the aforementioned panspermic meteor) or kill other cells to find parts that upgrade their creature by adding abilities such as electricity, poison or other parts. Once the microbe has found a part, the player can call a mate to enter the editor, in which they can modify the shape, abilities and appearance of the microbe by spending "DNA points" earned by eating meat chunks or plants in the stage. The cell's eating habits in the Cell Stage directly influence its diet in the Creature Stage, and only mouths appropriate to the diet (Herbivore, Carnivore, or Omnivore) established in the Cell Stage will become available in the Creature Stage. Once the creature grows a brain and the player decides to progress to the next stage, the creature editor appears, prompting the user to add legs before the shift to land. The Creature editor differs in that it gives the player the ability to make major changes to the creature's body shape and length, and place parts in three-dimensional space instead of a top-down view as in the Cell editor. Creature Stage In the Creature Stage, the player creates their own land creature intended to live on a single continent. If the player attempts to swim to another island, an unidentified monster eats the player, and the player is warned not to come again. The biosphere contains a variety of animal species which carnivorous and omnivorous creatures can hunt for food, and fruit-bearing plants intended for herbivores and omnivores. The player creature's Hunger becomes a measured stat as well as its Health in this stage; depletion of the Hunger meter results in Health depletion and eventual death of the player creature unless food is eaten. In the Creature Stage, the player has a home nest where members of their own species are located. The nest is where the player respawns following death, and acts as a recovery point for lost HP. Other species' nests are spread throughout the continent. While interacting with them, the player can choose to be social or aggressive; how the player interacts with other creatures will affect their opinion of the player's species. For instance, by mimicking their social behaviors (singing, dancing etc.), NPC creatures will eventually consider the player an ally, but if the player harms members of their species, they will flee or become aggressive upon sighting them. The player can heal in allied nests and add allied creatures to their packs. Epic creatures, which are rare, aggressive creatures more than twenty times the player's height, feature prominently in the Creature Stage. The player cannot use social interactions with an Epic creature. 
There are also Rogue creatures which may be befriended or attacked. Additionally, spaceships may appear in this stage and abduct a creature. Progress in the Creature Stage is determined by the player's decisions on whether to befriend or attack other species. These decisions will affect the abilities of the player's species in subsequent stages of the game. Successful socialization and hunting attempts will give DNA Points, which may be spent on many new body parts. The player will also be rewarded with multiple DNA points for allying with or causing the extinction of a species. Placing new parts in the Creature editor comes at the expense of DNA points; more expensive parts will further upgrade the player creature's abilities for either method of interaction, as well as secondary abilities such as flight, speed or boosted health. After the player is finished editing, a newly evolved generation of creatures will be present in the home nest as the player's creature hatches. As the player's creature befriends or hunts more creatures, its intelligence and size increases until it can form a tribe. Tribal Stage After the brain of the player's species evolves sufficiently, the species may enter the Tribal Stage. The species' design becomes permanent, and the player sheds control of an individual creature in favor of the entire tribe group, as the game focuses on the birth of division of labor for the species. The player is given a hut, a group of up to 12 fully evolved creatures, as well as two of six possible Consequence Abilities, unlocked depending on the species' behavior in the previous phases. This is only possible if the player played the previous stages; if the player started directly from the Galaxy Screen, they are locked. Gameplay during this stage is styled as an RTS. Rather than controlling one creature, the player now controls an entire tribe and can give them commands such as gathering food, attacking other tribes or simply moving to a certain location. The player may give the tribe tools such as weapons, musical instruments, and healing or food-gathering implements. Food now replaces "DNA points" as the player's currency, and can be spent on structures and additional tribe members, or used to appease other tribes of different species. Tribe members also gain the option to wear clothes, the editing of which replaces the Creature Editor in the 'Tribal Outfitter'. Combat can be made more effective with weapons like stone axes, spears, and torches. For socializing, a player can obtain musical instruments: wooden horns, maracas and didgeridoos. Miscellaneous tools can be used for fishing and gathering food and for healing tribe members. All tools, however, require a specialized tool shack, which costs food to build. Tribe members can also gather food, an essential concept. Food can be stolen by wild creatures or by other tribes in the form of raids. The diet choice that the player made in prior stages, whether herbivore, omnivore, or carnivore, determines what food the tribe can gather and eat. Animals can be hunted for meat, and fish or seaweed can be speared for food. Fruit is gathered from trees and bushes, and players can also domesticate animals for eggs, which all diet types can eat. Any foreign animals in the player's pack in the Creature Stage are automatically added to the tribe as farm animals. Epic creatures may threaten nests or tribes. Allied tribes will occasionally bring the player gifts of food. 
Players can steal food from other tribes (though it angers them), and dead tribes may be pillaged for their food. There are five other tribes that appear along with the player's tribe. For every tribe befriended or destroyed, a piece of a totem pole is built, which may increase the population limit of the player's tribe or grant access to new tools and clothes. When all five tribes are allied or conquered, the player may move forward to the Civilization Stage. Civilization Stage The events of Tribal Stage have left the player's tribe the dominant species of the planet, but the species itself has now fragmented into many separate nations. The player retains control of a single nation with one city. The goal in the civilization phase is to gain control of the entire planet, and it is left to the player to decide whether to conquer it using military force, diplomacy, or religious influence. Two new editors (the building and vehicle editors) are used to create city buildings and vehicles. The player can place three types of buildings (House, Factory, and Entertainment) around the City Hall (which can also be customized) and may build up to 3 types of vehicles (sea, land and air) at each city. These vehicles serve military, economic or religious purposes. The main unit of currency is "Sporebucks", which is used to purchase vehicles and buildings. To earn income, players can capture spice geysers and set up spice derricks at their locations, conduct trade, or build factories. In constructing vehicles and buildings, as with most real-time strategy games, there is a capacity limit; building houses will increase the cap, and constructing various buildings adjacent to one another will provide a productivity bonus or deficit. The presence of other nations requires the player to continue expanding their empire using military force, propaganda, or simply buying out cities. Players can choose their method of global domination depending on the types of cities they own. Military states grow solely by attacking other cities. Nations with a religious trait construct special missionary units that convert other cities via religious propaganda. Likewise, economic states communicate solely by trade and have no weapons (except for defensive Turrets). If the player's nation captures a city of a different type, they can choose to have the city retain its original type if they wish, or convert it to match the type it was captured with. Players of all three ideological paths can eventually use a superweapon, which requires a large number of cities and Sporebucks, but gives the player a significant advantage over rival nations. Aside from enemy nations, Epic creatures may threaten individual cities. Space Stage The Space Stage provides new goals and paths as the player's species begins to spread through the galaxy. The game adopts the principle of mediocrity, as there are numerous forms of life scattered throughout the galaxy. The player controls a single spaceship, built at the beginning of the Space Stage. The player can travel by clicking on other planets and moons and stars, though each jump costs energy. Later in the game, the player can purchase a wormhole key which enables them to travel through black holes, offering instant transportation to a sister black hole. There are around 500,000 planets in the game's galaxy orbiting around 100,000 stars (including Earth and its star, Sol). Players can visit and explore all rocky planets with all their lifeforms and geologic structure. 
These planets can also be terraformed and colonized. The colonization of new worlds makes the player's civilization more influential and increases its income. Players can make contact with other space-faring civilizations, or "empires", which sport many different personalities and worldviews, ranging from diplomatic and polite species willing to ally, to distrustful, fanatical empires more willing to wage war. Completing missions for an empire improves the player's relationship with them, as does trading and assisting in fending off attacks. When the player has become allied with an empire, they can ask certain favors of the empire. If the player becomes enemies with an empire, they will send a small fleet of ships to attack the player's ship as soon as they enter their territory. One of the main goals in the Space Stage is for the player to push their way toward a supermassive black hole at the galaxy's center, which introduces the game's final antagonists, the Grox, a unique species of cybernetic aliens with a powerful empire of 2400 systems surrounding the core. Getting to the center of the galaxy and entering starts a cinematic in which the player is introduced to Steve. After the cinematic dialogue with Steve ends the player is shot out of the black hole, and gets rewarded with the Staff of Life. Another major goal in the game was to eradicate the Grox, which yielded an achievement. Removed stages Several other stages were mentioned at various points by the developers, including a Molecular Stage, an Aquatic Stage, a City Stage, and a Terraforming Stage. Ultimately, these were scrapped. Galactic Adventures If Galactic Adventures is installed, the player may be given missions which involve travelling to planets, beaming down and completing Maxis-created, planetside 'adventures'. With this expansion, the player can also outfit their Captain with weapons and accessories which assist in these adventures. The occupants of allied ships can also take part. Editors/creators User-generated content is a major feature of Spore; there are eighteen different editors (some unique to a phase). All have the same general UI and controls for positioning, scaling and colouring parts, whether for the creation of a creature, or for a building or vehicle. The Creature editor, for example, allows the player to take what looks like a lump of clay with a spine and mould it into a creature. Once the torso is shaped, the player can add parts such as legs, arms, feet, hands, noses, eyes, and mouths. Many of these parts affect the creature's abilities (speed, strength, diet, etc.), while some parts are purely decorative. Once the creature is formed, it can be painted using a large number of textures, overlays, colours, and patterns, which are procedurally applied depending on the topology of the creature. The only required feature is the mouth. All other parts are optional; for example, creatures without legs will slither on the ground like a slug or an inchworm, and creatures without arms will be unable to pick up objects. Although there is not a formal planet editor, in the Space Stage, players can freely terraform all rocky planets in the galaxy, adding mountains, valleys, lakes, etc. Players can also change these planets' biological ecosystems. There are two new editors seen in the new expansion Spore Galactic Adventures: these include the captain editor (also called the captain outfitter) and the adventure creator, which enables terraforming and placing objects freely on adventure planets. 
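The creators described above store each creation as a compact set of parts and parameters rather than as a full 3D model, which is why Spore's content can be shared so cheaply (see the procedural generation section below). As a rough, hypothetical illustration of that general idea, and not Spore's actual file format or algorithm, the following Python sketch expands a few bytes of "DNA" into a larger creature description deterministically, so only the small blob ever needs to be exchanged.

```python
# Hypothetical illustration of compact, seed-driven procedural content.
# This is not Spore's actual data format or algorithm.
import hashlib
import random

def generate_creature(dna: bytes) -> dict:
    """Deterministically expand a tiny 'DNA' blob into a larger description.
    The same bytes always yield the same creature, so only the blob needs
    to be shared between players."""
    seed = int.from_bytes(hashlib.sha256(dna).digest()[:8], "big")
    rng = random.Random(seed)
    limbs = rng.randint(2, 8)
    return {
        "limbs": limbs,
        "limb_lengths": [round(rng.uniform(0.2, 1.5), 2) for _ in range(limbs)],
        "hue": rng.randint(0, 359),
        "pattern": rng.choice(["stripes", "spots", "plain"]),
    }

blob = b"\x07a2f"  # a few bytes stand in for a shared creature file
assert generate_creature(blob) == generate_creature(blob)  # expansion is deterministic
print(generate_creature(blob))
```

Because the expansion is deterministic, two players who exchange only the small blob reconstruct identical content, which mirrors the "DNA template" analogy quoted in the procedural generation section.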
Community Spore user community functionality includes a feature that is part of an agreement with YouTube granting players the ability to upload a YouTube video of their creatures' activity directly from within the game, and EA's creation of "The Spore YouTube Channel", which will showcase the most popular videos created this way. In addition, some user-created content will be highlighted by Maxis at the official Spore site, and earn badges of recognition. One of Spore's most social features is the Sporecast, an RSS feed that players can use to subscribe to the creations of any specific Spore player, allowing them to follow that player's creations. There is a toggle which allows the player to restrict what downloadable content will be allowed; choices include: "no user generated content", "official Maxis-approved content", "downloadable friend content", and "all user-created content". Players can elect to ban content in-game, at any time, and Maxis monitors content for anything deemed inappropriate, issuing bans for infractions of content policy. Spore API An API (application programming interface) has also been released to allow developers to access data about player activity, the content players produce and their interactions with each other. The Spore API is a collection of RESTful public web services that return data in XML format. In April 2009, the Spore API Contest concluded, with winners building interactive visualizations, games, mobile applications and content navigation tools. The API also includes a developers' forum for people wishing to use the creations players have made to build applications. Interplay The game has been referred to as a "massively single-player online game" built on "asynchronous sharing." Simultaneous multiplayer gaming is not a feature of Spore. The content that the player can create is uploaded automatically to a central database, cataloged and rated for quality (based on how many users have downloaded the object or creature in question), and then re-distributed to populate other players' games. The data transmitted is very small, only a couple of kilobytes per item, because the content is generated procedurally. Via the in-game "MySpore Page", players receive statistics of how their creatures are faring in other players' games, which has been referred to as the "alternate realities of the Spore metaverse." The game also reports how many other players have interacted with the player; for example, it reports how many times other players have allied with the player's species. The personalities of user-created species are dependent on how the user played them. Players can share creations, chat, and roleplay in the Sporum, the game's internet forum hosted by Maxis. Multiple sections allow forum users to share creations and tips for the game, as well as roleplay. Sporepedia The Sporepedia keeps track of nearly every gameplay experience, including the evolution of a creature, by graphically displaying a timeline which shows how the creature incrementally changed over the eons; it also keeps track of the creature's achievements, both noteworthy and dubious, as a species. The Sporepedia also keeps track of all the creatures, planets, vehicles and other content the player encounters over the course of a game. Players can upload their creations to Spore.com to be viewed by the public at the Sporepedia website. The list of creations made by players has passed the 100 million mark.
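The Spore API section above describes a set of RESTful public web services that return XML. The short Python sketch below shows, in general terms, how a client might fetch and parse such an XML record; the host, path, and element names are placeholders invented for the example rather than documented Spore endpoints.

```python
# Illustrative sketch only: the host, path and XML element names below are
# placeholders invented for the example, not documented Spore API endpoints.
import urllib.request
import xml.etree.ElementTree as ET

def fetch_creature_record(base_url: str, creature_id: str) -> dict:
    """Fetch one creature's XML record from a REST-style service and
    return a few fields as a plain dictionary."""
    url = f"{base_url}/rest/creature/{creature_id}"  # assumed path layout
    with urllib.request.urlopen(url, timeout=10) as response:
        xml_text = response.read()
    root = ET.fromstring(xml_text)
    # Pull a few assumed elements; a real feed defines its own schema.
    return {
        "name": root.findtext("name", default=""),
        "author": root.findtext("author", default=""),
        "downloads": int(root.findtext("downloads", default="0")),
    }

if __name__ == "__main__":
    # Placeholder host; point this at a real XML service to try it.
    print(fetch_creature_record("http://example.invalid", "12345"))
```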
Procedural generation Spore uses procedural generation extensively, in preference to content pre-made by the developers. Wright mentioned in an interview given at E3 2006 that the information necessary to generate an entire creature would be only a couple of kilobytes, and went on to give the following analogy: "think of it as sharing the DNA template of a creature while the game, like a womb, builds the 'phenotypes' of the animal." These creature files, which represent only a few kilobytes of data, are uploaded and downloaded freely and quickly from the Sporepedia online server. This allows users to asynchronously upload their creations and download other players' content, which enriches the experience of the game as more of its players progress in the game. Reception IGN Australia awarded Spore a 9.2 out of 10 score, saying, "It [Spore] will make you acknowledge just how far we've come, and just how far we have to go, and Spore will change the way you think about the universe we live in." PC Gamer UK awarded the game 91%, saying "Spore's triumph is painfully ironic. By setting out to instill a sense of wonderment at creation and the majesty of the universe, it's shown us that it's actually a lot more interesting to sit here at our computers and explore the contents of each other's brains." In its 4.5-star (of 5) review, GameSpy wrote "Spore is a technological triumph that introduces a whole new way of tapping into a bottomless well of content." Most of the criticism of Spore came from the lack of depth in the first four phases, summarized by Eurogamer's 9 of 10 review, which stated, "for all their mighty purpose, the first four phases of the game don't always play brilliantly, and they're too fleeting." 1UP.com reasoned in its B+ graded review, "It's not a perfect game, but it's definitely one that any serious gamer should try." GameSpot in its 8.0 of 10 review called Spore "a legitimately great game that will deliver hours of quality entertainment", but criticized the "individual gameplay elements [that] are extremely simple." Jason Ocampo's IGN 8.8 of 10 review stated, "Maxis has made an impressive product that does so many incredible things" but added, "while Spore is an amazing product, it's just not quite an amazing game." The New York Times review of Spore mostly centered on the lack of depth and quality of gameplay in the later phases of the game, stating that "most of the basic core play dynamics in Spore are unfortunately rather thin." While a review in PC Gamer US stated that "it just isn't right to judge Spore in the context of so many of the other games we judge", Zero Punctuation was also critical of the game, claiming it did not live up to the legacy of The Sims: "The chief failing of Spore is that it's trying to be five games, each one a shallow and cut down equivalent of another game, with the Civilization Stage even going so far as to be named after the game [Civilization] it's bastardizing." Criticism has also emerged surrounding the stability of the game, with The Daily Telegraph stating: "The launch of Spore, the keenly anticipated computer game from the creators of The Sims, has been blighted by technical problems." In an interview published by MTV, Spore designer Will Wright responded to early criticism that the phases of the game had been dumbed-down by explaining "We were very focused, if anything, on making a game for more casual players. Spore has more depth than, let's say, The Sims did.
But we looked at the Metacritic scores for Sims 2, which was around ninety, and something like Half-Life, which was ninety-seven, and we decided — quite a while back — that we would rather have the Metacritic and sales of Sims 2 than the Metacritic and sales of Half-Life". In its first three weeks on sale, the game sold 2 million copies, according to Electronic Arts. It received a "Silver" sales award from the Entertainment and Leisure Software Publishers Association (ELSPA), indicating sales of at least 100,000 copies in the United Kingdom. DRM controversy Spore uses a modified version of the controversial digital rights management (DRM) software SecuROM as copy protection, which requires authentication upon installation and when online access is used. This system was announced after the originally planned system met opposition from the public, as it would have required authentication every ten days. Additionally, EA released the game under a policy by which the product key of an individual copy of the game would only be authenticated on up to three computers; however, some users ran afoul of the limitations as the software would consider even a slight change of hardware to constitute a different computer, resulting in all authorizations being used up by those who often upgrade their computer. In response to customer complaints, this limit was raised to five computers. After the activation limit has been depleted, EA Customer Service will consider further activations on a case-by-case basis. A survey conducted by EA revealed that only 14% have activated on more than 1 PC and less than 1% of users have tried to activate Spore on more than 3 PCs. By September 14, 2008 (ten days after the game's initial Australian release), 2,016 of 2,216 ratings on Amazon gave the game one out of five stars, most citing EA's implementation of DRM for the low ratings. Electronic Arts cited SecuROM as a "standard for the industry" and Apple's iPod song DRM policy as justification for the control method. Former Maxis developer Chris Harris labeled the DRM a "screw up" and a "totally avoidable disaster". The SecuROM software was not mentioned on the box, in the manual, or in the software license agreement. An EA spokesperson stated that "we don't disclose specifically which copy protection or digital rights management system we use [...] because EA typically uses one license agreement for all of its downloadable games, and different EA downloadable games may use different copy protection and digital rights management.” A cracked version without the DRM was released two days before the initial Australian release, making Spore the most torrented game on BitTorrent in 2008. On September 22, 2008, a class action lawsuit was filed against EA, regarding the DRM in Spore, complaining about EA not disclosing the existence of SecuROM, and addressing how SecuROM runs with the nature of a rootkit, including how it remains on the hard drive even after Spore is uninstalled. On October 14, 2008, a similar class action lawsuit was filed against EA for the inclusion of DRM software in the free demo version of the Creature Creator. The DRM was also one of the major reasons why Spore is still one of the most pirated games to date, where within the first week of the game, over 500,000 people started downloading or downloaded it illegally from sites like The Pirate Bay. EA began selling Spore without SecuROM on December 22, 2008, through Steam. 
Furthermore, EA Games president Frank Gibeau announced that maximum install limit would be increased from 3 to 5 and that it would be possible to de-authorize and move installations to new machines, citing the need to adapt their policy to accommodate their legitimate customers. EA has stated, "By running the de-authorization tool, a machine 'slot' will be freed up on the online Product Authorization server and can then be re-used by another machine. You can de-authorize at any time, even without uninstalling Spore, and free up that machine authorization. If you re-launch Spore on the same machine, the game will attempt to re-authorize. If you have not reached the machine limitation, the game will authorize and the machine will be re-authorized using up one of the five available machines." However, the de-authorization tool to do this is not available on the Mac platform. In 2016, a DRM-free version of Spore was released on GOG.com. Scientific accuracy The educational community has shown some interest in using Spore to teach students about evolution and biology. However, the game's player-driven evolution mechanism differs from the theory of evolution in some key ways: The different species that appear in Spore each have different ancestors, not shared ones, and the player's creature's "evolutionary" path is linear instead of branched: one species can only evolve into one other species, as opposed to into many related species. In Spore, the player's creature evolves along a path towards intelligence, instead of evolving solely in response to random genetic changes and pressure from its environment. In real world evolution, there are many possible evolutionary pathways, and there is no endpoint except extinction. (However a change in environment most likely will cause the player to change their creature to help survive in a new environment e.g. growing long arms to reach fruits on trees.) In the real world, an organism's environment shapes its evolution by allowing some individuals to reproduce more and causing other individuals to die. In Spore, the only things shaping the way the creatures change over time are game statistics and "whatever the player thinks looks cool." In Spore, creatures have to collect new parts from other creatures or from skeletal remains in order to evolve those parts themselves. In reality, this does not occur, although in some cases organisms can appropriate the genes of other species. Bacteria and viruses can transfer genes from one species of macroscopic organism to another. However, this transfer is limited to single or occasionally multiple alleles; it never involves complex organs like mouths or limbs, as in Spore. In real evolution, microorganisms grew in size due to the rise of cyanobacteria, or photosynthesizing cells, rather than solely the consumption of additional food, as in Spore. In October 2008, John Bohannon of Science magazine assembled a team to review the game's portrayal of evolution and other scientific concepts. Evolutionary biologists T. Ryan Gregory of the University of Guelph and Niles Elredge of the American Museum of Natural History reviewed the Cell and Creature stages. William Sims Bainbridge, a sociologist from the U.S. National Science Foundation, reviewed the Tribe and Civilization stages. NASA's Miles Smith reviewed the Space Stage. The Science team evaluated Spore on twenty-two subjects. 
The game's grades ranged from a single A in galactic structure and a B+ in sociology to Fs in mutation, sexual selection, natural selection, genetics, and genetic drift. In addition, Yale evolutionary biologists Thomas Near and Thomas Prum found Spore fun to play and admired its ability to get people to think about evolutionary questions, but considered the game's evolutionary mechanism to be "severely messed up". That said, studies of how players make meaning with the game suggest that the game prompts more sophisticated thinking about evolution than the model the game presents. According to Seed magazine, the original concept for Spore was more scientifically accurate than the version that was eventually released. It included more realistic artwork for the single-celled organisms and a rejection of faster-than-light travel as impossible. However, these were removed to make the game more friendly to casual users. While Seed does not entirely reject Spore as a teaching tool, admiring its ability to show the user experimentation, observation, and scale, the game's biological concepts did not fare so well: Will Wright argues that the developers "put the player in the role of an intelligent designer" because of the lack of emotional engagement of early prototypes focusing on mutation. Intelligent design advocate Michael Behe of Lehigh University reviewed the game and said that Spore "has nothing to do with real science or real evolution—neither Darwinian nor intelligent design." Expansions Spore Creature Creator is the creature creator element of Spore released prior to the full game. Spore Creepy and Cute is an expansion pack that was released in late 2008; it includes new parts and color schemes for creature creation. Among the new parts were additional mouths and eyes, as well as "insect legs." The pack also included new test-drive animations and backgrounds. Spore Galactic Adventures was released on June 23, 2009. It allows the player's creature to beam onto planets, rather than using a hologram. It also adds an "Adventure Creator" which allows for the creation of missions and goals to share with the Spore community. Creatures can add new abilities, including weaponry, tanks, and crew members, as well as a section of the adventure creator that involves editing a planet and using 60 new flora parts. Spore Bot Parts Pack is an expansion released as part of an EA promotion with Dr Pepper in early 2010: 14 new robotic parts for Spore creatures were released in a new patch (1.06.0000) available only from the Dr Pepper website. Codes found on certain bottles of Dr Pepper allow the player to redeem these parts, albeit only for the US, excluding Maine. It was only available for Windows PC, and was eventually extended to Canadian residents. The promotion ended in late 2011. The Spore Bot Parts Pack has caused controversy within the Spore community, because of many problems with the download and its exclusive nature. Spinoffs The Nintendo DS spinoff is titled Spore Creatures, focusing on the Creature phase. The game is a 2D/3D story-based role-playing game in which the player controls a creature kidnapped by a UFO and forced to survive in a strange world, with elements of Nintendogs. Another Spore title for the DS called Spore Hero Arena was released in 2009. Spore Origins is the mobile phone/iPhone/iPod spinoff of Spore, and as with the Nintendo DS version, focuses on a single phase of gameplay; in this case, the cell phase.
The simplified game allows players to try to survive as a multicellular organism in a tide pool, similar to Flow. The iPhone version takes advantage of the device's touch capabilities and 3-axis accelerometer. A Wii spinoff of the game, now known as Spore Hero, has been mentioned by Will Wright several times, such as in his October 26, 2007 interview with The Guardian. Buechner confirmed it, revealing that plans for a Wii version were underway, and that the game would be built from the ground up and would take advantage of the Wii Remote, stating, "We're not porting it over. You know, we're still so early in design and prototyping that I don't know where we're going to end up, so I don't want to lead you down one path. But suffice to say that it's being developed with the Wii controls and technology in mind." Eventually, a spin-off under the title "Spore Hero" was announced, an adventure game built from the ground up for the Wii with a heavier focus on evolution. For a time, Xbox 360 and PlayStation 3 versions of Spore were under consideration. Frank Gibeau, president of Electronic Arts' Games Label, announced that the publisher might use the underlying technology of Spore to develop additional software titles, such as action, real-time strategy, and role-playing games for the PlayStation 3, Xbox 360, and Wii. Spore Hero Arena is a spinoff game for the Nintendo DS released on October 6, 2009. Darkspore was an action role-playing game that utilized the same creature-editing mechanics. It was released in April 2011 for Microsoft Windows. The game was shut down in March 2016. Spore Creature Keeper was a spin-off game developed by Maxis for Windows and OS X. Made for younger users, the gameplay was heavily based on The Sims. Originally planned for a summer 2009 release, the game's development was eventually cancelled. Other media Merchandising There is an iTunes-style "Spore Store" built into the game, allowing players to purchase external Spore licensed merchandise, such as t-shirts, posters, and future Spore expansion packs. There are also plans for the creation of a type of Spore collectible card game based on the Sporepedia cards of the creatures, buildings, vehicles, and planets that have been created by the players. There are also indications of plans for the creation of customized creature figurines; some of those who designed their own creatures at E3 2006 later received 3D printed models of the creatures they created. On December 18, 2008, it was announced that players could now turn their creations into 3D sculptures using Z Corporation's 3D printing technology. The Spore Store also allows people to put their creatures on items such as T-shirts, mugs, and stickers. The Spore team worked with a comic creation software company to offer comic book versions of players' "Spore stories". Comic books with stylized pictures of various creatures, some of whose creation has been shown in various presentations, can be seen on the walls of the Spore team's office.
The utility was revealed at the Comic-Con International: San Diego on July 24, 2008, as the Spore Comic Creator, which would utilize MashOn.com and its e-card software. Spore: Galactic Edition, a special edition of the game, includes a Making of Spore DVD video, How to Build a Better Being DVD video by National Geographic Channel, The Art of Spore hardback mini-book, a fold-out Spore poster and a 100-page Galactic Handbook published by Prima Games. Canceled theatrical film EA, 20th Century Fox, and AIG announced the development of a Spore film on October 1, 2009. The adaptation would be a CGI-animated film created by Blue Sky Studios and directed by Chris Wedge. However, the film remained in development hell for years. Following Disney's purchase of Fox, Blue Sky Studios announced that they would be closing down, leaving the film ostensibly canceled. Soundtrack Cliff Martinez composed the main menu Galaxy theme track, along with the related interstellar and solar music. Brian Eno together with Peter Chilvers created the generative music heard while editing planets in the space stage. Kent Jolly, with sample source from Brian Eno, created the generative music for the cell game, cell editor, creature game, creature editor, tribe game, and civ stage building editor. Aaron Mcleran, also with some sample source from Brian Eno, created the generative music for the Tribe Editor and all of the vehicle editors. Other composers included Jerry Martin, Saul Stokes (Sporepedia music), and Marc Russo. The civ stage user theme generation was designed by Kent Jolly, Aaron Mcleran and Cyril Saint Girons, with sample source provided by Brian Eno. All of the audio in Spore was implemented using a modified version of Pure Data, created by Miller Puckette. A demonstration and talk about the techniques used in Spore's generative music is available at https://www.gdcvault.com/play/323/Procedural-Music-in Use in academia Spore has been used in academic studies to see how respondents display surrogation. See also Black & White Creatures Eco E.V.O.: Search for Eden Evolution: The Game of Intelligent Life Impossible Creatures L.O.L.: Lack of Love No Man's Sky Seventh Cross: Evolution SimEarth SimLife Universe Sandbox 3D Virtual Creature Evolution References External links Sporepedia at official Spore website 2008 video games Biological simulation video games Electronic Arts franchises Electronic Arts games God games MacOS games Maxis Sim games Science fiction video games Single-player video games Single-player online games Video game franchises Video game franchises introduced in 2008 Video games about evolution Video games about microbes Video games with expansion packs Video games with underwater settings Video games using procedural generation Video games with user-generated gameplay content Video games developed in the United States Video games scored by Cliff Martinez Video games set on fictional planets Windows games
50003408
https://en.wikipedia.org/wiki/Open%20Mainframe%20Project
Open Mainframe Project
Open Mainframe Project is a Collaborative Project managed by the Linux Foundation to encourage the use of Linux-based operating systems and open source software on mainframe computers. The project was announced on August 17, 2015 and was driven by IBM, a major supplier of mainframe hardware, as well as 16 other founding members, which included SUSE, CA Technologies, BMC Software and Compuware, as well as clients and partners such as RSM Partner, Vicom Infinity, L3C LLP and ADP, and academic institutions such as Marist College and the University of Bedfordshire. Coincident with the announcement, IBM also announced a partnership with Canonical to make the Ubuntu operating system available for their high-end z Systems hardware. Development priorities for the project in 2016 included OpenJDK, Docker and Hyperledger. In February 2016 the Linux Foundation announced that new members had joined the Open Mainframe Project: Hitachi Data Systems, Sine Nomine Associates, East Carolina University and DataKinetics, a 35% expansion in the overall membership. Canonical, the organization behind Ubuntu, has also joined. Part of the announcement was the launch of a summer intern program. Projects Zowe Zowe is the first open source project for z/OS. It was announced in August 2018 at SHARE in St. Louis together with the open beta release of version 0.9, which contained contributions from IBM, Computer Associates, and Rocket Software. Version 1.0 was released in February 2019. In September 2019 Phoenix Software International obtained Zowe conformance for their (E)JES Command Line Interface plugins and REST API extension. Zowe narrows the skills gap between new and legacy z/OS developers by offering the choice to work with z/OS either through a Command Line Interface, a "Zowe Explorer" Visual Studio Code extension, a web browser served from the Zowe Application Framework, or through REST APIs and web sockets served through the API Mediation Layer. Zowe is an extensible platform for tools, and provides the ability for extension through CLI plugins, new applications added to the web desktop, and onboarding of REST APIs to the API Mediation Layer. The Zowe conformance program provides certification accreditation to Independent Software Vendors (ISVs) and System Integrators (SIs) building and distributing Zowe extensions. See also Linux on IBM Z References External links Linux Foundation projects IBM
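The Zowe description above mentions REST APIs and web sockets exposed through an API Mediation Layer. As a rough sketch of how a client might call such a gateway from Python, consider the example below; the gateway URL, route, and response shape are assumptions made for illustration, not guaranteed Zowe endpoints, and a real deployment would use the routes and authentication scheme documented for the installed Zowe version.

```python
# Rough illustration of calling a REST API through a Zowe-style API gateway.
# The gateway URL, route and response shape are assumptions made for this
# sketch; consult the Zowe documentation for the real routes and security.
import base64
import json
import ssl
import urllib.request

def list_data_sets(gateway: str, user: str, password: str, hlq: str) -> list:
    """Ask an assumed file-listing service, routed via the API Mediation
    Layer, for data sets whose names begin with the given qualifier."""
    url = f"{gateway}/api/v1/datasets?dslevel={hlq}"  # assumed route
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    request = urllib.request.Request(url, headers={"Authorization": f"Basic {token}"})
    context = ssl.create_default_context()
    context.check_hostname = False            # demo only: do not do this in production
    context.verify_mode = ssl.CERT_NONE
    with urllib.request.urlopen(request, timeout=30, context=context) as response:
        payload = json.loads(response.read())
    # Assume the service wraps results in an "items" list of objects.
    return [item.get("name") for item in payload.get("items", [])]

if __name__ == "__main__":
    print(list_data_sets("https://gateway.example.invalid:7554", "user", "secret", "SYS1"))
```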
13137293
https://en.wikipedia.org/wiki/Williams%20FW15C
Williams FW15C
The Williams FW15C is a Renault-powered Formula One car designed by Adrian Newey and built by Williams Grand Prix Engineering. It was raced by Alain Prost and Damon Hill during the 1993 Formula One season. As the car that won both the Drivers' and Constructors' Championships in the last season before the FIA banned electronic driver aids, the FW15C (along with its racing predecessor, the FW14B) was, in 2005, considered to be among the most technologically sophisticated Formula One cars of all time, incorporating anti-lock brakes, traction control, active suspension, and a semi-automatic and fully-automatic gearbox. Predecessors FW15 The original FW15 was a new car designed in 1992 to incorporate the active suspension changes developed by Frank Dernie and implemented on the previous season's FW14B. The FW14B had initially been designed as a passive car (FW14) and had been pushed into being active. This meant it had various new active components implemented on the car which had not been in the original design brief. It was therefore considered a relatively overweight package. The original FW15 was an active car from the start, which enabled a much tidier package closer to the minimum weight limit. The success of the FW14B meant that the FW15 was not needed in 1992. FW15B The FW15B was a 1992 FW15 hastily converted to the 1993 regulations, featuring narrower front suspension, narrower rear tyres, a raised nose and wing endplates, and narrower wings to enable early season testing for 1993. Chassis Building on the hugely successful FW14B, which took Nigel Mansell and Williams to both titles in 1992, the car was the first all-new car to be produced by Patrick Head and Adrian Newey in collaboration (Head had designed many of Williams's previous cars, while Newey had designed cars for the March and Leyton House Racing teams). With Newey's aerodynamic input the FW15 was a significant improvement on its predecessor, with a narrower nose, sleeker airbox and engine cover and carefully sculpted sidepods. Another new feature was the larger rear wing used at high-downforce circuits, which featured an extra element ahead and above the main wing (similar to the 'winglets' seen in Grand Prix racing in earlier seasons). The car was available in August 1992, but given the success and improved reliability of the FW14B, prudence dictated that the new car did not make its debut until the following year's season-opener in South Africa. As a result of the huge difference in build of their two drivers (Alain Prost was nearly half a foot shorter than Damon Hill), Williams eventually opted to build two slightly different FW15C tubs, so as to accommodate Hill's size 12 feet, as he had repeatedly complained of cramp in the tight confines around the pedals. The FW15C had 12% better aerodynamic efficiency (downforce/drag) than the FW14B and an engine producing 30 additional horsepower. Newey said in an interview in 1994 that the aerodynamics on the FW14B were messy due to the switch to active suspension from passive suspension, and that the FW15C was an aerodynamically cleaned-up version of the FW14B. In addition, the FW15C featured an ABS braking system which was not available on the FW14B, and featured a 210L fuel tank, compared to the 230L tank in the FW14B.
Engine Renault went into their fifth year with Williams and again proved to be the class of the field, with their RS5 67° V10 engine producing at least , at least more than Benetton and McLaren's Ford V8, and with less of a penalty in terms of extra fuel carried than Ferrari's powerful but thirsty 041 3.5 litre V12. Renault had acquired a reputation for almost bullet-proof reliability but Williams did suffer three engine failures during races in 1993, although on each occasion the sister car won the race. The French Grand Prix was a PR dream for Renault, with a French driver leading home the team's only 1-2 finish of the year, while Hill's victory at the Belgian race was Renault's 50th Formula One win. Transmission The FW15C used a semi-automatic transmission very similar to the FW14B, but with changes to the hydraulic activation system. A press button starting device by means of which the clutch comes under automatic control attracted the drivers' unreserved approval during a succession of tests, but they did not use it in races, preferring the notional, psychological reassurance of controlling the clutch pedal at the start. The transmission also featured an automatic system. If the "auto-up" button is pressed, which could be at any time on the circuit, it will do automatic changes until the next time drivers call for a gear change with the levers. The software is so programmed that it recognises when a driver calls for a gear change before the automatic system is ready to do so and immediately hands back control to the manual system. Electronics By 1993, Formula One had become very much a high-tech arena and the FW15C was at the very forefront, featuring active suspension, anti-lock brakes, traction control, telemetry, drive-by-wire controls, pneumatic engine valve springs, power steering, semi-automatic transmission, a fully-automatic transmission, and also a continuously variable transmission (CVT), although the latter was only used in testing. As a result, Alain Prost described the car as, "a little Airbus". CVTs have the potential to dramatically increase average engine power over a lap, providing a significant advantage over competing teams. They would have also required the engine to run at a constant speed for a longer period of time, posing design challenges. CVTs were explicitly banned from Formula 1 in 1994, just two weeks after successful tests of the CVT in 1993. While anti-lock brakes and traction control made driving the car on the limit easier, an added complication arose from occasions when the computer systems wrongly interpreted the information they were receiving from their sensors, the active suspension being particularly prone to this from time to time. With so many computer systems onboard the car required three laptop computers to be connected to it every time it was fired up: one each for the engine, the telemetry, and the suspension. The FW15C also featured a push-to-pass system (left yellow button on the steering wheel), which would use the active suspension to lower the car at the rear and eliminate the drag from the diffuser, effectively increasing speed through a lack of downforce. Williams was able to use the electronics, so they could sync up a flawless link that would simultaneously set the engine for another 300 revs, and raise the active suspension for when the driver needed extra speed while overtaking. This system could be seen being used by Hill and Prost numerous times in 1993 while attempting passing manoeuvres. 
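The "auto-up" behaviour described above, where the driver can hand gear changes to the software and reclaim them at any time with a paddle, amounts to a small piece of mode-switching logic. The Python sketch below is a purely illustrative reconstruction of that described behaviour and is in no way Williams's actual gearbox software; the class, thresholds, and gear count are invented for the example.

```python
# Illustrative reconstruction of the described "auto-up" handover logic.
# This is not Williams's software; names, thresholds and gear count are invented.
class GearboxController:
    def __init__(self, upshift_rpm: int = 13_000, top_gear: int = 6):
        self.auto_mode = False           # toggled by the "auto-up" button
        self.gear = 1
        self.upshift_rpm = upshift_rpm   # assumed threshold for automatic upshifts
        self.top_gear = top_gear

    def press_auto_up(self) -> None:
        """Driver presses the auto-up button: shifts become automatic."""
        self.auto_mode = True

    def driver_shift(self, direction: int) -> None:
        """A paddle request immediately hands control back to the manual system."""
        self.auto_mode = False
        self.gear = min(self.top_gear, max(1, self.gear + direction))

    def update(self, engine_rpm: int) -> None:
        """Called every control cycle; performs automatic upshifts in auto mode."""
        if self.auto_mode and engine_rpm >= self.upshift_rpm and self.gear < self.top_gear:
            self.gear += 1

box = GearboxController()
box.press_auto_up()
box.update(engine_rpm=13_200)   # automatic upshift: now in 2nd gear
box.driver_shift(+1)            # manual paddle request: auto mode cancelled, 3rd gear
print(box.gear, box.auto_mode)  # 3 False
```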
So great was the level of technology on the cars that FIA decided to ban several of what they considered to be "driver aids" with immediate effect following the British Grand Prix, leading to the so-called "Weikershof Protocol", by which the ban was postponed to the start of 1994. Drivers An all new driver line-up was featured. Triple world champion Alain Prost had signed with Williams for the 1993 season, having spent the previous year out of motorsport competition on a sabbatical. Reigning Champion Nigel Mansell departed Formula One, over a dispute with Frank Williams about money and the signing of Prost, to race in the American CART series for 1993, while Riccardo Patrese moved to Benetton-Ford. Patrese signed with Benetton in the belief that Prost and Mansell would be the Williams drivers in 1993 and did not know he could have stayed with the Didcot based team following Mansell's departure. Mika Häkkinen was considered for the vacancy left behind before the team decided to promote Damon Hill, the team's test driver for the past two years, who had made two starts for Brabham in 1992. Williams retained this driver pairing in all 16 races in 1993. With McLaren having lost its supply of Honda engines after the Japanese company pulled out of the sport at the end of 1992, triple World Champion Ayrton Senna, who had previously had a test with Williams in 1983, had repeatedly tried to get Frank Williams to sign him and even went so far as to offer his services for free, but a clause in Prost's contract specifically forbade Williams signing Senna as Prost's team-mate and the Brazilian instead opted to remain at McLaren on a race-by-race basis. However, Prost's clause only covered the 1993 season. Performance Williams quickly established themselves as the team to beat, with Prost winning in South Africa by a margin of almost a lap over Senna's McLaren. The FW15C was so dominant in qualifying that Prost and Hill often qualified 1.5 to 2 seconds in front of Schumacher or Senna. For example, at the Brazilian Grand Prix, Prost out-qualified his teammate by a whole second at Interlagos, who was again a second ahead of the eventual winner Senna. In the race Prost retired midway through, a victim of someone else's accident, and Senna managed to get past Hill to win, with the Englishman registering his first podium and points in F1 in second. The third race of the season at Donington Park saw Senna's most dominant performance, with Hill taking second with Prost inheriting third from the Jordan-Hart of Rubens Barrichello late on after the Brazilian lost fuel pressure resulting in his retirement. The Frenchman's race was hampered by intermittent gearbox problems in addition to seven pit stops to change tyres in the changeable conditions. With three races gone Senna lay 12 points ahead of Prost, but it was already becoming clear that even Senna in his prime would struggle to keep ahead of Prost and the superior Williams-Renault, and so it proved with the team going on a run of nine wins in the next ten races. Dominant displays from Prost at Imola and Spain lifted him above Senna in the standings, but Senna regained the lead with his sixth and final win at Monaco before Prost's Canada win gave him back the lead. By now Hill was starting to consistently challenge his teammate. 
The Englishman was in touch with Prost nose to tail virtually throughout the French Grand Prix at Magny-Cours, and seemed to be set fair for his debut win in the British Grand Prix before a rare engine failure 18 laps from the end left the home crowd disappointed. In Germany, Hill came even closer after a stop-go penalty held Prost up, but this time the Englishman's rear tyre suffered a puncture on the penultimate lap, with Prost again claiming the win. In Hungary Hill finally got his first win, a task made easier after Prost stalled on the warm-up lap and had to start from the rear of the grid. Prost fought his way up to fourth before a rear wing failure ended his bid for a points finish, but a retirement for Senna meant there was no ground lost. Hill made up for lost time completing a hat trick of wins in Belgium and Italy. Hill and Prost's 1-3 finishes, respectively, at Spa secured Williams their sixth Constructors' Championship. Senna experienced a terrible run of fortune but was still in with a mathematical chance of the title as the teams met in Portugal, but Prost's second place was enough to secure his fourth World Drivers' Championship, prompting the Frenchman to announce his retirement at the end of the year. In the last two races in Japan and Australia respectively, Prost followed Senna home, which meant Hill dropped to third behind the Brazilian in the final Championship standings. Criticisms The primary criticism of the FW15C was an inconsistent handling manner arising from occasions when the computer systems wrongly interpreted the information they were receiving from their sensors, or due to air being present in the hydraulics of the active system. Slight changes to the weight distribution of this latest Williams produced a car that was slightly more responsive than its immediate predecessor, if rather more nervous when driven on the limit. In particular this trait manifested itself in slight rear-end instability under braking, most notable on high speed circuits such as Hockenheim when the car was operating in a low downforce trim. It was a trait that particularly caused problems for the smoother driving style of Alain Prost who could set up the car best when it had even handling characteristics. Alain Prost was quoted as saying: "I think that an active suspension car with traction control needs to be thrown around quite a lot, whereas I like to drive a little more quietly, perhaps using the throttle more sensitively, which perhaps is not needed quite so much in an active car". In the wet the car also exhibited a tendency to momentarily lock the rear wheels during downchanges. This however was alleviated with the fitting of a power throttle system at Imola ensuring that the revs could be perfectly matched when the clutch was engaged. Prost also later said that although he was amazed at the general quality and technology of the car, the FW15C was not his favorite car to drive and work with, as it was such a different car to work with than any of the other cars he had driven before. FW15D In early 1994, two FW15C chassis were modified to run without electronic driving aids, which were banned for 1994. The FW15D was an interim car with passive suspension, and no traction control. The cars were tested by Senna and Hill in January 1994, but the car was far from optimal, as it was originally designed around an active suspension system. 
The FW15D was used during the Rothmans Williams Renault launch at Estoril, on 19 January 1994 (Notably with Damon Hill driving with an onboard camera recorder, and not Ayrton Senna as many have incorrectly assumed). The car was retired once the FW16 became available, yet it was notably the quicker car of the two in early season testing laptimes. Complete Formula One results (key) (results in bold indicate pole position; results in italics indicate fastest lap) References Williams Formula One cars Vehicles with CVT transmission Formula One championship-winning cars
46728288
https://en.wikipedia.org/wiki/Cambridge%20Distributed%20Computing%20System
Cambridge Distributed Computing System
The Cambridge Distributed Computing System is an early discontinued distributed operating system, developed in the 1980s at Cambridge University. It grew out of the Cambridge Ring local area network, which it used to interconnect computers. The Cambridge system connected terminals to "processor banks". At login, a user would request from the bank a machine with a given architecture and amount of memory. The system then assigned to the user a machine that served, for the duration of the login session, as their "personal" computer. The machines in the processor bank ran the TRIPOS operating system. Additional special-purpose servers provided file and other services. At its height, the Cambridge system consisted of some 90 machines. References Distributed operating systems Discontinued operating systems History of computing in the United Kingdom University of Cambridge Computer Laboratory 68k architecture
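The processor-bank model described above amounts to matching a login request (architecture plus memory) against a pool of machines and reserving one for the session. The Python sketch below is a loose, hypothetical illustration of that allocation step, not the actual Cambridge system software; the machine names and attributes are invented.

```python
# Loose illustration of a processor-bank allocation at login.
# Not the actual Cambridge system; machine names and attributes are invented.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Machine:
    name: str
    architecture: str
    memory_kb: int
    in_use: bool = False

BANK = [
    Machine("proc-01", "68000", 512),
    Machine("proc-02", "68000", 1024),
    Machine("proc-03", "LSI4", 256),
]

def allocate(architecture: str, min_memory_kb: int) -> Optional[Machine]:
    """Reserve the first free machine matching the requested architecture
    and memory; it stays assigned for the duration of the login session."""
    for machine in BANK:
        if (not machine.in_use and machine.architecture == architecture
                and machine.memory_kb >= min_memory_kb):
            machine.in_use = True
            return machine
    return None

session_machine = allocate("68000", 1024)   # a user asks for a 68000 with 1 MB
print(session_machine)
```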
36529953
https://en.wikipedia.org/wiki/OpenQRM
OpenQRM
openQRM is a free and open-source cloud computing management platform for managing heterogeneous data centre infrastructures. It provides an automated workflow engine for bare-metal and virtual machine deployment, as well as for other IT subsystems, enabling management and monitoring of data centre and cloud capacity. The openQRM platform manages a data centre's infrastructure to build private, public and hybrid infrastructure as a service clouds. openQRM orchestrates storage, network, virtualisation, monitoring, and security technologies to deploy multi-tier services (e.g. compute clusters) as virtual machines on distributed infrastructures, combining both data centre resources and remote cloud resources, according to allocation policies. The openQRM platform emphasises a separation of hardware (physical servers and virtual machines) from software (operating system server-images). Hardware is treated agnostically as a computing resource that should be replaceable without the need to reconfigure the software. Supported virtualisation solutions include KVM, Linux-VServer, OpenVZ, VMware ESXi, Hyper-V and Xen. Virtual machines of these types are managed transparently via openQRM. P2V (physical to virtual), V2P (virtual to physical), and V2V (virtual to virtual) migration are possible, as well as transitioning from one virtualisation technology to another with the same VM. openQRM is developed and distributed by OPENQRM AUSTRALIA PTY LTD, a company located in New South Wales, Australia. The openQRM Enterprise Edition is a commercially backed, extended product for professional users offering support options and access to additional features. Users combine the services they require: additional technologies and services (OpenvSwitch, KVM, ESXi, OpenStack, AWS EC2, MS Azure, etc.) can be integrated through a large variety of plug-ins to fit the use case. Over 50 plug-ins are available for openQRM Enterprise. Plug-Ins openQRM utilises plug-ins to customise its functionality. These plug-ins allow for increased integration and compatibility. The plug-in library falls into the categories Cloud, Container, Deployment, Documentation, High-Availability, Management, Miscellaneous, Monitoring, Network, Storage and Virtualisation. History openQRM was initially released by the Qlusters company and went open-source in 2004. Qlusters ceased operations, and openQRM was left in the hands of the openQRM community. In November 2008, the openQRM community released version 4.0, which included a complete port of the platform from Java to PHP/C/Perl/Shell. In 2020, openQRM Enterprise GmbH had its assets and intellectual property acquired by Fiveways International Ltd, which appointed OPENQRM AUSTRALIA PTY LTD as the master distributor. Latest release Release 5.3.8, on 30.01.2018, is an openQRM release for both the Community and Enterprise editions. Dependencies have been updated to ensure compatibility with the latest Linux distributions. There is also an enhanced check for PHP versions in place, as well as full support for PHP 7. While openQRM fully supports PHP 7, some integrated technologies have not yet completed this step; in particular, the Magento, Mantis and i-doit integrations may still require PHP 5. The new openQRM 5.3.8 is tested on Debian 8/9, Ubuntu 16.x and 17.x, and CentOS 7. The 5.3.5 Community Release includes updated package dependencies.
The 5.3.2 Community Release includes enhanced package dependencies for latest Ubuntu, Debian, CentOS and removed rpmforge repository dependencies. The 5.3.1 Community Release includes important security updates, bugfixes and enhancements, especially for the KVM and Cloud plug-ins. See also Cloud computing Cloud computing comparison Cloud infrastructure References External links OpenQRM Website Cloud infrastructure Free software programmed in Java (programming language) Free software programmed in C Free software programmed in PHP Free software for cloud computing Virtualization-related software for Linux
19134265
https://en.wikipedia.org/wiki/Chico%20DeBarge%20%28album%29
Chico DeBarge (album)
Chico DeBarge is the eponymous debut album from R&B/soul singer Chico DeBarge. It includes the hit single "Talk To Me", and it peaked at number 90 on the Billboard 200 album chart. Chico DeBarge appeared with his brothers from DeBarge on Season 3 of Punky Brewster, in the episode "Reading, Writing, and Rock & Roll" (October 30, 1987), in which he sang "Cross That Line". Track listing Personnel Chico DeBarge - vocals, keyboards (4, 8, 10), synthesizer (4, 8, 10) Skip Drinkwater - drum computer programming (1) Nick Mundy - guitar (1), synthesizer (1), drum computer programming (1), backing vocals (1) Steve Dubin - bass synthesizer (1), drum computer programming (1, 7), percussion (7) Paul Fox - synthesizer (1), drum computer programming (1), programming (4, 10) Curtis Anthony Nolen - keyboards (2, 3), drum programming (2), backing vocals (2) Jay Gruska - keyboards (3, 8 (overdubs), 9), drum programming (3, 9) Paul Jackson Jr. - rhythm guitar (4) Ralph Benatar - keyboards (5), soprano saxophone (5), Linn drums (5), programming (8) Lorenzo Pryor - bass (5) Larry Lingle - electric guitar (5) Michael Dorian - keyboards (5) Thomas Organ - guitar (6) Gary Taylor - DMX synthesizer programming (6), backing vocals (6) Dan Segal - synthesizer programming (6) Kevin O'Neal - synthesizer (6) Neil Stubenhause (sic) - bass (7) Dann Huff - guitar (7) Tommy Faragher - synthesizer (7) Nathan East - bass (9) Dee Dee Belson - backing vocals (1) Maxie Anderson - backing vocals (1, 2, 3, 9) Alfie Silas - backing vocals (2, 3, 9) Phyllis St. James - backing vocals (2, 3, 9) Darryl DeBarge - backing vocals (4) James DeBarge - backing vocals (4, 5, 7) David Paul Bryant - backing vocals (7) DeBarge - backing vocals (8) References 1986 debut albums Chico DeBarge albums Motown albums Dance-pop albums by American artists Freestyle music albums
22726888
https://en.wikipedia.org/wiki/List%20of%20Netflix-compatible%20devices
List of Netflix-compatible devices
Netflix is an American global provider of streaming movies and TV series. Summary table This is a list of devices that are compatible with Netflix streaming services. Platforms The devices in this list feature hardware that is capable of streaming Netflix: Amazon Fire TV, Kindle Fire, Kindle Fire HD, Kindle Fire HDX Android smartphones and tablets (in SD, or more correctly 480p, while "HD playback is available on select Android devices") Android TV devices Apple: Apple TV, iPad, iPhone, iPod Touch Barnes & Noble Nook Color, Nook Tablet, Nook HD D-Link Boxee Box (only supports Netflix USA and Canada; not Netflix in other countries) Fetch TV, including both the Fetch Mighty and the Fetch Mini (only available in Australia) Google Chromecast can receive a Netflix stream from a supported mobile device or Chrome Google TV devices Insignia Blu-ray Disc players and home theater systems LG Electronics: some Blu-ray Disc players, TVs, and home theater systems Microsoft: Windows 11, Windows 10, Windows 8, Windows Phone, Xbox 360, Xbox One, and Xbox Series X/S Panasonic: some Blu-ray Disc players, televisions and home theater systems Philips: some Blu-ray Disc players and TVs Roku streaming player Samsung: some Blu-ray Disc players, home theater systems, smartphones, TVs, and tablets Seagate FreeAgent Theater+ HD media player Sharp: some LED/LCD TVs and Blu-ray Disc players Sony Blu-ray Disc players, televisions, PlayStation 3, PlayStation Vita, PlayStation 4, and PlayStation 5 TiVo DVRs (HD, HD XL, Series3, Premiere, Premiere XL, Roamio, and Bolt boxes) Viewsonic VMP75 Vizio: some Blu-ray Disc players and TVs Western Digital WD Live Plus media player Yamaha BD-A1020 YouView set-top boxes in the UK Devices listed here previously had Netflix support but were later discontinued: Nintendo Wii Nintendo 3DS and Wii U Sony PlayStation 2 (only in the United States and Brazil via streaming disc) Software support Compatible web browsers by platform: macOS: Hardware requirement: Intel Core Duo 1.83-gigahertz (GHz) or higher processor; 512MB RAM. Microsoft Silverlight player: Intel-based Macs running OS X 10.4.11 or later; compatible browsers are Safari 3 (or higher) and Firefox 3 (or higher). HTML5 player: Intel-based Macs running OS X 10.6 or later; compatible browsers are Safari 8* (or higher) and Google Chrome 37 (or higher). *Note: Using the HTML5 player with Safari 8 (or higher) requires Macs with certain late Intel Sandy Bridge processors, or any Intel Ivy Bridge or later generation processor, running OS X 10.10. Microsoft Windows: Hardware requirement: x86 or x64 (64-bit mode used by Internet Explorer only) 1.6-gigahertz (GHz) or higher processor; 512MB RAM. Microsoft Silverlight player: Windows XP Service Pack 3 or later; compatible browsers are Internet Explorer 6 (or higher), Firefox 3 (or higher) and Google Chrome 4 (or higher). HTML5 player: Windows XP Service Pack 2 or later; compatible browsers are Internet Explorer 11* (or higher), Microsoft Edge and Google Chrome 37 (or higher). *Note: Using the HTML5 player with Internet Explorer 11 (or higher) requires Windows 8.1 or later. 4K playback requires a 4K display, an Intel Kaby Lake or later generation processor, Windows 10 or later, and either Microsoft Edge or the UWP Netflix app. Linux: HTML5 player: Ubuntu 12.04 LTS, Ubuntu 14.04 LTS and later; PCLinuxOS supported after October 10, 2014.
Compatible browsers are Google Chrome 37 (or higher). In addition to official support in Chrome, unofficial support is provided for other browsers such as Firefox on Ubuntu-based distributions with the use of Wine and other community-maintained packages. Other software options: Android: Version 2.3 and above. (HDR playback is available on the LG G6, LG V30, Samsung Galaxy Note 8, and Sony Xperia XZ1. 4K and HDR playback is available on the Sony Xperia XZ Premium) Google Chrome OS: Any Chrome OS device works. Previously, ARM-based Chromebooks were not compatible because of a plugin issue. However, with the introduction of the HTML5 player, even such devices now work. iOS: iPad, iPhone, iPod Touch, Apple TV (HDR playback is available on the iPhone 8, 8 Plus, and X or later, and the 2nd-gen iPad Pro or later running iOS 11 or later) tvOS: Apple TV (4K and HDR playback is available on the Apple TV 4K and newer) Windows Media Center: Windows XP Media Center Edition, Windows Vista (Home Premium, Ultimate), Windows 7 (Home Premium, Professional, Enterprise, Ultimate), Windows 8 (Pro via either Windows 8 Pro Pack or Windows 8 Media Center Pack). Windows Phone. UWP: Windows 10 (Home, Pro and Mobile editions), Windows 11, Xbox One, Xbox Series X/S Video game consoles At E3 2008, Microsoft announced a deal to distribute Netflix videos over Xbox Live. This service was launched on November 19, 2008, to Xbox 360 owners with a Netflix Unlimited subscription and an Xbox Live Gold subscription, allowing them to stream films and television shows directly from their Netflix Instant Queue via an application on the Dashboard. Xbox Live's Party Mode had a popular feature where users could create a virtual party and bring their avatars to a virtual theater to watch Netflix simultaneously and even send comments and smiley faces to each other. This feature was discontinued on December 6, 2011. In November 2009, the Netflix service became available on PlayStation 3. The set-up was similar to that on the Xbox 360, allowing Netflix subscribers to stream films and television shows from their Instant Queue to watch on the console. Unlike on the Xbox 360, the Netflix service for PlayStation 3 was originally available on a Blu-ray Disc (available free to subscribers). On October 19, 2010, a downloadable application was made available through the PlayStation Network, making the Blu-ray Disc no longer necessary. Users do not have to pay for use of the service other than the monthly Netflix subscription. In 2012, the PlayStation 3 became the device most used to watch Netflix. In spring 2010, the Netflix service became available on the Wii. The service allows the console to stream content in a user's Instant Queue. Initially, a streaming disc specifically for the Wii was required, along with an Internet connection to the console. Besides a Netflix account with unlimited streaming, there are no additional costs for the service. In contrast to the other two consoles, the Wii is not capable of HD resolution. The Wii streaming disc was released for testing to customers on March 25, 2010, and was released to all registered Netflix members on April 12, 2010. On October 18, 2010, Netflix was released in the United States and Canada as a free downloadable application on the Wii Shop Channel, making the streaming disc no longer necessary; the channel was released in the United Kingdom and Ireland on January 9, 2012.
After the channel was delisted from the Wii Shop Channel, support for Netflix on the Wii was discontinued on January 30, 2019. Netflix confirmed that the end of service on the Wii console was Nintendo's decision, as it coincided with Nintendo's discontinuation of the Wii Shop Channel. The Netflix service launched on the Nintendo 3DS on July 14, 2011. The Netflix application for PlayStation Vita launched alongside the device on February 22, 2012, and was made available for free download via the PlayStation Store. The Wii's successor console, the Wii U, began supporting Netflix shortly after its North American release on November 18, 2012. Netflix was later embedded in the Wii U's own Nintendo TVii app in March 2013. On November 15, 2013, the Netflix app became available for download on the PlayStation 4 via the PlayStation Store upon the console's U.S. release. Shortly after Microsoft's November 22, 2013, release of the Xbox One in the United States, Netflix became available for download as an app for the console. In 2014, Microsoft changed the terms for Xbox Live, no longer requiring a Gold subscription to access Netflix and any other online streaming service on Xbox consoles; however, a Netflix subscription is still required to access content. Set-top boxes In May 2008, Roku released the first set-top box, The Netflix Player by Roku, to stream Netflix's Instant Watch movies directly to television sets. The device provided unlimited access to the Netflix streaming media catalog for all subscribers. Blu-ray Disc players On August 6, 2008, LG demonstrated the world's first Blu-ray Disc player with Netflix streaming embedded. The product was launched in U.S. stores later that month. Hastings stated in the announcement that "LG Electronics was the first of our technology partners to publicly embrace our strategy for getting the Internet to the TV, and is the first to introduce a Blu-ray player that will instantly stream movies and TV episodes from Netflix to the TV." Subsequently, Netflix agreed to stream movies to two of Samsung's Blu-ray Disc players. Soon after, it agreed to stream movies to TiVo DVRs. Televisions In January 2009, Netflix partnered with Vizio and LG to stream movies to newer HDTV set models. In July 2009, Sony partnered with Netflix to enable Sony BRAVIA Internet Platforms to access instant queues for Netflix users. Any Netflix member with an Internet-enabled BRAVIA HDTV can link their account to their television and stream videos from their queue. In 2012, Sony released a firmware "update" for some of its older BRAVIA TVs which terminated Netflix and YouTube support. Among affected products was the KDL-46HX823. The firmware "update" violated the Sale of Goods Act 1979 and Consumer Rights Act 2015. The 2010 line of Panasonic HDTVs with Viera Cast functionality gained the ability to stream Netflix content directly to the television. With the 2010 release of the Google TV, Netflix streaming was included as a built-in application. A Netflix application is available to download on Samsung Smart TV through the Samsung Apps Service, and is preloaded on higher-end sets. Handheld devices In September 2009, Hastings expressed his desire to expand his company's video-streaming service to Apple's iPhone and iPod Touch mobile devices, once the Xbox 360 exclusivity deal expired. In April 2010, the Netflix app debuted on the App Store for use with the iPad.
The version for iPod Touch and iPhone was released on August 26, 2010 via the App Store. On March 15, 2011, Netflix was made available for Android phones. However, not all phones using the OS can use the application due to Digital Rights Management (DRM) issues. The malfunctioning DRM was later removed and the app now works on the majority of Android devices. However, only a very limited set of devices can stream in HD. On July 14, 2011, Netflix became available on the Nintendo 3DS; no 3D content is available at this time. Due to copyright issues, access to Netflix on 3DS is limited by geographic location. In November 2011, Barnes & Noble began shipping Nook Tablets with the Netflix app pre-installed, offering Netflix as an optional app for Nook Color devices. When the PlayStation Vita launched on February 22, 2012, it had a Netflix app built in. Due to copyright issues, access to Netflix on Vita is limited by geographic location. A Sandvine report released in 2013 stated that the company's mobile data usage share doubled over a 12-month period in North America. Operating systems Android Chrome OS iOS Linux macOS Microsoft Windows Windows Phone References External links Netflix-compatible devices Devices
30811
https://en.wikipedia.org/wiki/Troff
Troff
troff, short for "typesetter roff", is the major component of a document processing system developed by AT&T Corporation for the Unix operating system. troff and the related nroff were both developed from the original roff. While nroff was intended to produce output on terminals and line printers, troff was intended to produce output on typesetting systems, specifically the Graphic Systems CAT that had been introduced in 1972. Both used the same underlying markup language, and a single source file could normally be used by nroff or troff without change. troff features commands to designate fonts, spacing, paragraphs, margins, footnotes and more. Unlike many other text formatters, troff can position characters arbitrarily on a page, even overlapping them, and has a fully programmable input language. Separate preprocessors are used for more convenient production of tables, diagrams, and mathematics. Inputs to troff are plain text files that can be created by any text editor. Extensive macro packages have been created for various document styles. A typical distribution of troff includes the me macros for formatting research papers, man and mdoc macros for creating Unix man pages, mv macros for creating mountable transparencies, and the ms and mm macros for letters, books, technical memoranda, and reports. History troff's origins can be traced to a text-formatting program called RUNOFF, which was written by Jerome H. Saltzer for MIT's CTSS operating system in the mid-1960s. (The name allegedly came from the phrase "I'll run off a document.") Bob Morris ported it to the GE 635 architecture and called the program roff (an abbreviation of runoff). It was rewritten as rf for the PDP-7, and at the same time (1969), Doug McIlroy rewrote an extended and simplified version of roff in the BCPL programming language. The first version of Unix was developed on a PDP-7 that was available at Bell Labs. In 1971, the developers wanted to get a PDP-11 for further work on the operating system. To justify the cost of this system, they proposed that they would implement a document-formatting system for the Bell Labs patents department. This first formatting program was a reimplementation of McIlroy's roff, written by Joe F. Ossanna. When they needed a more flexible language, a new version of roff called nroff (newer "roff") was written, which provided the basis for all future versions. When they got a Graphic Systems CAT phototypesetter, Ossanna modified nroff to support multiple fonts and proportional spacing. Dubbed troff, for typesetter roff, its sophisticated output amazed the typesetter manufacturer and confused peer reviewers, who thought that manuscripts using troff had been published before. As such, the name troff is pronounced "tee-roff" rather than "troff". With troff came nroff (they were actually almost the same program), which produced output for line printers and character terminals. It understood everything troff did, and ignored the commands which were not applicable, e.g., font changes. Ossanna's troff was written in PDP-11 assembly language and produced output specifically for the CAT phototypesetter. He rewrote it in C, although it was now 7,000 lines of uncommented code and still dependent on the CAT. As the CAT became less common and was no longer supported by the manufacturer, the need to make troff support other devices became a priority. Ossanna died before this task was completed, so Brian Kernighan took on the task of rewriting troff.
The newly rewritten version produced a device-independent code which was very easy for post-processors to read and translate to the appropriate printer codes. Also, this new version of troff (often called ditroff for device independent troff) had several extensions, which included drawing functions. The program's documentation defines the output format of ditroff, which is used by many modern troff clones like GNU groff. The troff collection of tools (including pre- and post-processors) was eventually called Documenter's WorkBench (DWB), and was under continuous development in Bell Labs and later at the spin-off Unix System Laboratories (USL) through 1994. At that time, SoftQuad took over the maintenance, although Brian Kernighan continued to improve troff on his own. Thus, there are at least the following variants of the original Bell Labs troff in use: the SoftQuad DWB, based on USL DWB 2.0 from 1994; the DWB 3.4 from Lucent Software Solutions (formerly USL); troff, Plan 9 edition. While troff has been supplanted by other programs such as Interleaf, FrameMaker, and LaTeX, it is still being used quite extensively. It remains the default formatter for the UNIX documentation. The software was reimplemented as groff for the GNU system beginning in 1990. In addition, due to the open sourcing of Ancient UNIX systems, as well as modern successors such as the ditroff-based open-sourced versions found on OpenSolaris and Plan 9 from Bell Labs, there are several versions of AT&T troff (CAT and ditroff-based) available under various open-source licenses. Macros Troff includes sets of commands called macros that are run before starting to process the document. These macros include setting up page headers and footers, defining new commands, and generally influencing how the output will be formatted. The command-line argument for including a macro set is -mname, which has led to many macro sets being known as the base filename with a leading m. The standard macro sets, with leading m are: man for creating manual pages mdoc for semantically-annotated manual pages, which are better adapted to mandoc conversion to other formats. mandoc is a fusion that supports both sets of manual commands. me for creating research papers mm for creating memorandums ms''' for creating books, reports, and technical documentation A more comprehensive list of macros available is usually listed in a tmac(5) manual page. Preprocessors As troff evolved, since there are several things which cannot be done easily in troff, several preprocessors were developed. These programs transform certain parts of a document into troff input, fitting naturally into the use of "pipelines" in Unix — sending the output of one program as the input to another (see pipes and filters). Typically, each preprocessor translates only sections of the input file that are specially marked, passing the rest of the file through unchanged. The embedded preprocessing instructions are written in a simple application-specific programming language, which provides a high degree of power and flexibility. eqn preprocessor allows mathematical formulae to be specified in simple and intuitive manner. tbl is a preprocessor for formatting tables. refer (and the similar program bib) processes citations in a document according to a bibliographic database. Three preprocessors provide troff with drawing capabilities by defining a domain-specific language for describing the picture. 
pic is a procedural programming language providing various drawing functions like circle and box. ideal allows the drawing of pictures declaratively, deriving the picture by solving a system of simultaneous equations based on vectors and transformations described by its input. grn describes the pictures through graphical elements drawn at absolute coordinates, based on the gremlin file format defined by an early graphics workstation. Yet more preprocessors allow the drawing of more complex pictures by generating output for pic. grap draws charts, like scatter plots and histograms. chem draws chemical structure diagrams. dformat'' draws record-based data structures. Reimplementations groff is GNU Project's free replacement for troff and nroff. unroff is an extensible replacement of troff written in Scheme Heirloom troff is based on troff from OpenSolaris. It includes support for OpenType fonts, improved support for Type 1 fonts, support for Unicode, a new paragraph formatting algorithm, and a groff compatibility mode. mandoc is a specialised compiler/formatter only for the man and mdoc macro packages. Neatroff is a new troff implementation, including support for advanced font features and bi-directional text. See also Desktop publishing DocBook groff GNU troff/nroff replacement nroff SGML TeX Scribe (markup language) References External links The Text Processor for Typesetters The history of troff OpenSolaris-derived port of troff and related programs User manual for the Plan 9 edition of troff (In PostScript format) A History of UNIX before Berkeley section 3 describes the history of roff, nroff, troff, ditroff, tbl, eqn, and more. The original source code of nroff, troff and the preprocessors from AT&T Bell Labs in form of the Documenter's Workbench (DWB) Release 3.3 (ported to current UNIX systems from http://www2.research.att.com/sw/download) Free typesetting software Page description languages History of software Plan 9 commands Unix text processing utilities
1109754
https://en.wikipedia.org/wiki/SGI%20O2
SGI O2
The O2 was an entry-level Unix workstation introduced in 1996 by Silicon Graphics, Inc. (SGI) to replace their earlier Indy series. Like the Indy, the O2 used a single MIPS microprocessor and was intended to be used mainly for multimedia. Its larger counterpart was the SGI Octane. The O2 was SGI's last attempt at a low-end workstation. Hardware System architecture Originally known as the "Moosehead" project, the O2 architecture featured a proprietary high-bandwidth Unified Memory Architecture (UMA) to connect system components. A PCI bus is bridged onto the UMA with one slot available. It had a designer case and an internal modular construction. Two SCSI drives could be mounted on special caddies (1 in the later R10000/R12000 models due to heat constraints) and an optional video capture / sound cassette mounted on the far left side. CPU The O2 comes in two distinct CPU flavours; the low-end MIPS 180 to 350 MHz R5000- or RM7000-based units and the higher-end 150 to 400 MHz R10000- or R12000-based units. The 200 MHz R5000 CPUs with 1 MB L2-cache are generally noticeably faster than the 180 MHz R5000s with 512 KB cache. There is a hobbyist project that has successfully retrofitted a 600 MHz RM7xxx MIPS processor into the O2. Memory There are eight DIMM slots on the motherboard and memory, and all O2s are expandable to 1 GB using proprietary 239-pin SDRAM DIMMs. The Memory & Rendering Engine (MRE) ASIC contains the memory controller. Memory is accessed via a 133 MHz 144-bit bus, of which 128 bits are for data and the remaining for ECC. This bus is interfaced by a set of buffers to the 66 MHz 256-bit memory system. I/O I/O functionality is provided by the IO Engine ASIC. The ASIC provides a 64-bit PCI bus, an ISA bus, two PS/2 ports for keyboard and mouse, and a 10/100 Base-T Ethernet port. The PCI bus has one 64-bit slot, but the ISA bus is present solely for attaching a Super I/O chip to provide serial and parallel ports. Disks The O2 carries an UltraWide SCSI drive subsystem (Adaptec 7880). Older O2's generally have 4x speed Toshiba CD-ROMs, but any Toshiba SCSI CD-ROM can be used (as well as from other manufacturers, the bezel replacement however is designed to fit Toshiba design and also IRIX cannot utilize CD-DA mode other than Toshiba). Later units have Toshiba DVD-ROMs. The R5000/RM7000 units have two available drive sleds for SCA UltraWide SCSI hard-disks. Because the R10000/R12000 CPU module has a much higher cooling-fan assembly, the R10000/R12000 units have room for only one drive-sled. Graphics The O2 used the CRM chipset specifically developed by SGI for the O2. It was developed to be a low-cost implementation of the OpenGL 1.1 architecture with ARB image extensions in both software and hardware. The chipset consists of the microprocessor, and the ICE, MRE and Display ASICs. All display list and vertex processing, as well as the control of the MRE ASIC is performed by the microprocessor. The ICE ASIC performs the packaging and unpacking of pixels as well as operations on pixel data. The MRE ASIC performs rasterization and texture mapping. Due to the unified memory architecture, the texture and framebuffer memory comes from main memory, resulting in a system that has a variable amount of each memory. The Display Engine generates analog video signals from framebuffer data fetched from the memory for display. Operating systems Several operating systems support the O2: IRIX 6.3 or 6.5.x (native platform). Linux port is working, but some drivers are missing. 
Both Gentoo and Debian have releases that work on the O2. See the IP32 port page on linux-mips.org. OpenBSD has run on the O2 since OpenBSD 3.7. See the sgi port page. NetBSD has run on the O2 since NetBSD 2.0. It was the first Open Source operating system to be ported to the O2. See the sgimips port page. Performance The SGI O2 has an Imaging and Compression Engine (ICE) application-specific integrated circuit (ASIC) for processing streaming media and still images. ICE operates at 66 MHz and contains a R3000-derived microprocessor serving as the scalar unit to which a 128-bit SIMD unit is attached using the MIPS coprocessor interface. ICE operates on eight 16-bit or sixteen 8-bit integers, but still provides a significant amount of computational power which enables the O2 to do video decoding and audio tasks that would require a much faster CPU if done without SIMD instructions. ICE only works with the IRIX operating system, as this is the only system that has drivers capable of taking advantage of this device. The Unified Memory Architecture means that the O2 uses main memory for graphics textures, making texturing polygons and other graphics elements trivial. Instead of transferring textures over a bus to the graphics subsystem, the O2 passes a pointer to the texture in main memory which is then accessed by the graphics hardware. This makes using large textures easy, and even makes using streaming video as a texture possible. Since the CPU performs many of geometry calculations, using a faster CPU will increase the speed of a geometry-limited application. The O2's graphics is known to have slower rasterization speed than the Indigo2's Maximum IMPACT graphics boards, though the Maximum IMPACT graphics is limited to 4 MB of texture memory, which can result in thrashing, whereas the O2 is limited only by available memory. While CPU frequencies of 180 to 400 MHz seem low today, when the O2 was released in 1996, these speeds were on par with or above the current offerings for the x86 family of computers (cf. Intel's Pentium and AMD's K5). Uses O2s were often used in the following fields: Imaging (especially medical) On-air TV graphics; the most widespread example of an O2 running TV graphics is the Weather Star XL computer for The Weather Channel Desktop workstation 3D modelling Analogue video post-production Defense industries References External links SGIstuff: O2 Remotely installing SGI IRIX 6.5 from a GNU/Linux server SGI O2 Power Supply basics O2 64-bit computers
4459886
https://en.wikipedia.org/wiki/Password%20strength
Password strength
Password strength is a measure of the effectiveness of a password against guessing or brute-force attacks. In its usual form, it estimates how many trials an attacker who does not have direct access to the password would need, on average, to guess it correctly. The strength of a password is a function of length, complexity, and unpredictability. Using strong passwords lowers overall risk of a security breach, but strong passwords do not replace the need for other effective security controls. The effectiveness of a password of a given strength is strongly determined by the design and implementation of the factors (knowledge, ownership, inherence). The first factor is the main focus in this article. The rate at which an attacker can submit guessed passwords to the system is a key factor in determining system security. Some systems impose a time-out of several seconds after a small number (e.g. three) of failed password entry attempts. In the absence of other vulnerabilities, such systems can be effectively secured with relatively simple passwords. However, the system must store information about the user's passwords in some form and if that information is stolen, say by breaching system security, the user's passwords can be at risk. In 2019, the United Kingdom's NCSC analysed public databases of breached accounts to see which words, phrases and strings people used. Top of the list was 123456, appearing in more than 23 million passwords. The second-most popular string, 123456789, was not much harder to crack, while the top five included "qwerty", "password" and 1111111. Password creation Passwords are created either automatically (using randomizing equipment) or by a human; the latter case is more common. While the strength of randomly chosen passwords against a brute-force attack can be calculated with precision, determining the strength of human-generated passwords is difficult. Typically, humans are asked to choose a password, sometimes guided by suggestions or restricted by a set of rules, when creating a new account for a computer system or internet website. Only rough estimates of strength are possible since humans tend to follow patterns in such tasks, and those patterns can usually assist an attacker. In addition, lists of commonly chosen passwords are widely available for use by password guessing programs. Such lists include the numerous online dictionaries for various human languages, breached databases of plaintext, and hashed passwords from various online business and social accounts, along with other common passwords. All items in such lists are considered weak, as are passwords that are simple modifications of them. Although random password generation programs are available nowadays which are meant to be easy to use, they usually generate random, hard to remember passwords, often resulting in people preferring to choose their own. However, this is inherently insecure because the person's lifestyles, entertainment preferences, and other key individualistic qualities usually come into play to influence the choice of password, while the prevalence of online social media has made obtaining information about people much easier. Password guess validation Systems that use passwords for authentication must have some way to check any password entered to gain access. 
If the valid passwords are simply stored in a system file or database, an attacker who gains sufficient access to the system will obtain all user passwords, giving the attacker access to all accounts on the attacked system and possibly other systems where users employ the same or similar passwords. One way to reduce this risk is to store only a cryptographic hash of each password instead of the password itself. Standard cryptographic hashes, such as the Secure Hash Algorithm (SHA) series, are very hard to reverse, so an attacker who gets hold of the hash value cannot directly recover the password. However, knowledge of the hash value lets the attacker quickly test guesses offline. Password cracking programs are widely available that will test a large number of trial passwords against a purloined cryptographic hash. Improvements in computing technology keep increasing the rate at which guessed passwords can be tested. For example, in 2010, the Georgia Tech Research Institute developed a method of using GPGPU to crack passwords much faster. Elcomsoft invented the usage of common graphic cards for quicker password recovery in August 2007 and soon filed a corresponding patent in the US. By 2011, commercial products were available that claimed the ability to test up to 112,000 passwords per second on a standard desktop computer, using a high-end graphics processor for that time. Such a device will crack a six-letter single-case password in one day. Note that the work can be distributed over many computers for an additional speedup proportional to the number of available computers with comparable GPUs. Special key stretching hashes are available that take a relatively long time to compute, reducing the rate at which guessing can take place. Although it is considered best practice to use key stretching, many common systems do not. Another situation where quick guessing is possible is when the password is used to form a cryptographic key. In such cases, an attacker can quickly check to see if a guessed password successfully decodes encrypted data. For example, one commercial product claims to test 103,000 WPA PSK passwords per second. If a password system only stores the hash of the password, an attacker can pre-compute hash values for common passwords variants and for all passwords shorter than a certain length, allowing very rapid recovery of the password once its hash is obtained. Very long lists of pre-computed password hashes can be efficiently stored using rainbow tables. This method of attack can be foiled by storing a random value, called a cryptographic salt, along with the hash. The salt is combined with the password when computing the hash, so an attacker precomputing a rainbow table would have to store for each password its hash with every possible salt value. This becomes infeasible if the salt has a big enough range, say a 32-bit number. Unfortunately, many authentication systems in common use do not employ salts and rainbow tables are available on the Internet for several such systems. Entropy as a measure of password strength It is usual in the computer industry to specify password strength in terms of information entropy, which is measured in bits and is a concept from information theory. Instead of the number of guesses needed to find the password with certainty, the base-2 logarithm of that number is given, which is commonly referred to as the number of "entropy bits" in a password, though this is not exactly the same quantity as information entropy. 
A password with an entropy of 42 bits calculated in this way would be as strong as a string of 42 bits chosen randomly, for example by a fair coin toss. Put another way, a password with an entropy of 42 bits would require 2^42 (4,398,046,511,104) attempts to exhaust all possibilities during a brute force search. Thus, increasing the entropy of the password by one bit doubles the number of guesses required, making an attacker's task twice as difficult. On average, an attacker will have to try half the possible number of passwords before finding the correct one. Random passwords Random passwords consist of a string of symbols of specified length taken from some set of symbols using a random selection process in which each symbol is equally likely to be selected. The symbols can be individual characters from a character set (e.g., the ASCII character set), syllables designed to form pronounceable passwords, or even words from a word list (thus forming a passphrase). The strength of random passwords depends on the actual entropy of the underlying number generator; however, these are often not truly random, but pseudorandom. Many publicly available password generators use random number generators found in programming libraries that offer limited entropy. However, most modern operating systems offer cryptographically strong random number generators that are suitable for password generation. It is also possible to use ordinary dice to generate random passwords. See stronger methods. Random password programs often have the ability to ensure that the resulting password complies with a local password policy; for instance, by always producing a mix of letters, numbers and special characters. For passwords generated by a process that randomly selects a string of symbols of length, L, from a set of N possible symbols, the number of possible passwords can be found by raising the number of symbols to the power L, i.e. N^L. Increasing either L or N will strengthen the generated password. The strength of a random password as measured by the information entropy is just the base-2 logarithm or log2 of the number of possible passwords, assuming each symbol in the password is produced independently. Thus a random password's information entropy, H, is given by the formula: H = log2(N^L) = L log2(N) = L (log N / log 2), where N is the number of possible symbols and L is the number of symbols in the password. H is measured in bits. In the last expression, log can be to any base. {| class="wikitable" style="text-align: right;" |+ Entropy per symbol for different symbol sets ! Symbol set || Symbol count N || Entropy per symbol H |- | align=left|Arabic numerals (0–9) (e.g. PIN) || 10 || 3.322 bits |- | align=left|Hexadecimal numerals (0–9, A–F) (e.g. WEP keys) || 16 || 4.000 bits |- | align=left|Case insensitive Latin alphabet (a–z or A–Z) || 26 || 4.700 bits |- | align=left|Case insensitive alphanumeric (a–z or A–Z, 0–9) || 36 || 5.170 bits |- | align=left|Case sensitive Latin alphabet (a–z, A–Z) || 52 || 5.700 bits |- | align=left|Case sensitive alphanumeric (a–z, A–Z, 0–9) || 62 || 5.954 bits |- | align=left|All ASCII printable characters except space || 94 || 6.555 bits |- | align=left|All Latin-1 Supplement characters || 94 || 6.555 bits |- | align=left|All ASCII printable characters || 95 || 6.570 bits |- | align=left|All extended ASCII printable characters || 218 || 7.768 bits |- | align=left|Binary (0–255 or 8 bits or 1 byte) || 256 || 8.000 bits |- | align=left|Diceware word list || 7776 || 12.925 bits per word |} A binary byte is usually expressed using two hexadecimal characters.
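The entropy-per-symbol figures in the table above follow directly from this formula. As an informal illustration only (the symbol-set sizes are taken from the table, and the eight-symbol length is an arbitrary choice rather than a recommendation), a few lines of Python can reproduce the per-symbol values and the total entropy of a truly random password:

import math

# Entropy per symbol is log2(N) for a set of N equally likely symbols;
# a truly random password of L symbols therefore has H = L * log2(N) bits.
symbol_sets = {
    "Arabic numerals (0-9)": 10,
    "Case insensitive Latin alphabet": 26,
    "Case sensitive alphanumeric": 62,
    "All ASCII printable characters": 95,
    "Diceware word list": 7776,
}

length = 8  # example length in symbols (characters, or words for Diceware)
for name, n in symbol_sets.items():
    per_symbol = math.log2(n)
    total = length * per_symbol
    # There are 2**H possible passwords; on average an attacker must try half of them.
    print(f"{name}: {per_symbol:.3f} bits/symbol, "
          f"{length} symbols = {total:.1f} bits ({2 ** total:.3g} possibilities)")

Eight case-sensitive alphanumeric characters, for example, give roughly 47.6 bits of entropy, which is consistent with the guidance later in this article that short passwords are weak against offline attacks.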
To find the length, L, needed to achieve a desired strength H, with a password drawn randomly from a set of N symbols, one computes: L = ⌈H / log2(N)⌉, where ⌈ ⌉ denotes rounding up to the next largest whole number. This formula can be used to calculate the lengths of truly randomly generated passwords required to achieve desired password entropies for common symbol sets. Human-generated passwords People are notoriously poor at achieving sufficient entropy to produce satisfactory passwords. According to one study involving half a million users, the average password entropy was estimated at 40.54 bits. For example, in one analysis of over 3 million eight-character passwords, the letter "e" was used over 1.5 million times, while the letter "f" was used only 250,000 times. A uniform distribution would have had each character being used about 900,000 times. The most common number used is "1", whereas the most common letters are a, e, o, and r. Users rarely make full use of larger character sets in forming passwords. For example, hacking results obtained from a MySpace phishing scheme in 2006 revealed 34,000 passwords, of which only 8.3% used mixed case, numbers, and symbols. The full strength associated with using the entire ASCII character set (numerals, mixed case letters and special characters) is only achieved if each possible password is equally likely. This seems to suggest that all passwords must contain characters from each of several character classes, perhaps upper and lower case letters, numbers, and non-alphanumeric characters. In fact, such a requirement is a pattern in password choice and can be expected to reduce an attacker's "work factor" (in Claude Shannon's terms). This is a reduction in password "strength". A better requirement would be to require a password NOT to contain any word in an online dictionary, or list of names, or any license plate pattern from any state (in the US) or country (as in the EU). If patterned choices are required, humans are likely to use them in predictable ways, such as capitalizing a letter, adding one or two numbers, and a special character. This predictability means that the increase in password strength is minor when compared to random passwords. NIST Special Publication 800-63-2 NIST Special Publication 800-63 of June 2004 (revision two) suggested a scheme to approximate the entropy of human-generated passwords. Using this scheme, an eight-character human-selected password without upper case characters and non-alphabetic characters, or with either but not both of the two character sets, is estimated to have eighteen bits of entropy. The NIST publication concedes that at the time of development, little information was available on the real-world selection of passwords. Later research into human-selected password entropy using newly available real-world data has demonstrated that the NIST scheme does not provide a valid metric for entropy estimation of human-selected passwords. The June 2017 revision of SP 800-63 (Revision three) drops this approach. Usability and implementation considerations Because national keyboard implementations vary, not all 94 ASCII printable characters can be used everywhere. This can present a problem to an international traveler who wishes to log into a remote system using a keyboard on a local computer. See keyboard layout. Many handheld devices, such as tablet computers and smart phones, require complex shift sequences or keyboard app swapping to enter special characters. Authentication programs vary in which characters they allow in passwords.
Some do not recognize case differences (e.g., the upper-case "E" is considered equivalent to the lower-case "e"), others prohibit some of the other symbols. In the past few decades, systems have permitted more characters in passwords, but limitations still exist. Systems also vary in the maximum length of passwords allowed. As a practical matter, passwords must be both reasonable and functional for the end user as well as strong enough for the intended purpose. Passwords that are too difficult to remember may be forgotten and so are more likely to be written on paper, which some consider a security risk. In contrast, others argue that forcing users to remember passwords without assistance can only accommodate weak passwords, and thus poses a greater security risk. According to Bruce Schneier, most people are good at securing their wallets or purses, which is a "great place" to store a written password. Required bits of entropy The minimum number of bits of entropy needed for a password depends on the threat model for the given application. If key stretching is not used, passwords with more entropy are needed. RFC 4086, "Randomness Requirements for Security", published June 2005, presents some example threat models and how to calculate the entropy desired for each one. Their answers vary between 29 bits of entropy needed if only online attacks are expected, and up to 96 bits of entropy needed for important cryptographic keys used in applications like encryption where the password or key needs to be secure for a long period of time and stretching isn't applicable. A 2010 Georgia Tech Research Institute study based on unstretched keys recommended a 12-character random password, but as a minimum length requirement. Keep in mind that computing power continues to grow, so to prevent offline attacks the required bits of entropy should also increase over time. The upper end is related to the stringent requirements of choosing keys used in encryption. In 1999, an Electronic Frontier Foundation project broke 56-bit DES encryption in less than a day using specially designed hardware. In 2002, distributed.net cracked a 64-bit key in 4 years, 9 months, and 23 days. As of October 12, 2011, distributed.net estimates that cracking a 72-bit key using current hardware will take about 45,579 days or 124.8 years. Due to currently understood limitations from fundamental physics, there is no expectation that any digital computer (or combination) will be capable of breaking 256-bit encryption via a brute-force attack. Whether or not quantum computers will be able to do so in practice is still unknown, though theoretical analysis suggests such possibilities. Guidelines for strong passwords Common guidelines Guidelines for choosing good passwords are typically designed to make passwords harder to discover by intelligent guessing. Common guidelines advocated by proponents of software system security have included: Consider a minimum password length of 8 characters as a general guide. Both the US and UK cyber security departments recommend long and easily memorable passwords over short complex ones. Generate passwords randomly where feasible. Avoid using the same password twice (e.g. across multiple user accounts and/or software systems). Avoid character repetition, keyboard patterns, dictionary words, letter or number sequences. Avoid using information that is or might become publicly associated with the user or the account, such as user name, ancestors' names or dates. 
Avoid using information that the user's colleagues and/or acquaintances might know to be associated with the user, such as relatives' or pet names, romantic links (current or past) and biographical information (e.g. ID numbers, ancestors' names or dates). Do not use passwords which consist wholly of any simple combination of the aforementioned weak components. Consider passwords like "SOMETHINGLIKETHIS" harder to hack than long string of random characters like "80&3T4!*G$\#ET415". The forcing of lowercase, uppercase alphabetic characters, numbers and symbols in passwords was common policy, but has been found to actually decrease security, by making it easier to crack. Research has shown how very predictable common use of such symbols are, and the US, UK government cyber security departments advise against forcing their inclusion in password policy. Complex symbols also make remembering passwords much harder, which increases writing down, password resets and password reuse – all of which lower rather than improve password security. The original author of password complexity rules, Bill Burr, has apologised and admits they actually decrease security, as research has found; this was widely reported in the media in 2017. Online security researchers and consultants are also supportive of the change in best practice advice on passwords. Some guidelines advise against writing passwords down, while others, noting the large numbers of password protected systems users must access, encourage writing down passwords as long as the written password lists are kept in a safe place, not attached to a monitor or in an unlocked desk drawer. Use of a password manager is recommended by the NCSC. The possible character set for a password can be constrained by different web sites or by the range of keyboards on which the password must be entered. Examples of weak passwords As with any security measure, passwords vary in strength; some are weaker than others. For example, the difference in strength between a dictionary word and a word with obfuscation (e.g. letters in the password are substituted by, say, numbers — a common approach) may cost a password cracking device a few more seconds; this adds little strength. The examples below illustrate various ways weak passwords might be constructed, all of which are based on simple patterns which result in extremely low entropy, allowing them to be tested automatically at high speeds.: Default passwords (as supplied by the system vendor and meant to be changed at installation time): password, default, admin, guest, etc. Lists of default passwords are widely available on the internet. Dictionary words: chameleon, RedSox, sandbags, bunnyhop!, IntenseCrabtree, etc., including words in non-English dictionaries. Words with numbers appended: password1, deer2000, john1234, etc., can be easily tested automatically with little lost time. Words with simple obfuscation: p@ssw0rd, l33th4x0r, g0ldf1sh, etc., can be tested automatically with little additional effort. For example, a domain administrator password compromised in the DigiNotar attack was reportedly Pr0d@dm1n. Doubled words: crabcrab, stopstop, treetree, passpass, etc. Common sequences from a keyboard row: qwerty, 123456, asdfgh, fred, etc. Numeric sequences based on well known numbers such as 911 (9-1-1, 9/11), 314159... (pi), 27182... (e), 112 (1-1-2), etc. Identifiers: jsmith123, 1/1/1970, 555–1234, one's username, etc. 
Weak passwords in non-English languages, such as contraseña (Spanish) and ji32k7au4a83 (bopomofo keyboard encoding from Chinese) Anything personally related to an individual: license plate number, Social Security number, current or past telephone numbers, student ID, current address, previous addresses, birthday, sports team, relative's or pet's names/nicknames/birthdays/initials, etc., can easily be tested automatically after a simple investigation of a person's details. Dates: dates follow a pattern and make your password weak. There are many other ways a password can be weak, corresponding to the strengths of various attack schemes; the core principle is that a password should have high entropy (usually taken to be equivalent to randomness) and not be readily derivable by any "clever" pattern, nor should passwords be mixed with information identifying the user. On-line services often provide a restore password function that a hacker can figure out and by doing so bypass a password. Choosing hard-to-guess restore password questions can further secure the password. Rethinking password change guidelines In December, 2012, William Cheswick wrote an article published in ACM magazine that included the mathematical possibilities of how easy or difficult it would be to break passwords that are constructed using the commonly recommended, and sometimes followed, standards of today. In his article, William showed that a standard eight character alpha-numeric password could withstand a brute force attack of ten million attempts per second, and remain unbroken for 252 days. Ten million attempts each second is the acceptable rate of attempts using a multi-core system that most users would have access to. A much greater degree of attempts, at the rate of 7 billion per second, could also be achieved when using modern GPUs. At this rate, the same 8 character full alpha-numeric password could be broken in approximately 0.36 days (i.e. 9 hours). Increasing the password complexity to a 13 character full alpha-numeric password increases the time needed to crack it to more than 900,000 years at 7 billion attempts per second. This is, of course, assuming the password does not use a common word that a dictionary attack could break much sooner. Using a password of this strength reduces the obligation to change it as often as many organizations require, including the U.S. Government, as it could not be reasonably broken in such a short period of time. Password policy A password policy is a guide to choosing satisfactory passwords. It is intended to: assist users in choosing strong passwords ensure the passwords are suited to the target population provide recommendations for users with regard to the handling of their passwords impose a recommendation to change any password which has been lost or suspected of compromise use a password blacklist to block the usage of weak or easily guessed passwords. Previous password policies used to prescribe the characters which passwords must contain, such as numbers, symbols or upper/lower case. While this is still in use, it has been debunked as less secure by university research, by the original instigator of this policy, and by the cyber security departments (and other related government security bodies) of USA and UK. Password complexity rules of enforced symbols were previously used by major platforms such as Google and Facebook, but these have removed the requirement following the discovery they actually reduced security. 
This is because the human element is a far greater risk than cracking, and enforced complexity leads most users to highly predictable patterns (number at end, swap 3 for E etc.) which actually helps crack passwords. So password simplicity and length (passphrases) are the new best practice and complexity is discouraged. Forced complexity rules also increase support costs, user friction and discourage user signups. Password expiration was in some older password policies but has been debunked as best practice and is not supported by USA or UK governments, or Microsoft which removed the password expiry feature. Password expiration was previously trying to serve two purposes: If the time to crack a password is estimated to be 100 days, password expiration times fewer than 100 days may help ensure insufficient time for an attacker. If a password has been compromised, requiring it to be changed regularly may limit the access time for the attacker. However, password expiration has its drawbacks: Asking users to change passwords frequently encourages simple, weak passwords. If one has a truly strong password, there is little point in changing it. Changing passwords which are already strong introduces risk that the new password may be less strong. A compromised password is likely to be used immediately by an attacker to install a backdoor, often via privilege escalation. Once this is accomplished, password changes won't prevent future attacker access. Moving from never changing one's password to changing the password on every authenticate attempt (pass or fail attempts) only doubles the number of attempts the attacker must make on average before guessing the password in a brute force attack. One gains much more security by just increasing the password length by one character than changing the password on every use. Creating and handling passwords The hardest passwords to crack, for a given length and character set, are random character strings; if long enough they resist brute force attacks (because there are many characters) and guessing attacks (due to high entropy). However, such passwords are typically the hardest to remember. The imposition of a requirement for such passwords in a password policy may encourage users to write them down, store them in mobile devices, or share them with others as a safeguard against memory failure. While some people consider each of these user resorts to increase security risks, others suggest the absurdity of expecting users to remember distinct complex passwords for each of the dozens of accounts they access. For example, in 2005, security expert Bruce Schneier recommended writing down one's password: The following measures may increase acceptance of strong password requirements, if carefully used: a training program. Also, updated training for those who fail to follow the password policy (lost passwords, inadequate passwords, etc.). rewarding strong password users by reducing the rate, or eliminating altogether, the need for password changes (password expiration). The strength of user-chosen passwords can be estimated by automatic programs which inspect and evaluate proposed passwords, when setting or changing a password. displaying to each user the last login date and time in the hope that the user may notice unauthorized access, suggesting a compromised password. allowing users to reset their passwords via an automatic system, which reduces help desk call volume. 
However, such automated reset systems can themselves be insecure; for instance, easily guessed or researched answers to password reset questions bypass the advantages of a strong password system. A further measure is using randomly generated passwords that do not allow users to choose their own passwords, or at least offering randomly generated passwords as an option. Memory techniques Password policies sometimes suggest memory techniques to assist in remembering passwords: mnemonic passwords: Some users develop mnemonic phrases and use them to generate more or less random passwords which are nevertheless relatively easy for the user to remember, for instance the first letter of each word in a memorable phrase. Research estimates the password strength of such passwords to be about 3.7 bits per character, compared to the 6.6 bits per character of random passwords drawn from the ASCII printable characters. Silly phrases are possibly more memorable. Another way to make random-appearing passwords more memorable is to use random words (see diceware) or syllables instead of randomly chosen letters. after-the-fact mnemonics: After the password has been established, invent a mnemonic that fits. It does not have to be reasonable or sensible, only memorable. This allows passwords to be random. visual representations of passwords: a password is memorized based on a sequence of keys pressed, not the values of the keys themselves; e.g. the sequence !qAsdE#2 represents a rhomboid on a US keyboard. The method used to produce such passwords is called PsychoPass; moreover, such spatially patterned passwords can be improved. password patterns: Any pattern in a password makes guessing (automated or not) easier and reduces an attacker's work factor. For example, passwords of the following case-insensitive form: consonant, vowel, consonant, consonant, vowel, consonant, number, number (for example pinray45) are called Environ passwords. The pattern of alternating vowel and consonant characters was intended to make passwords more likely to be pronounceable and thus more memorable. Unfortunately, such patterns severely reduce the password's information entropy, making brute-force password attacks considerably more efficient. In October 2005, employees of the British government were advised to use passwords in this form. Protecting passwords Computer users are generally advised to "never write down a password anywhere, no matter what" and "never use the same password for more than one account." However, an ordinary computer user may have dozens of password-protected accounts. Users with multiple accounts needing passwords often give up and use the same password for every account. When varied password complexity requirements prevent use of the same (memorable) scheme for producing high-strength passwords, oversimplified passwords will often be created to satisfy irritating and conflicting password requirements. A Microsoft expert was quoted as saying at a 2005 security conference: "I claim that password policy should say you should write down your password. I have 68 different passwords. If I am not allowed to write any of them down, guess what I am going to do? I am going to use the same password on every one of them." Software is available for popular hand-held computers that can store passwords for numerous accounts in encrypted form. Passwords can also be encrypted by hand on paper, with the encryption method and key memorized.
An even better way is to encrypt a weak password with one of the commonly available and well-tested cryptographic algorithms or hashing functions and use the resulting ciphertext (or hash) as the password. A single "master" password can be used with software to generate a new password for each application, based on the master password and the application's name. This approach is used by Stanford's PwdHash, Princeton's Password Multiplier, and other stateless password managers. In this approach, protecting the master password is essential, as all passwords are compromised if the master password is revealed, and lost if the master password is forgotten or misplaced. Password managers A reasonable compromise for using large numbers of passwords is to record them in a password manager program, which may be a stand-alone application, a web browser extension, or a manager built into the operating system. A password manager allows the user to use hundreds of different passwords while only having to remember a single password, the one which opens the encrypted password database. Needless to say, this single password should be strong and well-protected (not recorded anywhere). Most password managers can automatically create strong passwords using a cryptographically secure random password generator, as well as calculate the entropy of the generated password. A good password manager will provide resistance against attacks such as keystroke logging, clipboard logging and various other memory-spying techniques. See also Keystroke logging Passphrase Phishing Vulnerability (computing) References External links RFC 4086: Randomness Requirements for Security Password Patterns: The next generation dictionary attacks Cryptography Password authentication
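To make the stateless master-password scheme described in this article concrete, the following Python sketch derives a distinct per-site password from a master password and a site name. It is a simplified, hypothetical illustration, not the actual algorithm used by PwdHash or Password Multiplier; the function name, iteration count and output format are assumptions chosen for the example.

import hashlib, base64

def derive_site_password(master_password, site_name, length=16):
    # Stretch the master password with PBKDF2, using the site name as the salt,
    # so each site gets an unrelated-looking password (illustrative parameters only).
    key = hashlib.pbkdf2_hmac("sha256",
                              master_password.encode("utf-8"),
                              site_name.encode("utf-8"),
                              200_000)          # iteration count slows brute-force guessing
    # Encode the derived key into printable characters and truncate to the desired length.
    return base64.b64encode(key).decode("ascii")[:length]

print(derive_site_password("correct horse battery staple", "example.com"))
print(derive_site_password("correct horse battery staple", "example.org"))

A real implementation would also have to cope with per-site password policies (required character classes, maximum lengths) and would ideally mix in a per-user secret, so that knowledge of the site name alone is not enough to mount an offline attack on the master password.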
15066693
https://en.wikipedia.org/wiki/2008%20USC%20Trojans%20football%20team
2008 USC Trojans football team
The 2008 USC Trojans football team (variously "Trojans" or "USC") represented the University of Southern California during the 2008 NCAA Division I-A football season. The team was coached by Pete Carroll and played their home games at the Los Angeles Coliseum. Before the season Pre-season outlook The Trojans finished the 2007 season with a decisive Rose Bowl victory, a #2 ranking in the Coaches Poll and a #3 ranking (with one first-place vote) in the AP Poll. In January 2008, immediately after the bowl season, USC was ranked at #4 by Sports Illustrated online and #5 by ESPN.com; the general opinion was that while the Trojans were facing key player departures, the losses were mitigated by the overall talent level of the program. Georgia was ranked as the early pre-season #1 team. Sports Illustrated and ESPN.com soon revised their rankings to #3 and #4, respectively, after nearly all draft-eligible juniors decided to remain with the program instead of entering the NFL Draft. Going into the spring, USC ranked as the premier team in the Pac-10 Conference, taking advantage of a deep talent pool, including a number of talented running backs held over from the previous season. The Trojans' biggest question entering spring practices was who would take over the starting quarterback position from John David Booty. Junior Mark Sanchez entered spring practice as the acknowledged leader, having started three games the previous season due to an injury to Booty, winning two; however, Arkansas transfer and former Razorback starter Mitch Mustain had the most college game experience, having started and won eight games for the 2006 Razorbacks team as a freshman while putting on an impressive performance on the scout team in the 2007 season during the NCAA-mandated waiting period. Both Sanchez (2005) and Mustain (2006) were considered the top quarterback in the nation coming out of their respective high school classes. The Trojans entered spring with a number of qualified running backs, but not quite as many as in 2007. Battling for the starting position were top returners junior Stafon Johnson (673 yards) and sophomore Joe McKnight (540 yards), but challenging them would be redshirt sophomore C.J. Gable, who started five games as a freshman in 2006 and the first two of 2007 before a season-ending injury, junior Allen Bradford, and previously injured redshirt freshmen Broderick Green and Marc Tyler. All six running backs earned Parade or USA Today All-American honors in high school, and four earned both. Questions remained around the wide receivers, who had struggled with consistency the previous season; all starters returned, with special attention focused on Arkansas transfer Damian Williams, who caught 19 passes for the Razorbacks in 2006 but sat out 2007 along with fellow Arkansas teammate Mustain. The offensive line was hit hard by graduation, returning only one starter. The defense lost several important players to graduation, but the linebacker corps returned key players such as Brian Cushing, Rey Maualuga and Clay Matthews. By the end of spring practice, the USC coaching staff announced that Sanchez would be the designated starting quarterback going into fall camp. A crowd of 22,000 watched the Trojan Huddle, USC's spring game that ends spring practices, where Sanchez, Mustain and redshirt freshman Aaron Corp all performed well against Trojan defenses; the White team defeated the Cardinal team, 39–36, in double overtime.
With a number of talented linebackers, Carroll and defensive coordinator Nick Holt began experimenting with a 3-4 defense variation implementing the "Elephant Position", which features a hybrid end/linebacker position. The Trojans had used Cushing in the Elephant position during the 2006 season before returning to their traditional 4-3 during the 2007 season. In the 2008 variation, the position was filled by senior Clay Matthews, a former walk-on. After spring practices finished across the nation, Sports Illustrated revised its rankings and placed USC as the #3 team, behind Georgia and Ohio State, while ESPN ranked the Trojans #4, behind Ohio State, Georgia and Oklahoma. The running back tandem of Stafon Johnson and Joe McKnight was compared to the "Thunder and Lightning" combination of LenDale White and Reggie Bush, with McKnight mentioned as a top-ten Heisman Trophy contender going into the fall. At the Pacific-10 Conference media day, the Trojans were the near-unanimous pre-season pick to win the conference. USC took 38 of 39 first-place votes; California, which was picked to finish fourth in the overall standings, received the other vote. This was USC's sixth year in a row as the favorite to win the conference title, the longest streak since the Trojans' 18-year run from 1965 to 1982. On the release of the preseason Coaches' Poll, USC was ranked #2 in the nation, behind the 2008 Georgia team: Georgia received 1438 points with 22 first-place votes while USC received 1430 points with 14 first-place votes. Meanwhile, the 2008 Ohio State team was ranked third with 1392 points, but with the same number of first-place votes as USC (14). The preseason Associated Press (AP) Poll ranked USC #3 in the nation, behind Georgia and Ohio State. USC received 12 first-place votes and 1490 points, compared to Georgia's 22 first-place votes and 1528 points and Ohio State's 21 first-place votes and 1506 points. Both polls added to the interest in the Ohio State-USC game on September 13. A major concern arose in the first week of fall camp, when Sanchez suffered a dislocated left kneecap while warming up for practice. Trainers were able to immediately put the kneecap back into place, but the injury sidelined Sanchez and threw his availability for the season opener at Virginia (and beyond) into question. As a result, Mustain and redshirt freshman Aaron Corp began alternating repetitions with the first-team offense and competing for the possible starting spot. After missing nearly three weeks, Sanchez was cleared to play in the opener on the final day of fall camp; Corp was selected as his back-up. The biggest issue facing the team entering the season was how the rebuilt offensive line would perform, though it had improved over the course of fall camp. Sanchez, Cushing, offensive lineman Jeff Byers, and senior safety Kevin Ellison were elected team captains by their teammates. In the week preceding the regular season, all twelve experts polled by ESPN picked USC to win the Pac-10 conference, and three expected them to make it to the BCS National Championship Game, with two of those expecting them to prevail. All seven experts polled by Sports Illustrated picked USC to win their conference, with three forecasting them to reach the Championship Game and one selecting them to prevail. Rivals.com's panel of four experts unanimously picked USC to play in the title game. Recruiting class USC brought in a top-10 recruiting class in 2008.
Transfers Shane Horton, the brother of 2008 recruit Wes Horton, transferred from UNLV and would be required to sit out one season by NCAA rules. Junior transfer Steve Gatena, a former United States Air Force Academy Class of 2008 cadet, transferred in from UC Davis as an offensive left tackle. Gatena was required to sit out one season by NCAA rules. However, due to his academic standing as a graduate student, Gatena was granted a one-time transfer exception for pursuing his academic career and played as the second-string left tackle in the season opener against Virginia. Jordan Cameron, the uncle of Matt Leinart's son with USC basketball player Brynn Cameron, transferred in from Ventura College. A former freshman basketball player at Brigham Young University, Cameron attempted to transfer before the 2007 season to also play football as a wide receiver. However, when some of Cameron's units from Brigham Young did not transfer to USC, he needed to withdraw and attend Ventura College, missing the season but with the option to try to rejoin the team in 2008 (regardless, he would have been ineligible to play in 2007 due to NCAA transfer rules). Departures In addition to graduating starting senior 2007 All-Americans Sam Baker (offensive tackle), John Mackey Award-winner Fred Davis (tight end), Sedrick Ellis (nose tackle), and Keith Rivers (linebacker), as well as first team all-conference defensive end Lawrence Jackson, 2006 first team all-conference quarterback John David Booty and second team all-conference defensive back Terrell Thomas, the Trojans also lost junior All-Conference offensive guard Chilo Rachal to the 2008 NFL Draft. Pac-10 conference honorable mention offensive linemen Drew Radovich and Matt Spanos, tailback Chauncey Washington, and linebacker Thomas Williams also departed. Offseason news On the Monday after the 2007 UCLA–USC rivalry game, a 24–7 Trojans victory, embattled Bruins head coach Karl Dorrell was fired. His replacement was former UCLA quarterback Rick Neuheisel, who had held previous head coaching positions at both Colorado and Washington, where he led teams to overall successful records but where his departures coincided with NCAA investigations at both universities. Within a month on the job, Neuheisel attracted attention by hiring former USC offensive coordinator Norm Chow as his offensive coordinator. While with the Trojans from 2001 to 2004, Chow led the offense to the 2003 and 2004 national championships and saw quarterbacks Carson Palmer and Matt Leinart win the Heisman Trophy. The hiring of Chow injected a new level of drama into a rivalry that had somewhat stagnated under Dorrell. Schedule The Sporting News ranked the schedule as the toughest in the Pac-10; ESPN.com ranked it as the fourth toughest in the conference. ESPN.com ranked the nonconference schedule as the fifth most difficult in the nation, noting that if Virginia had a good year it would be the toughest. Roster Coaching staff Nearly the entire USC coaching staff returned from the 2007 season, with the only change being a different Graduate Assistant working with the secondary. Game summaries Virginia The Trojans opened their season by visiting the University of Virginia Cavaliers of the Atlantic Coast Conference (ACC) under Al Groh in the first-ever game between the two programs and the first USC game in Virginia. Virginia went 9–4 in 2007, but off-season losses to the NFL and other unexpected issues left the Cavaliers ranked fifth out of six teams in the ACC's Coastal Division in the preseason.
Five UVA players were arrested during the off-season and five other players were dismissed from the team for academic reasons. The game was described as possibly the biggest home opener for Virginia at Scott Stadium. Groh noted that he was pleased the game opened his team's schedule, due to USC's ability and the general distraction it would otherwise pose to conference play: "It's as good a time as any in that we only wanted to play it in the first game, and they only wanted to play it in the first game." Virginia entered the game having gone 3–4 in opening games under Groh. Virginia became the 34th state (plus Japan) in which the Trojans had played football. USC entered the game favored by 19.5 points. Scoring three touchdowns in the first quarter was all USC needed to rout the Cavaliers on their home field. Quarterback Mark Sanchez threw for 338 yards, C.J. Gable ran for 73 yards, and Ronald Johnson had 78 receiving yards. The No. 3 Trojans got off to a good start, thanks to the Virginia turnovers. USC continued their undefeated streak in openers away from home under Carroll. Ohio State After a bye week, the Trojans hosted the Ohio State Buckeyes of the Big Ten Conference under head coach Jim Tressel. USC or Ohio State had played in five of the last six BCS title games. The non-conference game between two perennial powers had potential national championship implications for either program. A historic rivalry existed between the two teams: between 1968 and 1984, they met six times in the Rose Bowl and determined the eventual National Champion in three of those contests. The teams had not faced one another since September 29, 1990, when Todd Marinovich led the Trojans to a 35–26 victory in Ohio Stadium in a game that was called because of a thunderstorm with 2 minutes 36 seconds to play. By the end of the 2007–08 season, the game garnered interest as a possible early-season battle between top-10 teams. By the beginning of the season it was named the most anticipated regular-season game of 2008. The winning team was assumed to have an inside track to the national title game, though, given recent trends in the title game, the loser also had a reasonable chance. In naming it the top potentially season-defining game of 2008, Sports Illustrated highlighted a theme of credibility: Ohio State entered the game trying to move past the BCS title-game losses of the previous two seasons, and USC entered trying to show it remained highly competitive with its new starting quarterback and four of five new starters on the offensive line. The game was also viewed as a possible Heisman Trophy showdown, primarily between Ohio State's running back Chris "Beanie" Wells and USC's Sanchez. During the preseason Pac-10 Media Day, Carroll noted that "It's games like this that make us." The Buckeyes featured a combination of quarterbacks: fifth-year senior and All-Big-Ten quarterback Todd Boeckman continued to start for the Buckeyes; however, in 2008 he was supplemented by highly regarded true-freshman quarterback Terrelle Pryor. The combination aimed to pair Boeckman's prowess as a classic drop-back passer with Pryor's speed and ability to scramble for yards. The Trojans had particular concerns about Pryor, who had many of the capabilities that made previous athletic scrambling quarterbacks, such as Dennis Dixon, Jake Locker and Vince Young, difficult for the defense to contain.
The Buckeyes came in with a strong defense, led by All-American and Butkus Award-winning linebacker James Laurinaitis and cornerback Malcolm Jenkins. Both linebacking corps, highlighted by the Buckeyes' Laurinaitis and Marcus Freeman as well as the Trojans' Rey Maualuga and Brian Cushing, were considered among the best in the country. One of the major storylines entering the game surrounded the health of Ohio State's star running back, Beanie Wells. Wells injured his foot in the Buckeyes' opener and sat out the second game of the season. Early during the week of the game he was cleared to play against the Trojans; however, by Thursday his availability was assessed as doubtful after he experienced soreness in his foot one day after returning to practice. Although scheduled for the third week of the season, the game was the primary focus of fan and media attention for both programs. Ticket prices rose to levels ranging from $100 to $5,000 apiece. The game received heightened attention in national sports news when USC alumnus and starting quarterback for the NFL's Cincinnati Bengals, Carson Palmer, told a radio show "I cannot stand the Buckeyes and having to live in Ohio and hear those people talk about their team, it drives me absolutely nuts [. . .] I just can't wait for this game to get here so they can come out to the Coliseum and experience L.A. and get an old-fashioned, Pac-10 butt-whupping." While Tressel defended Palmer's comments as those of a fan, Ohio State fans were incensed. In the week before the game, Buckeyes wide receiver Ray Small stated that USC lacked the "class" of Ohio State, noting that "[A]t Ohio State, they teach you to be a better man. There, it's just all about football", further noting that he felt that USC was "not even serious about the game." Sideline passes were in high demand, with celebrities such as Denzel Washington and Jamie Foxx in attendance. USC and Ohio State opened the preseason ranked No. 2 and 3 in both the AP and Coaches Polls, swapping positions in each. After the Trojans' strong performance against Virginia, USC rose to No. 1 and Ohio State ranked No. 3 in both polls. However, after Ohio State struggled in their week 2 win against a lightly regarded Ohio team, they fell to No. 5 in both polls while USC remained No. 1. By game week the Trojans were considered 10-point favorites. Oregon State The Trojans began their Pac-10 Conference schedule on the road against the Oregon State Beavers, under head coach Mike Riley, in Corvallis, Oregon. On the Trojans' previous visit to Reser Stadium, during the 2006 season, the Beavers had defeated them, 33–31, in a major upset; as such, the game was mentioned in the preseason as a possible upset for the Trojans. Freshman Jacquizz Rodgers (#1) ran for 186 yards and two touchdowns for Oregon State, USC quarterback Mark Sanchez (#6) passed for 227 yards, and Damian Williams (#18) had 80 receiving yards for the top-ranked Trojans in an upset loss. Oregon State was the only Pac-10 Conference school to have beaten USC twice during the Pete Carroll era, until Oregon and Stanford equaled the feat in 2009. Oregon Before the season the game was named a game of interest and the second-most-interesting Pac-10 game to watch after Ohio State-USC, in part due to the potential battle for the top of the conference. Arizona State Entering the season, Arizona State was named as a possible challenger to USC's dominance of the Pac-10. Washington State Arizona Washington California This game was mentioned as a possible upset for the Trojans.
Stanford Before the 2007 season, Stanford head coach Jim Harbaugh, in his first year with the Cardinal, garnered attention by first stating that 2007 was going to be Carroll's last year at USC, then, during the Pac-10 media day, that USC "may be the best team in the history of college football." The Cardinal then stunned the Trojans in a major upset, 24–23, ending the Trojans' 35-game home winning streak and dealing a major blow to the Trojans' national title hopes. During the 2008 Pac-10 media day, Harbaugh noted that the aftermath of the Cardinal's victory over USC was "water under the bridge." Given the previous season's result, the game was named a game to watch by ESPN.com before the season. Notre Dame UCLA Before the season this game garnered interest in seeing how new Bruins coach Rick Neuheisel would do in his battle to gain supremacy in Los Angeles. Joe McKnight (12-yard run), Damian Williams (12-yard pass from Mark Sanchez), Stafon Johnson (2-yard run) and Patrick Turner (18-yard pass from Sanchez) scored for the Trojans. Sanchez passed for 269 yards, McKnight ran for 99 yards and Turner had 81 receiving yards. A fumble recovery that was turned into a touchdown reception by Kahlil Bell at the beginning of the game accounted for all of the Bruins' scoring in this latest cross-town rivalry game. UCLA quarterback Kevin Craft completed 11 out of 28 passes for a total of 89 yards and had one pass intercepted. Going into the game, the Trojans were set to be, at worst, co-Pac-10 Champions with Oregon State. However, after the win over UCLA and Oregon State's loss the same day, the Trojans became the Pac-10 Champions for the seventh straight year and qualified for the Rose Bowl, played on January 1, 2009. Linebacker Rey Maualuga was named Pac-10 defensive player of the year. Rose Bowl versus Penn State Rankings After the season Awards Rey Maualuga won the Bednarik Award. Rey Maualuga was the CBS Sportsline.com Defensive Player of the Year. Rey Maualuga was named the Pac-10 Defensive Player of the Year. Rey Maualuga was the USC Team MVP. Rey Maualuga, Taylor Mays and Brian Cushing were named First Team All-American. Fili Moala (Sporting News) was named Second Team All-American. Mark Sanchez, Kevin Ellison, and Clay Matthews III were named All-America honorable mention. Kristofer O'Dowd was named to the College Football News Sophomore All-America Second Team. Damian Williams was named College Football News Sophomore All-America honorable mention. Mark Sanchez, Kristofer O'Dowd, Rey Maualuga, Taylor Mays, Brian Cushing, Fili Moala, Kevin Ellison, Jeff Byers (rivals.com) and David Buehler were named First Team All-Pac-10. Patrick Turner, Clay Matthews III (rivals.com), and Kaluka Maiava were named Second Team All-Pac-10. Charles Brown (offensive lineman), Anthony McCoy, Josh Pinkard, Cary Harris, Kyle Moore, Joe McKnight and Damian Williams were named honorable mention All-Pac-10. NFL Draft Twelve USC players were invited to the NFL Combine. Of the twelve, Josh Pinkard applied for and was granted a sixth season of eligibility by the NCAA and opted to stay at USC for another season. Of the eleven players who attended the Combine, all were drafted by the end of the sixth round of the 2009 NFL Draft. USC had the most players drafted of any school for the second consecutive season. References External links USC USC Trojans football seasons Pac-12 Conference football champion seasons Rose Bowl champion seasons USC Trojans football
10210385
https://en.wikipedia.org/wiki/GridApp%20Systems
GridApp Systems
GridApp Systems, Inc. was a database automation software company. It was purchased by BMC Software in December 2010. Founded in 2002 and headquartered in New York City, GridApp Systems was the brainchild of five former employees of Register.com: Rob Gardos, Shamoun Murtza, Matthew Zito, Dan Cohen, and Eric Gross. The five realized that 85% of the routine tasks performed by database administrators could be automated, decreasing critical errors and improving productivity; the five functioned as GridApp's CEO, CTO, Chief Scientist, Director of Development, and Mr. Database, respectively. GridApp's flagship product was GridApp Clarity, which won the following awards and recognition: SearchSQLServer - 2006 - "Performance and Tuning" category - Silver ServerWatch - 2008 - "Automation and Compliance" category - Silver CODiE - 2008 - "Best Database Management Software" category - Finalist EMA - 2008 - "EMA Rising Star" References Software companies established in 2002 American companies established in 2002 Software companies based in New York (state) 2002 establishments in New York City 2010 mergers and acquisitions Software companies of the United States
36639312
https://en.wikipedia.org/wiki/Deal.II
Deal.II
deal.II is a free, open-source library for solving partial differential equations using the finite element method. As of May 2020, the current release is version 9.2.0. It is one of the most widely used finite element libraries, and provides comprehensive support for all aspects of the solution of partial differential equations. The founding authors of the project, Wolfgang Bangerth, Ralf Hartmann, and Guido Kanschat, won the 2007 J. H. Wilkinson Prize for Numerical Software for deal.II. Today it is a worldwide project with around a dozen "Principal Developers", to which over the years several hundred people have contributed substantial pieces of code or documentation. Features The library features: dimension-independent programming using C++ templates on locally adapted meshes; a large collection of finite elements of any order, including continuous and discontinuous Lagrange elements, Nedelec elements, Raviart-Thomas elements, and combinations of these; parallelization using multithreading through TBB and massively parallel computation using MPI (deal.II has been shown to scale to at least 16,000 processors and has been used in applications on up to 300,000 processor cores); multigrid methods with local smoothing on adaptively refined meshes; hp-FEM; extensive documentation and tutorial programs; and interfaces to several libraries including Gmsh, PETSc, Trilinos, METIS, VTK, p4est, BLAS, LAPACK, HDF5, NetCDF, and Open Cascade Technology. History and Impact The software started from work at the Numerical Methods Group at Heidelberg University in Germany in 1998. The first public release was version 3.0.0 in 2000. Since then deal.II has received contributions from several hundred authors and has been used in more than a thousand research publications. The primary maintainers, coordinating the worldwide development of the library, are today located at Colorado State University, Clemson University, Heidelberg University, Texas A&M University, Oak Ridge National Laboratory and a number of other institutions. It is developed by a worldwide community of contributors through GitHub, incorporating several hundred changes by dozens of authors every month. See also List of finite element software packages List of numerical analysis software References External links Source Code on Github List of Scientific publications Free computer libraries Differential calculus Finite element software for Linux C++ numerical libraries Software that uses VTK
19369663
https://en.wikipedia.org/wiki/Iron%20Soldier
Iron Soldier
Iron Soldier is an open world first-person mecha simulation video game developed by Eclipse Software Design and published by Atari Corporation for the Atari Jaguar in North America on December 22, 1994, then in Europe in January 1995 and later in Japan on March 24 of the same year, where it was instead published by Mumin Corporation. The first installment in the eponymous franchise, the game is set in a dystopian future where industry and machinery have overrun most of the Earth's surface, as players assume the role of a resistance member taking control of the titular mech in an attempt to overthrow the dictatorship of the Iron Fist Corporation, which has conquered the world through the use of military force. Conceived by Cybermorph co-producer Sean Patten during his time working at Atari Corp., Iron Soldier began development in late 1993 during the console's launch and was jointly written by Lethal Xcess authors Marc Rosocha and Michael Bittner. Eclipse Software originally pitched an on-rails 3D shooter to the Atari executives, but it was rejected for not being an open world title; however, Marc later met with Patten, who proposed that he create a mecha game based on a script Patten had previously written. That script served as the starting basis for the project and took influences from Patten's fascination with mechas and series such as Godzilla. Iron Soldier received a mostly positive reception upon release, and critics praised multiple aspects of the game such as the visuals, audio, gameplay and overall design, but its control scheme, learning curve and lack of additional texture-mapped graphics drew criticism from some of the reviewers. As of April 1, 1995, the title had sold nearly 21,000 copies, though it is unknown how many were sold in total during its lifetime. A sequel, Iron Soldier 2, was released in December 1997 by Telegames for both the Jaguar and Atari Jaguar CD, a year after both platforms were discontinued by Atari in 1996 for being critical and commercial failures. In recent years, it has been referred to by several publications as one of the best titles for the system. Gameplay Iron Soldier is an open world 3D first-person mecha simulation game similar to MechWarrior and Metal Head where players assume the role of a resistance member taking control of the titular Iron Soldier, a stolen giant robot, in order to complete a series of 16 missions in an attempt to overthrow the dictatorship of the Iron Fist Corporation. Before starting, players can choose to tackle any of the first four missions in any order, with objectives ranging from retrieving new weapons from enemy bases to destroying certain buildings, and more missions are unlocked after each set of four is completed as players progress further into the game; however, the last set of four missions must be played in successive order. Later missions involve players completing more complex objectives such as fighting against enemy Iron Soldier units or escorting allied vehicles while protecting them from enemy fire. At the beginning, the player only has access to an assault rifle, but players can expand their arsenal with a wide variety of weapons such as rocket launchers, gatling guns and cruise missiles, among others, which can be equipped on any part of the robot.
If the robot is destroyed by enemy fire, the current mission is left incomplete, and players only have a limited number of continues before the game is over, though they have the option of resuming progress by loading their saved game at the last mission set or last mission reached; the number of continues used is kept. At the title screen, players can enter the options menu and change various settings. During gameplay, the action is viewed from inside the Iron Soldier's cockpit, and the robot is controlled by holding the A button and pushing either up or down to move forward or backward, respectively, while pressing A by itself brings the robot to a full stop. The D-pad by itself is used to look around and change movement direction; holding the C button allows the player to look around faster, while pressing C by itself re-centers the player's view. Pressing both A and C turns the view much faster. Firing the currently selected weapon is done by pressing the B button, and the Option button alternates between the right and left hands of the robot. Weapons can be selected by pressing their corresponding numbers on the controller's keypad, matching the diagram of the robot shown on the upper left side of the screen. Pressing 2 activates the advanced controls, which lock the robot's lower body and allow players to look in any direction, up to 90°, without changing the course of movement; turning left or right is then done by holding A and pressing the corresponding direction, while pressing 2 again deactivates the advanced controls and unlocks the lower body. Buildings can be leveled with either close- or long-range weapons, and most of them contain crates holding ammunition for the player's currently equipped weapons, supplies, new weapons, or repairs that restore a quarter of the damage taken by the robot. However, buildings can also be used as cover to help avoid taking damage. Some of the smaller buildings and enemy units such as tanks can be destroyed by stepping on them while moving forward or backward, but doing so slows down movement. In addition, enemy projectiles such as rockets can be shot down to avoid taking damage. Development In late 1992, Atari Corporation showed Eclipse Software Design their plans for a new home video game console that would later become the Jaguar and wanted them to create new titles for the upcoming system. Eclipse received one of the first prototype software development kits for it, named "Felix", and, due to their experience with previous hardware such as the Atari ST and Atari Falcon computers, quickly became familiar with the architecture. Eclipse Software originally proposed an on-rails 3D shooter similar in vein to Namco's Starblade as one of their first projects for the Jaguar, because members of the company were fans of that title, but it was rejected by the executives at Atari Corp. for not being an open-world title, upsetting Eclipse founder and Lethal Xcess co-author Marc Rosocha, who almost cut ties with Atari over the decision.
From autumn 1993, the team was dedicated solely to making development tools and prototypes for the Jaguar and had nothing in the way of a game project until Marc met with Cybermorph co-producer Sean Patten in his office at Atari. Patten proposed that the team make a mech game based on a script he had previously written, born of his fascination with mechas and series like Godzilla, which would later serve as the starting basis for Iron Soldier's development. Marc agreed, on the request that players "could blow everything up", to which Patten instantly agreed, and the project entered development in November 1993 as a joint effort between Atari and Eclipse. Iron Soldier was first showcased to the public in a playable state at the Summer Consumer Electronics Show in 1994, where it impressed both attendees and the video game press covering the event with its visuals; this version had several differences compared to the final release, such as a different diagram of the robot on the upper left side of the screen. Development was completed in under a year. The game runs between 25 and 30 frames per second, and the three-dimensional models such as the robots consist of 200 polygons, with some of them having texture mapping applied. Most of the personnel on the development team were former Thalion Software employees, with Marc and co-programmer Michael Bittner having previously been involved with titles such as Wings of Death and Trex Warrior: 22nd Century Gladiator, respectively. The titular mech was designed by artist Mark J.L. Simmons. GamePro magazine and other dedicated video game outlets reported that Iron Soldier, along with Club Drive and Doom, would be one of the first titles to support two-player online gaming via the Jaguar Voice Modem by Phylon, Inc. However, the Jaguar Voice Modem itself was never completed or released, so the title was released as a single-player game only. Release Iron Soldier was released in North America on December 22, 1994, and later in Europe in January 1995; it came packaged with an overlay for the controller's keypad to illustrate the keys' in-game functions. It was also released in Japan on March 24, 1995, where it was published by Mumin Corporation instead of Atari; the difference between the international and Japanese releases is that the latter came bundled with an additional instruction manual written in Japanese. Iron Soldier Beta In 2006, a prototype of the title owned by video game collector Gary DuVall was released under the title Iron Soldier Beta. Fifty copies of the prototype were created and distributed by community member Gusbucket13 of the defunct Jaguar Sector II website, with the blessing of both the original developer and the owner of the prototype. Demand was high, given that the prototype contained significant differences from the released version, including several weapons and defense mechanisms that were removed before the final game was shipped. In 2018, a ROM image of the prototype was made freely available to download by video game collector Nicolas Persjin with permission from the original prototype owner. Reception Iron Soldier received a mixture of opinions from reviewers, though a slight majority gave it a positive recommendation. Mike Weigand of Electronic Gaming Monthly commented that the controls are difficult to get used to, but praised the polygon graphics and the ability to choose which stage to play. GamePro's Manny LaMancha, while acknowledging that the game's controls are complicated, maintained that they don't take long to master.
He also praised the polygon graphics and most especially the simple yet intense gameplay. The three reviewers of GameFan, while criticizing the lack of texture mapping, said the polygonal graphics have considerable impact. They applauded the gameplay for its variety, challenge, and addictiveness. In 1995, GameFan awarded the game both best simulation game and simulation Game of the Year on the Jaguar. Next Generation's brief review assessed it as "just plain, good old-fashioned destruction". Gary Lord of Computer and Video Games found it to be passable but unimpressive, remarking that "the control method is far from intuitive, the movement is slow and at times unresponsive, the missions often unclear as to how the objective is to be obtained". He and "second opinion" reviewer Mark Patterson compared the game unfavorably to its contemporary Metal Head in terms of both gameplay and visuals. AllGame's Kyle Knight praised the visuals, music, gameplay and replay value, regarding it as "a legend among Jaguar fanatics"; however, he criticized the sound design. Atari Gaming Headquarters' Keita Iida remarked that although movement is slow, the game's "sense of reality is actually quite refreshing". Stefan Kimmlingen of German magazine Atari Inside gave high marks to the graphics and sound. Atari World's Iain Laskey praised the game's "sense of 3-D", sound design and gameplay but criticized the lack of additional missions. David Msika of French magazine CD Consoles stated that Iron Soldier surpassed both MechWarrior and Vortex. Both Richard Homsy and Marc Menier of Consoles + gave high marks to the presentation, visuals, sound and gameplay. Edge commented positively with regard to the variety of missions and enemies, which compensated for the game's lack of pace, as well as the "Amiga-style" gameplay and visuals. Jan Valenta of Czech magazine Excalibur praised its originality and atmosphere. Game Players praised the 3D polygon visuals and music, touting it as one of the "best Jaguar games yet", though the lack of mission variety and of additional texture mapping was criticized. Hobby Consolas' Antonio Caravaca praised the sound design, gameplay and replay value, but in terms of visuals he remarked that "maybe the bar has been placed very high, but even so, flat sprites cannot be shown, without shading or texture in the virtual era". Spanish magazine Hobby Hi-Tech noted similarities in its plot with both Vortex and Mazinger Z, praising the visuals, music, length, mission variety and addictive gameplay. Niclas Felske of German publication Jaguar gave it a mixed outlook. Both Nourdine Nini and Jean-François Morisse of French publication Joypad offered positive remarks on the visuals, controls and sound design. LeveL's Jan Hovora gave it a positive outlook. Winnie Forster of MAN!AC also noted similarities with both Vortex and MechWarrior. Mega Fun's Stefan Hellert also gave it a positive outlook. Micromanía's C.S.G. noted that the title's premise was similar to Iron Assault from Virgin Interactive Entertainment, criticizing the slow-paced gameplay and difficulty level. Likewise, Player One's Christophe Pottier remarked on its similarities to both BattleTech and Battlecorps, highly praising the title for its visuals, animations, sound and gameplay. Play Time's Stephan Girlich gave it a mixed review, giving middling scores to the visuals and sound. Score's Andrej Anastasov, though he criticized its lack of originality, praised the game's atmosphere and visuals.
German magazine ST-Computer stated that "Iron Soldier offers a varied and fast-paced gameplay with partly spectacular graphic effects and groovy music. What else should you write about it. You have to buy it!" ST Magazine's Marc Abramson gave it a mixed outlook. Brazilian gaming magazine Super Game Power gave the game a very positive review with regard to its fun factor, controls, sound and graphics. Top Secret's Tytus gave the game a perfect score. In contrast, however, Gonzalo Herrero of Última Generación gave the title a negative review. Ultimate Future Games considered Iron Soldier a "minor masterpiece", commenting positively on the gameplay's depth and visuals and regarding it as "one of the best titles on the Jag and worth buying just to see a building collapse when you punch it". Video Games' Wolfgang Schaedle praised the visuals and sound design. Jim Loftus of VideoGames gave it a positive outlook, stating that changing the control settings in the options menu "made a world of difference and turned an otherwise irritating game into a really enjoyable one". As of April 1, 1995, Iron Soldier had sold nearly 21,000 copies, though it is unknown how many were sold in total during its lifetime. It has been referred to by publications such as PC Magazine and Retro Gamer as one of the best titles for the Atari Jaguar. Legacy A sequel, Iron Soldier 2, entered development shortly after Iron Soldier was published. It was released by Telegames for both the Atari Jaguar and Jaguar CD add-on in 1997 to very positive reception, despite being released after the platforms were discontinued by Atari. In 1996, a year before its sequel was released, the game's trademark was abandoned. A third entry, Iron Soldier 3, was released for the PlayStation in 2000 and the Nuon in 2001, serving as the last installment of the Iron Soldier series. References External links Iron Soldier at AtariAge Iron Soldier at MobyGames 1994 video games Atari games Atari Jaguar games Atari Jaguar-only games Eclipse Software Design games First-person shooters Open-world video games Single-player video games Video games developed in Germany Video games about mecha Video game franchises Video game franchises introduced in 1994 Video games set in the future Video games with alternate endings
60985168
https://en.wikipedia.org/wiki/International%20Journal%20of%20Computer%20Mathematics
International Journal of Computer Mathematics
The International Journal of Computer Mathematics is a monthly peer-reviewed scientific journal covering numerical analysis and scientific computing. It was established in 1964 and is published by Taylor & Francis. The editors-in-chief are Choi-Hong Lai (University of Greenwich), Abdul Khaliq (Middle Tennessee State University), and Qin (Tim) Sheng (Baylor University). A sister journal, International Journal of Computer Mathematics: Computer Systems Theory, covering the theory of computing and computer systems, was established in 2016. Abstracting and indexing The journal is abstracted and indexed in the Science Citation Index Expanded, MathSciNet, and Scopus. According to the Journal Citation Reports, the journal has a 2018 impact factor of 1.196. References External links International Journal of Computer Mathematics: Computer Systems Theory Mathematics journals Monthly journals English-language journals Taylor & Francis academic journals Publications established in 1964 Computer science journals
142583
https://en.wikipedia.org/wiki/FOSSIL
FOSSIL
FOSSIL is a standard protocol that allows serial communication for telecommunications programs under the DOS operating system. FOSSIL is an acronym for Fido Opus SEAdog Standard Interface Layer. Fido refers to FidoBBS, Opus refers to Opus-CBCS BBS, and SEAdog refers to a Fidonet-compatible mailer. The standards document that defines the FOSSIL protocol is maintained by the Fidonet Technical Standards Committee. Serial device drivers A "FOSSIL driver" is simply a communications device driver. They exist because, in the early days of Fidonet, computer hardware was very diverse and there were no standards on how software was to communicate with the serial interface hardware. Initially, FidoBBS only worked on a specific type of machine. Before FidoBBS could start spreading, it was seen that a uniform method of communicating with serial interface hardware was needed if the software was going to be used on other machines. This need was also apparent for other communications-based software. The FOSSIL specification was born in 1986 to provide this uniform method. Software using the FOSSIL standard could communicate using the same interrupt functions no matter what hardware it was running on. This enabled developers to concentrate on the application rather than the interface to the hardware. FOSSIL drivers are specific to the hardware they operate on because each is written specifically for the serial interface hardware of that platform. FOSSIL drivers became better known with the spread of IBM PC compatible machines. These machines ran some form of DOS (Disk Operating System), and their BIOS provided very poor support for serial communications, so poor that it fell far short of the needs of any non-trivial communications task. Over time, MS-DOS and PC DOS became the prevalent operating systems and PC compatible hardware became predominant. Two popular DOS-based FOSSIL drivers were X00 and BNU. A popular Windows-based FOSSIL driver is NetFoss, which is freeware. SIO is a popular OS/2-based FOSSIL driver. FOSSIL drivers for hardware other than serial interfaces FOSSIL drivers have also been implemented to support other communications hardware by making it "look like a modem" to the application. Internal ISDN cards (that did not use serial ports at all) often came with FOSSIL drivers to make them work with software that was originally intended for modem operation only. References External links FOSSIL drivers' ancient history
24786891
https://en.wikipedia.org/wiki/Distributed%20Access%20Control%20System
Distributed Access Control System
Distributed Access Control System (DACS) is a light-weight single sign-on and attribute-based access control system for web servers and server-based software. DACS is primarily used with Apache web servers to provide enhanced access control for web pages, CGI programs and servlets, and other web-based assets, and to federate Apache servers. Released under an open-source license, DACS provides a modular authentication framework that supports an array of common authentication methods and a rule-based authorization engine that can grant or deny access to resources, named by URLs, based on the identity of the requestor and other contextual information. Administrators can configure DACS to identify users by employing authentication methods and user accounts already available within their organization. The resulting DACS identities are recognized at all DACS jurisdictions that have been federated. In addition to simple web-based APIs, command-line interfaces are also provided to much of the functionality. Most web-based APIs can return XML or JSON documents. Development of DACS began in 2001, with the first open source release made available in 2005. Authentication DACS can use any of the following authentication methods and account types: X.509 client certificates via SSL self-issued or managed Information Cards (InfoCards) (deprecated) two-factor authentication Counter-based, time-based, or grid-based one-time passwords, including security tokens Unix-like systems' password-based accounts Apache authentication modules and their password files Windows NT LAN Manager (NTLM) accounts LDAP or Microsoft Active Directory (ADS) accounts RADIUS accounts Central Authentication Service (CAS) HTTP-requests (e.g., Google ClientLogin) PAM-based accounts private username/password databases with salted password hashing using SHA-1, SHA-2, or SHA-3 functions, PBKDF2, or scrypt imported identities computed identities The extensible architecture allows new methods to be introduced. The DACS distribution includes various cryptographic functionality, such as message digests, HMACs, symmetric and public key encryption, ciphers (ChaCha20, OpenSSL), digital signatures, password-based key derivation functions (HKDF, PBKDF2), and memory-hard key derivation functions (scrypt, Argon2), much of which is available from a simple scripting language. DACS can also act as an Identity Provider for InfoCards and function as a Relying Party, although this functionality is deprecated. Authorization DACS performs access control by evaluating access control rules that are specified by an administrator. Expressed as a set of XML documents, the rules are consulted at run-time to determine whether access to a given resource should be granted or denied. As access control rules can be arbitrary computations, it combines attribute-based access control, role-based access control, policy-based access control, delegated access control, and other approaches. The architecture provides many possibilities to administrators. See also Access control Computer security References Notes R. Morrison, "Web 2.0 Access Control", 2007. J. Falkcrona, "Role-based access control and single sign-on for Web services", 2008. B. Brachman, "Rule-based access control: Improve security and make programming easier with an authorization framework", 2006. A. Peeke-Vout, B. Low, "Spatial Data Infrastructure (SDI)-In-A-Box, a Footprint to Deliver Geospatial Data through Open Source Applications", 2007. 
External links Cross-platform free software Free security software Free software programmed in C Unix security software Unix user management and support-related utilities Computer access control
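As a rough illustration of the rule-based authorization model described under Authorization above, the following Python sketch evaluates a small set of access control rules against a requester's attributes. It is a generic, hypothetical example: DACS expresses its rules as XML documents with its own grammar, which this sketch does not attempt to reproduce, and the URL prefixes, roles and attribute names are invented for the example.

# Generic rule-based access control sketch (not DACS's actual XML rule format).
# Each rule pairs a URL prefix with a predicate over the requester's context;
# the first rule whose prefix matches decides, and the default is to deny.

RULES = [
    ("/public/",  lambda ctx: True),                                   # open to anyone
    ("/reports/", lambda ctx: "auditor" in ctx.get("roles", [])),      # role-based check
    ("/admin/",   lambda ctx: ctx.get("authenticated", False) and
                              ctx.get("mfa", False)),                  # attribute-based check
]

def is_granted(url, ctx):
    """Return True if the first matching rule grants access to the URL."""
    for prefix, predicate in RULES:
        if url.startswith(prefix):
            return bool(predicate(ctx))
    return False   # deny by default

alice = {"authenticated": True, "roles": ["auditor"], "mfa": False}
print(is_granted("/reports/q3.html", alice))   # True
print(is_granted("/admin/users", alice))       # False (no second factor)

Because the predicates can be arbitrary computations over identity and context, the same basic mechanism can express role-based, attribute-based and policy-based decisions, which is the point made above about DACS combining those approaches.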
16704879
https://en.wikipedia.org/wiki/Microsoft%20Host%20Integration%20Server
Microsoft Host Integration Server
Microsoft Host Integration Server (a.k.a. HIS) is a gateway application providing connectivity between Microsoft Windows networks and IBM mainframe and IBM i systems. Support is provided for SNA, 3270 (standard and TN3270), 5250 (standard and TN5250), CICS, APPC, and other IBM protocols. Support is also provided for advanced integration with Windows networks and software, such as linking Microsoft Message Queuing applications to IBM WebSphere MQ, binding Microsoft DTC transactions with CICS, and cross-protocol access to DB2 databases on IBM platforms. HIS is the successor to Microsoft SNA Server. SNA Server was released in 1994 and was one of the first add-on products available for the fledgling Windows NT. SNA Server was also included in Microsoft BackOffice Server. Similar gateway products were NetWare for SAA (defunct, ran on Novell NetWare) and IBM Communications Manager/2 (defunct, ran on OS/2). HIS has an active ecosystem of third-party hardware (e.g. network adapters supporting ESCON and Twinax connectivity) and software. History SNA Server 1.0 The initial version of SNA Server was released in 1994. SNA Server 2.x SNA Server 2.1 was introduced in September 1994 and included in BackOffice 1.0. SNA Server 2.11 was released in July 1995 and added new features such as Windows NT 3.51 support. Version 2.11 was included in BackOffice 1.5/2.0. Version 2.11 SP1, released on January 31, 1996, included new features such as the Distributed Gateway Service, support for TN3270E clients, and an FTP-AFTP gateway. SNA Server 3.0 SNA Server 3.0 was released on December 17, 1996. SNA Server 3.0 nearly doubled capacity, to up to 5,000 users and up to 15,000 host sessions. Other major new features included an SNA print service, single sign-on to AS/400s and mainframes, a TN5250 service, and support for TN3287 clients in the TN3270E service. Version 3.0 was included in BackOffice 2.5. Service packs were released up to SP4, which was released on November 1, 1996. SNA Server 4.0 SNA Server 4.0 was generally available in January 1998 and included in BackOffice Server 4.0/4.5. SNA Server 4.0 enhanced its features to support more clients and protocols, including Windows NT, Windows 95, Windows for Workgroups, Windows 3.x, DOS, and OS/2 clients, over the TCP/IP, IPX/SPX, Banyan VINES, AppleTalk, and Microsoft Named Pipes protocols. Third-party solutions provided Macintosh and UNIX support. SNA Server 4.0 also included a new COM-based integration technology called COM Transaction Integrator (COMTI, code-named Cedar), which enabled easier integration through a GUI and Web pages. Snap-ins for Microsoft Management Console (MMC) were introduced to easily manage SNA Server, COMTI, and the OLE DB Provider in a single place. Service packs were released up to SP4, which was released on March 5, 2001. Host Integration Server 2000 SNA Server, code-named "Babylon", was rebranded as "Host Integration Server" beginning with this version, which was released on September 26, 2000. This version worked with Windows 2000, SQL Server 2000, BizTalk Server 2000, and Commerce Server 2000 to utilize the new generation of technologies such as COM+, XML and SOAP. The key new features included bidirectional application and data integration via enhanced COMTI and OLE DB support for 3270 I/O-based Customer Information Control System (CICS) applications, which was the result of a strategic partnership between Microsoft and Software AG. The new adapter was called "Software AG CICS 3270 Adapter for Host Integration Server 2000".
Host Integration Server 2000 Web Clients were released to enable users to connect to 3270 and 5250 sessions through HIS 2000. Host Integration Server 2000 was included in BackOffice Server 2000. Service packs were released up to SP2, which was released on March 30, 2005. See also Data-link switching References External links Host Integration Blog - The latest announcements about Host Integration Server (from September 2018 onward) can be found here. BizTalk Server team blog - Information such as release history about Host Integration Server between 2006 and 2018 can be found here. Product Lifecycle Information for Host Integration Server How SDLC devices are connected using DLSw Introduction to SNA, Link defunct as of 2018-01-08. Windows software Network protocols Enterprise application integration 1994 software
50957870
https://en.wikipedia.org/wiki/SD-WAN
SD-WAN
A software-defined wide area network (SD-WAN) uses software-defined networking technology, such as communicating over the Internet using encrypted tunnels between an organization's locations. If standard tunnel setup and configuration messages are supported by all of the network hardware vendors, SD-WAN simplifies the management and operation of a WAN by decoupling the networking hardware from its control mechanism. This concept is similar to how software-defined networking implements virtualization technology to improve data center management and operation. In practice, proprietary technologies such as Cisco IOS are used to set up and manage an SD-WAN, meaning there is no true decoupling of the hardware from its control mechanism. A key application of SD-WAN is to allow companies to build higher-performance WANs using lower-cost and commercially available Internet access, enabling businesses to partially or wholly replace more expensive private WAN connection technologies such as MPLS. However, since the SD-WAN traffic is carried over the Internet, there are no end-to-end performance guarantees. Carrier MPLS VPN WAN services are not carried as Internet traffic, but rather over carefully controlled carrier capacity, and do come with an end-to-end performance guarantee. Overview WANs allow companies to extend their computer networks over large distances, connecting remote branch offices to data centers and to each other, and delivering applications and services required to perform business functions. Due to the physical constraints imposed by the propagation time over large distances, and the need to integrate multiple service providers to cover global geographies (often crossing nation boundaries), WANs face important operational challenges, including network congestion, packet delay variation, packet loss, and even service outages. Modern applications such as VoIP calling, videoconferencing, streaming media, and virtualized applications and desktops require low latency. Bandwidth requirements are also increasing, especially for applications featuring high-definition video. It can be expensive and difficult to expand WAN capability, with corresponding difficulties related to network management and troubleshooting. SD-WAN products are designed to address these network problems. By enhancing or even replacing traditional branch routers with virtualization appliances that can control application-level policies and offer a network overlay, less expensive consumer-grade Internet links can act more like a dedicated circuit. This simplifies the setup process for branch personnel. MEF Forum has defined an SD-WAN architecture consisting of an SD-WAN Edge, SD-WAN Controller and SD-WAN Orchestrator. The SD-WAN Edge is a physical or virtual network function that is placed at an organization's branch/regional/central office site, data center, and in public or private cloud platforms. MEF Forum has published the first SD-WAN service standard, MEF 70, which defines the fundamental characteristics of an SD-WAN service plus service requirements and attributes. The SD-WAN Orchestrator, which typically also includes the SD-WAN Controller functionality, is used to set the centralized policies that are used to make forwarding decisions for application flows. Application flows are IP packets that have been classified as belonging to a particular user application or group of applications. 
The grouping of application flows based on a common type, e.g., conferencing applications, is referred to as an Application Flow Group in MEF 70. Per MEF 70, the SD-WAN Edge classifies incoming IP packets at the SD-WAN UNI (SD-WAN User Network Interface), determines, via OSI Layer 2 through Layer 7 classification, which application flow the IP packets belong to, and then applies policies either to block the application flow or to forward it, based on the availability of a route to the destination SD-WAN UNI on a remote SD-WAN Edge. This helps ensure that application performance meets service level agreements (SLAs). History WANs were very important for the development of networking technologies in general and were for a long time the most important application of networks both for military and enterprise applications. The ability to communicate data over large distances was one of the main driving factors for the development of data communications technologies, as it made it possible to overcome distance limitations, as well as shortening the time necessary to exchange messages with other parties. Legacy WAN technologies allowed communication over circuits connecting two or more endpoints. Earlier technologies supported point-to-point communication over a slow-speed circuit, usually between two fixed locations. As technology evolved, WAN circuits became faster and more flexible. Innovations like circuit and packet switching (in the form of X.25, ATM and later Internet Protocol or Multiprotocol Label Switching communications) allowed communication to become more dynamic, supporting ever-growing networks. The need for strict control, security and quality of service meant that multinational corporations were very conservative in leasing and operating their WANs. National regulations restricted the companies that could provide local service in each country, and complex arrangements were necessary to establish truly global networks. All that changed with the growth of the Internet, which allowed entities around the world to connect to each other. However, in its first years, the uncontrolled nature of the Internet was not considered adequate or safe for private corporate use. Independent of those security concerns, connectivity to the Internet became a necessity to the point where every branch required Internet access. At first, due to security concerns, private communications were still done via WAN, and communication with other entities (including customers and partners) moved to the Internet. As the Internet grew in reach and maturity, companies started to evaluate how to leverage it for private corporate communications. During the early 2000s, application delivery over the WAN became an important topic of research and commercial innovation. Over the next decade, increasing computing power made it possible to create software-based appliances that were able to analyze traffic and make informed decisions in real time, making it possible to create large-scale overlay networks over the public Internet that could replicate all the functionality of legacy WANs, at a fraction of the cost. SD-WAN combines several technologies to create full-fledged private networks, with the ability to dynamically share network bandwidth across the connection points. 
Additional enhancements include central controllers, zero-touch provisioning, integrated analytics and on-demand circuit provisioning, with some network intelligence based in the cloud, allowing centralized policy management and security. Networking publications started using the term SD-WAN to describe this new networking trend as early as 2014. Required characteristics Research firm Gartner has defined an SD-WAN as having four required characteristics:
The ability to support multiple connection types, such as MPLS, last-mile fiber-optic networks, or high-speed cellular networks (e.g. 4G LTE and 5G wireless technologies)
The ability to do dynamic path selection, for load sharing and resiliency purposes
A simple interface that is easy to configure and manage
The ability to support VPNs, and third-party services such as WAN optimization controllers, firewalls and web gateways
Form factors SD-WAN products can be delivered as physical appliances or as software only. Features Features of SD-WANs include resilience, quality of service (QoS), security, and performance, with flexible deployment options; simplified administration and troubleshooting; and online traffic engineering. Resilience A resilient SD-WAN reduces network downtime. To be resilient, the technology must feature real-time detection of outages and automatic switchover (failover) to working links. Quality of service SD-WAN technology supports quality of service by having application-level awareness, giving bandwidth priority to the most critical applications. This may include dynamic path selection, sending an application over a faster link, or even splitting an application between two paths to improve performance by delivering it faster. Security SD-WAN communication is usually secured using IPsec, a staple of WAN security. Application optimization SD-WANs can improve application delivery using caching, storing recently accessed information in memory to speed future access. Deployment options Most SD-WAN products are available as pre-configured appliances, placed at the network edge in data centers, branch offices and other remote locations. There are also virtual appliances that can run on existing network hardware or be deployed in the cloud, in environments such as Amazon Web Services (AWS), or delivered as part of Unified Communications as a Service (UCaaS) or Software as a Service (SaaS) offerings. This allows enterprises to benefit from SD-WAN services as they migrate application delivery from corporate servers to cloud-based services such as Salesforce.com and Google Apps. Administration and troubleshooting As with network equipment in general, GUIs may be preferred to command-line interface (CLI) methods of configuration and control. Other beneficial administrative features include automatic path selection, the ability to centrally configure each end appliance by pushing configuration changes out, and even a true software-defined networking approach that lets all appliances and virtual appliances be configured centrally based on application needs rather than underlying hardware. Online traffic engineering With a global view of network status, a controller that manages SD-WAN can perform careful and adaptive traffic engineering by assigning new transfer requests according to current usage of resources (links). For example, this can be achieved by performing central calculation of transmission rates at the controller and rate-limiting at the senders (endpoints) according to those rates. 
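The central rate calculation just mentioned is easiest to see with a concrete, if simplified, allocation policy. The following Python sketch is not taken from any SD-WAN product: it assumes a single shared link, a max-min fairness policy, and hypothetical branch names, and shows how a controller could turn per-sender demands into the rate limits it pushes back to the endpoints.

```python
# Minimal sketch (not from any SD-WAN product): a controller computes
# per-sender transmission rates for one shared link using max-min fairness,
# and each sender would then rate-limit itself to the returned value.
# All names and the choice of max-min fairness are illustrative assumptions.

def max_min_fair_rates(demands_mbps: dict[str, float], capacity_mbps: float) -> dict[str, float]:
    """Allocate link capacity so no sender can gain rate without reducing
    the rate of a sender that already has less (max-min fairness)."""
    remaining = capacity_mbps
    unsatisfied = dict(demands_mbps)
    rates: dict[str, float] = {}
    while unsatisfied:
        fair_share = remaining / len(unsatisfied)
        # Senders asking for less than the fair share get exactly their demand.
        satisfied = {s: d for s, d in unsatisfied.items() if d <= fair_share}
        if not satisfied:
            # Everyone wants more than the fair share: split the remainder evenly.
            rates.update({s: fair_share for s in unsatisfied})
            break
        for sender, demand in satisfied.items():
            rates[sender] = demand
            remaining -= demand
            del unsatisfied[sender]
    return rates

if __name__ == "__main__":
    demands = {"branch-a": 40.0, "branch-b": 10.0, "branch-c": 80.0}
    print(max_min_fair_rates(demands, capacity_mbps=100.0))
    # {'branch-b': 10.0, 'branch-a': 40.0, 'branch-c': 50.0}
```

In this sketch the small demand is fully satisfied, and the leftover capacity is shared among the senders that asked for more, which is one common interpretation of "assigning new transfer requests according to current usage of resources".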
Complementary technology SD-WAN versus WAN optimization There are some similarities between SD-WAN and WAN optimization, the name given to the collection of techniques used to increase data-transfer efficiencies across WANs. The goal of each is to accelerate application delivery between branch offices and data centers, but SD-WAN technology focuses additionally on cost savings and efficiency, specifically by allowing lower-cost network links to perform the work of more expensive leased lines, whereas WAN optimization focuses squarely on improving packet delivery. An SD-WAN that uses virtualization techniques together with WAN optimization traffic control allows network bandwidth to dynamically grow or shrink as needed. SD-WAN technology and WAN optimization can be used separately or together, and some SD-WAN vendors are adding WAN optimization features to their products. WAN edge routers A WAN edge router is a device that routes data packets between different WAN locations, giving an enterprise access to a carrier network. Also called a boundary router, it is unlike a core router, which only sends packets within a single network. SD-WANs can work as an overlay to simplify the management of existing WAN edge routers, by lowering dependence on routing protocols. SD-WAN can also potentially be an alternative to WAN edge routers. SD-WAN versus hybrid WAN SD-WANs are similar to hybrid WANs, and sometimes the terms are used interchangeably, but they are not identical. A hybrid WAN consists of different connection types, and may have a software-defined network (SDN) component, but does not have to. SD-WAN versus MPLS Cloud-based SD-WAN offers advanced features, such as enhanced security, seamless cloud integration, and support for mobile users, that result naturally from the use of cloud infrastructure. As a result, cloud-based SD-WAN can replace MPLS, enabling organizations to release resources once tied to WAN investments and create new capabilities. Typical reasons to compare MPLS with SD-WAN include situations where IT teams need to retain MPLS due to contract commitments and situations where the enterprise migrates from MPLS to an Internet-based SD-WAN. SD-CORE SD-WAN appliances alone do not solve the middle-mile performance issues of the Internet core. SD-CORE architectures are more consistent than the Internet, routing traffic optimally through the core. SD-CORE is available as independent MPLS backbones or software-defined backbones. Testing and validation As there is no standard algorithm for SD-WAN controllers, device manufacturers each use their own proprietary algorithm in the transmission of data. These algorithms determine which traffic to direct over which link and when to switch traffic from one link to another. Given the breadth of options available in relation to both software and hardware SD-WAN control solutions, it is imperative that they be tested and validated under real-world conditions within a lab setting prior to deployment. There are multiple solutions available for testing purposes, ranging from purpose-built network emulation appliances, which can apply specified network impairments to the network being tested in order to reliably validate performance, to software-based solutions. 
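Although each vendor's path-selection algorithm is proprietary, the kind of decision being tested can be illustrated with a simple sketch. The Python below is a hypothetical example rather than any vendor's algorithm: it assumes each candidate link is periodically probed for latency, jitter and loss, and that an application class carries thresholds a link must meet before it is preferred.

```python
# Hypothetical illustration only -- not any vendor's algorithm. Assumes each
# candidate link is periodically measured for latency, jitter and loss, and an
# application class (e.g. voice) defines the thresholds a link must satisfy.

from dataclasses import dataclass

@dataclass
class LinkMetrics:
    name: str
    latency_ms: float
    jitter_ms: float
    loss_pct: float

@dataclass
class AppPolicy:
    max_latency_ms: float
    max_jitter_ms: float
    max_loss_pct: float

def select_link(links: list[LinkMetrics], policy: AppPolicy) -> LinkMetrics:
    """Prefer links that meet the policy; among those, pick the lowest-latency
    one. If none qualify, fall back to the least lossy link."""
    eligible = [l for l in links
                if l.latency_ms <= policy.max_latency_ms
                and l.jitter_ms <= policy.max_jitter_ms
                and l.loss_pct <= policy.max_loss_pct]
    if eligible:
        return min(eligible, key=lambda l: l.latency_ms)
    return min(links, key=lambda l: l.loss_pct)

if __name__ == "__main__":
    links = [LinkMetrics("mpls", 35.0, 2.0, 0.1),
             LinkMetrics("broadband", 18.0, 9.0, 0.6),
             LinkMetrics("lte", 60.0, 15.0, 1.2)]
    voice = AppPolicy(max_latency_ms=150.0, max_jitter_ms=5.0, max_loss_pct=0.5)
    print(select_link(links, voice).name)   # "mpls": broadband fails the jitter/loss limits
```

A lab test of the kind described above would repeatedly feed impaired link metrics into logic like this and verify that traffic actually moves to the expected link.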
Marketplace IT website Network World divides the SD-WAN vendor market into three groups: established networking vendors who are adding SD-WAN products to their offerings, WAN specialists who are starting to integrate SD-WAN functionality into their products, and startups focused specifically on the SD-WAN market. Alternatively, a market overview by Nemertes Research groups SD-WAN vendors into categories based on their original technology space: "Pure-play SD-WAN providers", "WAN optimization vendors", "Link-aggregation vendors", and "General network vendors". While Network World's category of startups focused specifically on the SD-WAN market is generally equivalent to Nemertes' "Pure-play SD-WAN providers" category, Nemertes offers a more detailed view of the preexisting WAN and overall networking providers. Additionally, Nemertes Research describes the in-net side of the SD-WAN market, covering the go-to-market strategy of connectivity providers entering the SD-WAN market. These providers include "Network-as-a-service vendors", "Carriers or telcos", "Content delivery networks" and "Secure WAN providers". Open source MEF 70 standardizes SD-WAN service attributes and uses standard IPv4 and IPv6 routing protocols. SD-WAN services also use standard IPsec encryption protocols. Additional standardization for other SD-WAN functions and related security functionality not covered in MEF 70 is under development at the MEF Forum. There are several open-source SD-WAN implementations available. For example, the Linux Foundation has three projects that intersect with and help the SD-WAN market: ONAP, the OpenDaylight Project, and Tungsten Fabric (formerly Juniper Networks' OpenContrail). References Computing terminology Configuration management Data transmission Emerging technologies Network architecture Telecommunications Wide area networks
685849
https://en.wikipedia.org/wiki/Greylisting%20%28email%29
Greylisting (email)
Greylisting is a method of defending e-mail users against spam. A mail transfer agent (MTA) using greylisting will "temporarily reject" any email from a sender it does not recognize. If the mail is legitimate, the originating server will try again after a delay, and if sufficient time has elapsed, the email will be accepted. How it works A server employing greylisting temporarily rejects email from unknown or suspicious sources by sending 4xx reply codes ("please call back later"), as defined in the Simple Mail Transfer Protocol (SMTP). Fully capable SMTP implementations are expected to maintain queues for retrying message transmissions in such cases, and so while legitimate mail may be delayed, it should still get through. The temporary rejection can be issued at different stages of the SMTP dialogue, allowing an implementation to store more or less data about the incoming message. The trade-off is more work and bandwidth for more exact matching of retries with original messages. Rejecting a message after its content has been received allows the server to store a choice of headers and/or a hash of the message body. In addition to whitelisting good senders, a greylister can provide for exceptions. Greylisting can generally be overridden by a fully validated TLS connection with a matching certificate. Because large senders often have a pool of machines that can send (and resend) email, IP addresses that share the same most-significant 24 bits (the same /24 network) are treated as equivalent, or in some cases SPF records are used to determine the sending pool. Similarly, some e-mail systems use unique per-message return-paths, for example variable envelope return path (VERP) for mailing lists, Sender Rewriting Scheme for forwarded e-mail, Bounce Address Tag Validation for backscatter protection, etc. If an exact match on the sender address is required, every e-mail from such systems will be delayed. Some greylisting systems try to avoid this delay by eliminating the variable parts of the VERP address, using only the sender domain and the beginning of the local-part of the sender address. Why it works Greylisting is effective against mass email tools used by spammers that do not queue and reattempt mail delivery as is normal for a regular mail transport agent. Delaying delivery also gives real-time blackhole lists and similar lists time to identify and flag the spam source. Thus, these subsequent attempts are more likely to be detected as spam by other mechanisms than they were before the greylisting delay. Advantages The main advantage from the user's point of view is that greylisting requires no additional user configuration. If the server utilizing greylisting is configured appropriately, the end user will only notice a delay on the first message from a given sender, so long as the sending email server is identified as belonging to the same whitelisted group as earlier messages. If mail from the same sender is repeatedly greylisted, it may be worth contacting the mail system administrator with detailed headers of delayed mail. From a mail administrator's point of view the benefit is twofold. Greylisting takes minimal configuration to get up and running and requires only occasional modification of any local whitelists. The second benefit is that rejecting email with a temporary 451 error (the actual error code is implementation dependent) is very cheap in system resources. Most spam filtering tools are very intensive users of CPU and memory. 
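The bookkeeping behind the temporary rejections described above is modest: an MTA only needs to remember when it first saw each (client IP, sender, recipient) triplet. The following Python sketch is a minimal illustration of that logic, not the code of any real MTA or milter; the /24 and local-part normalisation mirror the exceptions discussed above, and the time constants are arbitrary assumptions.

```python
# Minimal illustration of the greylisting decision described above -- not the
# code of any real MTA or milter. The /24 and local-part normalisation mirror
# the exceptions discussed in the text; the time constants are arbitrary.

import time

GREYLIST_DELAY = 5 * 60          # seconds a triplet must wait before acceptance
RECORD_LIFETIME = 36 * 60 * 60   # forget triplets never retried within 36 h

seen: dict[tuple[str, str, str], float] = {}   # triplet -> first-seen timestamp

def normalise(client_ip: str, sender: str, recipient: str) -> tuple[str, str, str]:
    # Treat the whole /24 as one source, and keep only the first few characters
    # of the sender local-part so VERP-style variable addresses still match.
    network = ".".join(client_ip.split(".")[:3]) + ".0/24"
    local, _, domain = sender.partition("@")
    return (network, f"{local[:8]}@{domain}", recipient.lower())

def check(client_ip: str, sender: str, recipient: str, now: float | None = None) -> str:
    """Return an SMTP-style verdict: temporary rejection or acceptance."""
    now = time.time() if now is None else now
    key = normalise(client_ip, sender, recipient)
    first_seen = seen.get(key)
    if first_seen is None or now - first_seen > RECORD_LIFETIME:
        seen[key] = now
        return "451 4.7.1 Greylisted, please try again later"
    if now - first_seen < GREYLIST_DELAY:
        return "451 4.7.1 Greylisted, please try again later"
    return "250 OK"   # a real implementation would now whitelist this sender
```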
By stopping spam before it hits filtering processes, far fewer system resources are used. Disadvantages Delayed delivery issues The biggest disadvantage of greylisting is that for unrecognized servers, it destroys the near-instantaneous nature of email that users have come to expect. Mail from unrecognized servers is typically delayed by about 15 minutes, and could be delayed up to a few days for poorly configured sending systems. Explaining this to users who have become accustomed to immediate email delivery will probably not convince them that a mail server that uses greylisting is behaving correctly. This can be a particular problem with websites that require an account to be created and the email address confirmed before they can be used – or when a user of a greylisting mailserver attempts to reset their credentials on a website that uses email confirmation of password resets. If the sending MTA of the site is poorly configured, greylisting may delay the initial email link. In extreme cases, the delivery delay imposed by the greylister can exceed the expiry time of the password reset token delivered in email. In these cases, manual intervention may be required to whitelist the website's mailserver so the email containing the reset token can be used before it expires. When a mail server is greylisted, the time between the initial rejection and the retransmission is variable; the greylisting server has no control over or visibility of the delay. SMTP says the retry interval should be at least 30 minutes, while the give-up time needs to be at least 4–5 days; but actual values vary widely between different mail server software. Modern greylisting applications (such as Postgrey for Unix-like operating systems) automatically whitelist senders that prove themselves capable of recovering from temporary errors, regardless of the reputed spamminess of the sender. Implementations also generally include the ability to manually whitelist some mailservers. One 2007 analysis of greylisting considers it totally undesirable due to the delay to mail, and unreliable because, if greylisting becomes widespread, junk mailers can adapt their systems to get around it. The analysis concludes that the purpose of greylisting is to reduce the amount of spam that the server's spam-filtering software needs to analyze, resource-intensively, and to save money on servers, not to reduce the spam reaching users: "[Greylisting] is very, very annoying. Much more annoying than spam." Other problems The current SMTP specification (RFC 5321) clearly states that "the SMTP client retains responsibility for delivery of that message" (section 4.2.5) and "mail that cannot be transmitted immediately MUST be queued and periodically retried by the sender." (section 4.5.4.1). Most MTAs will therefore queue and retry messages, but a small number do not. These are typically handled by whitelisting or exception lists. Also, legitimate mail might not get delivered if the retry comes from a different IP address than the original attempt. When the source of an email is a server farm or goes out through some other kind of relay service, it is likely that a server other than the original one will make the next attempt. For network fault tolerance, their IPs can belong to completely unrelated address blocks, thereby defying the simple technique of identifying the most significant part of the address. 
Since the IP addresses will be different, the recipient's server will fail to recognize that a series of attempts are related, and refuse each of them in turn. This can continue until the message ages out of the queue if the number of servers is large enough. This problem can partially be bypassed by proactively identifying such server farms as exceptions. Likewise, exceptions have to be configured for multihomed hosts and hosts using DHCP. In the extreme case, a sender could (legitimately) use a different IPv6 address for each outbound SMTP connection. A sender server subjected to greylisting might also reattempt delivery to another receiving mailserver if the receiving domain has more than one MX record. This may cause problems if such hosts do not all implement the same greylisting policy and share the same database. See also Nolisting Bandwidth throttling Tarpit (networking) References External links Greylisting.org: Repository of greylist info A greylisting whitepaper by Evan Harris A greylisting implementation for netqmail Microsoft Exchange Greylisting Problems - Newsgroup Article RFC 6647 of the Internet Engineering Task Force, June 2012: Standardizes the current state of the art Spam filtering
33454300
https://en.wikipedia.org/wiki/Xiaomi
Xiaomi
Xiaomi Corporation, registered in Asia as Xiaomi Inc., is a Chinese designer and manufacturer of consumer electronics and related software, home appliances, and household items. Behind Samsung, it is the second-largest manufacturer of smartphones, most of which run the MIUI operating system. The company is ranked 338th on the Fortune Global 500 and is the youngest company on the list. Xiaomi was founded in 2010 in Beijing by now multi-billionaire Lei Jun when he was 40 years old, along with six senior associates. Lei had founded Kingsoft as well as Joyo.com, which he sold to Amazon for $75 million in 2004. In August 2011, Xiaomi released its first smartphone and, by 2014, it had the largest market share of smartphones sold in China. Initially the company only sold its products online; however, it later opened brick-and-mortar stores. By 2015, it was developing a wide range of consumer electronics. In 2020, the company sold 146.3 million smartphones, and its MIUI operating system had over 500 million monthly active users. In the second quarter of 2021, Xiaomi surpassed Apple Inc. to become the second-largest seller of smartphones worldwide, with a 17% market share, according to Canalys. It also is a major manufacturer of appliances including televisions, flashlights, unmanned aerial vehicles, and air purifiers using its Internet of Things and Xiaomi Smart Home product ecosystems. Xiaomi keeps its prices close to its manufacturing and bill of materials costs by keeping most of its products in the market for 18 months, longer than most smartphone companies. The company also uses inventory optimization and flash sales to keep its inventory low. Name The name "Xiaomi" literally means millet and rice, and is based on the Buddhist concept of starting from the bottom before aiming for the top. History 2010-2013 On 6 April 2010, Xiaomi was co-founded by Lei Jun and six others:
Lin Bin, vice president of the Google China Institute of Engineering
Zhou Guangping, senior director of the Motorola Beijing R&D center
Liu De, department chair of the Department of Industrial Design at the University of Science and Technology Beijing
Li Wanqiang, general manager of Kingsoft Dictionary
Huang Jiangji, principal development manager
Hong Feng, senior product manager for Google China
Lei had founded Kingsoft as well as Joyo.com, which he sold to Amazon for $75 million in 2004. At the time of the founding of the company, Lei was dissatisfied with the products of other mobile phone manufacturers and thought he could make a better product. On 16 August 2010, Xiaomi launched its first Android-based firmware, MIUI. In 2010, the company raised $41 million in a Series A round. In August 2011, the company launched its first phone, the Xiaomi Mi1. The device ran Xiaomi's MIUI firmware on top of Android. In December 2011, the company raised $90 million in a Series B round. In June 2012, the company raised $216 million of funding in a Series C round at a $4 billion valuation. Institutional investors participating in the first round of funding included Temasek Holdings, IDG Capital, Qiming Venture Partners and Qualcomm. In August 2013, the company hired Hugo Barra from Google, where he served as vice president of product management for the Android platform. He was employed as vice president of Xiaomi to expand the company outside of mainland China, making Xiaomi the first company selling smartphones to poach a senior staffer from Google's Android team. He left the company in February 2017. 
In September 2013, Xiaomi announced its Xiaomi Mi3 smartphone and an Android-based 47-inch 3D-capable Smart TV assembled by Sony TV manufacturer Wistron Corporation of Taiwan. In October 2013, it became the fifth-most-used smartphone brand in China. In 2013, Xiaomi sold 18.7 million smartphones. 2014-2017 In February 2014, Xiaomi announced its expansion outside China, with an international headquarters in Singapore. In April 2014, Xiaomi purchased the domain name mi.com for a record sum, the most expensive domain name ever bought in China, replacing xiaomi.com as the company's main domain name. In September 2014, Xiaomi acquired a 24.7% stake in Roborock. In December 2014, Xiaomi raised US$1.1 billion at a valuation of over US$45 billion, making it one of the most valuable private technology companies in the world. The financing round was led by Hong Kong-based technology fund All-Stars Investment Limited, a fund run by former Morgan Stanley analyst Richard Ji. In 2014, the company sold over 60 million smartphones. In 2014, 94% of the company's revenue came from mobile phone sales. In April 2015, Ratan Tata acquired a stake in Xiaomi. On 30 June 2015, Xiaomi announced its expansion into Brazil with the launch of the locally manufactured Redmi 2; it was the first time the company assembled a smartphone outside of China. However, the company left Brazil in the second half of 2016. On 26 February 2016, Xiaomi launched the Mi5, powered by the Qualcomm Snapdragon 820 processor. On 3 March 2016, Xiaomi launched the Redmi Note 3 Pro in India, the first smartphone to be powered by a Qualcomm Snapdragon 650 processor. On 10 May 2016, Xiaomi launched the Mi Max, powered by the Qualcomm Snapdragon 650/652 processor. In June 2016, the company acquired patents from Microsoft. In September 2016, Xiaomi launched sales in the European Union through a partnership with ABC Data. Also in September 2016, the Xiaomi Mi Robot vacuum was released by Roborock. On 26 October 2016, Xiaomi launched the Mi Mix, powered by the Qualcomm Snapdragon 821 processor. On 22 March 2017, Xiaomi announced that it planned to set up a second manufacturing unit in India in partnership with contract manufacturer Foxconn. On 19 April 2017, Xiaomi launched the Mi6, powered by the Qualcomm Snapdragon 835 processor. In July 2017, the company entered into a patent licensing agreement with Nokia. On 5 September 2017, Xiaomi released the Xiaomi Mi A1, its first Android One smartphone, under the slogan "Created by Xiaomi, Powered by Google". Xiaomi stated it had started working with Google on the Mi A1 Android One smartphone earlier in 2017. An alternate version of the phone, the Mi 5X, was also available with MIUI. In 2017, Xiaomi opened Mi Stores in India, Pakistan and Bangladesh. The EU's first Mi Store was opened in Athens, Greece in October 2017. In Q3 2017, Xiaomi overtook Samsung to become the largest smartphone brand in India. Xiaomi sold 9.2 million units during the quarter. On 7 November 2017, Xiaomi commenced sales in Spain and Western Europe. 2018-present In April 2018, Xiaomi announced a smartphone gaming brand called Black Shark. It featured 6 GB of RAM coupled with a Snapdragon 845 SoC, and was priced at $508, cheaper than its competitors. On 2 May 2018, Xiaomi announced the launch of Mi Music and Mi Video to offer "value-added internet services" in India. 
On 3 May 2018, Xiaomi announced a partnership with 3 to sell smartphones in the United Kingdom, Ireland, Austria, Denmark, and Sweden. In May 2018, Xiaomi began selling smart home products in the United States through Amazon. In June 2018, Xiaomi became a public company via an initial public offering on the Hong Kong Stock Exchange, raising $4.72 billion. On 7 August 2018, Xiaomi announced that Holitech Technology Co. Ltd., Xiaomi's top supplier, would invest up to $200 million over the next three years to set up a major new plant in India. In August 2018, the company announced POCO as a mid-range smartphone line, first launching in India. In Q4 of 2018, the Xiaomi Poco F1 became the best-selling smartphone sold online in India. The Pocophone was sometimes referred to as the "flagship killer" for offering high-end specifications at an affordable price. In October 2019, the company announced that it would launch more than 10 5G phones in 2020, including the Mi 10/10 Pro with 5G functionality. On 17 January 2020, Poco became a separate sub-brand of Xiaomi with entry-level and mid-range devices. In March 2020, Xiaomi showcased its new 40W wireless charging solution, which was able to fully charge a smartphone with a 4,000mAh battery from flat in 40 minutes. In October 2020, Xiaomi became the third-largest smartphone maker in the world by shipment volume, shipping 46.2 million handsets in Q3 2020. On 30 March 2021, Xiaomi announced that it would invest US$10 billion in electric vehicles over the following ten years. On 31 March 2021, Xiaomi announced a new logo for the company, designed by Kenya Hara. In July 2021, Xiaomi became the second-largest smartphone maker in the world, according to Canalys. It also surpassed Apple for the first time in Europe, making it the second largest in Europe according to Counterpoint. In August 2021, the company acquired autonomous driving company Deepmotion for $77 million. Innovation and development In WIPO's 2021 review of its annual World Intellectual Property Indicators, Xiaomi was ranked 2nd in the world, with 216 industrial design registrations published under the Hague System during 2020, up from its previous 3rd-place ranking in 2019 with 111 published industrial design registrations. On 8 February 2022, Lei released a statement on Weibo to announce plans for Xiaomi to enter the high-end smartphone market and surpass Apple as the top seller of premium smartphones in China within three years. To achieve that goal, Xiaomi will invest US$15.7 billion in R&D over the next five years, and the company will benchmark its products and user experience against Apple's product lines. Lei described the new strategy as a "life-or-death battle for our development" in his Weibo post, after Xiaomi's market share in China contracted over consecutive quarters, from 17% to 14% between Q2 and Q3 2021, dipping further to 13.2% as of Q4 2021. 
In 2012 Lei Jun said that the name is about revolution and being able to bring innovation into a new area. Xiaomi's new "Rifle" processor has given weight to several sources linking the latter meaning to the Communist Party of China's "millet and rifle" (小米加步枪) revolutionary idiom during the Second Sino-Japanese War. Logo and mascot Xiaomi's first logo consisted of a single orange square with the letters "MI" in white located in the center of the square. This logo was in use until 31 March 2021, when a new logo, designed by well-known Japanese designer Kenya Hara, replaced the old one. The new logo keeps the same basic structure as the previous one, but replaces the square with a rounded "squircle"; the letters "MI" remain identical, in a slightly darker hue. Xiaomi's mascot, Mitu, is a white rabbit wearing an Ushanka (known locally as a "Lei Feng hat" in China) with a red star and a red scarf around its neck. Controversy, criticism and regulatory actions Imitation of Apple Inc. Xiaomi has been accused of imitating Apple Inc. The hunger marketing strategy of Xiaomi was described as riding on the back of the "cult of Apple". After reading a book about Steve Jobs in college, Xiaomi's chairman and CEO, Lei Jun, carefully cultivated a Steve Jobs image, including jeans, dark shirts, and Jobs' announcement style at Xiaomi's earlier product announcements. He was characterized as a "counterfeit Jobs." In 2012, the company was said to be counterfeiting Apple's philosophy and mindset. In 2013, critics debated how innovative Xiaomi's products were, and how much of that innovation was just really good public relations. Others pointed out that while there are similarities to Apple, the ability to customize the software based upon user preferences through the use of Google's Android operating system sets Xiaomi apart. Xiaomi has also developed a much wider range of consumer products than Apple. Violation of GNU General Public License In January 2018, Xiaomi was criticized for its non-compliance with the terms of the GNU General Public License. The Android project's Linux kernel is licensed under the copyleft terms of the GPL, which requires Xiaomi to distribute the complete source code of the Android kernel and device trees for every Android device it distributes. By refusing to do so, or by unreasonably delaying these releases, Xiaomi is operating in violation of intellectual property law in China, as a WIPO state. Prominent Android developer Francisco Franco publicly criticized Xiaomi's behaviour after repeated delays in the release of kernel source code. Xiaomi in 2013 said that it would release the kernel code. The kernel source code is available on the GitHub website. Privacy concerns and data collection As a company based in China, Xiaomi is obligated to share data with the Chinese government under the China Internet Security Law and National Intelligence Law. There were reports that Xiaomi's cloud messaging service sent some private data, including call logs and contact information, to Xiaomi servers. Xiaomi later released an MIUI update that made cloud messaging optional; with the service turned off, no private data was sent to Xiaomi servers. On 23 October 2014, Xiaomi announced that it was setting up servers outside of China for international users, citing improved services and compliance with regulations in several countries. 
On 19 October 2014, the Indian Air Force issued a warning against Xiaomi phones, stating that they were a national threat as they sent user data to an agency of the Chinese government. In April 2019, researchers at Check Point found a security flaw in Xiaomi phone apps; the flaw was reported to be in a preinstalled app. On 30 April 2020, Forbes reported that Xiaomi extensively tracks use of its browsers, including private browser activity, phone metadata and device navigation, and, more alarmingly, without secure encryption or anonymization, more invasively and to a greater extent than mainstream browsers. Xiaomi disputed the claims, while confirming that it did extensively collect browsing data, and saying that the data was not linked to any individuals and that users had consented to being tracked. Xiaomi posted a response stating that the collection of aggregated usage statistics was used for internal analysis and that it did not link any personally identifiable information to this data. However, after a follow-up by Gabriel Cirlig, the writer of the report, Xiaomi added an option to completely stop the information leak when using its browser in incognito mode. State administration of radio, film and television issue In November 2012, Xiaomi's smart set-top box stopped working one week after the launch due to the company having run afoul of China's State Administration of Radio, Film, and Television. The regulatory issues were overcome in January 2013. Misleading sales figures The Taiwanese Fair Trade Commission investigated the flash sales and found that Xiaomi had sold fewer smartphones than advertised. Xiaomi claimed that the number of smartphones sold was 10,000 units each for the first two flash sales, and 8,000 units for the third one. However, the FTC investigated the claims and found that Xiaomi sold 9,339 devices in the first flash sale, 9,492 units in the second one, and 7,389 for the third. It was found that during the first flash sale, Xiaomi had given 1,750 priority "F-codes" to people who could place their orders without having to go through the flash sale, thus diminishing the stock that was publicly available. The FTC fined Xiaomi. Shut down of Australia store In March 2014, Xiaomi Store Australia (an unrelated business) began selling Xiaomi mobile phones online in Australia through its website, XiaomiStore.com.au. However, Xiaomi soon "requested" that the store be shut down by 25 July 2014. On 7 August 2014, shortly after sales were halted, the website was taken down. An industry commentator described the action by Xiaomi to get the Australian website closed down as unprecedented, saying, "I've never come across this [before]. It would have to be a strategic move." At the time this left only one online vendor selling Xiaomi mobile phones into Australia, namely Yatango (formerly MobiCity), which was based in Hong Kong. This business closed in late 2015. Temporary ban in India due to patent infringement On 9 December 2014, the High Court of Delhi granted an ex parte injunction that banned the import and sale of Xiaomi products in India. The injunction was issued in response to a complaint filed by Ericsson in connection with the infringement of its patents licensed under fair, reasonable, and non-discriminatory (FRAND) terms. The injunction was applicable until 5 February 2015, the date on which the High Court was scheduled to summon both parties for a formal hearing of the case. 
On 16 December, the High Court granted permission to Xiaomi to sell its devices running on a Qualcomm-based processor until 8 January 2015. Xiaomi then held various sales on Flipkart, including one on 30 December 2014. Its flagship Xiaomi Redmi Note 4G phone sold out in six seconds. A judge extended the division bench's interim order, allowing Xiaomi to continue the sale of Qualcomm chipset-based handsets until March 2018. U.S. sanctions due to ties with People's Liberation Army In January 2021, the United States government named Xiaomi as a company "owned or controlled" by the People's Liberation Army and thereby prohibited any American company or individual from investing in it. However, the investment ban was blocked by a US court ruling after Xiaomi filed a lawsuit in the United States District Court for the District of Columbia, with the court expressing skepticism regarding the government's national security concerns. Xiaomi denied the allegations of military ties and stated that its products and services were for civilian and commercial use. In May 2021, Xiaomi reached an agreement with the Defense Department to remove the designation of the company as military-linked. Lawsuit by KPN alleging patent infringement On 19 January 2021, KPN, a Dutch landline and mobile telecommunications company, sued Xiaomi and others for patent infringement. KPN filed similar lawsuits against Samsung in 2014 and 2015 in a court in the US. Lawsuit by Wyze alleging invalid patent In July 2021, Xiaomi submitted a report to Amazon alleging that Wyze Labs had infringed upon its 2019 "Autonomous Cleaning Device and Wind Path Structure of Same" robot vacuum patent. On 15 July 2021, Wyze filed a lawsuit against Xiaomi in the U.S. District Court for the Western District of Washington, arguing that prior art exists and asking the court for a declaratory judgment that Xiaomi's 2019 robot vacuum patent is invalid. Censorship In September 2021, the Lithuanian Ministry of National Defence urged people to dispose of their Chinese-made mobile phones and avoid buying new ones. This was after the National Cyber Security Centre of Lithuania found that Xiaomi devices have built-in censorship capabilities that can be turned on remotely. Xiaomi phones sold in Europe had a built-in ability to detect and censor terms such as "Free Tibet", "Long live Taiwan independence" or "democracy movement". This capability was discovered in Xiaomi's flagship phone, the Mi 10T 5G. As of September 2021, the list of terms that could be censored by the phones' system apps, including the default internet browser, included 449 terms in Chinese and was continuously updated. References External links 2018 initial public offerings Chinese brands Chinese companies established in 2010 Companies listed on the Hong Kong Stock Exchange Computer hardware companies Electronics companies established in 2010 Electronics companies of China Home automation companies Manufacturing companies established in 2010 Mobile phone companies of China Mobile phone manufacturers Multinational companies headquartered in China Networking hardware companies Telecommunication equipment companies of China
951087
https://en.wikipedia.org/wiki/Spheres%20of%20Chaos
Spheres of Chaos
Spheres of Chaos is a multidirectional shooter video game, created by Iain McLeod, with basic gameplay similar to the 1979 arcade game Asteroids. The game has bright colours and patterns, with many enemies on screen at once. The audio is similar to that of Robotron: 2084 and Defender. It was originally written for RISC OS on the Acorn Archimedes and released in 1993. In the 2000s it was ported to Linux, Microsoft Windows, and PS2 Linux. In October 2007, Spheres of Chaos was declared freeware. Gameplay The player controls a small grey spaceship. At the start of each level, enemies called common aliens appear that must be eradicated to complete the level. With each subsequent level, the enemies get more plentiful and powerful. When hit, enemies typically split into smaller versions which must also be eradicated. Black holes also appear that either attract or repel the spaceship. Many black holes at once may make the game field unnavigable. When a level isn't completed within a certain period of time, enemies called bugs appear that shoot at the spaceship. Many types of enemies appear in later levels, from spinshooters to multiplying daisies and bacteria. Often, when all common aliens are defeated, a boss appears that takes many hits to destroy. When defeated, some enemies provide power-ups that constantly change colour. A power-up's power is determined by its colour when the spaceship captures it. Up to eight players can take turns playing and have control over a wide variety of options, including quantities of specific enemies, frequency of power-ups, and visual effects. References External links Review of Spheres of Chaos on Rock, Paper, Shotgun 2000 video games Multidirectional shooters Linux games Acorn Archimedes games PlayStation 2 games Windows games Video games developed in the United Kingdom
2514855
https://en.wikipedia.org/wiki/Information%20technology%20consulting
Information technology consulting
In management, information technology consulting (also called IT consulting, computer consultancy, business and technology services, computing consultancy, technology consulting, and IT advisory) is a field of activity which focuses on advising organizations on how best to use information technology (IT) in achieving their business objectives. Once a business owner has defined the needs to take the business to the next level, a decision-maker will define the scope, cost, and time frame of the project. The role of the IT consultancy company is to support and nurture the company from the very beginning of the project until the end, and to deliver the project not only within the agreed scope, time, and cost but also with complete customer satisfaction. See also List of major IT consulting firms Consultant Grade (consulting) Outsourcing References Information technology management
37078341
https://en.wikipedia.org/wiki/Systers
Systers
Systers, founded by Anita Borg, is an international electronic mailing list for technical women in computing. The Syster community strives to increase the number of women in computer science and improve work environments for women. The mailing list has operated since 1987, making it the oldest of its kind for women in computer science. It is likely the largest email community of women in computing. The name 'Systers' originated from the combination of the words systems and sisters. History Systers was formed by Anita Borg in 1987 after a discussion with women at the Symposium on Operating Systems Principles (SOSP) in Austin. At the conference, Borg got the email addresses of 20 of the women attending and created Systers. The name came from combining systems with sisters. The administrator of Systers was Borg, who was called by users "her Systers' keeper". It was the first worldwide community for women working in the field of computer science. The group spread by word of mouth, growing to around 2,000 members in the mid-1990s. In 1993, the group was accused by others of practicing "reverse discrimination". Borg defended the group as a way for women who were often cut off from one another in the field to connect with one another. Many women did not have any other women in their own workplaces. It was refreshing to find a space where women were not "drowned out by the voices of men." The size of the group led Borg to create a system, called MECCA, which would allow members to opt in and out of various discussion topics. Later, the list would move to web-based technology. By 2004, women from 53 different countries were participating. Systers also influenced other similar mailing lists. As of 2012, more than 3,000 members were subscribing to the Systers mailing list. Previously, the mailing list was maintained by Her Systers' Keeper, Robin Jeffries, from 2000 to 2012. The next Systers' Keeper was Rosario Robinson. During #GHC18 in Houston, Texas, it was announced that Zaza Soriano would be the new Systers' Keeper. Systers 25th Anniversary In 2012, Systers celebrated its 25th anniversary with Global Meet Ups and a celebration at the Grace Hopper Celebration of Women in Computing. About Systers was developed as an electronic mailing list for women working in computer science. It is one of the oldest communities for women in computer science. Women using the list must stay on topic (discussing women and computer science) and they are expected to treat each other with respect. Members are expected to be supportive of other members, and topics discussed generally relate to women in computing. A notable exception was a 1992 discussion of a Barbie doll, whose recorded phrases included "Math class is tough!" Systers was credited as influential in persuading Mattel to remove the phrase. Other topics that have been covered include strategies for childcare on the job or at conferences, dealing with harassment both online and at work, and technical questions. Women were able to ask questions about various topics and receive timely answers from their peers. Women also shared jokes about working in the computing or engineering fields. Other lists that have "spun off" from Systers are researcHers, system-entrepreneurs and a list for recent doctoral graduates. The Systers list runs on GNU Mailman. Systers members and Google Summer of Code participants customized the code to meet Systers' needs. 
Anita Borg Systers Pass-It-On Awards Program The Pass-It-On Awards program provides monetary support for women entering fields in technology through donations by women established in technological fields. The award honors Anita Borg's vision of a network of women who support each other. Awards from $500 to $1,000 USD are funded by online donations from the Systers community. Founding members Systers was founded in 1987 by Anita Borg and several other women who attended a Symposium on Operating Systems Principles (SOSP) conference:
Anita Borg
Stella Atkins
Miche Baker-Harvey
Carla Ellis
Joan Francioni
Susan Gerhart
Anita K. Jones
Rivka Ladin
Barbara Liskov
Sherri Menees Nichols
Susan Owicki
Liuba Shrira
Karen Sollins
See also Women's WIRE References External links Systers list homepage Systers, an Anita Borg Institute Community Electronic mailing lists Organizations for women in science and technology
52538236
https://en.wikipedia.org/wiki/Steve%20Schneider%20%28computer%20scientist%29
Steve Schneider (computer scientist)
Prof. Steve Schneider FBCS, CITP is an English computer scientist and Professor of Security. He is Director of the Surrey Centre for Cyber Security and Associate Dean (Research and Enterprise) at the University of Surrey. Biography Steve Schneider studied at Oxford University, joining the Oxford University Computing Laboratory (now the Oxford University Department of Computer Science) to study for a doctorate on CSP, which was awarded in 1989 under the supervision of Mike Reed. He joined Royal Holloway, University of London as a lecturer in 1994, becoming a senior lecturer in 1999 and a professor in 2002. He moved to the University of Surrey in 2004, and was head of the Department of Computer Science from 2004 until 2010. Schneider is an expert in formal methods, including Communicating Sequential Processes (CSP) and the B-Method, and computer security. References External links Steve Schneider's University of Surrey home page Year of birth missing (living people) Living people Alumni of the University of Oxford Members of the Department of Computer Science, University of Oxford Academics of Royal Holloway, University of London Academics of the University of Surrey English computer scientists Formal methods people Computer science writers British textbook writers Fellows of the British Computer Society Computer security academics
60332587
https://en.wikipedia.org/wiki/USB3%20Vision
USB3 Vision
USB3 Vision is an interface standard introduced in 2013 for industrial cameras. It describes a specification on top of the USB standard, with a particular focus on supporting high-performance cameras based on USB 3.0. It is recognized as one of the fastest-growing machine vision camera standards. As of October 2019, version 1.1 is the latest version of the standard. The standard is hosted by the AIA, and a product implementing the standard must pass compliance tests and be licensed. As of late 2019, there are 42 companies that license this standard. The standard itself may be requested free of charge for reference or evaluation. The standard is built upon many of the same pieces as GigE Vision, being based on GenICam, but utilizes USB ports instead of Ethernet. Some of the benefits of this standard include simple plug-and-play usability, power over the cable, and high bandwidth. Additionally, it defines locking connectors that modify the standard USB connectors with additional screw-locks for industrial purposes. Technology The standard covers four major areas: device detection, register access, streaming data, and event handling. The standard defines a specific USB Class ID (Class 0xEF, Subclass 0x05) for identifying the device. As the standard is defined at a protocol layer, the software vendor providing the driver may be a different entity than the company designing the camera. Register access includes mandatory USB3 Vision registers as well as camera-specific registers, which may control parameters such as shutter speed or integration time, gamma correction, white balance, etc. The latter register types are diverse across cameras. The camera-specific registers can be queried via an XML schema file which is part of the GenICam standard. The GenICam standard has a Standard Feature Naming Convention so that vendor-agnostic software can be created. The GenICam standard is independent of the transfer protocol. This standard and GigE Vision are examples of wire protocols which pair with the GenICam standard. This contrasts with Camera Serial Interface; the Camera Command Set (CCS) is part of that standard for controlling camera parameters. For many real devices, the vendors provide alternate methods such as I2C to access the full set of parameters that a specific device may support. These can include lighting synchronization and separate motor controls for optical focusing elements. Implementations A complete list of companies offering products complying with this standard is available here: Companies that license USB3 Vision Open Source implementations: Linux kernel driver (NOTE: Basic register access and image streaming only. Significant application logic outside of this kernel module is needed to incorporate GenICam and be fully compatible with the USB3 Vision specification) Aravis uses libusb to implement the USB3 Vision protocol. Supports GenICam interface for register introspection. Basler Linux kernel modifications - Allows USB3 zero-copy streaming. Linux 4.9+ zero-copy usbfs is supported by newer versions of libusb. References Cameras
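A hedged sketch of the device-detection step described in the Technology section above: scanning the USB bus for interfaces advertising the class/subclass pair (0xEF/0x05) that the text says identifies a USB3 Vision device. It uses pyusb (a libusb wrapper); real detection per the full specification may also examine protocol codes and descriptors not shown here, and the script is an illustration rather than a compliant implementation.

```python
# Sketch only: list devices whose interfaces advertise the class/subclass that
# the text above associates with USB3 Vision (0xEF / 0x05). Uses pyusb
# ("pip install pyusb"); a compliant implementation would check additional
# descriptors defined by the specification.

import usb.core

U3V_CLASS = 0xEF
U3V_SUBCLASS = 0x05

def _matches(dev) -> bool:
    # Iterate the device's configurations and their interfaces.
    for cfg in dev:
        for intf in cfg:
            if (intf.bInterfaceClass == U3V_CLASS
                    and intf.bInterfaceSubClass == U3V_SUBCLASS):
                return True
    return False

def find_u3v_devices():
    """Yield (vendor_id, product_id) for devices exposing a matching interface."""
    for dev in usb.core.find(find_all=True):
        try:
            if _matches(dev):
                yield dev.idVendor, dev.idProduct
        except usb.core.USBError:
            continue  # skip devices we lack permission to inspect

if __name__ == "__main__":
    for vid, pid in find_u3v_devices():
        print(f"Possible USB3 Vision device: {vid:04x}:{pid:04x}")
```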
19955386
https://en.wikipedia.org/wiki/National%20Resource%20Centre%20for%20Free/Open%20Source%20Software
National Resource Centre for Free/Open Source Software
National Resource Centre for Free/Open Source Software (NRCFOSS) is an organisation created and financed in India by the Department of Information Technology, Ministry of Communication and Information Technology, Government of India in April 2005. It is jointly administered by the Chennai Division of the Centre for Development of Advanced Computing (C-DAC) and the AU-KBC Research Centre of Anna University. Some state governments, for example Kerala, already have programmes to popularize FOSS among the masses, especially among students. The founding of NRCFOSS was the Government of India's first initiative to increase the acceptance of FOSS at a national level. NRCFOSS is designed to boost efforts to popularize FOSS products among lay computer users in India. Objectives NRCFOSS is mandated to work in areas related to Free/Open Source Software, with the following objectives:
Human resource development, by training engineering teachers in the formal sector and giving support to educational agencies in the non-formal sector.
Development of technology, tools and solutions, by making certified tools available to industry and users and creating a repository of information on resources.
Localisation for Indian languages, by enhancing the presence and visibility of Indian languages on the web and coordinating with various local language communities.
Policy formulation and global networking, by providing support to the Government and other agencies in policy and program formulation, joining global networks, especially with similarly placed nations, and participating in international events.
Entrepreneurship development, by creating appropriate business models.
Initial achievements NRCFOSS introduced elective papers on FOSS into the syllabi and curriculum of Anna University. The syllabus applies to around 250 engineering colleges affiliated to Anna University. It has prepared the entire course material for these elective papers and made it available for free download. NRCFOSS has organized a series of workshops and seminars in different parts of India to popularize the idea of FOSS. It developed the FOSS Lab Server as an archive of various resources that are essential for students taking FOSS courses. It contains source code, documentation and mailing list archives. NRCFOSS developed Bharat Operating System Solutions, a Linux distribution made specifically for the Indian environment. The latest version of this Free/Open Source Software, BOSS GNU/Linux v6.0, was released on March 4, 2015. At the desktop level, this software supports eighteen of the twenty-two constitutionally recognized languages of India. References External links Bharat Operating System Solutions Information technology organisations based in India Free and open-source software organizations Non-profit organisations based in India Organizations established in 2005 Information technology in India
64090230
https://en.wikipedia.org/wiki/XOS%20%28operating%20system%29
XOS (operating system)
XOS is an Android-based operating system developed by Hong Kong mobile phone manufacturer Infinix Mobile, a subsidiary of Transsion Holdings, exclusively for their smartphones. XOS allows for a wide range of user customization without requiring the mobile device to be rooted. It was first introduced as XUI in 2015 and later as XOS in 2016. The operating system comes with utility applications that allow users to protect their privacy, improve speed, and enhance the overall experience, among other things. XOS comes with features such as XTheme, Scan to Recharge, Split Screen and XManager. History In 2015, Infinix Mobile released XUI 1.0, based on Android 5.0 "Lollipop", featuring XContacts, XTheme, XSloud and XShare. In July 2016, XOS 2.0 Chameleon was released based on Android 6.0 "Marshmallow", launching on the HOT S and featuring XLauncher and a fingerprint manager. An upgraded version, XOS 2.2 Chameleon, based on Android 7.0 "Nougat", was later launched in 2017 on the Note 3 and Smart X5010. It features Scrollshot, Split Screen and Magazine Lockscreen. In August 2017, XOS 3.0 Hummingbird was released, based on Android 7.0 as also seen in XOS 2.2, launching on the Zero 5; it later launched in 2018 on the Hot S3 based on Android 8.0 "Oreo". An upgraded version, XOS 3.2 Hummingbird, based on Android 8.1, was later launched on the Hot 6. It features Eye Care, Multi-Accounts and Device Tracking. In May 2018, XOS 4.0 Honeybee was released based on Android 8.0, launching on the Hot 7 and Zero 6, featuring Smart Screen Split, Notch Hiding, Scan To Recharge, Fingerprint Call Recording and Smart Text Classifier. An upgraded version, XOS 4.1 Honeybee, based on Android 8.1, was later launched on the Hot 7 Pro. In 2019, XOS 5.0 Cheetah was released based on Android 9.0 "Pie", launching on the Hot S4 and Hot 8, featuring Privacy Protection, AI Intelligence, Smart Panel, Data Switcher and Fingerprint Reset Password. In December 2019, an upgraded version, XOS 5.5 Cheetah, based on Android 9.0, was released for the Hot 8, featuring Game Assistant, Social Turbo, Smart Screen Lifting and Game Anti-Interference. In February 2020, XOS 6.0 Dolphin was released based on Android 10, launching on the S5 Pro and Note 7. It features Dark Mode, Digital Wellbeing, Wi-Fi Share and Smart Gesture. See also HiOS References Mobile operating systems
49840256
https://en.wikipedia.org/wiki/Have%20I%20Been%20Pwned%3F
Have I Been Pwned?
Have I Been Pwned? (HIBP; with "Pwned" pronounced like "poned", and stylized in all lowercase as "';--have i been pwned?" on the website) is a website that allows Internet users to check whether their personal data has been compromised by data breaches. The service collects and analyzes hundreds of database dumps and pastes containing information about billions of leaked accounts, and allows users to search for their own information by entering their username or email address. Users can also sign up to be notified if their email address appears in future dumps. The site has been widely touted as a valuable resource for Internet users wishing to protect their own security and privacy. Have I Been Pwned? was created by security expert Troy Hunt on 4 December 2013. As of June 2019, Have I Been Pwned? averages around one hundred and sixty thousand daily visitors; the site has nearly three million active email subscribers and contains records of almost eight billion accounts. Features The primary function of Have I Been Pwned? since it was launched has been to provide the general public with a means to check whether their private information has been leaked or compromised. Visitors to the website can enter an email address and see a list of all known data breaches with records tied to that email address. The website also provides details about each data breach, such as the backstory of the breach and what specific types of data were included in it. Have I Been Pwned? also offers a "Notify me" service that allows visitors to subscribe to notifications about future breaches. Once someone signs up with this notification mailing service, they will receive an email message any time their personal information is found in a new data breach. In September 2014, Hunt added functionality that enabled new data breaches to be automatically added to HIBP's database. The new feature used Dump Monitor, a Twitter bot which detects and broadcasts likely password dumps found on pastebin pastes, to automatically add new potential breaches in real time. Data breaches often show up on pastebins before they are widely reported on; thus, monitoring this source allows consumers to be notified sooner if they have been compromised. Along with detailing which data breach events the email account has been affected by, the website also points those who appear in its database search to install a password manager, namely 1Password, which Troy Hunt has endorsed. An online explanation on his website explains his motives and maintains that monetary gain is not the goal of this partnership. Pwned passwords In August 2017, Hunt made public 306 million passwords which could be accessed via a web search or downloaded in bulk. In February 2018, British computer scientist Junade Ali created a communication protocol (using k-anonymity and cryptographic hashing) to anonymously verify whether a password was leaked without fully disclosing the searched password. This protocol was implemented as a public API in Hunt's service and is now consumed by multiple websites and services, including password managers and browser extensions. This approach was later replicated by Google's Password Checkup feature. Ali worked with academics at Cornell University to formally analyse the protocol, identify its limitations, and develop two new versions of it known as Frequency Size Bucketization and Identifier Based Bucketization. In March 2020, cryptographic padding was added to this protocol.
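The k-anonymity scheme described above can be illustrated with a short sketch: only the first five characters of the password's SHA-1 hash are sent to the service, and matching hash suffixes are compared locally, so the full password (or its full hash) never leaves the client. The endpoint and response format below follow the service's public Pwned Passwords "range" API documentation rather than anything stated in this article, and the requests package is an assumed dependency.

```python
# Sketch of the k-anonymity range search: send a 5-character SHA-1 prefix,
# receive all matching suffixes with their breach counts, compare locally.
import hashlib
import requests

def pwned_count(password: str) -> int:
    """Return how many times a password appears in the Pwned Passwords corpus."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=10)
    resp.raise_for_status()
    # Each response line has the form "SUFFIX:COUNT".
    for line in resp.text.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    print(pwned_count("password123"))  # a well-known weak password
```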
History Launch In late 2013, web security expert Troy Hunt was analyzing data breaches for trends and patterns. He realized breaches could greatly impact users who might not even be aware their data was compromised, and as a result, began developing HIBP. "Probably the main catalyst was Adobe," said Hunt of his motivation for starting the site, referring to the Adobe Systems security breach that affected 153 million accounts in October 2013. Hunt launched Have I Been Pwned? on 4 December 2013 with an announcement on his blog. At that time, the site had just five data breaches indexed: Adobe Systems, Stratfor, Gawker, Yahoo! Voices, and Sony Pictures. However, the site now had the functionality to easily add future breaches as soon as they were made public. Hunt wrote: Data breaches Since its launch, the primary development focus of HIBP has been to add new data breaches as quickly as possible after they are leaked to the public. In July 2015, online dating service Ashley Madison, known for encouraging users to have extramarital affairs, suffered a data breach, and the identities of more than 30 million users of the service were leaked to the public. The data breach received wide media coverage, presumably due to the large number of impacted users and the perceived shame of having an affair. According to Hunt, the breach's publicity resulted in a 57,000% increase in traffic to HIBP. Following this breach, Hunt added functionality to HIBP by which breaches considered "sensitive" would not be publicly searchable, and would only be revealed to subscribers of the email notification system. This functionality was enabled for the Ashley Madison data, as well as for data from other potentially scandalous sites, such as Adult FriendFinder. In October 2015, Hunt was contacted by an anonymous source who provided him with a dump of 13.5 million users' email addresses and plaintext passwords, claiming it came from 000webhost, a free web hosting provider. Working with Thomas Fox-Brewster of Forbes, he verified that the dump was most likely legitimate by testing email addresses from it and by confirming sensitive information with several 000webhost customers. Hunt and Fox-Brewster attempted many times to contact 000webhost to further confirm the authenticity of the breach, but were unable to get a response. On 29 October 2015, following a reset of all passwords and the publication of Fox-Brewster's article about the breach, 000webhost announced the data breach via their Facebook page. In early November 2015, two breaches of gambling payment providers Neteller and Skrill were confirmed to be legitimate by the Paysafe Group, the parent company of both providers. The data included 3.6 million records from Neteller obtained in 2009 using an exploit in Joomla, and 4.2 million records from Skrill (then known as Moneybookers) that leaked in 2010 after a virtual private network was compromised. The combined 7.8 million records were added to HIBP's database. Later that month, electronic toy maker VTech was hacked, and an anonymous source privately provided a database containing nearly five million parents' records to HIBP. According to Hunt, this was the fourth largest consumer privacy breach to date. In May 2016, an unprecedented series of very large data breaches that dated back several years were all released in a short timespan. 
These breaches included 360 million Myspace accounts from circa 2009, 164 million LinkedIn accounts from 2012, 65 million Tumblr accounts from early 2013, and 40 million accounts from adult dating service Fling.com. These datasets were all put up for sale by an anonymous hacker named "peace_of_mind", and were shortly thereafter provided to Hunt to be included in HIBP. In June 2016, an additional "mega breach" of 171 million accounts from Russian social network VK was added to HIBP's database. In August 2017, BBC News featured Have I Been Pwned? on Hunt's discovery of a spamming operation that had been drawing on a list of 711.5 million email addresses. Unsuccessful effort to sell Midway through June 2019, Hunt announced plans to sell Have I Been Pwned? to a yet-to-be-determined organisation. In his blog, he outlined his wishes to reduce personal stress and expand the site beyond what he was able to accomplish himself. As of the release of the blog post, he was working with KPMG to find companies he deemed suitable which were interested in the acquisition. However, in March 2020, he announced on his blog that Have I Been Pwned? would remain independent for the foreseeable future. Open-sourcing On August 7, 2020, Hunt announced on his blog his intention to open-source the Have I Been Pwned? codebase. He started publishing some code on May 28, 2021. Branding The name "Have I Been Pwned?" is based on the script kiddie jargon term "pwn", which means "to compromise or take control, specifically of another computer or application." HIBP's logo includes the text ';--, which is a common SQL injection attack string. A hacker trying to take control of a website's database might use such an attack string to manipulate a website into running malicious code. Injection attacks are one of the most common vectors by which a database breach can occur; they are the most common web application vulnerability on the OWASP Top 10 list. See also Firefox Monitor Database security References External links Have I Been Pwned? announcement blog post on troyhunt.com Internet security Database security 2013 establishments in Australia Technology websites English-language websites Australian websites
18331617
https://en.wikipedia.org/wiki/High-alert%20nuclear%20weapon
High-alert nuclear weapon
High-alert nuclear weapon commonly refers to a launch-ready ballistic missile armed with one or more nuclear warheads whose launch can be ordered (through the National Command Authority) and executed (via a nuclear command and control system) within 15 minutes or less. The term can include any weapon system capable of delivering a nuclear warhead in this time frame. Virtually all high-alert nuclear weapons are possessed by the U.S. and Russia. Both nations use automated command and control systems in conjunction with their early warning radar and/or satellites to facilitate the rapid launch of their land-based intercontinental ballistic missiles (ICBMs) and some submarine-launched ballistic missiles (SLBMs). Fear of a "disarming" nuclear first strike that would destroy their command and control systems and nuclear forces led both nations to develop a "launch-on-warning" capability, which requires high-alert nuclear weapons able to launch on a 30-minute (or less) tactical warning, the nominal flight time of ICBMs traveling between the U.S. and Russia. The definition of "high-alert" does not require any specific explosive power for the weapon carried by the missile or weapon system, but in general, most high-alert missiles are armed with strategic nuclear weapons with yields equal to or greater than 100 kilotons. The U.S. and Russia have for decades possessed ICBMs and SLBMs capable of being launched in only a few minutes. The U.S. and Russia currently have a total of 900 missiles and 2,581 strategic nuclear warheads on high-alert, launch-ready status. The total explosive power of these weapons is about 1,185 Mt (megatons, or 1.185 billion tons of TNT equivalent explosive power). Notes and references Nuclear weapons Nuclear warfare
7700
https://en.wikipedia.org/wiki/Commodore%201571
Commodore 1571
The Commodore 1571 is Commodore's high-end 5¼" floppy disk drive, announced in the summer of 1985. With its double-sided drive mechanism, it has the ability to use double-sided, double-density (DS/DD) floppy disks, storing a total of 340 kB per floppy. It also implemented a "burst mode" that doubled transfer speeds, helping address the very slow performance of previous Commodore drives. Earlier Commodore drives used a custom group coded recording format that stored 170 kB per side of a disk. This made it fairly competitive in terms of storage, but limited it to only reading and writing disks from other Commodore machines. The 1571 was designed to partner with the new Commodore 128 (C128), which introduced support for CP/M. Adding double-density MFM encoding allowed the drive to read and write contemporary CP/M disks (and many others). In contrast to its single-sided predecessors, the 1541 and the briefly available 1570, the 1571 can use both sides of the disk at the same time. Previously, users could only use the second side by manually flipping disks over. Because flipping the disk also reverses the direction of rotation, the two methods are not interchangeable; disks which had their back side created in a 1541 by flipping them over would have to be flipped in the 1571 too, and the back side of disks written in a 1571 using the native support for two-sided operation could not be read in a 1541. Release and features The 1571 was released to match the Commodore 128, both design-wise and feature-wise. It was announced in the summer of 1985, at the same time as the C128, and became available in quantity later that year. The later C128D had a 1571 drive built into the system unit. A double-sided disk on the 1571 has a capacity of 340 kB (70 tracks, 1,360 disk blocks of 256 bytes each); as 8 kB (32 blocks) are reserved for system use (directory and block availability information) and 2 bytes of each block serve as pointers to the next logical block, 1,328 × 254 = 337,312 B, or about 329 kB, were available for user data. (However, with a program organizing disk storage on its own, all space could be used, e.g. for data disks.) The 1571 was designed to accommodate the C128's "burst" mode for 2× faster disk access; however, the drive cannot use it when connected to older Commodore machines. This mode replaced the slow bit-banging serial routines of the 1541 with a true serial shift register implemented in hardware, thus dramatically increasing the drive speed. Although this originally had been planned when Commodore first switched from the parallel IEEE-488 interface to the CBM-488 custom serial interface, hardware bugs in the VIC-20's 6522 VIA shift register prevented it from working properly. When connected to a C128, the 1571 defaults to double-sided mode, which allows the drive to read its own 340 kB disks as well as single-sided 170 kB 1541 disks. If the C128 is switched into C64 mode by typing GO 64 from BASIC, the 1571 stays in double-sided mode. If C64 mode is activated by holding down the C= key on power-up, the drive automatically switches to single-sided mode, in which case it is unable to read 340 kB disks (this is also the default if a 1571 is used with a C64, Plus/4, VIC-20, or PET). A manual command can also be issued from BASIC to switch the 1571 between single- and double-sided mode.
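The usable-capacity figure quoted above follows from simple arithmetic; the short fragment below merely reproduces that calculation (1,360 blocks of 256 bytes, 8 kB reserved, 2 pointer bytes per block) and is illustrative only, not 1571-specific code.

```python
# Reproduces the capacity arithmetic quoted above for a double-sided 1571 disk.
BLOCKS_TOTAL = 1360           # 70 tracks, 1,360 blocks
BLOCK_SIZE = 256              # bytes per block
RESERVED_BLOCKS = 32          # 8 kB of directory and block-availability data
POINTER_BYTES_PER_BLOCK = 2   # link to the next logical block

usable_blocks = BLOCKS_TOTAL - RESERVED_BLOCKS
user_bytes = usable_blocks * (BLOCK_SIZE - POINTER_BYTES_PER_BLOCK)

print(f"Raw capacity:  {BLOCKS_TOTAL * BLOCK_SIZE} bytes")  # 348,160 B ≈ 340 kB
print(f"User capacity: {user_bytes} bytes")                 # 337,312 B ≈ 329 kB
```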
There is also an undocumented command which allows the user to independently control either of the read/write heads of the 1571, making it possible to format both sides of a diskette separately from each other; however, the resulting disk cannot be read in a 1541, as it would be spinning in the reverse direction when flipped upside down. In the same vein, "flippy" disks created with a 1541 cannot be read on a 1571 with this feature; they must be inserted upside down. The 1571 is not 100% low-level compatible with the 1541; however, this is not a problem except in some software that uses advanced copy protections such as the RapidLok system found on MicroProse and Accolade games. The 1571 was noticeably quieter than its predecessor and tended to run cooler as well, even though, like the 1541, it had an internal power supply (later Commodore drives, like the 1541-II and the 3½" 1581, came with external power supplies). The 1541-II/1581 power supply makes mention of a 1571-II, hinting that Commodore may have intended to release a version of the 1571 with an external power supply. However, no 1571-IIs are known to exist. The embedded OS in the 1571 was an improvement over that of its predecessor. Early 1571s had a bug in the ROM-based disk operating system that caused relative files to become corrupted if they occupied both sides of the disk. A version 2 ROM was released, and though it cured the initial bug, it introduced some minor quirks of its own, particularly with the 1541 emulation. Curiously, it was also identified as V3.0. As with the 1541, Commodore initially could not meet demand for the 1571, and that lack of availability and the drive's relatively high price (about US$300) presented an opportunity for cloners. Two 1571 clones appeared, one from Oceanic and one from Blue Chip, but legal action from Commodore quickly drove them from the market. Commodore announced at the 1985 Consumer Electronics Show a dual-drive version of the 1571, to be called the Commodore 1572, but quickly canceled it, reportedly due to technical difficulties with the 1572 DOS. It would have had four times as much RAM as the 1571 (8 kB) and twice as much ROM (64 kB). The 1572 would have allowed for fast disk backups of non-copy-protected media, much like the old 4040, 8050, and 8250 dual drives. The 1571 built into the European plastic-case C128 D computer is electronically identical to the stand-alone version, but the 1571 version integrated into the later metal-case C128 D (often called C128 DCR, for D Cost-Reduced) differs considerably from the stand-alone 1571. It includes a newer DOS, version 3.1; replaces the MOS Technology CIA interface chip, of which only a few features were used by the 1571 DOS, with a much-simplified chip called the 5710; and has some compatibility issues with the stand-alone drive. Because this internal 1571, unlike most other Commodore drives, does not have an unused 8-bit input/output port on any chip, it is not possible to install a parallel cable in this drive, such as that used by SpeedDOS, DolphinDOS and some other fast third-party Commodore DOS replacements. Technical design The drive detects the motor speed and generates an internal data-sampling clock signal that matches the motor speed. The 1571 uses a saddle canceler when reading the data stream. A correction signal is generated when the raw data pattern on the disk contains two consecutive zeros. With the GCR recording format, a problem occurs in the read signal waveform: the worst-case pattern 1001 may cause a saddle condition in which a false data bit may occur.
The original 1541 drives use a one-shot to correct the condition; the 1571 uses a gate array to correct it digitally. The drive uses a MOS 6502 CPU, a WD1770 or WD1772 floppy controller, two MOS Technology 6522 I/O controllers and one MOS Technology 6526. Disk format Unlike the 1541, which was limited to GCR formatting, the 1571 could read both GCR and MFM disk formats. The version of CP/M included with the C128 supported the following formats: IBM PC CP/M-86 Osborne 1 (double density upgrade) Epson QX10 Kaypro II, IV CBM CP/M FORMAT SS CBM CP/M FORMAT DS The 1571 can read any of the many CP/M disk formats. If the CP/M BIOS is modified, it is possible to read any soft-sector 40-track MFM format. Single-density (FM) formats are not supported because the density-selector pin on the MFM controller chip in the drive is disabled (wired to ground). A 1571 cannot boot from MFM disks; the user must boot CP/M from a GCR disk and then switch to MFM disks. With additional software, it was possible to read and write to MS-DOS-formatted floppies as well. Numerous commercial and public-domain programs for this purpose became available, the best-known being SOGWAP's "Big Blue Reader". Although the C128 could not run any DOS-based software, this capability allowed data files to be exchanged with PC users. Reading disks from some other systems was possible as well with special software, but formats which used FM rather than MFM encoding could not be handled by the 1571 hardware without modifying the drive circuitry, as the control line that determines whether FM or MFM encoding is used by the disc controller chip was permanently wired to ground (MFM mode) rather than being under software control. In the 1541 format, while 40 tracks are possible for a drive like the 154x/157x, only 35 are used. Commodore chose not to use the upper five tracks by default (or at least not to use more than 35) due to the bad quality of some of the drive mechanisms, which did not always work reliably on those tracks. For compatibility and ease of implementation, the 1571's double-sided format of one logical disk side with 70 tracks was created by putting together the lower 35 physical tracks on each of the physical sides of the disk, rather than using two times 40 tracks, even though there were no longer quality problems with the mechanisms of the 1571 drives. References Citations Works cited Ellinger, Rainer (1986). 1571 Internals. Grand Rapids, MI: Abacus Software (translated from the original German edition, Düsseldorf: Data Becker GmbH). External links Disk Preservation Project Discusses internal drive mechanics and copy protection RUN Magazine Issue 64 A photo of the 1572 dual drive, with a 1571 single drive shown for comparison The 1572 drive as shown on the Commodore Kuriositäten page (German) Information page about the Commodore 1572 (German) Secret Weapons of Commodore: The Disk Drives Beyond The 1541: Mass Storage for the 64 and 128 Commodore 64 CBM floppy disk drives CBM hardware
22091
https://en.wikipedia.org/wiki/Ncurses
Ncurses
ncurses (new curses) is a programming library providing an application programming interface (API) that allows the programmer to write text-based user interfaces in a terminal-independent manner. It is a toolkit for developing "GUI-like" application software that runs under a terminal emulator. It also optimizes screen changes, in order to reduce the latency experienced when using remote shells. ncurses is a free-software emulation of the System V Release 4.0 (SVr4) curses. There are bindings for ncurses in a variety of programming languages, including Ada, Python, Gambas, Ruby, PHP, JavaScript, and Perl. History As the new version, ncurses is a free-software emulation of the System V Release 4.0 (SVr4) curses, which was itself an enhancement over the discontinued 4.4 BSD curses. The XSI Curses standard issued by X/Open is explicitly and closely modeled on System V. curses The first curses library was developed at the University of California at Berkeley, for a BSD operating system, around 1980 to support Rogue, a text-based adventure game. It originally used the termcap library, which was used in other programs, such as the vi editor. The success of the BSD curses library prompted Bell Labs to release an enhanced curses library in their System V Release 2 Unix systems. This library was more powerful and instead of using termcap, it used terminfo. However, due to AT&T policy regarding source-code distribution, this improved curses library did not have much acceptance in the BSD community. pcurses Around 1982, Pavel Curtis started work on a freeware clone of the Bell Labs curses, named pcurses, which was maintained by various people through 1986. ncurses The pcurses library was further improved when Zeyd Ben-Halim took over the development effort in late 1991. The new library was released as ncurses in November 1993, with version 1.8.1 as the first major release. Subsequent work, through version 1.8.8 (M1995), was driven by Eric S. Raymond, who added the form and menu libraries written by Juergen Pfeifer. Since 1996, it has been maintained by Thomas E. Dickey. Most ncurses calls can be easily ported to the old curses. System V curses implementations can support BSD curses programs with just a recompilation. However, a few areas are problematic, such as handling terminal resizing, since no counterpart exists in the old curses. Terminal database Ncurses can use either terminfo (with extensible data) or termcap. Other implementations of curses generally use terminfo; a minority use termcap. Few (mytinfo was an older exception) use both. License Ncurses is a part of the GNU Project, but is not distributed under the GNU GPL or LGPL. Instead, it is distributed under a permissive free software licence, i.e., the MIT License. This is due to the agreement made with the Free Software Foundation at the time the developers assigned their copyright. When the agreement was made to pass on the rights to the FSF, there was a clause that stated: The Foundation promises that all distribution of the Package, or of any work "based on the Package", that takes place under the control of the Foundation or its agents or assignees, shall be on terms that explicitly and perpetually permit anyone possessing a copy of the work to which the terms apply, and possessing accurate notice of these terms, to redistribute copies of the work to anyone on the same terms. According to the maintainer Thomas E. 
Dickey, this precludes relicensing to the GPL in any version, since it would place restrictions on the programs that will be able to link to the libraries. Programs using ncurses There are hundreds of programs which use ncurses. Some, such as GNU Screen and w3m, use only the termcap interface and perform screen management themselves. Others, such as GNU Midnight Commander and YaST, use the curses programming interface. See also conio.h – A C header file used in MS-DOS compilers to create text user interfaces Curses Development Kit Dialog (software) PDCurses S-Lang (programming library) SMG$ – The screen-management library available under OpenVMS References External links C (programming language) libraries Free software programmed in Ada Free software programmed in C GNU Project software Software using the MIT license Termcap Terminfo
69678256
https://en.wikipedia.org/wiki/Elizabeth%20Jessup
Elizabeth Jessup
Elizabeth Redding Jessup is an American computer scientist specializing in numerical linear algebra and the generalized minimal residual method. She is a professor emerita of computer science at the University of Colorado Boulder. Education and career Jessup is one of three children of an Indianapolis tax attorney. She majored in mathematics at Williams College, and went to Yale University for graduate study, earning a master's degree in applied physics and a Ph.D. in computer science there. Her 1989 dissertation, Parallel Solution of the Symmetric Tridiagonal Eigenproblem, was supervised by Ilse Ipsen; she was Ipsen's first student. She joined the University of Colorado Boulder faculty in 1989, as the only woman on the computer science faculty. She became chair of the computer science department there twice, taking advantage of the position to focus on improving both faculty diversity and job satisfaction, before retiring in 2019. Contributions Jessup is a coauthor of the book An Introduction to High-Performance Scientific Computing (with Lloyd D. Fosdick, Carolyn J. C. Schauble, and , MIT Press, 1996). In 2008, she founded a biennial conference, the Rocky Mountain Celebration of Women in Computing. References External links Home page Year of birth missing (living people) Living people American computer scientists American women computer scientists Williams College alumni Yale University alumni
27054977
https://en.wikipedia.org/wiki/Definitive%20Media%20Library
Definitive Media Library
A Definitive Media Library is a secure Information Technology repository in which an organisation's definitive, authorised versions of software media are stored and protected. Before an organisation releases any new or changed application software into its operational environment, any such software should be fully tested and quality assured. The Definitive Media Library provides the storage area for software objects ready for deployment and should only contain master copies of controlled software media configuration items (CIs) that have passed appropriate quality assurance checks, typically including both procured and bespoke application and gold build source code and executables. In the context of the ITIL best practice framework, the term Definitive Media Library supersedes the term definitive software library referred to prior to version ITIL v3. In conjunction with the configuration management database (CMDB), it effectively provides the DNA of the data center i.e. all application and build software media connected to the CMDB record of installation and configuration. The Definitive Media Library (DML) is a primary component of an organisation's release and provisioning framework and service continuity plan. Background In a controlled IT environment it is crucial that only authorised versions of software are allowed into production. The consequences of unauthorised software versions finding their way into the live environment can be serious. Typically, in a mature organisation, stringent Change and Release Management processes will exist to prevent this occurring, but such processes require a place where the authorised software versions can be safely stored and accessed. The solution put forward by ITIL in its third version is called the Definitive Media Library or DML (replacing the previously named Definitive Software Library or DSL in version two). ITIL proposes that the DML can be either a physical or virtual store and there are benefits and drawbacks with either method. Clearly, however, there are key factors in the success of any DML solution i.e. software required to be deployed into production should be rigorously tested, assured and licensed to perform and also packaged in such a way that it will safely and consistently deploy. Also, the DML should be easily accessed by those, and only those, authorised to do so. In this way, a virtual (electronic) storage area will almost always provide a superior solution, meaning the DML can be centralised and accessed remotely or outside normal business hours if the need arises (see distribution). Scope The DML plays a critical role in supporting the transition from development to production phases and DML solutions should be distinguished from other software and source code repositories e.g. Software Configuration Management or SCM (sometimes referred to as Software Change and Configuration Management) that supports the development or software evolution phase. This is an important distinction and often causes some confusion. In essence, whereas SCM tools or repositories store and manage all development versions and revisions of code (or work products) up to but not including the final authorised product, the DML stores only the final authorised versions of the code or product. This is analogous to a high-street product lifecycle where the product moves from design house to factory, through to warehouse and then shop, i.e. records (metadata) are kept about how a product is designed developed and built. 
This enables tracking down which process is to blame when faulty products are discovered either during quality control or even in later service. Similarly, records (metadata) are kept in a CMDB about where the software is installed and deployed from the DML and into the production environment. Each installation or deployment should be authorised by a corresponding production change request and the resulting change recorded in the CMDB as a relationship between the DML artefact and the platform where it has been deployed. In a more mature or evolved state there is no distinction drawn between the two forms of configuration management and the process is continuous, supporting the whole service delivery and service operation lifecycle. This has been referred to as Enterprise Configuration Management. Even here, though, the development-based artefacts should still be distinguished and kept separate from the management of quality-assured, definitive master versions available for deployment. In an outsourced or multi-vendor arrangement the existence or otherwise of a consistent and secure form of supplier access will dictate whether the software configuration management is performed passively (externally by suppliers adopting their own SCM tools and then delivering the finished product) or actively (overseen internally with suppliers utilising the centrally hosted SCM tool). All finished products (application software) in their authorised deployable form should, however, be stored within the central DML. Typical CIs that a DML will store include: Packaged in-house application software Commercial off-the-shelf (COTS) raw media Customised COTS software (containing enhancements, tailored configuration etc.) Release packages Patches (see patch (computing)) Gold builds (clients, servers, network and storage devices etc.) System images Across multiple technology stacks and distribution technologies (e.g. Wintel, UNIX, ORACLE, mainframe, network, storage etc.) Media Release Lifecycle (see "Definitive Media Library & CMDB in the context of the Release Management Process" diagram above) The media release lifecycle steps are: Demand for new service or product arises. Decision is made to make or buy the product (service, build or application) based on functional requirements extracted from the requirements traceability tool. Product is created or selected from the service/product catalogue in accordance with architectural design policies (Service Design). COTS product is procured and stored in the DML with asset status ‘procured’. If new, the product is added to the Approved Products Catalogue. In-house created application source code is managed directly in the software configuration management repository. If COTS product or gold build is being packaged, media is extracted from the DML. Product is packaged or developed and packaged (in which case add-on functionality is treated in the same way as in-house applications and builds). Stub records or original baselines are created in the software configuration management tool. Development code revisions and package revisions are recorded in the software configuration management tool throughout development. Unit testing is carried out. Packaging is completed to create the release package. Product package is quality assured (including testing, staging and any rework). Completed media package (build, service or application) is lodged back in the DML as authorised media ready for deployment.
Following Change Management approval, product is released to the estate via the appropriate distribution system, with logical installations being recorded via due process in the CMS (CMDB). DML entities are archived as soon as: CMS or CMDB indicates that the packaged release is no longer in use at any location (a period of grace is required following the last decommission or upgrade to allow for any necessary regressions) and the DML entity has been removed from the technical or user (service) catalogue as a selectable item Distribution Even though the DML, as an authorised store for media, implies a degree of centralisation, Local Media Libraries (LMLs) will be required in order to achieve a global model. In this way, release and deployment of physical instances of media can be achieved in-country in a timely manner by avoiding constant downloads over the global network. Replication of authorised media in non-prime windows would make required packages available locally as required, but the DML would remain as ‘master’ for process control reasons. The DML/LML hierarchy is synonymous with the master/secondary distribution layers seen within many distribution technologies and package management systems. However, whereas distribution tools tend to be biased towards a particular technology stack (e.g. Wintel, Unix, mainframe etc.), one of the main benefits of a DML is its technology-agnostic nature as a true central store for all authorised software. In this way, the distribution tools would connect to the DML to obtain the software package. Application packaging involves the preparation of standard, structured software installations targeted for automated deployment. Packaging is also required for bought-in (COTS) software, as packaging allows software to be configured to run efficiently on a particular platform or environment. Even a slight change in this platform (such as the swapping-out of a disk) can prevent a package from successfully deploying, so retention of the raw media (ISO) version of software is critical, as this will be needed (often in an emergency) should the packaged version no longer deploy, e.g. following the upgrade or replacement of the operating platform. Benefits The DML supports: Release & Deployment Management, as a foundation and the central storage area for all releasable deployment packages Availability and Service Continuity, by providing the source of all packaged applications and raw media for use in service restoration and disaster recovery procedures Automated server provisioning and rationalisation, through the storage of gold builds Asset Management, by providing metadata records and licence keys relating to COTS software licence provision. Instances of media and the authorised media set stored together with licences and licence conditions will allow optimised management of software allocations and external compliance in terms of Sarbanes-Oxley and BSA recommendations. Catalogued request fulfilment, either in terms of single-user client-end product requests or repeated requests for deployments of an existing multi-user service/application to other hosting locations.
See also Application lifecycle management Product lifecycle management Software Lifecycle Management Systems management System deployment Software release Software deployment Software repository References External links http://wiki.en.it-processmaps.com/index.php/ITIL_Glossary http://www.itsmwatch.com/itil/article.php/3887361/How-to-Set-Up-and-Manage-a-Definitive-Media-Library.htm http://www.itsmwatch.com/itil/article.php/3729141/Benefits-of-a-Definitive-Media-Library-DML.htm http://www.ibm.com/developerworks/rational/library/edge/09/mar09/rader/ ITIL
68338611
https://en.wikipedia.org/wiki/Chris%20Gobrecht
Chris Gobrecht
Christianne Geiger Gobrecht (born February 9, 1955) is an American basketball coach who is currently the head coach of the United States Air Force Academy women's basketball team. A coach since 1977, she has been a head coach at the high school, junior college, and NCAA levels, and is known for only hiring female assistant coaches in order to protect opportunities for women. Coaching career Gobrecht began her coaching career at Santa Fe Springs High School, coaching there for one season before being named the head coach at Pasadena City College, where she won a conference championship in her lone season. She was also the head coach at Cal State Fullerton for six seasons prior to accepting the head coaching position at Washington, where she won two Pac-10 Conference titles and was twice named Pac-10 coach of the year. She was also the head coach at Florida State for one season prior to joining her alma mater USC in 1997. She led the Trojans to two WNIT appearances before she was fired at the end of the 2003–04 season. Gobrecht took the 2004–05 season off to spend time with family, accepting the head coaching position at Yale in 2005. Gobrecht was named the head coach at Air Force on April 14, 2015. She signed a contract extension after the 2017–18 season that extended her contract through the 2022–23 season. Head coaching record Personal life Gobrecht was married to Bob Gobrecht, who died in 2018 from an undisclosed illness. The couple had two children: Eric and Mady. Eric attended the Air Force Academy and is a Major stationed at Beale Air Force Base in California, while Mady played for her mother at Yale and is currently a nurse in Colorado Springs. References External links Air Force Falcons profile 1955 births Living people Sportspeople from Toledo, Ohio Basketball players from Ohio Basketball coaches from Ohio USC Trojans women's basketball players High school basketball coaches in California Cal State Fullerton Titans women's basketball coaches Washington Huskies women's basketball coaches Florida State Seminoles women's basketball coaches USC Trojans women's basketball coaches Yale Bulldogs women's basketball coaches Air Force Falcons women's basketball coaches
24656236
https://en.wikipedia.org/wiki/Journal%20of%20Software%3A%20Evolution%20and%20Process
Journal of Software: Evolution and Process
The Journal of Software: Evolution and Process is a peer-reviewed scientific journal covering all aspects of software development and evolution. It is published by John Wiley & Sons. The journal was established in 1989 as the Journal of Software Maintenance: Research and Practice, renamed in 2001 to Journal of Software Maintenance and Evolution: Research and Practice, and obtained its current title in 2012. The editors-in-chief are Gerardo Canfora (University of Sannio), Darren Dalcher (University of Hertfordshire), and David Raffo (Portland State University). Abstracting and indexing The journal is abstracted and indexed in: According to the Journal Citation Reports, the journal has a 2012 impact factor of 1.273, ranking it 30th out of 105 journals in the category "Computer Science, Software Engineering". References External links Computer science journals Software engineering publications Software maintenance Publications established in 1989 Wiley (publisher) academic journals Monthly journals English-language journals
25020534
https://en.wikipedia.org/wiki/Microsoft%20India
Microsoft India
Microsoft India Private Limited is a subsidiary of American software company Microsoft Corporation, headquartered in Hyderabad, India. The company first entered the Indian market in 1990 and has since worked closely with the Indian government, the IT industry, academia and the local developer community to usher in some of the early successes in the IT market. Microsoft currently has offices in the 11 cities of Ahmedabad, Bangalore, Chennai, Hyderabad, Kochi, Kolkata, Mumbai, the NCR (New Delhi, Noida and Gurgaon) and Pune. Microsoft in India employs about 8,000 people and has six business units representing the complete Microsoft product portfolio. Microsoft India Development Center Located in Hyderabad, the Microsoft India Development Center (MSIDC) is Microsoft’s largest software development center outside of their headquarters in Redmond, Washington. The MSIDC teams focus on strategic and IP sensitive software product development. Business units Microsoft India operates the following six business units in India. Microsoft India (R&D) Private Limited Microsoft Research India (MSR India) Microsoft Services Global Delivery (MSGD) Microsoft Corporation India Pvt. Ltd. (MCIPL) Microsoft IT India (MSIT India) Microsoft India Global Technical Support Center (IGTSC) Notes External links Official site Software companies of India Microsoft local divisions Indian subsidiaries of foreign companies Information technology companies of India International information technology consulting firms Companies based in Hyderabad, India Information technology companies of New Delhi Software companies based in Mumbai Software companies established in 1999 1999 establishments in Andhra Pradesh
1492548
https://en.wikipedia.org/wiki/Dave%20Sifry
Dave Sifry
Dave Sifry is an American software entrepreneur and blogosphere icon known for founding Technorati in 2004, formerly a leading blog search engine. He also lectures widely on wireless technology and policy, weblogs, and open source software. Early years Sifry grew up on Long Island, and learned to program on a Commodore PET. While in his teens, he decided that someday he would move to Silicon Valley and start a company. After studying computer science at Johns Hopkins University, he worked for Mitsubishi. Career Sifry cofounded Sputnik, a Wi-Fi gateway company, Linuxcare, and Offbeat Guides. He has been a founding member of the board of Linux International, and a technical advisor to the National Cybercrime Training Partnership for law enforcement. Dave worked as a business developer for Mozilla/Mozillamessaging, trying to bring partners to Mozilla Thunderbird. Personal life David is married to Noriko and has two kids, Melody and Noah. Awards 2006 Best Blog Guide – Technorati – Web 2.0 Awards 2006 Best of Show – Technorati – SXSW Awards 2006 Best Technical Achievement – Technorati – SXSW Awards References External links Sifry’s Alerts, Dave's personal blog. Technorati management team official page, reference for much of the above David Sifry podcast, PodLeaders, 2006-04-05 Ten Questions with David Sifry, Signal Without Noise, 2006-07-09 David Sifry on Technorati and entrepreneurship Video David Sifry on the state of the Net, Video interview, 2008-02-09 David Sifry in Inc. magazine, January 2006 1968 births Living people American bloggers American computer businesspeople Linux people People from Long Island Johns Hopkins University alumni
43743989
https://en.wikipedia.org/wiki/Stanford%E2%80%93USC%20football%20rivalry
Stanford–USC football rivalry
The Stanford–USC football rivalry is an American college football rivalry between the Stanford Cardinal and the USC Trojans, both members of the Pac-12 Conference and the only private schools in the conference. The two teams first played in 1905 and have met nearly every year since 1919 (missing only 1921, 1924, and the World War II years 1943–1945), frequently vying for the conference championship and a berth in the Rose Bowl. Stanford is USC's oldest current rival. Series history Early rivalry The rivalry began in earnest in the 1930s after USC had won three national championships in five years. A group of Stanford freshmen, after a stinging 1932 loss to an undefeated USC team, promised never to lose to USC again. The "Vow Boys" made good on their promise, winning their next three games against the Trojans, beginning with the 1933 win that broke USC's 27-game undefeated streak. Notable games and incidents For most of its history, USC dominated the series, and overall USC has won about two-thirds of the games, but the rivalry has been marked with notable incidents and expressions of disdain between the two schools. In 1972, USC coach John McKay accused Stanford and its fans of having "no class" and said he'd "like to beat Stanford by 2,000 points"; Stanford coach Jack Christiansen responded that he wouldn't "get into a urinating contest with a skunk". Two years earlier, when McKay's son J.K. and his high school teammate Pat Haden had told him they were considering going to Stanford, he replied, "If it was between Stanford and Red China, I would pay your way to Peking." Both played at USC under McKay, as the Trojans won national titles in 1972 and 1974. In 1979, Stanford came back in the last four minutes to tie #1 USC 21–21 on October 13. This game, considered one of the greatest of the 20th century, effectively cost USC a national title (they dropped to #4 in the polls afterwards). USC finished 11–0–1, but was ranked #2 in both polls due to the tie. In 1980, the Stanford Band marched onto the field accompanied by a horse skeleton on wheels, being ridden by a Trojan-helmeted human skeleton, in a parody of USC's Traveler mascot. For the 2012 game, the Stanford band leader inexplicably showed up dressed as the USC Trojan mascot. Recent history USC cemented its margin in the series between 1958 and 1990, going 29–3–1, but the teams have split the 32 decisions since. The competitive atmosphere of the rivalry increased in the early 1990s when Bill Walsh returned for his second tenure as Stanford's head coach, and particularly heated up in 2007 after Stanford hired head coach Jim Harbaugh. 1–3, unranked Stanford (who had been 1–11 the prior season under head coach Walt Harris) entered the 2007 game as a 41-point underdog against #2 USC, but pulled out a 24–23 win in what has been called one of the biggest college football upsets of all time. The 2009 game was marked by a post-game verbal confrontation between Harbaugh and USC head coach Pete Carroll, after #25 Stanford capped off its convincing 55–21 win over #11 USC with a late 2-point conversion attempt and another touchdown; Carroll came off the field saying "What's your deal?" at Harbaugh, who responded, "What's your deal?" Stanford then adopted the phrase as a slogan for its season ticket packages. In recent years, the rivalry has been memorable for its upsets. 
The lower-ranked team pulled off an upset six out of nine years from 2007–2015 (four by Stanford and two by USC), including four years in a row from 2012–2015 (two each for Stanford and USC). In 2012, #21 Stanford knocked off the #2 Trojans, 21–14. In 2013, unranked USC defeated #5 Stanford 20–17, ending Stanford's longest winning streak in the series at four, and possibly costing the Cardinal a trip to the national championship game. In 2015, unranked Stanford went on the road and upset #6 USC 41–31. In 2021, unranked Stanford went on the road and upset #14 USC 42-28, resulting in USC head coach Clay Helton's firing immediately after. In 2010, the then-Pac-10 Conference expanded to 12 teams and split into north and south divisions, moving Stanford and USC into different divisions. This move threatened the annual rivalry, since teams from each division were not scheduled to play each other every year; however, the conference elected to maintain the "historic California rivalries", including both the Stanford–USC rivalry and the Cal-UCLA rivalry. Both teams being ranked entering the game was once a rare occurrence but has become the norm in recent years. There have been 15 games where both Stanford and USC were ranked, with 2 from 1940–1953, 4 from 1968–1972, just 1 from 1973–2008, and 8 since 2009. USC leads the series 62–34–3; they have led since the third game. USC holds the longest win streak in the series, with 12 wins from 1958–1969. USC also went 14–0–1 from 1976–1990, with the two teams playing to a tie in 1979. Stanford's longest win streak was 4 from 2009–2012. The early years of the series had near-parity, with USC leading 17–15–2 from 1905–1957. USC then went 39–9–1 from 1958–2006; however, Stanford is 10–6 since then, including 5 of 8 in the L.A. Coliseum. The teams met twice in 2015 – once in the regular season and, after #7 Stanford and #24 USC won their divisions, a rematch in the Pac-12 Championship Game, which #7 Stanford won 41–22, also ending the four-year streak of upsets in the series. In 2017 the teams, both ranked in the top 15, again met for the conference championship, with USC winning 31-28, the first win by a southern division team since the Pac-12 adopted the championship game format. Game results See also List of NCAA college football rivalry games List of most-played college football series in NCAA Division I References College football rivalries in the United States Stanford Cardinal football USC Trojans football 1905 establishments in California
6023289
https://en.wikipedia.org/wiki/Wilmagate
Wilmagate
WilmaGate is a collection of open-source tools for Authentication, Authorization and Accounting on an Open Access Network. It was initially developed by the Computer Networks and Mobility Group at the University of Trento (Italy). Its development was part of the locally funded Wilma Project and is now being pursued by the Twelve Project under the name Uni-Fy. It is currently being used for wireless authentication at the Faculty of Science at the University of Trento and by the UniWireless network of Italian research groups participating in the Twelve Project. Features The system has been designed to separate the user authentication phase (which is usually performed by a possibly remote ISP) from the internet access provided at the user's current location by a local carrier. Therefore, a multiplicity of authentication providers and of access providers is envisioned. The WilmaGate system provides code for both purposes and for a variety of authentication methods. Its modular and object-oriented structure allows programmers to easily add plug-in code for new authentication or accounting protocols. See this article for details. Steps The following steps are performed in a normal user connection. The user's mobile terminal (laptop or PDA) physically connects to a network, either by plugging in a cable (Ethernet or FireWire) or by associating with a wireless access point via Wi-Fi or Bluetooth. The terminal automatically issues a DHCP handshake in order to set up an appropriate configuration for the network it is entering. By this action, the mobile terminal's existence is recognized by the Gateway component. The client starts some form of authentication process, either by opening a web browser and having it redirected to an authentication provider of the admin's choice, or through some pre-installed authentication program. After authentication, the client has Internet access (possibly full access); however, some authentication-based restrictions may apply. Code The access gateway is written in C++ and is executable in both Linux and Windows/Cygwin environments. The sample Captive portal authentication system is written in PHP. Further reading Mauro Brunato, Renato Lo Cigno, Danilo Severina. Managing Wireless HotSpots: the Uni-Fy Approach. MedHocNet 2006, Lipari (Italy), June 14–17, 2006. Mauro Brunato, Danilo Severina. WilmaGate: a New Open Access Gateway for Hotspot Management. ACM WMASH 2005, Cologne (Germany), September 2, 2005. Roberto Battiti, Mauro Brunato, Renato Lo Cigno, Alessandro Villani, Roberto Flor, Gianni Lazzari. WILMA: An Open Lab for 802.11 Hotspots. Proceedings of PWC2003, Venice (Italy), September 23–25, 2003. External links The Uni-Fy page at the TWELVE Project Computer access control
40254
https://en.wikipedia.org/wiki/Genetic%20algorithm
Genetic algorithm
In computer science and operations research, a genetic algorithm (GA) is a metaheuristic inspired by the process of natural selection that belongs to the larger class of evolutionary algorithms (EA). Genetic algorithms are commonly used to generate high-quality solutions to optimization and search problems by relying on biologically inspired operators such as mutation, crossover and selection. Some examples of GA applications include optimizing decision trees for better performance, solving sudoku puzzles automatically, and hyperparameter optimization. Methodology Optimization problems In a genetic algorithm, a population of candidate solutions (called individuals, creatures, organisms, or phenotypes) to an optimization problem is evolved toward better solutions. Each candidate solution has a set of properties (its chromosomes or genotype) which can be mutated and altered; traditionally, solutions are represented in binary as strings of 0s and 1s, but other encodings are also possible. The evolution usually starts from a population of randomly generated individuals, and is an iterative process, with the population in each iteration called a generation. In each generation, the fitness of every individual in the population is evaluated; the fitness is usually the value of the objective function in the optimization problem being solved. The more fit individuals are stochastically selected from the current population, and each individual's genome is modified (recombined and possibly randomly mutated) to form a new generation. The new generation of candidate solutions is then used in the next iteration of the algorithm. Commonly, the algorithm terminates when either a maximum number of generations has been produced, or a satisfactory fitness level has been reached for the population. A typical genetic algorithm requires: a genetic representation of the solution domain, a fitness function to evaluate the solution domain. A standard representation of each candidate solution is as an array of bits (also called bit set or bit string). Arrays of other types and structures can be used in essentially the same way. The main property that makes these genetic representations convenient is that their parts are easily aligned due to their fixed size, which facilitates simple crossover operations. Variable length representations may also be used, but crossover implementation is more complex in this case. Tree-like representations are explored in genetic programming and graph-form representations are explored in evolutionary programming; a mix of both linear chromosomes and trees is explored in gene expression programming. Once the genetic representation and the fitness function are defined, a GA proceeds to initialize a population of solutions and then to improve it through repetitive application of the mutation, crossover, inversion and selection operators. Initialization The population size depends on the nature of the problem, but typically contains several hundreds or thousands of possible solutions. Often, the initial population is generated randomly, allowing the entire range of possible solutions (the search space). Occasionally, the solutions may be "seeded" in areas where optimal solutions are likely to be found. Selection During each successive generation, a portion of the existing population is selected to breed a new generation. Individual solutions are selected through a fitness-based process, where fitter solutions (as measured by a fitness function) are typically more likely to be selected.
Certain selection methods rate the fitness of each solution and preferentially select the best solutions. Other methods rate only a random sample of the population, as the former process may be very time-consuming. The fitness function is defined over the genetic representation and measures the quality of the represented solution. The fitness function is always problem dependent. For instance, in the knapsack problem one wants to maximize the total value of objects that can be put in a knapsack of some fixed capacity. A representation of a solution might be an array of bits, where each bit represents a different object, and the value of the bit (0 or 1) represents whether or not the object is in the knapsack. Not every such representation is valid, as the size of objects may exceed the capacity of the knapsack. The fitness of the solution is the sum of values of all objects in the knapsack if the representation is valid, or 0 otherwise. In some problems, it is hard or even impossible to define the fitness expression; in these cases, a simulation may be used to determine the fitness function value of a phenotype (e.g. computational fluid dynamics is used to determine the air resistance of a vehicle whose shape is encoded as the phenotype), or even interactive genetic algorithms are used. Genetic operators The next step is to generate a second generation population of solutions from those selected, through a combination of genetic operators: crossover (also called recombination), and mutation. For each new solution to be produced, a pair of "parent" solutions is selected for breeding from the pool selected previously. By producing a "child" solution using the above methods of crossover and mutation, a new solution is created which typically shares many of the characteristics of its "parents". New parents are selected for each new child, and the process continues until a new population of solutions of appropriate size is generated. Although reproduction methods that are based on the use of two parents are more "biology inspired", some research suggests that more than two "parents" generate higher quality chromosomes. These processes ultimately result in the next generation population of chromosomes that is different from the initial generation. Generally, the average fitness will have increased by this procedure for the population, since only the best organisms from the first generation are selected for breeding, along with a small proportion of less fit solutions. These less fit solutions ensure genetic diversity within the genetic pool of the parents and therefore ensure the genetic diversity of the subsequent generation of children. Opinion is divided over the importance of crossover versus mutation. There are many references in Fogel (2006) that support the importance of mutation-based search. Although crossover and mutation are known as the main genetic operators, it is possible to use other operators such as regrouping, colonization-extinction, or migration in genetic algorithms. It is worth tuning parameters such as the mutation probability, crossover probability and population size to find reasonable settings for the problem class being worked on. A very small mutation rate may lead to genetic drift (which is non-ergodic in nature). A recombination rate that is too high may lead to premature convergence of the genetic algorithm. A mutation rate that is too high may lead to loss of good solutions, unless elitist selection is employed. 
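The knapsack fitness function described above translates almost literally into code. The item values, weights and capacity below are hypothetical data chosen only to make the sketch runnable; a bit set to 1 means the corresponding object is packed, and genomes that exceed the capacity score 0 (penalising or repairing invalid genomes are common alternatives).

# Hypothetical knapsack instance; a real problem supplies its own data.
VALUES = [60, 100, 120, 30, 70]    # value of each object
WEIGHTS = [10, 20, 30, 5, 15]      # weight of each object
CAPACITY = 50                      # fixed capacity of the knapsack

def knapsack_fitness(genome):
    # Bit i == 1 means object i is in the knapsack.
    total_value = sum(v for v, bit in zip(VALUES, genome) if bit)
    total_weight = sum(w for w, bit in zip(WEIGHTS, genome) if bit)
    return total_value if total_weight <= CAPACITY else 0

print(knapsack_fitness([1, 1, 0, 1, 0]))  # within capacity: returns the packed value, 190
print(knapsack_fitness([1, 1, 1, 1, 1]))  # over capacity: returns 0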
An adequate population size ensures sufficient genetic diversity for the problem at hand, but can lead to a waste of computational resources if set to a value larger than required. Heuristics In addition to the main operators above, other heuristics may be employed to make the calculation faster or more robust. The speciation heuristic penalizes crossover between candidate solutions that are too similar; this encourages population diversity and helps prevent premature convergence to a less optimal solution. Termination This generational process is repeated until a termination condition has been reached. Common terminating conditions are: a solution is found that satisfies minimum criteria; a fixed number of generations is reached; the allocated budget (computation time/money) is reached; the highest ranking solution's fitness is reaching or has reached a plateau such that successive iterations no longer produce better results; manual inspection; or combinations of the above. The building block hypothesis Genetic algorithms are simple to implement, but their behavior is difficult to understand. In particular, it is difficult to understand why these algorithms frequently succeed at generating solutions of high fitness when applied to practical problems. The building block hypothesis (BBH) consists of: (1) a description of a heuristic that performs adaptation by identifying and recombining "building blocks", i.e. low order, low defining-length schemata with above average fitness; and (2) a hypothesis that a genetic algorithm performs adaptation by implicitly and efficiently implementing this heuristic. Goldberg describes the heuristic as follows: "Short, low order, and highly fit schemata are sampled, recombined [crossed over], and resampled to form strings of potentially higher fitness. In a way, by working with these particular schemata [the building blocks], we have reduced the complexity of our problem; instead of building high-performance strings by trying every conceivable combination, we construct better and better strings from the best partial solutions of past samplings. "Because highly fit schemata of low defining length and low order play such an important role in the action of genetic algorithms, we have already given them a special name: building blocks. Just as a child creates magnificent fortresses through the arrangement of simple blocks of wood, so does a genetic algorithm seek near optimal performance through the juxtaposition of short, low-order, high-performance schemata, or building blocks." Despite the lack of consensus regarding the validity of the building-block hypothesis, it has been consistently evaluated and used as a reference throughout the years. Many estimation of distribution algorithms, for example, have been proposed in an attempt to provide an environment in which the hypothesis would hold. Although good results have been reported for some classes of problems, skepticism concerning the generality and/or practicality of the building-block hypothesis as an explanation for GAs' efficiency still remains. Indeed, there is a reasonable amount of work that attempts to understand its limitations from the perspective of estimation of distribution algorithms. Limitations There are limitations of the use of a genetic algorithm compared to alternative optimization algorithms: Repeated fitness function evaluation for complex problems is often the most prohibitive and limiting segment of artificial evolutionary algorithms.
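Because the cost of a run is usually dominated by these fitness evaluations, one simple and widely applicable mitigation (a sketch only, not a feature of any particular GA package) is to cache results keyed by genotype, so that identical individuals recurring across generations are never re-evaluated:

from functools import lru_cache

def expensive_objective(bits):
    # Hypothetical stand-in for a costly simulation (e.g. a CFD run); assumption only.
    return sum(bits)

@lru_cache(maxsize=None)
def cached_fitness(genome_key):
    # genome_key is a tuple of bits, which is hashable, so each distinct
    # genome is passed to the expensive objective at most once.
    return expensive_objective(genome_key)

def fitness(genome):
    return cached_fitness(tuple(genome))

Approximate surrogate models, mentioned below, go further by replacing most expensive evaluations entirely rather than merely avoiding repeats.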
Finding the optimal solution to complex high-dimensional, multimodal problems often requires very expensive fitness function evaluations. In real world problems such as structural optimization problems, a single function evaluation may require several hours to several days of complete simulation. Typical optimization methods cannot deal with such types of problem. In this case, it may be necessary to forgo an exact evaluation and use an approximated fitness that is computationally efficient. It is apparent that amalgamation of approximate models may be one of the most promising approaches to convincingly use GA to solve complex real life problems. Genetic algorithms do not scale well with complexity. That is, where the number of elements which are exposed to mutation is large there is often an exponential increase in search space size. This makes it extremely difficult to use the technique on problems such as designing an engine, a house or a plane . In order to make such problems tractable to evolutionary search, they must be broken down into the simplest representation possible. Hence we typically see evolutionary algorithms encoding designs for fan blades instead of engines, building shapes instead of detailed construction plans, and airfoils instead of whole aircraft designs. The second problem of complexity is the issue of how to protect parts that have evolved to represent good solutions from further destructive mutation, particularly when their fitness assessment requires them to combine well with other parts. The "better" solution is only in comparison to other solutions. As a result, the stop criterion is not clear in every problem. In many problems, GAs have a tendency to converge towards local optima or even arbitrary points rather than the global optimum of the problem. This means that it does not "know how" to sacrifice short-term fitness to gain longer-term fitness. The likelihood of this occurring depends on the shape of the fitness landscape: certain problems may provide an easy ascent towards a global optimum, others may make it easier for the function to find the local optima. This problem may be alleviated by using a different fitness function, increasing the rate of mutation, or by using selection techniques that maintain a diverse population of solutions, although the No Free Lunch theorem proves that there is no general solution to this problem. A common technique to maintain diversity is to impose a "niche penalty", wherein, any group of individuals of sufficient similarity (niche radius) have a penalty added, which will reduce the representation of that group in subsequent generations, permitting other (less similar) individuals to be maintained in the population. This trick, however, may not be effective, depending on the landscape of the problem. Another possible technique would be to simply replace part of the population with randomly generated individuals, when most of the population is too similar to each other. Diversity is important in genetic algorithms (and genetic programming) because crossing over a homogeneous population does not yield new solutions. In evolution strategies and evolutionary programming, diversity is not essential because of a greater reliance on mutation. Operating on dynamic data sets is difficult, as genomes begin to converge early on towards solutions which may no longer be valid for later data. 
Several methods have been proposed to remedy this by increasing genetic diversity somehow and preventing early convergence, either by increasing the probability of mutation when the solution quality drops (called triggered hypermutation), or by occasionally introducing entirely new, randomly generated elements into the gene pool (called random immigrants). Again, evolution strategies and evolutionary programming can be implemented with a so-called "comma strategy" in which parents are not maintained and new parents are selected only from offspring. This can be more effective on dynamic problems. GAs cannot effectively solve problems in which the only fitness measure is a single right/wrong measure (like decision problems), as there is no way to converge on the solution (no hill to climb). In these cases, a random search may find a solution as quickly as a GA. However, if the situation allows the success/failure trial to be repeated giving (possibly) different results, then the ratio of successes to failures provides a suitable fitness measure. For specific optimization problems and problem instances, other optimization algorithms may be more efficient than genetic algorithms in terms of speed of convergence. Alternative and complementary algorithms include evolution strategies, evolutionary programming, simulated annealing, Gaussian adaptation, hill climbing, and swarm intelligence (e.g.: ant colony optimization, particle swarm optimization) and methods based on integer linear programming. The suitability of genetic algorithms is dependent on the amount of knowledge of the problem; well known problems often have better, more specialized approaches. Variants Chromosome representation The simplest algorithm represents each chromosome as a bit string. Typically, numeric parameters can be represented by integers, though it is possible to use floating point representations. The floating point representation is natural to evolution strategies and evolutionary programming. The notion of real-valued genetic algorithms has been offered but is really a misnomer because it does not really represent the building block theory that was proposed by John Henry Holland in the 1970s. This theory is not without support though, based on theoretical and experimental results (see below). The basic algorithm performs crossover and mutation at the bit level. Other variants treat the chromosome as a list of numbers which are indexes into an instruction table, nodes in a linked list, hashes, objects, or any other imaginable data structure. Crossover and mutation are performed so as to respect data element boundaries. For most data types, specific variation operators can be designed. Different chromosomal data types seem to work better or worse for different specific problem domains. When bit-string representations of integers are used, Gray coding is often employed. In this way, small changes in the integer can be readily affected through mutations or crossovers. This has been found to help prevent premature convergence at so-called Hamming walls, in which too many simultaneous mutations (or crossover events) must occur in order to change the chromosome to a better solution. Other approaches involve using arrays of real-valued numbers instead of bit strings to represent chromosomes. Results from the theory of schemata suggest that in general the smaller the alphabet, the better the performance, but it was initially surprising to researchers that good results were obtained from using real-valued chromosomes. 
This was explained as the set of real values in a finite population of chromosomes forming a virtual alphabet (when selection and recombination are dominant) with a much lower cardinality than would be expected from a floating point representation. An expansion of the Genetic Algorithm accessible problem domain can be obtained through more complex encoding of the solution pools by concatenating several types of heterogeneously encoded genes into one chromosome. This particular approach allows for solving optimization problems that require vastly disparate definition domains for the problem parameters. For instance, in problems of cascaded controller tuning, the internal loop controller structure can belong to a conventional regulator of three parameters, whereas the external loop could implement a linguistic controller (such as a fuzzy system) which has an inherently different description. This particular form of encoding requires a specialized crossover mechanism that recombines the chromosome by section, and it is a useful tool for the modelling and simulation of complex adaptive systems, especially evolution processes. Elitism A practical variant of the general process of constructing a new population is to allow the best organism(s) from the current generation to carry over to the next, unaltered. This strategy is known as elitist selection and guarantees that the solution quality obtained by the GA will not decrease from one generation to the next. Parallel implementations Parallel implementations of genetic algorithms come in two flavors. Coarse-grained parallel genetic algorithms assume a population on each of the computer nodes and migration of individuals among the nodes. Fine-grained parallel genetic algorithms assume an individual on each processor node which acts with neighboring individuals for selection and reproduction. Other variants, like genetic algorithms for online optimization problems, introduce time-dependence or noise in the fitness function. Adaptive GAs Genetic algorithms with adaptive parameters (adaptive genetic algorithms, AGAs) are another significant and promising variant of genetic algorithms. The probabilities of crossover (pc) and mutation (pm) greatly determine the degree of solution accuracy and the convergence speed that genetic algorithms can obtain. Instead of using fixed values of pc and pm, AGAs utilize the population information in each generation and adaptively adjust the pc and pm in order to maintain the population diversity as well as to sustain the convergence capacity. In AGA (adaptive genetic algorithm), the adjustment of pc and pm depends on the fitness values of the solutions. In CAGA (clustering-based adaptive genetic algorithm), through the use of clustering analysis to judge the optimization states of the population, the adjustment of pc and pm depends on these optimization states. It can be quite effective to combine GA with other optimization methods. A GA tends to be quite good at finding generally good global solutions, but quite inefficient at finding the last few mutations to find the absolute optimum. Other techniques (such as simple hill climbing) are quite efficient at finding the absolute optimum in a limited region. Alternating GA and hill climbing can improve the efficiency of GA while overcoming the lack of robustness of hill climbing. This means that the rules of genetic variation may have a different meaning in the natural case.
For instance – provided that steps are stored in consecutive order – crossing over may sum a number of steps from maternal DNA with a number of steps from paternal DNA, and so on. This is like adding vectors that are more likely to follow a ridge in the phenotypic landscape. Thus, the efficiency of the process may be increased by many orders of magnitude. Moreover, the inversion operator has the opportunity to place steps in consecutive order or any other suitable order in favour of survival or efficiency. A variation, where the population as a whole is evolved rather than its individual members, is known as gene pool recombination. A number of variations have been developed to attempt to improve performance of GAs on problems with a high degree of fitness epistasis, i.e. where the fitness of a solution consists of interacting subsets of its variables. Such algorithms aim to learn (before exploiting) these beneficial phenotypic interactions. As such, they are aligned with the Building Block Hypothesis in adaptively reducing disruptive recombination. Prominent examples of this approach include the mGA, GEMGA and LLGA. Problem domains Problems which appear to be particularly appropriate for solution by genetic algorithms include timetabling and scheduling problems, and many scheduling software packages are based on GAs. GAs have also been applied to engineering. Genetic algorithms are often applied as an approach to solve global optimization problems. As a general rule of thumb genetic algorithms might be useful in problem domains that have a complex fitness landscape as mixing, i.e., mutation in combination with crossover, is designed to move the population away from local optima that a traditional hill climbing algorithm might get stuck in. Observe that commonly used crossover operators cannot change any uniform population. Mutation alone can provide ergodicity of the overall genetic algorithm process (seen as a Markov chain). Examples of problems solved by genetic algorithms include: mirrors designed to funnel sunlight to a solar collector, antennae designed to pick up radio signals in space, walking methods for computer figures, and optimal design of aerodynamic bodies in complex flowfields. In his Algorithm Design Manual, Skiena advises against genetic algorithms for any task. History In 1950, Alan Turing proposed a "learning machine" which would parallel the principles of evolution. Computer simulation of evolution started as early as 1954 with the work of Nils Aall Barricelli, who was using the computer at the Institute for Advanced Study in Princeton, New Jersey. His 1954 publication was not widely noticed. Starting in 1957, the Australian quantitative geneticist Alex Fraser published a series of papers on simulation of artificial selection of organisms with multiple loci controlling a measurable trait. From these beginnings, computer simulation of evolution by biologists became more common in the early 1960s, and the methods were described in books by Fraser and Burnell (1970) and Crosby (1973). Fraser's simulations included all of the essential elements of modern genetic algorithms. In addition, Hans-Joachim Bremermann published a series of papers in the 1960s that also adopted a population of solutions to optimization problems, undergoing recombination, mutation, and selection. Bremermann's research also included the elements of modern genetic algorithms. Other noteworthy early pioneers include Richard Friedberg, George Friedman, and Michael Conrad.
Many early papers are reprinted by Fogel (1998). Although Barricelli, in work he reported in 1963, had simulated the evolution of the ability to play a simple game, artificial evolution only became a widely recognized optimization method as a result of the work of Ingo Rechenberg and Hans-Paul Schwefel in the 1960s and early 1970s – Rechenberg's group was able to solve complex engineering problems through evolution strategies. Another approach was the evolutionary programming technique of Lawrence J. Fogel, which was proposed for generating artificial intelligence. Evolutionary programming originally used finite state machines for predicting environments, and used variation and selection to optimize the predictive logics. Genetic algorithms in particular became popular through the work of John Holland in the early 1970s, and particularly his book Adaptation in Natural and Artificial Systems (1975). His work originated with studies of cellular automata, conducted by Holland and his students at the University of Michigan. Holland introduced a formalized framework for predicting the quality of the next generation, known as Holland's Schema Theorem. Research in GAs remained largely theoretical until the mid-1980s, when The First International Conference on Genetic Algorithms was held in Pittsburgh, Pennsylvania. Commercial products In the late 1980s, General Electric started selling the world's first genetic algorithm product, a mainframe-based toolkit designed for industrial processes. In 1989, Axcelis, Inc. released Evolver, the world's first commercial GA product for desktop computers. The New York Times technology writer John Markoff wrote about Evolver in 1990, and it remained the only interactive commercial genetic algorithm until 1995. Evolver was sold to Palisade in 1997, translated into several languages, and is currently in its 6th version. Since the 1990s, MATLAB has built in three derivative-free optimization heuristic algorithms (simulated annealing, particle swarm optimization, genetic algorithm) and two direct search algorithms (simplex search, pattern search). Related techniques Parent fields Genetic algorithms are a sub-field of: evolutionary algorithms, evolutionary computing, metaheuristics, stochastic optimization, and optimization. Related fields Evolutionary algorithms Evolutionary algorithms are a sub-field of evolutionary computing. Evolution strategies (ES, see Rechenberg, 1994) evolve individuals by means of mutation and intermediate or discrete recombination. ES algorithms are designed particularly to solve problems in the real-value domain. They use self-adaptation to adjust control parameters of the search. De-randomization of self-adaptation has led to the contemporary Covariance Matrix Adaptation Evolution Strategy (CMA-ES). Evolutionary programming (EP) involves populations of solutions with primarily mutation and selection and arbitrary representations. They use self-adaptation to adjust parameters, and can include other variation operations such as combining information from multiple parents. Estimation of Distribution Algorithm (EDA) substitutes model-guided operators for traditional reproduction operators. Such models are learned from the population by employing machine learning techniques and represented as Probabilistic Graphical Models, from which new solutions can be sampled or generated from guided-crossover. Genetic programming (GP) is a related technique popularized by John Koza in which computer programs, rather than function parameters, are optimized.
Genetic programming often uses tree-based internal data structures to represent the computer programs for adaptation instead of the list structures typical of genetic algorithms. There are many variants of Genetic Programming, including Cartesian genetic programming, Gene expression programming, Grammatical Evolution, Linear genetic programming, Multi expression programming etc. Grouping genetic algorithm (GGA) is an evolution of the GA where the focus is shifted from individual items, like in classical GAs, to groups or subset of items. The idea behind this GA evolution proposed by Emanuel Falkenauer is that solving some complex problems, a.k.a. clustering or partitioning problems where a set of items must be split into disjoint group of items in an optimal way, would better be achieved by making characteristics of the groups of items equivalent to genes. These kind of problems include bin packing, line balancing, clustering with respect to a distance measure, equal piles, etc., on which classic GAs proved to perform poorly. Making genes equivalent to groups implies chromosomes that are in general of variable length, and special genetic operators that manipulate whole groups of items. For bin packing in particular, a GGA hybridized with the Dominance Criterion of Martello and Toth, is arguably the best technique to date. Interactive evolutionary algorithms are evolutionary algorithms that use human evaluation. They are usually applied to domains where it is hard to design a computational fitness function, for example, evolving images, music, artistic designs and forms to fit users' aesthetic preference. Swarm intelligence Swarm intelligence is a sub-field of evolutionary computing. Ant colony optimization (ACO) uses many ants (or agents) equipped with a pheromone model to traverse the solution space and find locally productive areas. Although considered an Estimation of distribution algorithm, Particle swarm optimization (PSO) is a computational method for multi-parameter optimization which also uses population-based approach. A population (swarm) of candidate solutions (particles) moves in the search space, and the movement of the particles is influenced both by their own best known position and swarm's global best known position. Like genetic algorithms, the PSO method depends on information sharing among population members. In some problems the PSO is often more computationally efficient than the GAs, especially in unconstrained problems with continuous variables. Evolutionary algorithms combined with Swarm intelligence Mayfly optimization algorithm (MA) combines major advantages of evolutionary algorithms and swarm intelligence algorithms. Other evolutionary computing algorithms Evolutionary computation is a sub-field of the metaheuristic methods. Memetic algorithm (MA), often called hybrid genetic algorithm among others, is a population-based method in which solutions are also subject to local improvement phases. The idea of memetic algorithms comes from memes, which unlike genes, can adapt themselves. In some problem areas they are shown to be more efficient than traditional evolutionary algorithms. Bacteriologic algorithms (BA) inspired by evolutionary ecology and, more particularly, bacteriologic adaptation. Evolutionary ecology is the study of living organisms in the context of their environment, with the aim of discovering how they adapt. Its basic concept is that in a heterogeneous environment, there is not one individual that fits the whole environment. 
So, one needs to reason at the population level. It is also believed BAs could be successfully applied to complex positioning problems (antennas for cell phones, urban planning, and so on) or data mining. Cultural algorithm (CA) consists of the population component almost identical to that of the genetic algorithm and, in addition, a knowledge component called the belief space. Differential evolution (DE) inspired by migration of superorganisms. Gaussian adaptation (normal or natural adaptation, abbreviated NA to avoid confusion with GA) is intended for the maximisation of manufacturing yield of signal processing systems. It may also be used for ordinary parametric optimisation. It relies on a certain theorem valid for all regions of acceptability and all Gaussian distributions. The efficiency of NA relies on information theory and a certain theorem of efficiency. Its efficiency is defined as information divided by the work needed to get the information. Because NA maximises mean fitness rather than the fitness of the individual, the landscape is smoothed such that valleys between peaks may disappear. Therefore it has a certain "ambition" to avoid local peaks in the fitness landscape. NA is also good at climbing sharp crests by adaptation of the moment matrix, because NA may maximise the disorder (average information) of the Gaussian simultaneously keeping the mean fitness constant. Other metaheuristic methods Metaheuristic methods broadly fall within stochastic optimisation methods. Simulated annealing (SA) is a related global optimization technique that traverses the search space by testing random mutations on an individual solution. A mutation that increases fitness is always accepted. A mutation that lowers fitness is accepted probabilistically based on the difference in fitness and a decreasing temperature parameter. In SA parlance, one speaks of seeking the lowest energy instead of the maximum fitness. SA can also be used within a standard GA algorithm by starting with a relatively high rate of mutation and decreasing it over time along a given schedule. Tabu search (TS) is similar to simulated annealing in that both traverse the solution space by testing mutations of an individual solution. While simulated annealing generates only one mutated solution, tabu search generates many mutated solutions and moves to the solution with the lowest energy of those generated. In order to prevent cycling and encourage greater movement through the solution space, a tabu list is maintained of partial or complete solutions. It is forbidden to move to a solution that contains elements of the tabu list, which is updated as the solution traverses the solution space. Extremal optimization (EO) Unlike GAs, which work with a population of candidate solutions, EO evolves a single solution and makes local modifications to the worst components. This requires that a suitable representation be selected which permits individual solution components to be assigned a quality measure ("fitness"). The governing principle behind this algorithm is that of emergent improvement through selectively removing low-quality components and replacing them with a randomly selected component. This is decidedly at odds with a GA that selects good solutions in an attempt to make better solutions. Other stochastic optimisation methods The cross-entropy (CE) method generates candidate solutions via a parameterized probability distribution. 
The parameters are updated via cross-entropy minimization, so as to generate better samples in the next iteration. Reactive search optimization (RSO) advocates the integration of sub-symbolic machine learning techniques into search heuristics for solving complex optimization problems. The word reactive hints at a ready response to events during the search through an internal online feedback loop for the self-tuning of critical parameters. Methodologies of interest for Reactive Search include machine learning and statistics, in particular reinforcement learning, active or query learning, neural networks, and metaheuristics. See also Genetic programming List of genetic algorithm applications Genetic algorithms in signal processing (a.k.a. particle filters) Propagation of schema Universal Darwinism Metaheuristics Learning classifier system Rule-based machine learning References Bibliography Rechenberg, Ingo (1994): Evolutionsstrategie '94, Stuttgart: Fromman-Holzboog. Schwefel, Hans-Paul (1974): Numerische Optimierung von Computer-Modellen (PhD thesis). Reprinted by Birkhäuser (1977). External links Resources Provides a list of resources in the genetic algorithms field An Overview of the History and Flavors of Evolutionary Algorithms Tutorials Genetic Algorithms - Computer programs that "evolve" in ways that resemble natural selection can solve complex problems even their creators do not fully understand An excellent introduction to GA by John Holland and with an application to the Prisoner's Dilemma An online interactive Genetic Algorithm tutorial for a reader to practise or learn how a GA works: Learn step by step or watch global convergence in batch, change the population size, crossover rates/bounds, mutation rates/bounds and selection mechanisms, and add constraints. A Genetic Algorithm Tutorial by Darrell Whitley Computer Science Department Colorado State University An excellent tutorial with much theory "Essentials of Metaheuristics", 2009 (225 p). Free open text by Sean Luke. Global Optimization Algorithms – Theory and Application Genetic Algorithms in Python Tutorial with the intuition behind GAs and Python implementation. Genetic Algorithms evolves to solve the prisoner's dilemma. Written by Robert Axelrod. Evolutionary algorithms Search algorithms Cybernetics Digital organisms Machine learning sv:Genetisk programmering#Genetisk algoritm
21022902
https://en.wikipedia.org/wiki/Baltic%20and%20International%20Maritime%20Council
Baltic and International Maritime Council
BIMCO is one of the largest of the international shipping associations representing shipowners. BIMCO states that its membership represents approximately 60 percent of the world's merchant shipping tonnage and that it has members in more than 130 countries, including managers, brokers and agents. BIMCO states that its primary objective is to protect its global membership through the provision of information and advice, while promoting fair business practices and facilitating harmonisation and standardisation of commercial shipping practices and contracts. BIMCO's headquarters is in Bagsværd, a suburb of Copenhagen, Denmark. The current President is Sabrina Chao, who took over as the 45th President of BIMCO in May 2021. The current Secretary General and CEO is David Loosley, who was previously CEO at IMarEST. To support the development and refinement of maritime regulations, BIMCO is accredited as a Non-Governmental Organisation (NGO) with all relevant United Nations organs, specifically the International Maritime Organization. In an effort to promote its agenda and objectives, the association maintains a close dialogue with governments and diplomatic representations around the world, including maritime administrations, regulatory institutions, and other stakeholders within the areas of EU, the United States, and Asia. BIMCO also conducts various training programmes around the world for the Maritime community. History BIMCO was founded in 1905 in Copenhagen by a group of shipowners who came together to agree timber freight rates. In 1913, the organisation created the first draft of a standard charter party agreement. By 2016, the organisation had 2,200 member companies. Publications BIMCO produces industry guidance and publications in partnership with the Witherby Publishing Group. For example, cyber security has come under increased focus in the maritime industry since the IMO required cyber security to be addressed under the International Safety Management Code and in 2019, BIMCO, the International Chamber of Shipping, and Witherbys published the Cyber Security Workbook for Onboard Ship Use. The second edition of the nautical workbook was published in 2021. In 2021, with Witherbys, BIMCO published an updated guidance title on contractual risks entitled Check Before Fixing. BIMCO publishes industry standard contracts for ocean towage, including TOWCON and TOWHIRE which were updated in 2021. The organisation also publishes shipbuilding contracts. References External links Web Shipping trade associations Organizations established in 1905 1905 establishments in Denmark Trade associations based in Denmark Non-profit organizations based in Copenhagen Companies based in Gladsaxe Municipality
4791442
https://en.wikipedia.org/wiki/Challenge%E2%80%93response%20spam%20filtering
Challenge–response spam filtering
A challenge–response (or C/R) system is a type of spam filter that automatically sends a reply with a challenge to the (alleged) sender of an incoming e-mail. It was originally designed in 1997 by Stan Weatherby, and was called Email Verification. In this reply, the purported sender is asked to perform some action to assure delivery of the original message, which would otherwise not be delivered. The action to perform typically takes relatively little effort to do once, but great effort to perform in large numbers. This effectively filters out spammers. Challenge–response systems only need to send challenges to unknown senders. Senders that have previously performed the challenging action, or who have previously been sent e-mail(s) to, would be automatically whitelisted. The challenge in challenge–response systems C/R systems attempt to provide challenges that can be fulfilled easily by legitimate senders but only with great effort by spammers. Two characteristics that differ between legitimate senders and spammers are exploited to achieve this goal: Legitimate senders have a valid return address, while spammers usually forge a return address. This means that most spammers won't get the challenge, making them automatically fail any required action. Spammers send e-mail in large quantities and have to perform challenging actions in large numbers, while legitimate senders have to perform it at most once for every new e-mail contact. Listed below are examples of challenges that are or could be used to exploit these differences: Simply sending an (unmodified) reply to the challenging message. A challenge that includes a web URL, which can be loaded in an appropriate web browsing tool to respond to the challenge, so simply clicking on the link is sufficient to respond to the challenge. A challenge requiring reading natural language instructions on how to reply, with the inclusion of a special string or pass-code in the reply. For example, converting a date string (such as 'Thu Jan 12 08:45:44 2012') into its corresponding timestamp (1326379544). Other Turing Test approaches include solving a simple problem, or answering a simple question about the text or the recipient. Systems can attempt to produce challenges for which auto response is very difficult, or even an unsolved Artificial Intelligence problem. One example (also found in many web sites) is a "CAPTCHA" test in which the sender is required to view an image containing a word or phrase and respond with that word or phrase in text. Nowadays C/R systems are not used widely enough to make spammers bother to (automatically) respond to challenges. Therefore, C/R systems generally just rely on a simple challenge that would be made more complicated if spammers ever built such automated responders. Recommendations for C/R systems C/R systems should ideally: Allow users to view and act on messages in the holding queue. Comply with the requirements and recommendations of . Obey a detailed list of principles maintained by Brad Templeton, including allowing for the creation of "tagged" addresses or allowing pass-codes placed in either the header or the body of the message—any of which lets messages be accepted without being challenged. For example, the TMDA system can create "tagged" addresses that permit: mail sent from a particular address; mail that contains a certain "keyword"; or mail that is sent within a pre-set length of time, to allow correspondence related to an online order, but which then expires to disallow future marketing e-mail.
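As an illustration of the expiring "tagged" addresses just described, the following Python sketch derives a sub-address from an expiry date and a keyed hash, so that the receiving system can verify the tag without keeping per-address state. This is not TMDA's actual scheme; the address format, key handling and names are assumptions made only for the example.

import hashlib
import hmac
import time

SECRET_KEY = b"replace-with-a-private-key"  # assumption: known only to the recipient's mail system
LOCAL_PART, DOMAIN = "alice", "example.com"  # hypothetical mailbox

def tagged_address(valid_days=30):
    # Produce an expiring sub-address such as alice+20250101-1a2b3c4d@example.com.
    expiry = time.strftime("%Y%m%d", time.gmtime(time.time() + valid_days * 86400))
    mac = hmac.new(SECRET_KEY, f"{LOCAL_PART}:{expiry}".encode(), hashlib.sha256).hexdigest()[:8]
    return f"{LOCAL_PART}+{expiry}-{mac}@{DOMAIN}"

def accept_without_challenge(address):
    # True if the address carries a valid, unexpired tag; otherwise the mail
    # would fall back to the normal challenge-response path.
    try:
        local, _domain = address.split("@")
        base, tag = local.split("+")
        expiry, mac = tag.split("-")
    except ValueError:
        return False
    expected = hmac.new(SECRET_KEY, f"{base}:{expiry}".encode(), hashlib.sha256).hexdigest()[:8]
    return hmac.compare_digest(mac, expected) and expiry >= time.strftime("%Y%m%d", time.gmtime())

addr = tagged_address(valid_days=30)
print(addr, accept_without_challenge(addr))  # a freshly issued tag verifies as True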
Problems with sending challenges to forged email addresses can be reduced if the challenges are only sent when: the message header is properly formed; the message is sent from an IP address with an associated domain; the server has passed a greetpause test; the server has passed a greylisting test; the originating IP address is not found on trusted blacklists; and the sender's email address has not failed an E-mail authentication test, using techniques such as SPF and DKIM. Criticisms Critics of C/R systems have raised several issues regarding their legitimacy and usefulness as an email defense. A number of these issues relate to all programs which auto-respond to E-mail, including mailing list managers, vacation programs and bounce messages from mail servers. Challenges sent to forged email addresses Spammers can use a fake, non-existent address as sender address (in the From field) in the e-mail header, but can also use a forged, existing sender address (a valid, but an arbitrary person's address without this person's consent). The latter would become increasingly common if, for example, callback verification became more popular to detect spam. C/R systems challenging a message with a forged sender address would send their challenge as a new message to the person whose address was forged. This would create e-mail backscatter, which would effectively shift the burden from the person who would have received the spam to the person whose address was forged, and which may be treated the same as any other Unsolicited Bulk Email (UBE) by the receiving system, possibly leading to blacklisting of the mail server or even listing on a DNSBL. In addition, if the forged sender decided to validate the challenge, the C/R user would receive the spam anyway and the forged sender address would be whitelisted. Though definitely an undesirable side-effect, this issue would be non-existent if people whose email address was used as a forged address in spam happen to run a C/R system themselves. In this case, one of the C/R users must implement some form of return address signing (such as Bounce Address Tag Validation) to ensure that the challenge goes through. Also, if systems like SPF and DKIM became common, forged sender addresses would be recognized by these systems before reaching a C/R system. In some cases, C/R systems can be tricked into becoming spam relays. To be useful, some part of the message under challenge is generally included in the challenge message. A spammer, knowing that he's sending to a system using C/R, could design his message so that his "spam payload" is in the part of the message that the challenge message includes. In this case, the forged sender is the actual recipient of the spam, and the C/R system unwittingly acts as the relay. Social issues Disseminating an ordinary email address that is protected by a C/R system results in challenges to those who send mail to that address. Some C/R critics consider it rude to give people your email address, then require them (unless previously whitelisted, which might not always be possible) to answer the challenge before they can send you mail. Advocates of C/R systems argue that the benefits by far outweigh the 'burden' of an incidental challenge, and that there will probably never be a final solution against spam without laying some kind of burden on the e-mail sender. They reason that the more widespread the use of C/R systems is, the more understood, accepted and appreciated they are.
In an analogy with snail mail, the sender is prepared to pay for the stamp; in an analogy with phone calls, the caller is prepared to pay for the outgoing call. Interaction with mailing lists or other automated mailers Some C/R systems interact badly with mailing list software. If a person subscribed to a mailing list begins to use C/R software, posters to the mailing list may be confronted by challenge messages. Order confirmations, billing statements and delivery notices from online shopping systems are usually sent via automated systems. Email challenges sent to such systems can be lost, and legitimate mail sent by these systems may not reach the C/R system user. Advocates of C/R systems argue that, though it takes extra effort, solutions for these problems exist if the end-user behind the C/R system does these simple things: Whitelist a mailing list address manually as soon as one subscribes to it. Note: for many email groups, the new member won't know the group's address until after receipt of the "welcome" email, thus making this recommendation unworkable. Use 'tagged email addresses' for mailing lists or automated mailers like the above, which can be recognized and cleared automatically by the C/R system. Manually inspect the message queue and override the C/R process in cases where the C/R system holds an expected message from an automated mailer. False positives C/R advocates claim that such systems have a lower rate of false positives than other systems for automatically filtering unsolicited bulk email. Critics argue that typical users of C/R systems still need to review their challenged mail regularly, looking for non-bulk mail or solicited bulk email for which the sender has not responded to the challenge. This issue is particularly notable with newsletters, transactional messages, and other solicited bulk email, as such senders do not usually check for challenges to their mail. However, if the bulk email in question was solicited, then the C/R user could be expected to have added it to the whitelist. If the bulk email was not solicited, then by definition it is spam, and is filtered by the C/R system. Implementations Notable implementations include Tagged Message Delivery Agent; Channel email, which simply asks for a reply and does not actually try to determine whether the user is human (thus getting rid of the spammers that don't use legitimate emails, without requiring costly processing); and FairUCE ("Fair use of Unsolicited Commercial Email"), developed by IBM, which tried to find a relationship connecting the envelope sender's domain name and the IP address of the client delivering the mail, using a series of cached DNS look-ups. If a relationship could be found, FairUCE checked the recipient's whitelist and blacklist, as well as the domain's reputation, to determine whether to accept, reject, challenge on reputation, or present the user with a set of whitelist/blacklist options. As of 2010, the project is listed as "retired" technology.
Notes References External links SpamHelp Challenge/Response Services A listing of challenge/response filtering service providers When Spam Filters Aren't Enough, Walt Mossberg of Wall Street Journal, March 22, 2007 Why Challenge-Response is a Bad Idea July 2006 Challenge-Response systems make matters worse February 2006 Challenge-Response Anti-Spam Systems Considered Harmful December 29, 2003 John Levine: Challenge-response systems are as harmful as spam May 2003 A Challenging Response to Challenge-Response May 2003 What You Need to Know About Challenge – Response Spam Filters 2003 Anti-spam Spam filtering Spamming
11041361
https://en.wikipedia.org/wiki/Autodesk%20Simulation
Autodesk Simulation
Autodesk Simulation is a general-purpose multiphysics finite element analysis software package initially developed by ALGOR Incorporated and acquired by Autodesk in January 2009. It is intended for use with Microsoft Windows and Linux operating systems. It is distributed in a number of different core packages to cater to specific applications, such as mechanical event simulation and computational fluid dynamics. Under the ALGOR name the software was used by many scientists and engineers worldwide. It has found application in aerospace, and it has received many favorable reviews. Typical uses Typical uses include bending, mechanical contact, thermal (conduction, convection, radiation), fluid dynamics, and coupled or uncoupled multiphysics. Materials and elements database Autodesk Simulation's library of material models includes metals and alloys, plastics, glass, foams, fabrics, elastomers, concrete (with rebar), soils and user-defined materials. Autodesk Simulation's element library depends on the geometry and the type of analysis performed. It includes 8 and 4 node solid, 8 and 4 node shell, as well as beam and rod elements. References External links Autodesk simulation products page Finite element software Science software for Linux Finite element software for Linux
27848
https://en.wikipedia.org/wiki/Steve%20Wozniak
Steve Wozniak
Stephen Gary Wozniak (born August 11, 1950), also known by his nickname "Woz", is an American electronics engineer, computer programmer, philanthropist, and technology entrepreneur. In 1976, with business partner Steve Jobs, he co-founded Apple Inc., which later became the world's largest information technology company by revenue and the largest company in the world by market capitalization. Through his work at Apple in the 1970s and 1980s, he is widely recognized as one of the prominent pioneers of the personal-computer revolution. In 1975, Wozniak started developing the Apple I into the computer that launched Apple when he and Jobs first began marketing it the following year. He primarily designed the Apple II, introduced in 1977, known as one of the first highly successful mass-produced microcomputers, while Jobs oversaw the development of its foam-molded plastic case and early Apple employee Rod Holt developed its switching power supply. With software engineer Jef Raskin, Wozniak had a major influence over the initial development of the original Apple Macintosh concepts from 1979 to 1981, when Jobs took over the project following Wozniak's brief departure from the company due to a traumatic airplane accident. After permanently leaving Apple in 1985, Wozniak founded CL 9 and created the first programmable universal remote, released in 1987. He then pursued several other businesses and philanthropic ventures throughout his career, focusing largely on technology in K–12 schools. Wozniak has remained an employee of Apple in a ceremonial capacity since stepping down in 1985. In recent years, he has helped fund multiple entrepreneurial efforts dealing in areas such as telecommunications, flash memory, technology and pop culture conventions, ecology, satellites, technical education and more. Early life Stephen Gary Wozniak was born on August 11, 1950, in San Jose, California. His mother, Margaret Louise Wozniak (née Kern) (1923–2014), was from Washington state, and his father, Francis Jacob "Jerry" Wozniak (1925–1994) of Michigan, was an engineer for the Lockheed Corporation. Wozniak graduated from Homestead High School in 1968, in Cupertino, California. Wozniak has one brother, Mark Wozniak, a former tech executive who lives in Menlo Park. He also has one sister, Leslie Wozniak. She attended Homestead High School in Cupertino. She is a grant adviser at Five Bridges Foundation, which helps at-risk youths in San Francisco. She once said it was her mother who introduced activism to her and her siblings. The name on Wozniak's birth certificate is "Stephan Gary Wozniak", but his mother said that she intended it to be spelled "Stephen", which is what he uses. Wozniak has mentioned that his surname is Polish. In the early 1970s, Wozniak's blue box design earned him the nickname "Berkeley Blue" in the phreaking community. Wozniak has credited watching Star Trek and attending Star Trek conventions while in his youth as a source of inspiration for his starting Apple Inc. Career In 1969, Wozniak returned to the San Francisco Bay Area after being expelled from the University of Colorado Boulder in his first year for hacking the university's computer system. He re-enrolled at De Anza College in Cupertino before transferring to the University of California, Berkeley, in 1971. In June of that year, for a self-taught engineering project, Wozniak designed and built his first computer with his friend Bill Fernandez.
Built before useful microprocessors, screens, and keyboards were available, the machine used punch cards and only 20 TTL chips donated by an acquaintance; they named it "Cream Soda" after their favorite beverage. A newspaper reporter stepped on the power supply cable and blew up the computer, but it served Wozniak as "a good prelude to my thinking 5 years later with the Apple I and Apple II computers". Before focusing his attention on Apple, he was employed at Hewlett-Packard (HP), where he designed calculators. It was during this time that he dropped out of Berkeley and befriended Steve Jobs. Wozniak was introduced to Jobs by Fernandez, who attended Homestead High School with Jobs in 1971. Jobs and Wozniak became friends when Jobs worked for the summer at HP, where Wozniak, too, was employed, working on a mainframe computer. Their first business partnership began later that year when Wozniak read an article titled "Secrets of the Little Blue Box" from the October 1971 issue of Esquire, and started to build his own "blue boxes" that enabled one to make long-distance phone calls at no cost. Jobs, who handled the sales of the blue boxes, managed to sell some two hundred of them for $150 each, and split the profit with Wozniak. Jobs later told his biographer that if it hadn't been for Wozniak's blue boxes, "there wouldn't have been an Apple." In 1973, Jobs was working for arcade game company Atari, Inc. in Los Gatos, California. He was assigned to create a circuit board for the arcade video game Breakout. According to Atari co-founder Nolan Bushnell, Atari offered $100 for each chip that was eliminated in the machine. Jobs had little knowledge of circuit board design and made a deal with Wozniak to split the fee evenly between them if Wozniak could minimize the number of chips. Wozniak reduced the number of chips by 50, by using RAM for the brick representation. Because the design was too complex to be fully comprehended at the time, and because the prototype also had no scoring or coin mechanisms, Woz's prototype could not be used. Jobs was paid the full bonus regardless. Jobs told Wozniak that Atari gave them only $700 and that Wozniak's share was thus $350. Wozniak did not learn about the actual $5,000 bonus until ten years later. While dismayed, he said that if Jobs had told him about it and had said he needed the money, Wozniak would have given it to him. In 1975, Wozniak began designing and developing the computer that would eventually make him famous, the Apple I. On June 29 of that year, he tested his first working prototype, displaying a few letters and running sample programs. It was the first time in history that a character displayed on a TV screen was generated by a home computer. With the Apple I, Wozniak was largely working to impress other members of the Palo Alto-based Homebrew Computer Club, a local group of electronics hobbyists interested in computing. The club was one of several key centers which established the home hobbyist era, essentially creating the microcomputer industry over the next few decades. Unlike other custom Homebrew designs, the Apple had an easy-to-achieve video capability that drew a crowd when it was unveiled. Apple formation and success By March 1, 1976, Wozniak completed the basic design of the Apple I computer. He alone designed the hardware, circuit board designs, and operating system for the computer. Wozniak originally offered the design to HP while working there, but was denied by the company on five occasions.
Jobs then advised Wozniak to start a business of their own to build and sell bare printed circuit boards of the Apple I. Wozniak, at first skeptical, was later convinced by Jobs that even if they were not successful they could at least say to their grandchildren that they had had their own company. To raise the money they needed to build the first batch of the circuit boards, Wozniak sold his HP scientific calculator while Jobs sold his Volkswagen van. On April 1, 1976, Jobs and Wozniak formed the Apple Computer Company (now called Apple Inc.) along with administrative supervisor Ronald Wayne, whose participation in the new venture was short-lived. The two decided on the name "Apple" shortly after Jobs returned from Oregon and told Wozniak about his time spent on an apple orchard there. After the company was formed, Jobs and Wozniak made one last trip to the Homebrew Computer Club to give a presentation of the fully assembled version of the Apple I. Paul Terrell, who was starting a new computer shop in Mountain View, California, called the Byte Shop, saw the presentation and was impressed by the machine. Terrell told Jobs that he would order 50 units of the Apple I and pay $500 each on delivery, but only if they came fully assembled, as he was not interested in buying bare printed circuit boards. Together the duo assembled the first boards in Jobs's parents' Los Altos home; initially in his bedroom and later (when there was no space left) in the garage. Wozniak's apartment in San Jose was filled with monitors, electronic devices, and computer games that he had developed. The Apple I sold for $666.66. Wozniak later said he had no idea about the relation between the number and the mark of the beast, and that he came up with the price because he liked "repeating digits". They sold their first 50 system boards to Terrell later that year. In November 1976, Jobs and Wozniak received substantial funding from a then-semi-retired Intel product marketing manager and engineer named Mike Markkula. At the request of Markkula, Wozniak resigned from his job at HP and became the vice president in charge of research and development at Apple. Wozniak's Apple I was similar to the Altair 8800, the first commercially available microcomputer, except the Apple I had no provision for internal expansion cards. With expansion cards, the Altair could attach to a computer terminal and be programmed in BASIC. In contrast, the Apple I was a hobbyist machine. Wozniak's design included a $25 CPU (MOS 6502) on a single circuit board with 256 bytes of ROM, 4K or 8K bytes of RAM, and a 40-character by 24-row display controller. Apple's first computer lacked a case, power supply, keyboard, and display, all of which had to be provided by the user. Eventually about 200 Apple I computers were produced in total. After the success of the Apple I, Wozniak designed the Apple II, the first personal computer with the ability to display color graphics, and the BASIC programming language built in. Inspired by "the technique Atari used to simulate colors on its first arcade games", Wozniak found a way of putting colors into the NTSC system by using a chip, while colors in the PAL system are achieved by "accident" when a dot occurs on a line, and he says that to this day he has no idea how it works. During the design stage, Jobs argued that the Apple II should have two expansion slots, while Wozniak wanted eight.
After a heated argument, during which Wozniak told Jobs to "go get himself another computer", they decided to go with eight slots. Jobs and Wozniak introduced the Apple II at the April 1977 West Coast Computer Faire. Wozniak's first article about the Apple II appeared in Byte magazine in May 1977. It became one of the first highly successful mass-produced personal computers in the world. Wozniak also designed the Disk II floppy disk drive, released in 1978 specifically for use with the Apple II series to replace the slower cassette tape storage. In 1980, Apple went public to instant and significant financial profitability, making Jobs and Wozniak both millionaires. The Apple II's intended successor, the Apple III, released the same year, was a commercial failure and was discontinued in 1984. According to Wozniak, the Apple III "had 100 percent hardware failures", and the primary reason for these failures was that the system was designed by Apple's marketing department, unlike Apple's previous engineering-driven projects. During the early design and development phase of the original Macintosh, Wozniak had a heavy influence over the project along with Jef Raskin, who conceived the computer. Later named the "Macintosh 128K", it would become the first mass-market personal computer featuring an integral graphical user interface and mouse. The Macintosh would also go on to introduce the desktop publishing industry with the addition of the Apple LaserWriter, the first laser printer to feature vector graphics. In a 2013 interview, Wozniak said that in 1981, "Steve [Jobs] really took over the project when I had a plane crash and wasn't there." Plane crash and temporary leave from Apple On February 7, 1981, the Beechcraft Bonanza A36TC that Wozniak was piloting (and was not qualified to operate) crashed soon after takeoff from the Sky Park Airport in Scotts Valley, California. The airplane stalled while climbing, then bounced down the runway, broke through two fences, and crashed into an embankment. Wozniak and his three passengers—then-fiancée Candice Clark, her brother Jack Clark, and Jack's girlfriend, Janet Valleau—were injured. Wozniak sustained severe face and head injuries, including losing a tooth, and also suffered for the following five weeks from anterograde amnesia, the inability to create new memories. He had no memory of the crash, and did not remember his name while in the hospital or the things he did for a time after he was released. He would later state that Apple II computer games were what helped him regain his memory. The National Transportation Safety Board investigation report cited premature liftoff and pilot inexperience as probable causes of the crash. Wozniak did not immediately return to Apple after recovering from the airplane crash, seeing it as a good reason to leave. The book Infinite Loop characterized this time: "Coming out of the semi-coma had been like flipping a reset switch in Woz's brain. It was as if in his thirty-year old body he had regained the mind he'd had at eighteen before all the computer madness had begun. And when that happened, Woz found he had little interest in engineering or design. Rather, in an odd sort of way, he wanted to start over fresh." UC Berkeley and US Festivals Later in 1981, after recovering from the plane crash, Wozniak enrolled back at UC Berkeley to complete his degree. 
Because his name was well known at this point, he enrolled under the name Rocky Raccoon Clark, which is the name listed on his diploma, although he did not officially receive his degree in electrical engineering and computer science until 1987. In 1982 and 1983, Wozniak, with help from professional concert promoter Bill Graham, sponsored two US Festivals through Unuson, a company he founded whose name is an abbreviation of "unite us in song"; "US" is pronounced like the pronoun, not as initials. Initially intended to celebrate evolving technologies, the festivals ended up combining a technology exposition with a rock festival, bringing together music, computers, television, and people. After losing several million dollars on the 1982 festival, Wozniak stated that unless the 1983 event turned a profit, he would end his involvement with rock festivals and get back to designing computers. Later that year, Wozniak returned to Apple product development, desiring no more of a role than that of an engineer and a motivational factor for the Apple workforce. Return to Apple product development In the mid-1980s he designed the Apple Desktop Bus, a proprietary bit-serial peripheral bus that was used on Macintosh and NeXT computer models. Starting in the mid-1980s, as the Macintosh experienced slow but steady growth, Apple's corporate leadership, including Steve Jobs, increasingly disrespected its flagship cash cow, the Apple II series, and Wozniak along with it. The Apple II division, other than Wozniak, was not invited to the Macintosh introduction event, and Wozniak was seen kicking the dirt in the parking lot. Although Apple II products provided about 85% of Apple's sales in early 1985, the company's January 1985 annual meeting did not mention the Apple II division or its employees, a situation typical of what frustrated Wozniak. Final departure from Apple workforce Even with the success he had helped to create at Apple, Wozniak believed that the company was hindering him from being who he wanted to be, and that it was "the bane of his existence". He enjoyed engineering, not management, and said that he missed "the fun of the early days". As other talented engineers joined the growing company, he no longer believed he was needed there, and by early 1985, Wozniak left Apple again, stating that the company had "been going in the wrong direction for the last five years". He then sold most of his stock. The Apple II platform financially carried the company well into the Macintosh era of the late 1980s; it was made semi-portable with the Apple IIc of 1984, was extended, with some input from Wozniak, by the 16-bit Apple IIGS of 1986, and was discontinued altogether with the Apple IIe on November 15, 1993 (although the Apple IIe Card, which allowed compatible Macintosh computers to run Apple II software and use certain Apple II peripherals, was produced until May 1995). Post-Apple career After his career at Apple, Wozniak founded CL 9 in 1985, which in 1987 developed and brought to market the first programmable universal remote control, called the "CORE". Beyond engineering, Wozniak's second lifelong goal had always been to teach elementary school because of the important role teachers play in students' lives. Eventually, he did teach computer classes to children from the fifth through ninth grades, and to teachers as well. Unuson continued to support this, funding additional teachers and equipment. 
In 2001, Wozniak founded Wheels of Zeus (WOZ) to create wireless GPS technology to "help everyday people find everyday things much more easily". In 2002, he joined the board of directors of Ripcord Networks, Inc., joining Apple alumni Ellen Hancock, Gil Amelio, Mike Connor, and Wheels of Zeus co-founder Alex Fielding in a new telecommunications venture. Later the same year he joined the board of directors of Danger, Inc., the maker of the Hiptop. In 2006, Wheels of Zeus was closed, and Wozniak, together with Apple alumni Hancock and Amelio, founded Acquicor Technology, a holding company for acquiring and developing technology companies. From 2009 through 2014 he was chief scientist at Fusion-io. In 2014 he became chief scientist at Primary Data, which was founded by some former Fusion-io executives. Silicon Valley Comic Con (SVCC) is an annual pop culture and technology convention at the San Jose McEnery Convention Center in San Jose, California. The convention was co-founded by Wozniak and Rick White, with Trip Hunter as CEO. Wozniak announced the annual event in 2015 along with Marvel legend Stan Lee. In October 2017, Wozniak founded Woz U, an online educational technology service for independent students and employees. As of December 2018, Woz U was licensed as a school with the Arizona state board. Though he permanently left Apple as an active employee in 1985, Wozniak chose never to remove himself from the official employee list, and he continues to represent the company at events or in interviews. Today he receives a stipend from Apple for this role. He is also an Apple shareholder. He maintained a friendly acquaintance with Steve Jobs until Jobs's death in October 2011. However, in 2006, Wozniak stated that he and Jobs were not as close as they used to be. In a 2013 interview, Wozniak said that the original Macintosh "failed" under Steve Jobs, and that it was not until Jobs left that it became a success. He called the Apple Lisa group the team that had kicked Jobs out, and said that Jobs liked to call the Lisa group "idiots for making [the Lisa computer] too expensive". To compete with the Lisa, Jobs and his new team produced a cheaper computer, one that, according to Wozniak, was "weak", "lousy" and "still at a fairly high price". "He made it by cutting the RAM down, by forcing you to swap disks here and there", says Wozniak. He attributed the eventual success of the Macintosh to people like John Sculley "who worked to build a Macintosh market when the Apple II went away". At the end of 2020, Wozniak announced the launch of Efforce, a new company helmed by him and described as a marketplace for funding ecologically friendly projects. It used a WOZX cryptocurrency token for funding and blockchain technology to redistribute profit to token holders and businesses engaged on the platform. In its first week of trading, the WOZX token rose 1,400%. In September 2021, it was reported that Wozniak and co-founder Alex Fielding were starting a company named Privateer Space to address the problem of space debris. Patents Wozniak is listed as the sole inventor on the following Apple patents: US Patent No. 4,136,359: "Microcomputer for use with video display"—for which he was inducted into the National Inventors Hall of Fame. US Patent No. 4,210,959: "Controller for magnetic disc, recorder, or the like" US Patent No. 4,217,604: "Apparatus for digitally controlling PAL color display" US Patent No. 
4,278,972: "Digitally-controlled color signal generation means for use with display" Philanthropy In 1990, Wozniak helped found the Electronic Frontier Foundation, providing some of the organization's initial funding and serving on its founding Board of Directors. He is the founding sponsor of the Tech Museum, Silicon Valley Ballet and Children's Discovery Museum of San Jose. Also since leaving Apple, Wozniak has provided all the money, and much onsite technical support, for the technology program in his local school district in Los Gatos. Un.U.Son. (Unite Us In Song), an organization Wozniak formed to organize the two US festivals, is now primarily tasked with supporting his educational and philanthropic projects. In 1986, Wozniak lent his name to the Stephen G. Wozniak Achievement Awards (popularly known as "Wozzie Awards"), which he presented to six Bay Area high school and college students for their innovative use of computers in the fields of business, art, and music. Wozniak is the subject of Camp Woz: The Admirable Lunacy of Philanthropy, a student-made film about his friend Joe Patane's nonprofit Dream Camp Foundation for high-level-need youth. Honors and awards In 1979, Wozniak was awarded the ACM Grace Murray Hopper Award. In 1985, both he and Steve Jobs received the National Medal of Technology from US President Ronald Reagan. Later he donated funds to create the "Woz Lab" at the University of Colorado at Boulder. In 1998, he was named a Fellow of the Computer History Museum "for co-founding Apple Computer and inventing the Apple I personal computer." In September 2000, Wozniak was inducted into the National Inventors Hall of Fame, and in 2001 he was awarded the 7th Annual Heinz Award for Technology, the Economy and Employment. The American Humanist Association awarded him the Isaac Asimov Science Award in 2011. In 2004, Wozniak was given the 5th Annual Telluride Tech Festival Award of Technology. He was awarded the Global Award of the President of Armenia for Outstanding Contribution to Humanity Through IT in 2011. On February 17, 2014, in Los Angeles, Wozniak was awarded the 66th Hoover Medal, presented by IEEE President and CEO J. Roberto de Marca. The award is presented to an engineer whose professional achievements and personal endeavors have advanced the well-being of humankind and is administered by a board representing five engineering organizations: the American Society of Mechanical Engineers; the American Society of Civil Engineers; the American Institute of Chemical Engineers; the American Institute of Mining, Metallurgical, and Petroleum Engineers; and the Institute of Electrical and Electronics Engineers. The New York City Chapter of Young Presidents' Organization presented their 2014 Lifetime Achievement Award to Wozniak on October 16, 2014, at the American Museum of Natural History. In November 2014, Industry Week added Wozniak to the Manufacturing Hall of Fame. On June 19, 2015, Wozniak received the Legacy for Children Award from the Children's Discovery Museum of San Jose. The Legacy for Children Award honors an individual whose legacy has significantly benefited the learning and lives of children. The purpose of the award is to focus Silicon Valley's attention on the needs of children and to encourage the community to take responsibility for their well-being. Candidates are nominated by a committee of notable community members involved in children's education, health care, human and social services, and the arts. 
The city of San Jose named a street "Woz Way" in his honor. The street address of the Children's Discovery Museum of San Jose is 180 Woz Way. On June 20, 2015, the Cal Alumni Association (UC Berkeley's alumni association) presented Wozniak with the 2015 Alumnus of the Year Award. "We are honored to recognize Steve Wozniak with CAA's most esteemed award", said CAA President Cynthia So Schroeder '91. "His invaluable contributions to education and to UC Berkeley place him among Cal's most accomplished and respected alumni." In March 2016, High Point University announced that Wozniak would serve as its Innovator in Residence. Wozniak was High Point University's commencement speaker in 2013. Through this ongoing partnership, Wozniak was to connect with High Point University students on a variety of topics and make periodic campus visits. In March 2017, Wozniak was listed by UK-based company Richtopia at number 18 on its list of the 200 Most Influential Philanthropists and Social Entrepreneurs. Wozniak is the 2021 recipient of the IEEE Masaru Ibuka Consumer Electronics Award "for pioneering the design of consumer-friendly personal computers." Honorary degrees For his contributions to technology, Wozniak has been awarded a number of Honorary Doctor of Engineering degrees, which include the following: University of Colorado Boulder: 1989 North Carolina State University: 2004 Kettering University: 2005 Nova Southeastern University, Fort Lauderdale: 2005 ESPOL University in Ecuador: 2008 Michigan State University in East Lansing: 2011 Concordia University in Montreal, Canada: June 22, 2011 State Engineering University of Armenia: November 11, 2011 Santa Clara University: June 16, 2012 University Camilo José Cela in Madrid, Spain: November 8, 2013 In media Steve Wozniak has been mentioned, represented, or interviewed countless times in media from the founding of Apple to the present. Wired magazine described him as a person of "tolerant, ingenuous self-esteem" who interviews with "a nonstop, singsong voice". Documentaries Steve Jobs: The Man in the Machine (2015) Camp Woz: The Admirable Lunacy of Philanthropy, a 2009 documentary. Geeks On Board, a 2007 documentary. The Secret History of Hacking, a 2001 documentary film featuring Wozniak and other phreakers and computer hackers. Triumph of the Nerds, a 1996 PBS documentary series about the rise of the personal computer. Steve Wozniak's Formative Moment, a March 15, 2016, original short feature film from Reddit. Feature films 1999: Pirates of Silicon Valley, a TNT film directed by Martyn Burke. Wozniak is portrayed by Joey Slotnick while Jobs is played by Noah Wyle. 2013: Jobs, a film directed by Joshua Michael Stern. Wozniak is portrayed by Josh Gad, while Jobs is portrayed by Ashton Kutcher. 2015: Steve Jobs, a feature film by Danny Boyle, with a screenplay written by Aaron Sorkin. Wozniak is portrayed by Seth Rogen, while Jobs is portrayed by Michael Fassbender. 2015: Steve Jobs vs. Bill Gates: The Competition to Control the Personal Computer, 1974–1999, an original film from the National Geographic Channel for the American Genius series. Television TechTV's The Screen Savers, September 27, 2002, featuring Steve Wozniak and convicted hacker Kevin Mitnick, with an interview with Adrian Lamo: https://www.youtube.com/watch?v=PMDI4-DNecw After seeing her stand-up performance in Saratoga, California, Wozniak began dating comedian Kathy Griffin. 
Together, they attended the 2007 Emmy Awards, and subsequently made many appearances on the fourth season of her show Kathy Griffin: My Life on the D-List. Wozniak appeared on the show as her date for the Producers Guild of America award show. However, during a June 19, 2008 appearance on The Howard Stern Show, Griffin confirmed that they were no longer dating and had decided to remain friends. Wozniak portrays a parody of himself in the first episode of the television series Code Monkeys; he plays the owner of Gameavision before selling it to help fund his next enterprise. He later appears again in the 12th episode when he is in Las Vegas at the annual Video Game Convention and sees Dave and Jerry. He also appears in a parody of the "Get a Mac" ads featured in the final episode of Code Monkeys' second season. Wozniak is also interviewed and featured in the documentary Hackers Wanted and on the BBC. Wozniak competed on Season 8 of Dancing with the Stars in 2009, where he danced with Karina Smirnoff. Though Wozniak and Smirnoff received a combined 10 points out of 30 from the three judges, the lowest score of the evening, he remained in the competition. He later posted on a social networking site that he believed the vote count was not legitimate and suggested that the Dancing with the Stars judges had lied about the vote count to keep him on the show. After being briefed on the method of judging and vote counting, he retracted and apologized for his statements. Though suffering a pulled hamstring and a fracture in his foot, Wozniak continued to compete, but was eliminated from the competition on March 31, with a score of 12 out of 30 for an Argentine Tango. On September 30, 2010, he appeared as himself on The Big Bang Theory season 4 episode "The Cruciferous Vegetable Amplification". While dining in The Cheesecake Factory where Penny works, he is approached by Sheldon via telepresence on a Texai robot. Leonard tries to explain to Penny who Wozniak is, but she says she already knows him from Dancing with the Stars. On September 30, 2013, he appeared along with early Apple employees Daniel Kottke and Andy Hertzfeld on the television show John Wants Answers to discuss the movie Jobs. In April 2021, Wozniak became a panelist for the new TV series Unicorn Hunters, a business investment show from the makers of the series The Masked Singer. Views on artificial superintelligence In March 2015, Wozniak stated that while he had originally dismissed Ray Kurzweil's opinion that machine intelligence would outpace human intelligence within several decades, he had changed his mind and had begun to feel a contradictory sense of foreboding about artificial intelligence, while still supporting the advance of technology. By June 2015, Wozniak had changed his mind again, stating that a superintelligence takeover would be good for humans. In 2016, Wozniak changed his mind once more, stating that he no longer worried about the possibility of superintelligence emerging because he is skeptical that computers will be able to compete with human "intuition": "A computer could figure out a logical endpoint decision, but that's not the way intelligence works in humans". Wozniak added that if computers do become superintelligent, "they're going to be partners of humans over all other species just forever". Personal life Wozniak lives in Los Gatos, California. He applied for Australian citizenship in 2012, and has stated that he would like to live in Melbourne, Australia in the future. 
Wozniak has been referred to frequently by the nickname "Woz", or "The Woz"; he has also been called "The Wonderful Wizard of Woz" and "The Second Steve" (in regard to his early business partner and longtime friend, Steve Jobs). "WoZ" (short for "Wheels of Zeus") is the name of a company Wozniak founded in 2002; it closed in 2006. Wozniak describes his impetus for joining the Freemasons in 1979 as being able to spend more time with his then-wife, Alice Robertson, who belonged to the Order of the Eastern Star, associated with the Masons. Wozniak has said that he quickly rose to a third degree Freemason because, whatever he does, he tries to do well. He was initiated in 1979 at Charity Lodge No. 362 in Campbell, California, now part of Mt. Moriah Lodge No. 292 in Los Gatos. Today he is no longer involved: "I did become a Freemason and know what it's about but it doesn't really fit my tech/geek personality. Still, I can be polite to others from other walks of life. After our divorce was filed I never attended again but I did contribute enough for a lifetime membership." Wozniak was married to slalom canoe gold-medalist Candice Clark from June 1981 to 1987. They have three children together, the youngest being born after their divorce was finalized. After a high-profile relationship with actress Kathy Griffin, who described him on Tom Green's House Tonight in 2008 as "the biggest techno-nerd in the Universe", Wozniak married Janet Hill, his current spouse. On his religious views, Wozniak has called himself an "atheist or agnostic". He is a member of a Segway Polo team, the Silicon Valley Aftershocks, and is considered a "super fan" of the NHL ice hockey team San Jose Sharks. In 2006, he co-authored with Gina Smith his autobiography, iWoz: From Computer Geek to Cult Icon: How I Invented the Personal Computer, Co-Founded Apple, and Had Fun Doing It. The book made The New York Times Best Seller list. Wozniak's favorite video game is Tetris for Game Boy, and he had a high score for Sabotage. In the 1990s he submitted so many high scores for Tetris to Nintendo Power that they would no longer print his scores, so he started sending them in under the reversed name "Evets Kainzow". Prior to the release of Game Boy, Wozniak called Gran Trak 10 his "favorite game ever" and said that he played the arcade game while developing hardware for the first version of Breakout for Atari. In 1985, Steve Jobs referred to Wozniak as a Gran Trak 10 "addict". Wozniak has expressed his personal disdain for money and accumulating large amounts of wealth. He told Fortune magazine in 2017, "I didn't want to be near money, because it could corrupt your values ... I really didn't want to be in that super 'more than you could ever need' category." He also said that he only invests in things "close to his heart". When Apple first went public in 1980, Wozniak offered $10 million of his own stock to early Apple employees, something Jobs refused to do. Wozniak has the condition prosopagnosia (face blindness). He has expressed support for the electronics right to repair movement. In July 2021, Wozniak made a Cameo video in response to right to repair activist Louis Rossmann, in which he described the issue as something that has "really affected me emotionally", and credited Apple's early breakthroughs to open technology of the 1970s. 
See also Apple IIGS (limited edition case molded with Woz's signature) Group coded recording Hackers: Heroes of the Computer Revolution (1984 book) Woz Cup (segway polo world championship) References Notes External links Steve Wozniak @ Andy Hertzfeld's The Original Macintosh (folklore.org) "Jul.23 -- Apple Inc. co-founder Steve Wozniak says YouTube has for months allowed scammers to use his name and likeness as part of a phony bitcoin giveaway. He speaks with Bloomberg's Emily Chang." Photographs Edwards, Jim (December 26, 2013). "These Pictures Of Apple's First Employees Are Absolutely Wonderful", Business Insider "Macintosh creators rekindle the 'Twiggy Mac'". CNET "Twiggy Lives! At the Computer Museum: Happiness is a good friend – Woz and Rod Holt". The Twiggy Mac Pages 1950 births Living people Amateur radio people American agnostics American atheists American computer businesspeople American computer programmers American computer scientists Engineers from California American inventors American people of Polish descent American technology company founders Apple II family Apple Inc. people Apple Inc. executives Apple Fellows Atari people Businesspeople from San Jose, California Computer designers De Anza College alumni Education activists Grace Murray Hopper Award laureates Hewlett-Packard people Internet activists Steve Jobs Members of the United States National Academy of Engineering National Medal of Technology recipients Nerd culture People from Los Gatos, California People with traumatic brain injuries Personal computing Philanthropists from California Survivors of aviation accidents or incidents UC Berkeley College of Engineering alumni University of Colorado Boulder alumni University of Technology Sydney faculty
1729908
https://en.wikipedia.org/wiki/Learning%20management%20system
Learning management system
A learning management system (LMS) is a software application for the administration, documentation, tracking, reporting, automation, and delivery of educational courses, training programs, or learning and development programs. The learning management system concept emerged directly from e-Learning. Learning management systems make up the largest segment of the learning system market. The LMS was first introduced in the late 1990s. Learning management systems experienced massive growth in usage due to the emphasis on remote learning during the COVID-19 pandemic. Learning management systems were designed to identify training and learning gaps, using analytical data and reporting. LMSs are focused on online learning delivery but support a range of uses, acting as a platform for online content, including both asynchronous and synchronous courses. In the higher education space, an LMS may offer classroom management for instructor-led training or a flipped classroom. Modern LMSs include intelligent algorithms to make automated recommendations for courses based on a user's skill profile, as well as to extract metadata from learning materials to make such recommendations even more accurate. Characteristics Purpose An LMS delivers and manages all types of content, including video, courses, and documents. In the education and higher education markets, an LMS will include a variety of functionality similar to that of a corporate LMS, but will also have features such as rubrics, teacher- and instructor-facilitated learning, a discussion board, and often the use of a syllabus. A syllabus is rarely a feature in a corporate LMS, although courses may start with a heading-level index to give learners an overview of the topics covered. History There are several historical phases of distance education that preceded the development of the LMS: Correspondence teaching The first known record of correspondence teaching dates back to 1723, when Caleb Phillips, a professor of shorthand, advertised in the Boston Gazette, offering teaching materials and tutorials. The first recorded correspondence course with bi-directional communication comes from England in 1840, when Isaac Pitman initiated a shorthand course in which he sent passages of the Bible to students, who would send them back in full transcription. The success of the course resulted in the foundation of the Phonographic Correspondence Society in 1843. A pioneering milestone in distance language teaching came in 1856, when Charles Toussaint and Gustav Langenscheidt founded the first European institution of distance learning. This is the first known instance of the use of materials for independent language study. Multimedia teaching: The emergence and development of the distance learning idea The concept of e-learning began developing in the early 20th century, marked by the appearance of audio-video communication systems used for remote teaching. In 1909, E. M. Forster published his story 'The Machine Stops', which depicted the delivery of lectures to remote audiences through audio communication. In 1924, Sidney L. Pressey developed the first teaching machine, which offered multiple types of practical exercises and question formats. Nine years later, University of Alberta professor M. E. Zerte transformed this machine into a problem cylinder able to compare problems and solutions. This, in a sense, was "multimedia", because it made use of several media formats to reach students and provide instruction. 
Later, printed materials would be joined by telephone, radio broadcasts, TV broadcasts, audio, and videotapes. The earliest networked learning system was the Plato Learning Management system (PLM), developed in the 1970s by Control Data Corporation. Telematic teaching In the 1980s, modern telecommunications started to be used in education. Computers became prominent in the daily use of higher education institutions, as well as instruments for student learning. Computer-aided teaching aimed to integrate technical and educational means. An earlier shift toward video communication had led the University of Houston to hold telecast classes for its students for approximately 13–15 hours a week; the classes began in 1953, while in 1956, Robin McKinnon Wood and Gordon Pask released SAKI, the first adaptive teaching system for corporate environments. The idea of automating teaching operations also inspired experts at the University of Illinois to develop their Programmed Logic for Automated Teaching Operations (PLATO), which enabled users to exchange content regardless of their location. Between 1970 and 1980, educational institutions were rapidly considering the idea of computerizing courses; the Western Behavioral Sciences Institute in California introduced the first accredited online-taught degree. Teaching through the internet: The appearance of the first LMS The history of the application of computers to education is filled with broadly descriptive terms such as computer-managed instruction (CMI), integrated learning systems (ILS), computer-based instruction (CBI), computer-assisted instruction (CAI), and computer-assisted learning (CAL), which describe everything from drill-and-practice programs to more sophisticated tutorials and more individualized instruction. The term is currently used to describe a number of different educational computer applications. FirstClass by SoftArc, used by the United Kingdom's Open University in the 1990s and 2000s to deliver online learning across Europe, was one of the earliest internet-based LMSs. The first fully featured learning management system, called EKKO, was developed and released by Norway's NKI Distance Education Network in 1991. Three years later, New Brunswick's NB Learning Network presented a similar system designed for DOS-based teaching and devoted exclusively to business learners. Technical aspects An LMS can be hosted either locally or by a vendor. A vendor-hosted cloud system tends to follow a SaaS (software as a service) model. All data in a vendor-hosted system is housed by the supplier and accessed by users through the internet, on a computer or mobile device. Vendor-hosted systems are typically easier to use and require less technical expertise. With a locally hosted LMS, all data pertaining to the LMS is held on the organization's own internal servers. Locally hosted LMS software will often be open-source, meaning users acquire (either through payment or free of charge) the LMS software and its code. With this, the user is able to modify and maintain the software through an internal team. Individuals and smaller organizations tend to stick with cloud-based systems due to the cost of internal hosting and maintenance. There are a variety of integration strategies for embedding content into LMSs, including AICC, xAPI (also called 'Tin Can'), SCORM (Sharable Content Object Reference Model) and LTI (Learning Tools Interoperability). 
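To give a concrete sense of how one of these integration standards works, the following is a minimal, hypothetical Python sketch of an xAPI ("Tin Can") interaction: it builds a single xAPI statement (actor, verb, object) and posts it to a learning record store. The endpoint URL, credentials, and course identifier are placeholder assumptions rather than real values, and the sketch is not tied to any particular LMS product.

```python
import json
import requests  # third-party HTTP library

# A minimal xAPI statement: "learner@example.com completed course-101".
statement = {
    "actor": {
        "mbox": "mailto:learner@example.com",
        "name": "Example Learner",
        "objectType": "Agent",
    },
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "id": "https://lms.example.com/courses/course-101",  # placeholder activity ID
        "objectType": "Activity",
        "definition": {"name": {"en-US": "Introduction to Biology"}},
    },
}

# Placeholder learning record store (LRS) endpoint and credentials.
LRS_URL = "https://lrs.example.com/xapi/statements"
AUTH = ("lrs_username", "lrs_password")

response = requests.post(
    LRS_URL,
    data=json.dumps(statement),
    auth=AUTH,
    headers={
        "Content-Type": "application/json",
        "X-Experience-API-Version": "1.0.3",  # version header required by the xAPI spec
    },
)
print(response.status_code, response.text)
```

In a real deployment, the LMS or the SCORM/xAPI content package would generate such statements automatically as learners work through a course, rather than a script posting them by hand.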
Through an LMS, teachers may create and integrate course materials, articulate learning goals, align content and assessments, track studying progress, and create customized tests for students. An LMS allows teachers to communicate learning objectives and organize learning timelines. A benefit of an LMS is that it delivers learning content and tools straight to learners, and assessment can be automated. It can also reach marginalized groups through special settings. Such systems have built-in customizable features including assessment and tracking. Thus, learners can see their progress in real time, and instructors can monitor and communicate the effectiveness of learning. One of the most important features of an LMS is streamlined communication between learners and instructors. Besides facilitating online learning, tracking learning progress, providing digital learning tools, and managing communication, such systems may also be used to sell content and to provide a range of communication features. Features Managing courses, users and roles Learning management systems may be used to create professionally structured course content. The teacher can add text, images, videos, PDFs, tables, links, text formatting, interactive tests, slideshows, etc. Moreover, they can create different types of users, such as teachers, students, parents, visitors and editors (hierarchies). This helps control which content a student can access, track studying progress, and engage students with contact tools. Teachers can manage courses and modules, enroll students, or set up self-enrollment. Online assessment An LMS can enable instructors to create automated assessments and assignments for learners, which are accessible and submitted online. Most platforms allow a variety of question types, such as one-line or multi-line answer; multiple choice; ordering; free text; matching; essay; true or false/yes or no; fill in the gaps; agreement scale; and offline tasks. User feedback Students' exchange of feedback both with teachers and with their peers is possible through an LMS. Teachers may create discussion groups to allow student feedback, encourage the sharing of knowledge on topics, and increase interaction in the course. Student feedback is an instrument that helps teachers improve their work, helps identify what to add to or remove from a course, and ensures students feel comfortable and included. Synchronous and Asynchronous Learning Students can either learn asynchronously (on demand, self-paced) through course content such as pre-recorded videos, PDFs, and SCORM (Sharable Content Object Reference Model) packages, or they can undertake synchronous learning through media such as webinars. Learning Analytics Learning management systems will often incorporate dashboards to track student or user progress. They can then report on key items such as completion rates, attendance data and success likelihood. Utilising these metrics can help facilitators better understand gaps in user knowledge. Learning management industry In the relatively new LMS market, commercial providers for corporate applications and education range from new entrants to those that entered the market in the 1990s. In addition to commercial packages, many open source solutions are available. In the U.S. higher education market as of spring 2021, the top three LMSs by number of institutions were Canvas (38%), Blackboard (25%), and Moodle (15%). Worldwide, the picture was different, with Moodle having over 50% of market share in Europe, Latin America, and Oceania. 
Many users of LMSs use an authoring tool to create content, which is then hosted on an LMS. In some cases, LMSs that do use a standard include a primitive authoring tool for basic content manipulation. More modern systems, in particular SaaS solutions, have decided not to adopt a standard and have rich course authoring tools. There are several standards for creating and integrating complex content into an LMS, including AICC, SCORM, xAPI and Learning Tools Interoperability. However, using SCORM or an alternative standardized course protocol is not always required and can be restrictive when used unnecessarily. Evaluation of LMSs is a complex task and significant research supports different forms of evaluation, including iterative processes where students' experiences and approaches to learning are evaluated. Advantages and disadvantages Advantages There are six major advantages of an LMS: interoperability, accessibility, reusability, durability, maintainability and adaptability, which in themselves constitute the concept of the LMS. Disadvantages Teachers have to be willing to adapt their curricula from face-to-face lectures to online lectures. There is the potential for instructors to try to directly translate existing support materials into courses, which can result in very low interactivity and engagement for learners if not done well. COVID-19 and Learning Management Systems The suspension of in-school learning caused by the COVID-19 pandemic started a dramatic shift in the way teachers and students at all levels interact with each other and with learning materials. UNESCO estimated that as of May 25, 2020, approximately 990,324,537 learners, or 56.6% of total enrolled students, had been affected by COVID-19-related school closures. In many countries, online education through the use of learning management systems became the focal point of teaching and learning. For example, statistics taken from a university's LMS during the initial school closure period (March to June 2020) indicate that student submissions and activity nearly doubled from pre-pandemic usage levels. Student satisfaction with LMS usage during this period is closely tied to the quality of the information contained within LMS modules and to maintaining student self-efficacy. From the teacher perspective, a study of K-12 teachers in Finland reported high levels of acceptance for LMS technology; however, training support and developing methods for maintaining student engagement are key to long-term success. In developing nations, the transition to LMS usage faced many challenges, which included a lower number of colleges and universities using LMSs before the pandemic, technological infrastructure limitations, and negative attitudes toward technology amongst users. See also Learning Activity Management System Massive open online course Moodle References Bibliography Further reading Connolly, P. J. (2001). A standard for success. InfoWorld, 23(42), 57-58. EDUCAUSE Evolving Technologies Committee (2003). Course Management Systems (CMS). Retrieved 25 April 2005, from http://www.educause.edu/ir/library/pdf/DEC0302.pdf A field guide to learning management systems. (2005). Retrieved 12 November 2006, from http://www.learningcircuits.org/NR/rdonlyres/BFEC9F41-66C2-42EFBE9D-E4FA0D3CE1CE/7304/LMS_fieldguide1.pdf Gibbons, A. S., Nelson, J. M., & Richards, R. (2002). The nature and origin of instructional objects. In D. A. Wiley (Ed.), The instructional use of learning objects: Online version. 
Retrieved 5 April 2005, from http://reusability.org/read/chapters/gibbons.doc Gilhooly, K. (2001). Making e-learning effective. Computerworld, 35(29), 52-53. Hodgins, H. W. (2002). The future of learning objects. In D. A. Wiley (Ed.), The instructional use of learning objects: Online version. Retrieved 13 March 2005, from http://reusability.org/read/chapters/hodgins.doc Wiley, D. (2002). Connecting learning objects to instructional design theory: A definition, a metaphor, and a taxonomy. In D. A. Wiley (Ed.), The instructional use of learning objects: Online version. Retrieved 13 March 2005, from http://reusability.org/read/chapters/wiley.doc Learning Educational software Learning management systems
4264762
https://en.wikipedia.org/wiki/Mental%20Images
Mental Images
Mental Images GmbH (stylized as mental images) was a German computer-generated imagery (CGI) software firm based in Berlin, Germany. It was acquired by Nvidia in 2007 and rebranded as the Nvidia Advanced Rendering Center (ARC), which continues to provide similar products and technology. The company provided rendering and 3D modeling technology for entertainment, computer-aided design, scientific visualization and architecture. The company was founded by the physicists and computer scientists Rolf Herken, Hans-Christian Hege, Robert Hödicke and Wolfgang Krüger and the economists Günter Ansorge, Frank Schnöckel and Hans Peter Plettner as a company with limited liability and private limited partnership (GmbH & Co. KG) in April 1986 in Berlin, Germany. The Mental Ray software project started in 1986. The first versions of the rendering software were influenced, tested and used for production by Mental Images' then-operating large commercial computer animation division, led by the visual effects supervisors John Andrew Berton (1986-1989), 2000 Academy Award winner John Nelson (1987-1989), and 1996 and 2000 Academy Award nominee Stefen Fangmeier (1988-1990). In 2003, Mental Images completed an investment round led by ViewPoint Ventures and another large international private equity investor. Since December 2007, Mental Images GmbH has been a wholly owned subsidiary of the Nvidia Corporation, with headquarters in Berlin, subsidiaries in San Francisco (Mental Images Inc.) and Melbourne (Mental Images Pty. Ltd.), as well as an office in Stockholm. After the acquisition by Nvidia, the company was renamed Nvidia Advanced Rendering Center (Nvidia ARC GmbH). Products Mental Images was the developer of the rendering software Mental Ray, iray, mental mill, RealityServer, and DiCE. Filmography Mental Images (1987) (a short film of the same name) Asterix in America (1994) (3D computer animation "Storm Sequence" and digital effects, software development) Heaven (2002) (images computed with Mental Ray) References External links Technical Oscars: The 75th Scientific & Technical Awards 2002 / 2003 mental images office at the Kant Dreieck tower 1986 establishments in Germany 2007 mergers and acquisitions 3D graphics software 3D imaging Computer-aided design software Nvidia Software companies established in 1986 Software companies of Germany
6194613
https://en.wikipedia.org/wiki/MeshLab
MeshLab
MeshLab is a 3D mesh processing software system that is oriented to the management and processing of large unstructured meshes and provides a set of tools for editing, cleaning, healing, inspecting, rendering, and converting these kinds of meshes. MeshLab is free and open-source software, subject to the requirements of the GNU General Public License (GPL), version 2 or later, and is used as both a complete package and a library powering other software. It is well known in the more technical fields of 3D development and data handling. Overview MeshLab is developed by the ISTI-CNR research center; it was initially created as a course assignment at the University of Pisa in late 2005. It is a general-purpose system aimed at the processing of the typical not-so-small unstructured 3D models that arise in the 3D scanning pipeline. The automatic mesh cleaning filters include removal of duplicated and unreferenced vertices, non-manifold edges and vertices, and null faces. Remeshing tools support high-quality simplification based on the quadric error measure, various kinds of subdivision surfaces, and two surface reconstruction algorithms from point clouds based on the ball-pivoting technique and on the Poisson surface reconstruction approach. For the removal of noise, usually present in acquired surfaces, MeshLab supports various kinds of smoothing filters and tools for curvature analysis and visualization. It includes a tool for the registration of multiple range maps based on the iterative closest point algorithm. MeshLab also includes an interactive direct paint-on-mesh system that allows users to change the color of a mesh, define selections, and directly smooth out noise and small features. MeshLab is available for most platforms, including Linux, Mac OS X, and Windows, with reduced functionality on Android and iOS, and even as a pure client-side JavaScript application called MeshLabJS. The system supports input/output in the following formats: PLY, STL, OFF, OBJ, 3DS, VRML 2.0, X3D and COLLADA. MeshLab can also import point clouds reconstructed using Photosynth. MeshLab is used in various academic and research contexts, such as microbiology, cultural heritage, surface reconstruction, paleontology, rapid prototyping in orthopedic surgery, orthodontics, and desktop manufacturing. See also Geometry processing 3D scanner List of free and open-source software packages References External links Github repository for Meshlab MeshLabJS homepage of the experimental, client-based JavaScript version of MeshLab that runs inside a browser. MeshLab Stuff Blog Development blog, with tutorials and examples of use of MeshLab. MeshLab for iOS page dedicated to the iPad and iPhone version of MeshLab. MeshLab for Android page dedicated to the Android version of MeshLab 2005 software 3D graphics software 3D graphics software that uses Qt 3D modeling software for Linux Computer-aided design software Computer-aided design software for Linux Free 3D graphics software Free computer-aided design software Free graphics software Free software programmed in C++ Video game development software
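To give a concrete sense of the kind of unstructured triangle-mesh data MeshLab operates on, the following hypothetical Python sketch (not part of MeshLab itself) writes a tetrahedron in the simple ASCII OFF format, one of the supported formats listed above; the resulting file can be opened in MeshLab for inspection or conversion to PLY, STL, or another format. The file name and geometry are arbitrary examples.

```python
# Write a minimal triangle mesh (a tetrahedron) in the ASCII OFF format.
# OFF layout: header line "OFF", then "<vertices> <faces> <edges>",
# then one vertex per line (x y z), then one face per line
# ("3 i j k" for a triangle using zero-based vertex indices).

vertices = [
    (0.0, 0.0, 0.0),
    (1.0, 0.0, 0.0),
    (0.0, 1.0, 0.0),
    (0.0, 0.0, 1.0),
]
faces = [
    (0, 2, 1),
    (0, 1, 3),
    (0, 3, 2),
    (1, 2, 3),
]

with open("tetrahedron.off", "w") as f:
    f.write("OFF\n")
    f.write(f"{len(vertices)} {len(faces)} 0\n")  # edge count is commonly left as 0
    for x, y, z in vertices:
        f.write(f"{x} {y} {z}\n")
    for a, b, c in faces:
        f.write(f"3 {a} {b} {c}\n")

print("Wrote tetrahedron.off; open it in MeshLab to view or convert it.")
```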
33957332
https://en.wikipedia.org/wiki/Android%20Ice%20Cream%20Sandwich
Android Ice Cream Sandwich
Android Ice Cream Sandwich (or Android 4.0) is the 9th major version of the Android mobile operating system developed by Google. Unveiled on October 19, 2011, Android 4.0 builds upon the significant changes made by the tablet-only release Android Honeycomb, in an effort to create a unified platform for both smartphones and tablets. Android 4.0 was focused on simplifying and modernizing the overall Android experience around a new set of human interface guidelines. As part of these efforts, it introduced a new visual appearance codenamed "Holo", which is built around a cleaner, minimalist design, and a new default typeface named Roboto. It also introduced a number of other new features, including a refreshed home screen, near-field communication (NFC) support and the ability to "beam" content to another user using the technology, an updated web browser, a new contacts manager with social network integration, the ability to access the camera and control music playback from the lock screen, visual voicemail support, face recognition for device unlocking ("Face Unlock"), the ability to monitor and limit mobile data usage, and other internal improvements. Android 4.0 received positive reviews from critics, who praised the cleaner, revamped appearance of the operating system in comparison to previous versions, along with its improved performance and functionality. However, critics felt that some of Android 4.0's stock apps were still lacking in quality and functionality in comparison to third-party equivalents, and regarded some of the operating system's new features, particularly the "face unlock" feature, as being gimmicks. According to statistics issued by Google, 0.2% of all Android devices accessing Google Play ran Ice Cream Sandwich. Development Following the tablet-only release "Honeycomb", it was announced at Google I/O 2011 that the next version of Android, code named "Ice Cream Sandwich" (ICS), would emphasize providing a unified user experience across both smartphones and tablets. In June 2011, details also began to surface about a then-new Nexus phone by Samsung to accompany ICS, which would notably exclude hardware navigation keys. Android blog RootzWiki released photos in August 2011 showing a Nexus S running a build of ICS, depicting a new application menu layout resembling that of Honeycomb, and a new interface with blue-colored accenting. An official launch event for Android 4.0 and the new Nexus phone was originally scheduled for October 11, 2011, at a CTIA trade show in San Diego. However, out of respect for the death of Apple co-founder Steve Jobs, Google and Samsung postponed the event to October 19, 2011, in Hong Kong. Android 4.0 and its launch device, the Galaxy Nexus, were officially unveiled on October 19, 2011. Andy Rubin explained that 4.0 was intended to provide an "enticing and intuitive" user experience across both smartphones and tablets. Matias Duarte, Google's vice president of design, explained that development of Ice Cream Sandwich was based around the question "What is the soul of the new machine?"; user studies concluded that the existing Android interface was too complicated, and thus prevented users from being "empowered" by their devices. 
The overall visual appearance of Android was streamlined for Ice Cream Sandwich, building upon the changes made on the tablet-oriented Android 3.0, his first project at Google; Duarte admitted that his team had cut back support for smaller screens on Honeycomb to prioritize sufficient tablet support, as he wanted Android OEMs to "stop doing silly things like taking a phone UI and stretching it out to a 10-inch tablet." Judging Android's major competitors, Duarte felt that the interface of iOS was too skeuomorphic and kitschy, Windows Phone's Metro design language looked too much like "airport lavatory signage", and that both operating systems tried too hard to enforce conformity, "[without] leaving any room for the content to express itself." For Ice Cream Sandwich, his team aimed to provide interface design guidelines which would evoke a modern appearance, while still allowing flexibility for application developers. He characterized the revised look of Ice Cream Sandwich as having "toned down the geeky nerd quotient" in comparison to Honeycomb, which carried a more futuristic appearance that was compared by critics to the aesthetics of Tron. In January 2012, following the official launch of Ice Cream Sandwich, Duarte and Google launched an Android Design portal, which features human interface guidelines, best practices, and other resources for developers building Android applications designed for Ice Cream Sandwich. Release The Galaxy Nexus was the first Android device to ship with Android 4.0. Android 4.0.3 was released on December 16, 2011, providing bug fixes, a new social stream API, and other internal improvements. The same day, Google began a rollout of Ice Cream Sandwich to the predecessor of the Galaxy Nexus, the Nexus S. However, on December 20, 2011, the Nexus S roll-out was "paused" so the company could "monitor feedback" related to the update. On March 29, 2012, Android 4.0.4 was released, adding several performance improvements to the camera and screen rotation, and other bug fixes. Google Play Services support for 4.0 ended in February 2019. Features Visual design The user interface of Android 4.0 represents an evolution of the design introduced by Honeycomb, although the futuristic aesthetics of Honeycomb were scaled back in favor of flat design with neon blue accenting, hard edges, and drop shadows for depth. Ice Cream Sandwich also introduced a new default system font, Roboto; designed in-house to replace the Droid font family, Roboto is primarily optimized for use on high-resolution mobile displays. The new visual appearance of Ice Cream Sandwich is implemented by a widget toolkit known as "Holo"; to ensure access to the Holo style across all devices, even if they use a customized interface skin elsewhere, all Android devices certified to ship with Google Play Store (formerly Android Market) must provide the capability for apps to use the unmodified Holo theme. As with Honeycomb, devices can now render navigation buttons—"Back", "Home", and "Recent apps"—on a "system bar" across the bottom of the screen, removing the need for physical equivalents. The "Menu" button that was present on previous generations of Android devices is deprecated, in favor of presenting buttons for actions within apps on "action bars", and menu items which do not fit on the bar in "action overflow" menus, designated by three vertical dots. Hardware "Search" buttons are also deprecated, in favor of search buttons within action bars. 
On devices without a "Menu" key, a temporary "Menu" key is displayed on-screen while running apps that are not coded to support the new navigation scheme. On devices that use a hardware "Menu" key, action overflow buttons are hidden in apps and are mapped to the "Menu" key. User experience The default home screen of Ice Cream Sandwich displays a persistent Google Search bar across the top of the screen, a dock across the bottom containing the app drawer button in the middle, and four slots for app shortcuts alongside it. Folders of apps can be made by dragging an app and hovering it over another. The app drawer is split into two tabs: one for apps, and one for widgets that can be placed on home screen pages. Widgets themselves can be resizable and contain scrolling content. Android 4.0 makes increased use of swiping gestures; apps and notifications can now be removed from the recent apps menu and dismissed from the notifications area by sliding them away, and a number of stock and Google apps now use a new form of tabs, in which users can navigate between different panes by either tapping their name on a strip, or swiping left and right. The phone app was updated with a Holo design, the ability to send pre-configured text message responses in response to incoming calls, and visual voicemail integration within the call log display. The web browser app incorporates updated versions of WebKit and V8, supports syncing with Google Chrome, has an override mode for loading a desktop-oriented version of a website rather than a mobile-oriented version, as well as offline browsing. The "Contacts" section of the phone app was split off into a new "People" app, which offers integration with social networks such as Google+ to display recent posts and synchronize contacts, and a "Me" profile for the device's user. The camera app was redesigned, with a reduction in shutter lag, face detection, a new panorama mode, and the ability to take still photos from a video being recorded in camcorder mode. The photo gallery app now contains basic photo editing tools. The lock screen now supports "Face Unlock", includes a shortcut for launching the camera app, and can house playback controls for music players. The keyboard incorporates improved autocomplete algorithms, and improvements to voice input allow for continuous dictation. The ability to take screenshots by holding down the power and "Volume down" buttons together was also added. On devices supporting near-field communication (NFC), "Android Beam" allows users to share links to content from compatible apps by holding the back of their device up against the back of another NFC-equipped Android device, and tapping the screen when prompted. Certain "System" apps (particularly those pre-loaded by carriers) that cannot be uninstalled can now be disabled. This hides the application and prevents it from launching, but the application is not removed from storage. Android 4.0 introduced features for managing data usage over mobile networks; users can display the total amount of data they have used over a period of time, and display data usage per app. Background data usage can be disabled globally or on a per-app basis, and a cap can be set to automatically disable data if usage reaches a certain quota as calculated by the device. Platform Android 4.0 inherits platform additions from Honeycomb, and also adds support for ambient temperature and humidity sensors, Bluetooth Health Device Profile, near-field communication (NFC), and Wi-Fi Direct. 
The operating system also provides improved support for stylus and mouse input, along with new accessibility, calendar, keychain, spell checking, social networking, and virtual private network APIs. For multimedia support, Android 4.0 also adds support for ADTS AAC, Matroska containers for Vorbis and VP8, WebP, streaming of VP8, OpenMAX AL, and HTTP Live Streaming 3.0. Reception Android 4.0 was released to positive reception: Ars Technica praised the Holo user interface for having a "sense of identity and visual coherence that were previously lacking" in comparison to previous versions of Android, also believing that the new interface style could help improve the quality of third-party apps. The stock apps of Android 4.0 were also praised for having slightly better functionality in comparison to previous versions. Other features were noted, such as the improvements to text and voice input, along with the data usage controls (especially given the increasing use of metered data plans), and its overall performance improvements in comparison to Gingerbread. However, the Face Unlock feature was panned for being an insecure gimmick, and although providing an improved experience over the previous version, some of its stock applications (such as its email client) were panned for still being inferior to third-party alternatives. Engadget also acknowledged the maturing quality of the Android experience on Ice Cream Sandwich, and praised the modern feel of its new interface in comparison to Android 2.3, along with some of the new features provided by Google's stock apps and the operating system itself. In conclusion, Engadget felt that Android 4.0 was "a gorgeous OS that offers great performance and—for the most part—doesn't feel like a half-baked effort." However, Engadget still felt that some of Android 4.0's new features (such as Face Unlock) had a "beta feel" to them, noted the lack of Facebook integration with the new People app, and that the operating system was still not as intuitive for new users as its competitors. PC Magazine acknowledged influence from Windows Phone 7 in the new "People" app and improved benchmark performance on the web browser, but considered both Android Beam and Face Unlock to be gimmicks, and criticized the lack of support for certain apps and Adobe Flash on launch. See also Android version history iOS 5 Windows Phone 7 Windows 7 References External links Android (operating system) 2011 software
38414958
https://en.wikipedia.org/wiki/William%20Cheswick
William Cheswick
William R. "Bill" Cheswick (also known as "Ches") is a computer security and networking researcher. Education Cheswick graduated from Lawrenceville School in 1970 and received a B.S. in Fundamental Science in 1975 from Lehigh University. While at Lehigh, working with Doug Price and Steve Lidie, Cheswick co-authored the Senator line-oriented text editor. Career Cheswick's early career included contracting in Bethlehem, PA between 1975 and 1977. He was a Programmer for American Newspaper Publishers Association / Research Institute in Easton, PA between 1976 and 1977 and a Systems Programmer for Computer Sciences Corporation in Warminster, PA between 1977 and 1978. Following this, Cheswick joined Systems and Computer Technology Corporation where he served as a Systems Programmer and Consultant between 1978 and 1987. Much of Cheswick's early career was related to his expertise with Control Data Corporation (CDC) mainframes, their operating systems such as SCOPE and NOS, and the related COMPASS assembly language. Cheswick initially worked with CDC systems as a student at Lehigh University. Cheswick joined Bell Labs in 1987. Shortly thereafter, he and Steven M. Bellovin created one of the world's first network firewalls. The resulting research and papers lead to their publication of the seminal book Firewalls and Internet Security, one of the first to describe the architecture of a firewall in detail. Cheswick and Bellovin also created one of the world's first honeypots in the course of detecting and trapping an attempted intruder into their network. In 1998, Cheswick, still at Bell Labs (by then controlled by Lucent) started the Internet Mapping Project, assisted by Hal Burch. The research allowed large scale mapping of the internet for the first time, using tracerouting techniques to learn the connectivity graph of global networks. The work ultimately led to the founding in 2000 of a spinoff company, Lumeta, where Cheswick was a co-founder and held the title of Chief Scientist. He joined AT&T Shannon Lab in 2007, where he remained until 2012. Hobbies, interests, and personal projects Cheswick currently lives in New Jersey with his wife. He has two children. His home is a farmhouse in Flemington, New Jersey, which is an electronic smart house, equipped with a voice synthesizer that reports relevant information, from mailbox status to evening stock news. Cheswick has developed a few interactive exhibits for science museums, including the Liberty Science Center in New Jersey. Cheswick also enjoys model rocketry, and lock picking (both electronic and physical). He is interested in developing better passwords as discussed in his article "Rethinking Passwords" (Communications of the ACM 56.2 (2013)). Cheswick has also been seeking permission from filmmakers to publish his visualizations of their movies. References External links William R. Cheswick's home page biography Home page for "Firewalls and Internet Security" Computer security specialists Living people Year of birth missing (living people) Lawrenceville School alumni Lehigh University alumni People from Flemington, New Jersey
838676
https://en.wikipedia.org/wiki/Charles%20Morgan%20%28businessman%29
Charles Morgan (businessman)
Charles Morgan (April 21, 1795 – May 8, 1878) was an American railroad and shipping magnate. He played a leading role in the development of transportation and commerce in the Southern United States through the mid- to late-19th century. Morgan started working in New York City at the age of fourteen. He managed both wholesale and retail businesses before specializing in marine shipping. He invested in sailing vessels as early as 1819, while managing all aspects of the business from his office at the wharf in New York City. He started his first partnership for a packet company in 1831. During the 1830s, he held stakes in companies shipping to Kingston, Jamaica, and Charleston, South Carolina, from New York, and a stake in a company shipping between New Orleans and Galveston, Texas. During this time, he invested more in steamships than sailing ships. The Louisiana–Texas packets became so successful that he gradually withdrew from the Atlantic trade in the late 1830s. The primary cargo of Charles Morgan's steamships from Rhode Island to the South was slaves. In the 1840s and 1850s, Morgan expanded his shipping business in the Gulf of Mexico, expanding service to Mexico and Florida, and adding stops in Texas. Texas statehood and the Mexican War were boons to his enterprises of shipping mail, troops, and war material. By 1846, he sold his last stake in a sailing vessel. After 1849 and through part of the 1850s, Morgan responded to transportation demand to California driven by the gold rush. He offered passenger service from New York City to San Francisco via Panama. These activities pulled him into business alliances and rivalries with Cornelius Vanderbilt, which culminated in Morgan's support of William Walker's filibuster in Nicaragua. Morgan also attempted to damage Vanderbilt's investments and his business positions before they agreed to a truce in 1858. In the late 1850s, a lucrative business developed out of a segment of railroad running between New Orleans and Berwick Bay, Louisiana. The railroad terminated in a desolate part of the state, and Morgan's steamers enabled the railroad to create revenue from an expensive infrastructure investment. In return, the railroad shortened Morgan's Louisiana to Texas route by about 150 miles. In 1855, Morgan incorporated his assets for the first time, co-founding the Southern Steamship Company. With family and other close friends managing the company, Morgan accelerated his investments in steamboats. During the Civil War, he lost some of his investments to seizures by the North and the South. Most of the steamers seized were corporately owned by the Southern Steamship Company, which led to its liquidation in 1863. Morgan, however, prospered during the war despite these losses. He ran blockade runners for the Confederacy, but his most profitable venture during this period was the Morgan Iron Works. This shop built engines for thirteen ships sold to the Union Navy. At the end of the war, he repurchased some of these steamships at favorable prices. During Reconstruction, Morgan sold his interest in the Morgan Iron Works. He also took advantage of the buyer's market for steamships and expanded his fleet. He resumed his Gulf packet service between New Orleans and Texas, and between New Orleans and Mobile, Alabama. In 1869, he acquired his first railroad. Two more railroad acquisitions followed in the 1870s. He died on May 8, 1878, at his home in New York City. 
Just prior to his death, he incorporated Morgan's Louisiana and Texas Railroad and Steamship Company and distributed his shares to several family members. Family life Charles Morgan was born April 21, 1795, to George and Betsy Morgan. His native town was then known as Killingworth, Connecticut, but is now known as Clinton. Charles had a younger sister named Wealthy Ann (b. September 6, 1798) and older brothers, Elias (b. September 26, 1790) and John (b. December 3, 1791). George Morgan was a successful farmer, and Charles had a comfortable upbringing in a rural area. However, a changing economy induced Charles to follow his brothers, who had moved to New York City in 1809. He married Emily Reeves in 1817, several years after establishing himself as a merchant, and they had five children together: Emily Ann (1818), Frances Eliza (1823), Charles W. (1825), Henry R. (1827), and Maria Louise (1832). They remained married until Emily's death in 1850. Morgan's daughter Frances Eliza married George W. Quintard in either 1843 or early in 1844. Like her father, Quintard grew up in Connecticut and left his hometown as a teenager seeking opportunity in New York City. Concurrent with his marriage, Quintard opened a shop, combining a grocery and ship chandlery. In 1847, he joined the marine engineering firm of T. F. Secor & Co., in which Charles Morgan was a partner, and in 1850, Quintard and Morgan purchased the firm outright and renamed it the Morgan Iron Works. Morgan's eldest daughter Emily Ann married Israel C. Harris of New Orleans. In December 1847, Harris founded a partnership with Charles Morgan's youngest son Henry, and the firm of Harris & Morgan assumed agency for all of Morgan's ships. In 1853, Maria Louise Morgan married Charles A. Whitney, a shipping agent. Out of Morgan's family, his sons-in-law assumed the most active roles in his transportation businesses. His eldest son, Charles W., eschewed the shipping business and opened his own grocery in 1849. His youngest son, Henry Morgan, married Laura Mallard of New Orleans in 1854. On June 24, 1851, Morgan married his second wife, Mary Jane Sexton, who taught math and French in New York. Charles and Mary Jane Morgan commissioned the construction of a large mansion at 7 East 26th Street, where they resided from 1852 until their deaths. Early career Charles Morgan left home for New York City at age fourteen to work for a merchant, but he later started his own import business. His arrival in the city and coming of age coincided with the growing importance of New York as a port. He conducted business as both a grocer and a chandler, but his early business activities included importer and exporter, retailer and wholesaler, and shipping merchant. Morgan owned stakes in eighteen sailing packet ships and fifteen sailing tramp vessels between 1819 and 1846. In addition to equity shares, he acted as ship's husband for seven vessels of The Ship Line, and for thirteen sailing vessels in which he had owned shares. His duties as husband included bookkeeping, dispatching, maintenance, and outfitting. In 1831, Morgan partnered with Benjamin Aymars to establish the first packet service from New York to Kingston, Jamaica. He also served as line master, responsible for coordinating all ships within the shipping line. Shipping companies New York and Charleston Steam Packet Company The New York and Charleston Steam Packet Company was formed in June 1834 by Charles Morgan, James P. Allaire, and John Haggerty. 
Allaire owned an iron foundry which counted Robert Fulton among its clients, and he acquired Fulton's shop after Fulton's death and combined it with his existing foundry. He had the side-wheeler David Brown built and dispatched it to run between New York and Charleston, South Carolina, in November 1833. The firm acquired the William Gibbons the same year, and by March 1, 1834, both steamers were making weekly runs. At this time, sailing vessels still dominated ocean navigation, including the coastal trade, but Morgan and his brother-in-law John Haggerty joined with Allaire to develop coastal steam packets, focusing on the route between New York and Charleston. Morgan was the operations manager of the New York–Charleston line, as well as the Jamaica packet line. By June 1835, he was dispatching two steamers per week to Charleston, David Brown and the newly acquired Columbia. The packet company had won the bid for the United States Mail contract, worth $7,200 annually, and its ships were earning more than $1,000 per trip in profits. However, the risks faced by ocean-going steamers threatened the company. William Gibbons sank in October 1836, which caused a loss of public confidence in addition to the direct financial loss. Some of the investors sold David Brown and then abandoned the company, leaving Morgan, Allaire, and Haggerty as the sole partners and Columbia as their only ship. Yet the partnership built new steamers of the line, which they christened Home and New York, and they accomplished this despite the Panic of 1837. They also supplemented their packet revenue with contracts to carry mail along their line, and contracts to carry military troops to Florida. Another lost ship threatened the New York and Charleston Steam Packet Company. Home started its last voyage, and only its third overall, on October 7, 1837. Home had cost nearly $90,000 to build and included a cabin with luxury amenities to accommodate as many as 120 first-class passengers. She ran aground while departing New York Harbor, but continued to the ocean without any apparent damage. However, a storm with rough seas battered the ship near Cape Hatteras, North Carolina; she started taking on water and her engine failed, and her sails were not able to drive and control the ship. The storm pushed Home toward the shore and ran her aground, and 99 people died trying to reach land. Gulf coast packets In 1837, Morgan started running steamboat service between New Orleans and Galveston, Texas, which was the first packet boat service running between these cities on a regular schedule. However, this had not been Morgan's first foray into shipping in the Gulf of Mexico. The previous year, the New York and Charleston Steam Packet Company deployed the steamship David Brown to sail a route from New Orleans to Key West, Florida, and Havana, Cuba. The Home catastrophe forced a reorganization of the partnership, with the new company shifting its emphasis to the Gulf trade, and Columbia sailed out of the port of New Orleans for Galveston on November 18, 1837, about five weeks after the Home wreck. Allaire and several investors divested themselves from the New York and Charleston Steam Packet Company, leaving Morgan and John Haggerty as the sole partners: thus the Morgan Line was born, and Columbia was its first ship, committed to the New Orleans–Galveston route. During this era, New Orleans was the main port for Gulf traffic, and the newly formed Republic of Texas attracted many immigrants and demand for trade goods. 
On June 8, 1838, the owners of Columbia and the owners of the steamship Cuba formed a cartel known as the New Orleans and Texas Line, which included coordination of rates and scheduling of shipping between New Orleans and Galveston. The New Orleans and Texas Line dominated transportation in the two cities after the last half of 1838. Gulf trade experienced seasonal cycles. June marked the beginning of the slack season, which lasted throughout the summer and part of the fall, and business picked up again in early October. Morgan would remove some of his ships from service during slack times and send them to New York for refitting and repairs. He continued to maintain New York City as his residence and the main location for shipbuilding and repair, even though his main shipping market was located on the Gulf coast. He sold the last of his sailing vessels in 1846, while he increased his investment in T. F. Secor & Company, a builder of marine steam engines. Mexican War During the Mexican War, Morgan accepted various contracts to move U.S. troops, deploying New York and Galveston for military transport, and collecting nearly $80,000 in less than six months in 1846. This compromised packet service, including his contract for carrying the U.S. mail. Morgan had won a contract to transport mail for the Republic of Texas, which he continued after Texas received statehood and the U.S. Mail Service took over in 1846. Morgan lost the New York in a hurricane in September 1846, and he required a larger steamship fleet to meet the demands of passenger service, civilian trade, mail service, and military transportation. He acquired five steamers in 1847, the first of which was the New Orleans, launched from New York on January 12, 1847. He paid $120,000 for the newly built 869-ton steamship, but it produced only $33,000 in revenue for moving troops. He sold it five months after its launch for $125,000. The other 1847 steamer purchases were for steamships already in use. Palmetto was built in 1846, and most of its interest was sold to Morgan in March 1847. Captain Jeremiah Smith commanded the 533-ton steamer and spent about three months on charters for the United States Army. Afterward, Palmetto entered packet service for New Orleans and Galveston. Captain Smith maintained a small interest in Palmetto and became an investment partner with Morgan in the 249-ton steamship Yacht, a much smaller vessel better suited for navigating the sandbars of the Texas coast. Portland was a steamer built in 1835 which originally served the coast of Maine; she was brought into the Morgan fleet late in 1847 and assigned to move military troops and equipment out of New Orleans. The fifth and last acquisition in 1847 was Globe, which was placed immediately into packet service. After the war, Morgan obtained the charter business for taking troops home. Later, he deployed his steamers to expanded packet service. Service in Texas expanded to Indianola, Port Lavaca, and Brazos St. Iago. The last of these ports facilitated outbound shipments of precious metals, hides, and wool. Starting in 1850, Morgan earned $15,000 annually for carrying mail in and out of Brazos St. Iago, and $12,000 for the mail route from New Orleans to Galveston and Indianola. Gold rush Panama After the discovery of gold in California in January 1848, news reached the eastern United States in the last quarter of that year. 
With financial capital and the means of production still concentrated near the Eastern seaboard, there was demand for transportation to carry people and supplies to the Pacific coast. In January 1849, no fewer than ninety marine vessels departed eastern ports bound for California. Through mid-April 1849, the port of New York dispatched 226 California-bound vessels carrying a total of about 20,000 people. The shortest wholly oceanic route from New York was over 14,000 nautical miles, and even a few hundred miles longer from New Orleans. Yet this distance from New York could be reduced to less than 5,000 miles with a multi-modal short-cut across the Isthmus of Panama. The Royal Steam Packet Company, one of three companies operating across Central America just prior to the gold rush, ran a route from London to Panama, then overland to the Pacific. Two companies, the U.S. Mail Steamship Company and the Pacific Mail Steamship Company, serviced US government contracts using a similar transport corridor. The heightened transportation demand made these enterprises highly lucrative in 1849. At the same time, demand outstripped supply in early 1849, especially on the Atlantic side, and Charles Morgan first entered this market as a major shareholder of Crescent City, which his agent J. Howard & Son dispatched to Chagres, Panama, from New York. Morgan acquired the newly built Empire City, adding to the Panama service, with the two ships forming the Empire City Line. Morgan and John T. Howard increased their investment in westbound transportation throughout 1849 and 1850, first buying the screw-steamer Sarah Sands, then four more ships of the line, three of which sailed the Pacific side from Panama to San Francisco. A business collaboration between Morgan and Cornelius Vanderbilt was spurred by a disabled ship. By coincidence, the two men departed New York on Morgan's Crescent City on December 14, 1849. Both were travelling in order to investigate the transportation business in Central America. A failing engine caused the steamer to limp into Charleston for repairs. Each man improvised transportation toward the Gulf of Mexico. Vanderbilt chartered a ship for Havana and carried with him eight other passengers, after observing that Morgan had chartered a ship for himself and several other passengers, sealing that deal by buying the ship's cargo of lumber. Nicaragua Morgan and Howard operated three steamers from the Pacific side of Panama in 1850, compared to four operated by the U.S. Mail Steamship Company. Pacific Mail Steamship Company added four ships to its Pacific fleet for a total of seven, creating a competitive market for passengers in 1850. Morgan withdrew from the competition for the trade across Panama late in 1850, though he did not withdraw from the trade across the isthmus. Seeking to shorten the trip from New York to the west coast, he probed crossings several hundred miles west through a water and land route via Nicaragua. This led initially to a business alliance between Morgan and Vanderbilt, who had previously secured comprehensive right-of-way charters with the Nicaraguan government. On August 27, 1849, the American Atlantic and Pacific Ship-Canal Company gained a concession for exclusive rights to construct a canal to the Pacific. In exchange for this and additional concessions, Nicaragua received cash, a right to stock in the canal, and annual cash payments. 
Under a separate deal, Vanderbilt also gained the right-of-way through Nicaragua using any means of transportation, which would remain in effect even if his company did not complete the canal. However, this second deal was signed after a civil war split Nicaragua into rival governments. The agreement was negotiated between Joseph L. White and the conservative faction which ruled out of the city of Granada, and conveyed the rights to construct a canal to the Vanderbilt-controlled Accessory Transit Company. The parties signed on August 14, 1851. Vanderbilt began operating routes from New York to San Francisco on his New and Independent Line. Morgan placed his own ships into service of Vanderbilt's company, running his Empire City Line steamers from New York, as well as Mexico from New Orleans, all of which put in at the port of San Juan del Norte. In September 1852, Vanderbilt resigned as president of the company and divested most of his interests the following December. He retained a commission as the company's agent and a percentage of the company's business across Nicaragua. Morgan was part of a short-lived alliance with Vanderbilt as late as February 1853. Accessory Transit Company reported net revenue in excess of $535,000 in its semi-annual report at the end of 1853; however, this report failed to account for stock dilution, accounts payable to the government of Nicaragua, and the debt settlements to Vanderbilt. In the same year, Morgan deposed Vanderbilt as the agent of the line while the Commodore was vacationing. An anti-Vanderbilt faction seized the control of the board, and they appointed Morgan president of the Accessory Transit Company. The New York Times reported decades later in its obituary of Vanderbilt that he penned the following missive directed at his enemies, “You have undertaken to cheat me. I won’t sue you for the law is too slow. I’ll ruin you.” According to two historians, this is at best apocryphal. In the fall of 1853, Vanderbilt entered the Atlantic–Pacific trade to compete with his former partners. He formed the Independent Opposition Line with North Star sailing the New York–Nicaragua leg, and partnered with Edward Mills and his two steamers to run on the Nicaragua–San Francisco leg. The Independent Opposition Line offered aggressively priced fares, triggering a three-way price war between Vanderbilt, Morgan's Accessory Transit Company, and the U.S. Pacific Mail Lines. Vanderbilt's low fares damaged his competitors. After less than a year, the Morgan-led Accessory Transit Company and the U.S. Pacific Mail Lines agreed to pay over $1 million to Vanderbilt, in return for Vanderbilt leaving the market. This payment included a total of $800,000 for his two ships on the Pacific side and $40,000 per month in cash for a non-compete agreement. With Vanderbilt out of the field, the remaining carriers were able to restore their rates. Morgan and C. K. Garrison participated in a scheme devised by Edmund Randolph to support the filibuster of William Walker in Nicaragua. Randolph and Walker were old friends. Walker readily accepted Randolph's plan to invalidate the old transit concessions to Accessory Transit Company so that the new transit concession would eventually convey to a new company headed by Garrison and Morgan. They divested of their Accessory Transit Company stock, while Morgan even took short positions. Patricio Rivas of the Liberal-faction assumed the title of Provisional President of Nicaragua, under the supervision of Walker. 
Vanderbilt and two other investors accumulated enough Accessory Transit Company stock to gain a majority of its shares, recapturing control of the company and causing losses on Morgan's short positions. With Vanderbilt back in control as an investor, Morgan resigned his directorship on December 21, 1855. Yet Morgan responded to this financial and professional loss by doubling down. In the first part of 1856, Morgan went on another short selling binge, matched each time by Vanderbilt and his allies. Morgan failed to weaken the price of Vanderbilt's stock and suffered great financial losses in the process. By 1858, Morgan withdrew from running steamers on the Pacific and from transit in Nicaragua. At the same time, Vanderbilt sold his fleet of Gulf packets to Morgan, ending the feud between Morgan and the Commodore. Southern Steamship Company Morgan incorporated the Southern Steamship Company on March 14, 1855. Chartered in Louisiana, it opened some of his Gulf shipping interests to outside investors. Morgan was a minority shareholder, controlling only 500 of 4,000 shares. However, two of the members of the board were Morgan's New Orleans agents, Israel Harris (his son-in-law) and Henry R. Morgan (his son). Their partnership, Harris & Morgan, was also the largest shareholder, controlling 890 of 4,000 shares, so the triumvirate of Charles Morgan, Henry Morgan, and Israel Harris controlled 1,390 of 4,000 shares. A third board member, Cornelius B. Payne, was a junior partner of Harris & Morgan and owned 250 shares of stock in the Southern Steamship Company. The fourth board member, E.J. Hart, a New Orleans grocer, was a close friend of Charles Morgan. Morgan completed his requirements to capitalize the corporation at $400,000 by transferring several assets on June 10, 1856. The assets received by the Southern Steamship Company were four steamships, the New Orleans–Galveston–Indianola mail contract, eight enslaved persons, and sundry equipment. Through incorporation, Morgan attracted new investors while still maintaining a large portfolio of assets apart from the Southern Steamship Company. Incorporation also represented a partial withdrawal from management of Morgan's interests and a new role as a passive investor. On October 1, 1856, the Southern Steamship Company secured a mail contract serving Brashear, Louisiana, and Galveston. Brashear was located at the western terminus of the New Orleans, Opelousas and Great Western Railroad (NOO&GW), navigable from Berwick's Bay on the Gulf of Mexico. The NOO&GW was planned to span from New Orleans to the Pacific Coast, but higher-than-expected construction costs prevented the company from building beyond this swampy, inaccessible, and sparsely populated area of Louisiana. After fending off a serious challenge from Cornelius Vanderbilt, the Southern Steamship Company forged a mutual freight agreement with the NOO&GW. The eastern terminal of the NOO&GW was located in Algiers, Louisiana, directly across the Mississippi River from New Orleans. Freight delivered to the railroad at Algiers avoided New Orleans taxes and wharfage fees. In addition, transferring westbound freight to the railroad eliminated navigation of the Mississippi River and shortened the western route to Galveston by about 150 miles. Morgan himself had negotiated the complex freight deal between the Southern Steamship Company and NOO&GW. 
Southern Steamship agreed to transport freight for the railroad from its Brashear terminal to Galveston (via Sabine Pass, Texas), and to Indianola, Texas (via Galveston). The Southern Steamship Company received a percentage of NOO&GW's freight receipts based on a multi-tiered formula. NOO&GW also agreed to move freight for the Southern Steamship Company at a discount, and to provide its ships with a wharf at Brashear. The steamship service not only carried mail and other freight, but carried over 16,000 passengers from Brashear to Texas in 1859, and over 28,000 passengers in 1860. American Civil War Morgan was a lifetime resident of the Northeast. He grew up in Connecticut and maintained a primary residence in New York City his entire adult life. However, he was "a more-than-casual slaveholder," acquiring no fewer than thirty-three enslaved persons through the start of the American Civil War. In a very early action of the Civil War, one of Morgan's steamships, the General Rusk, transported Texas troops. The State of Texas adopted its ordinance of secession on February 4, 1861, and appointed E.B. Nichols as its Commissioner and Financial Representative on February 7. Nichols had already been serving Morgan as his shipping agent in Galveston. He booked Morgan's General Rusk to transport troops for $500 per day. Shortly afterward, this Texas force captured the United States Army base at Brazos St. Iago. Yet Morgan used the same steamer to transport the fleeing Union troops from Brazos St. Iago to Key West, Florida. Morgan kept his fleet in the Gulf ports, even after the attack on Fort Sumter. Louisiana Governor Thomas O. Moore mandated a seizure of three ships from the Southern Steamship Company, viewing Charles Morgan as a northerner and a threat to the Confederacy. While Morgan never sided with the Confederacy, Israel Harris did. The ships were allowed to travel on a limited basis, but with many inspections at the ports of New Orleans and Galveston. At Galveston, the ranking officer, Sidney Sherman, seized two of Morgan's vessels. Nichols interceded on Morgan's behalf, and Sherman agreed to allow the Morgan steamers passage off Texas waters. However, he did keep the General Rusk for the Confederate fleet. With the United States mail service discontinued and the effective Union blockade of southern ports, Morgan's steamship empire was idled. The Southern Steamship Company, which had entered the Civil War with twelve steamships, sold two of them, and lost the other ten when General Mansfield Lovell expropriated them on behalf of the Confederacy on January 16, 1862. The company was left without a fleet and without compensation for its seized assets, and thus it folded in 1863. Despite the losses borne by the Southern Steamship Company, Morgan continued to operate other ventures by doing business with both the Union and the Confederacy. Morgan placed orders with Harlan and Hollingsworth for five large steamers between 1862 and 1864. These ships were frequently booked on charters for the Union during the war. Morgan also profited from his ownership of the Morgan Iron Works, which was producing engines from its shops in New York City for commercial enterprises in the United States and abroad. Beyond civilian production, the Morgan Iron Works built engines and other marine equipment for thirteen ships in the Union naval fleet. In one case, they built a warship from scratch. These naval contracts accrued over $2 million in revenue to the Morgan Iron Works. 
For the Confederates, Morgan ran the Frances to evade Union blockades between Havana and the southern ports. Reconstruction Sale of Morgan Iron Works After the Civil War, demand for steamships flagged and the federal government placed some of its vessels on the market. Morgan had already anticipated the buyer's market for steamships as early as the fall of 1865. In 1867, Quintard and Morgan sold their interests in the Morgan Iron Works to John Roach for $450,000. Morgan bought four steamers from the U.S. Navy. Morgan had sold two of these to the Navy during the war: Austin and William G. Hewes. In another case of making money on both ends, Morgan sold three steamships to the federal government for $495,000, then repurchased them a year later for $225,000. As part of this expansion of his fleet, he ordered eight new ships from Harlan and Hollingsworth between 1865 and 1867. Quintard proposed a new investment idea to Morgan in 1867. This resulted in the formation of the New York and Charleston Steamship Company (not to be confused with the earlier New York and Charleston Steam Packet Company). The new company started by acquiring Manhattan from Morgan's youngest son-in-law, Charles A. Whitney. Next they purchased two large steamers, Champion (1,452 tons) and Charleston (1,517 tons), and chartered the steamer James Adger. Morgan forged a rate deal with the Pontchartrain Railroad on April 17, 1866. He received 75 percent of the revenue for the railroad's freight traveling on his steamships between New Orleans and Mobile, Alabama. At first, Morgan faced open competition from other carriers on the route, but in 1868, he paid $250,000 in exchange for being the exclusive carrier. However, after 1870, Morgan competed for business with a railroad running between the two cities. On May 3, 1866, Morgan signed a new contract with the NOO&GW railroad. This deal contained many of the provisions of the eight-year contract they implemented before the war. Morgan received a percentage of the freight receipts based on multiple tiers and a fifty percent discount for moving his supplies on the railroad, and the railroad would provide wharfage at the western terminus at Brashear. As before, Morgan's steamships transported freight from the western terminus of the railroad at Brashear to the Texas ports of Galveston and Indianola. In the late 1860s, Morgan's steamships carried much of the eastbound freight for a growing coastal Texas cattle industry. He offered new service in 1867 to Rockport, Texas, hauling outbound hides and beef products. The new port on Aransas Bay exported over 2 million pounds of hides and over 1.7 million pounds of wool by 1869. The port of Indianola served the area around Matagorda Bay, where five meatpacking facilities were located by 1870. Indianola exported cattle, hides, wool, cotton, and processed meat. There was a major managerial re-alignment in 1867. It was preceded by the departure of George Quintard and Henry Morgan, both of whom pursued new paths as independent financiers; Morgan's agent in New Orleans, Israel C. Harris, then retired on April 1 before dying on Christmas Eve. Morgan hired a new agency for the Crescent City, headed by another of his sons-in-law, Charles A. Whitney. Whitney & Company's other partner was another Morgan associate by the name of Alexander C. Hutchinson. Morgan granted them total authority over the Morgan Line, which operated between Louisiana and Texas. 
Railroad ownership Up to this point, Morgan had invested in the first-tier mortgage bonds of the NOO&GW, but had not yet held equity in any railroads. The NOO&GW had fallen behind in its coupon payments. The Illinois Central Railroad, a large bondholder, filed suit on June 10, 1868. The following year, William S. Pike approached Morgan to join a group of 26 NOO&GW bondholders to force a court-ordered sale of the railroad. Morgan refused to join the suit, and Pike dropped the plan. Several months later, in September 1868, Morgan offered a complex counter-proposal to act as an operator and lessee of the NOO&GW. Morgan agreed to buy all of the NOO&GW's debt, while taking responsibility for two-thirds of the debt for building the railroad to Sabine Pass, Texas. In Pike's opinion, the bondholders would be offering as much as a fifty percent discount on their assets, so he declined the offer. Morgan predicted a bankruptcy in the NOO&GW's near future. Shortly afterward, Morgan retained Miles Taylor as legal counsel, who filed a series of legal actions against the railroad. One of these suits competed against an earlier lawsuit filed in federal court by the Illinois Central Railroad, which was attempting to force the NOO&GW into liquidation. This was inimical to Morgan's financial interests since he was a major bondholder, and Taylor's suit asked for a court-ordered sale of the railroad. The Illinois Central's suit was dismissed in Federal District Court, and Morgan won his suit, sending the NOO&GW into an auction at the New Orleans Custom House. Through an agent, Morgan purchased the NOO&GW on May 25, 1869, for $2,050,000. Morgan, for his more than $2 million, owned the entire NOO&GW. He renamed it Morgan's Louisiana and Texas Railroad. At the time of purchase, the assets included a terminal each at Algiers and Brashear, and eighty miles of single track in between. The Algiers terminal included forty acres with 370 feet of frontage on the Mississippi River, directly across from New Orleans. The Brashear terminal included coal yards, warehouses, and wharves, with 330 feet of frontage on the Atchafalaya River. Along its track, the NOO&GW had built many miles of trestles over the watery right-of-way, and also maintained depots and rolling stock. Morgan purchased a large share of equity of the San Antonio and Mexican Gulf Railroad (SA&MG) on May 25, 1870. His steamers had long served the Texas port of Indianola, and he met the competition of railroad transport to Indianola by his own railroad investment. Morgan and the other owner, Henry S. McComb, renamed the company the Gulf, Western Texas, and Pacific Railroad. Morgan acquired McComb's interest in the railroad in May 1872. With steamboat service from New Orleans to Mobile, Alabama, discontinued, the Louisiana–Texas service accounted for most of Morgan's marine business. High taxes and fees increased the costs for doing business in New Orleans. The city's access to the Mississippi River was less important in light of Morgan's Louisiana and Texas Railroad. After the dredging of the Atchafalaya River in 1872, Morgan moved all of his Louisiana steamboat operations from New Orleans to Brashear, relying on the ferry from New Orleans to Algiers, and his railroad from Algiers to Brashear, to convey his westbound passengers and freight. He also shifted investments from wharfage at Lake Pontchartrain and in New Orleans to expanded facilities in Brashear. By around 1875, Morgan's payroll included about 800 workers in Brashear, with the wharf spanning over 2,600 feet. 
In February 1876, the town's name was changed to Morgan City. By the 1870s, Morgan's transportation network had expanded its geographical reach through new assets and alliances. For example, a Morgan agent in 1873 was able to sell a single ticket for passage between New Orleans and San Antonio, Texas, which employed several modes of transportation. Starting by rail from Algiers, Louisiana, the journey included coastal steamboat passage on two packets and another rail segment, before the passenger finally reached San Antonio via stagecoach. Meanwhile, in Texas, Morgan started withdrawing from Galveston as a port. The Galveston Wharf Company no longer gave his ships favored treatment. In addition, the only railroad from that city, the Galveston, Houston and Henderson, could not run any of its rolling stock north of Houston. Morgan also adapted to the progress of the railroad networks. First, in 1873, he changed his freight tariffs from rates by container volumes to rates per hundred pounds of weight. Second, he had started a plan to bypass Galveston as a logistics center in favor of Houston. Around the same time, he also forwarded his passengers and freight through the Houston Direct Navigation Company, which transferred passengers and freight from coastal steamers to Buffalo Bayou packets in Galveston Bay, thus avoiding the port of Galveston, along with its regulations and fees. Houston Direct Navigation Company ran its steamers from Galveston Bay to the wharf at Houston. Later he bought enough stock in the company to become its largest shareholder, further aligning his interests with Houston and against Galveston. On July 1, 1874, Morgan forged an agreement with the Ship Channel Company (Houston) to improve navigation of Buffalo Bayou. Morgan agreed to dredge to a depth of nine feet and a width of at least 120 feet in exchange for company stock. Less than two years later, he completed a channel through Galveston Bay to an existing channel through the Red Fish Bar, and the final segment terminated about six miles east of Houston, at a place he named Clinton, Texas. He next commissioned the construction of a short line railroad of just over seven miles to connect his wharves at Clinton with two Houston railroads: the Houston and Texas Central and the International and Great Northern. When this line started operation on September 11, 1876, Morgan gained access to most of Texas and no longer needed Galveston within his transportation network. Next, he engineered a takeover of the Houston and Texas Central, ousting the all-Texas management, and installing himself as a director with Charles A. Whitney as president. Death and legacy Morgan died on May 8, 1878, at his home in New York City after an extended illness as a consequence of Bright's disease. He is interred at Green-Wood Cemetery in Brooklyn, New York. Morgan's incorporation of the eponymous Morgan's Louisiana and Texas Railroad and Steamship Company (ML&TRSC) facilitated a division of his estate, delegated to the administration of Charles A. Whitney. In April 1878, just a few weeks prior to his death, Morgan conveyed shares of the company to Mary Jane Sexton Morgan, Maria Louise Morgan Whitney, Frances Eliza Morgan Quintard, Charles A. Whitney, George W. Quintard, and Richard Jessup Morgan (a grandson). Whitney was the president of the company; Alexander C. Hutchinson was the vice-president. Both held proxies from the Morgan family shareholders to manage the company. The ML&TRSC continued to expand westward, reaching Lafayette by 1879. 
The company continued to operate under the goals that he established, and it was finally acquired by Southern Pacific Railroad in 1883. The railroad company persisted through 1885. Ultimately the development of the railroads severely diminished the role of coastal steamships; however, the railroad part of the enterprise remained valuable. Collis P. Huntington and Jay Gould emerged as dominant railroad developers in the region. Though Texas law prohibited interstate railroads, Huntington leased Texas railroads to bring them into the Southern Pacific Railroad system. Some of these included segments of the ML&TRSC in Texas and Louisiana. Morgan's legacy is preserved with the 1876 renaming of Brashear to Morgan City, Louisiana, and in the Morgan School, a high school in Clinton, Connecticut, for which Morgan donated land and capital. References Bibliography 1795 births 1878 deaths 19th-century American railroad executives People from Clinton, Connecticut Businesspeople from New York City Ship owners American businesspeople in shipping American Civil War industrialists American slave owners Burials at Green-Wood Cemetery
37357939
https://en.wikipedia.org/wiki/ARM%20big.LITTLE
ARM big.LITTLE
ARM big.LITTLE is a heterogeneous computing architecture developed by ARM Holdings, coupling relatively battery-saving and slower processor cores (LITTLE) with relatively more powerful and power-hungry ones (big). Typically, only one "side" or the other will be active at once, but all cores have access to the same memory regions, so workloads can be swapped between big and LITTLE cores on the fly. The intention is to create a multi-core processor that can adjust better to dynamic computing needs and use less power than clock scaling alone. ARM's marketing material promises up to 75% savings in power usage for some activities. Most commonly, ARM big.LITTLE architectures are used to create a multi-processor system-on-chip (MPSoC). In October 2011, big.LITTLE was announced along with the Cortex-A7, which was designed to be architecturally compatible with the Cortex-A15. In October 2012, ARM announced the Cortex-A53 and Cortex-A57 (ARMv8-A) cores, which are likewise compatible with each other, allowing their use in a big.LITTLE chip. ARM later announced the Cortex-A12 at Computex 2013, followed by the Cortex-A17 in February 2014. Both the Cortex-A12 and the Cortex-A17 can also be paired in a big.LITTLE configuration with the Cortex-A7. The problem that big.LITTLE solves For a given library of CMOS logic, active power increases as the logic switches more per second, while leakage increases with the number of transistors. So, CPUs designed to run fast are different from CPUs designed to save power. When a very fast out-of-order CPU is idling at very low speeds, a CPU with much less leakage (fewer transistors) could do the same work. For example, it might use a smaller (fewer transistors) memory cache, or a simpler microarchitecture such as a shorter, in-order pipeline. big.LITTLE is a way to optimize for both cases, power and speed, in the same system. In practice, a big.LITTLE system can be surprisingly inflexible. One issue is the number and types of power and clock domains that the IC provides. These may not match the standard power management features offered by an operating system. Another is that the CPUs no longer have equivalent abilities, and matching the right software task to the right CPU becomes more difficult. Most of these problems are being solved by making the electronics and software more flexible. Run-state migration There are three ways for the different processor cores to be arranged in a big.LITTLE design, depending on the scheduler implemented in the kernel. Clustered switching The clustered model approach is the first and simplest implementation, arranging the processor into identically sized clusters of "big" or "LITTLE" cores. The operating system scheduler can only see one cluster at a time; when the load on the whole processor changes between low and high, the system transitions to the other cluster. All relevant data are then passed through the common L2 cache, the previously active core cluster is powered off, and the other one is activated. A Cache Coherent Interconnect (CCI) is used. This model has been implemented in the Samsung Exynos 5 Octa (5410). In-kernel switcher (CPU migration) CPU migration via the in-kernel switcher (IKS) involves pairing up a 'big' core with a 'LITTLE' core, with possibly many identical pairs in one chip. Each pair operates as one so-called virtual core, and only one real core is (fully) powered up and running at a time. The 'big' core is used when the demand is high and the 'LITTLE' core is employed when demand is low. When demand on the virtual core changes (between high and low), the incoming core is powered up, running state is transferred, the outgoing core is shut down, and processing continues on the new core. Switching is done via the cpufreq framework. A complete big.LITTLE IKS implementation was added in Linux 3.11. big.LITTLE IKS is an improvement of cluster migration (see Clustered switching above), the main difference being that each pair is visible to the scheduler. A more complex arrangement involves a non-symmetric grouping of 'big' and 'LITTLE' cores. A single chip could have one or two 'big' cores and many more 'LITTLE' cores, or vice versa. Nvidia created something similar to this with the low-power 'companion core' in their Tegra 3 System-on-Chip. 
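Because the switching is driven through the cpufreq framework, its effect can be observed from user space via the standard Linux cpufreq sysfs files, without any kernel changes. The following C program is only a minimal sketch under that assumption: the sysfs layout shown is the generic Linux cpufreq interface, and the CPU count of eight is a hypothetical 4+4 part, not a value defined by big.LITTLE itself.

#include <stdio.h>

/* Read the current cpufreq frequency (in kHz) reported for one CPU.
   Returns -1 if the sysfs file is missing or unreadable. */
static long read_cur_freq(int cpu)
{
    char path[128];
    long khz = -1;
    FILE *f;

    snprintf(path, sizeof path,
             "/sys/devices/system/cpu/cpu%d/cpufreq/scaling_cur_freq", cpu);
    f = fopen(path, "r");
    if (f != NULL) {
        if (fscanf(f, "%ld", &khz) != 1)
            khz = -1;
        fclose(f);
    }
    return khz;
}

int main(void)
{
    /* Assumption: at most 8 logical CPUs, as on a typical 4+4 big.LITTLE SoC. */
    for (int cpu = 0; cpu < 8; cpu++) {
        long khz = read_cur_freq(cpu);
        if (khz > 0)
            printf("cpu%d: %ld kHz\n", cpu, khz);
    }
    return 0;
}

Under the in-kernel switcher, the operating-point list of a virtual core spans both members of the pair, so the frequency reported here indicates which physical core is currently doing the work, consistent with the low-end/high-end split described under Scheduling below.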
Heterogeneous multi-processing (global task scheduling) The most powerful use model of the big.LITTLE architecture is Heterogeneous Multi-Processing (HMP), which enables the use of all physical cores at the same time. Threads with high priority or computational intensity can in this case be allocated to the "big" cores, while threads with less priority or less computational intensity, such as background tasks, can be performed by the "LITTLE" cores. This model has been implemented in Samsung Exynos SoCs starting with the Exynos 5 Octa series (5420, 5422, 5430), and in Apple A series processors starting with the Apple A11. Scheduling The paired arrangement allows for switching to be done transparently to the operating system using the existing dynamic voltage and frequency scaling (DVFS) facility. The existing DVFS support in the kernel (e.g. cpufreq in Linux) will simply see a list of frequencies/voltages and will switch between them as it sees fit, just like it does on existing hardware. However, the low-end slots will activate the 'LITTLE' core and the high-end slots will activate the 'big' core. This is the early solution provided since 2012 by Linux's "deadline" CPU scheduler (not to be confused with the I/O scheduler of the same name). Alternatively, all the cores may be exposed to the kernel scheduler, which will decide where each process/thread is executed. This will be required for the non-paired arrangement but could possibly also be used on the paired cores. It poses unique problems for the kernel scheduler, which, at least with modern commodity hardware, has been able to assume all cores in an SMP system are equal rather than heterogeneous. Energy Aware Scheduling, a 2019 addition to Linux 5.0, is an example of a scheduler that considers cores differently. Advantages of global task scheduling Finer-grained control of workloads that are migrated between cores. Because the scheduler is directly migrating tasks between cores, kernel overhead is reduced and power savings can be correspondingly increased. Implementation in the scheduler also makes switching decisions faster than in the cpufreq framework implemented in IKS. The ability to easily support non-symmetrical clusters (e.g. with 2 Cortex-A15 cores and 4 Cortex-A7 cores). The ability to use all cores simultaneously to provide improved peak performance throughput of the SoC compared to IKS. 
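Under global task scheduling the kernel sees every physical core, so ordinary Linux affinity APIs can be used to steer a particular thread toward one cluster when the scheduler's automatic placement is not wanted. The sketch below is an illustration using the standard sched_setaffinity() call rather than anything specific to ARM; the assumption that CPUs 4–7 form the big cluster is hypothetical, since real core numbering varies per SoC and would have to be read from the device tree or sysfs.

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
    cpu_set_t big_cores;
    CPU_ZERO(&big_cores);

    /* Hypothetical topology: CPUs 4-7 are the big cluster on this SoC. */
    for (int cpu = 4; cpu <= 7; cpu++)
        CPU_SET(cpu, &big_cores);

    /* pid 0 means "the calling thread". */
    if (sched_setaffinity(0, sizeof(big_cores), &big_cores) != 0) {
        perror("sched_setaffinity");
        return 1;
    }

    /* CPU-intensive work placed here will now be scheduled only on the
       big cores; the LITTLE cores remain available for other tasks. */
    return 0;
}

An energy-aware scheduler normally makes this kind of placement automatically; explicit affinity like this is mainly useful for benchmarking or for latency-critical threads.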
Successor In May 2017, ARM announced DynamIQ as the successor to big.LITTLE. DynamIQ is expected to allow for more flexibility and scalability when designing multi-core processors. In contrast to big.LITTLE, it increases the maximum number of cores in a cluster to 8, allows for varying core designs within a single cluster, and supports up to 32 total clusters. The technology also offers finer-grained per-core voltage control and faster L2 cache speeds. However, DynamIQ is incompatible with previous ARM designs and was initially supported only by the Cortex-A75 and Cortex-A55 CPU cores. References Further reading External links big.LITTLE Processing big.LITTLE Processing with ARM Cortex-A15 & Cortex-A7 (PDF) (full technical explanation) ARM architecture Heterogeneous computing
3766506
https://en.wikipedia.org/wiki/Lange%20%28musician%29
Lange (musician)
Stuart Langelaan (born 4 June 1974), stage name Lange, is a British DJ and record producer. Career Lange was born in Shrewsbury. His career began in 1997, when he signed his first recording contract with Additive Records in the UK. Early popularity came with a collection of releases that attracted the attention of industry leaders such as Paul van Dyk, Sasha, Paul Oakenfold and Judge Jules. A mixture of slow build-ups and melodic choruses gave Lange several hits in the UK Singles Chart. His first success was his remix of DJ Sakin's "Protect Your Mind", which became a popular club track. This was followed by his Lost Witness "Happiness Happening" remix. Other remix credits include work for Faithless and The Pet Shop Boys (the Lange remix of "New York City Boy" featured on their PopArt album's bonus "remix disc"). In 1999, he released the vinyl-only single "I Believe", featuring vocals by Sarah Dwyer, which charted at No. 68. A year later, in 2000, he released the single "Follow Me" on Positiva, featuring the vocals of Cecily Fay from The Morrighan. Upon its release, the track was deemed chart ineligible as the CD single contained six tracks (including the previous release "I Believe") and over 40 minutes of music (at the time, chart rules stated that CD singles could only contain three tracks with a total running length of 20 minutes). Thus the single reached No. 1 on the UK budget album chart upon release, staying in the chart for four weeks. "I Believe" and "Follow Me" were re-released as part of Positiva Records' 10th birthday celebrations in 2003, reaching No. 12 in the UK budget album chart. "Drifting Away" reached No. 9 in the UK Singles Chart in 2002, and resulted in Lange performing on BBC One's Top of the Pops and the Pepsi Chart Show. Lange was also behind the SuReaL track "You Take My Breath Away", which peaked at No. 15 in the UK in 2000. Lange's mixes have accumulated many appearances on compilation albums, including two tracks on the million-selling EMI Now That's What I Call Music! series. Other more club-orientated projects include his 'Firewall', 'LNG', and 'Vercetti' guises. In 2003, Firewall's "Sincere" was released on Armin Van Buuren's A State of Trance record label, and was re-released on Lange's own label, Lange Recordings. His 2003 track "Don't Think It (Feel It)" (featuring Leah) peaked at No. 59 in the UK Singles Chart. In 2005, Lange went down the commercial road again with the release of the vocal track "If I Ever See You Again" under the name Offbeat; however, it reached only number 136 in the UK chart, probably because it was released only on 12" vinyl despite the track's apparent mainstream appeal. Lange has found similar success as a club DJ, reaching No. 37 in the DJ Mag poll. As well as playing some of the UK's biggest clubs (including Godskitchen, Gatecrasher, Passion, Gallery, and Slinky), Lange regularly tours internationally. Touring has taken him to Australia, Denmark, Finland, Germany, Hungary, Japan, Netherlands, New Zealand, Russia, China, Singapore, Sweden, Switzerland, Ibiza, Canada and the United States. Lange has headlined music festivals including Australia's Summadayze (alongside Tiësto), the Popsicle festival in San Francisco, Spooky and WEMF in Canada, and Fortdance in St Petersburg, performing alongside Ferry Corsten and Tiësto. He released his debut album Better Late Than Never on Maelstrom and Lange Recordings in 2007. 
The album had one CD of new material, and a second disc called History that highlighted some of his past hits from 1998 to the present. Three singles from the album had successful releases; those were "Songless", "Angel Falls" and the most successful release, "Lange feat. Sarah Howells – Out of the Sky". That release retained a top 10 position on the Beatport Trance Chart for a considerable time after the track's release in October 2008, and earned Lange the nomination as one of the Beatport Top 10 selling Trance Artists in 2008. In 2010, he released his second album, Harmonic Motion. In 2013, his third album "We Are Lucky People" was released. Discography Studio albums 2007 Better Late Than Never 2010 Harmonic Motion 2013 We Are Lucky People Compilation albums 1999 Tranceformer 2000 (Mixed By Mauro Picotto & Lange) 2003 A Trip In Trance 3 (Mixed By Lange) 2005 Global Phases Vol 1 (Mixed By Lange) 2009 Visions – Lange Recordings Sessions (Mixed By Lange) 2009 Lange pres. Intercity – Summer 2009 (Mixed By Lange) 2010 Lange pres. Intercity – Spring 2010 (Mixed by Lange) 2011 Passion – The Album (Mixed by Lange & Genix) 2011 Lange Remixed (Mixed by Lange) 2012 Lange pres. Intercity 100 – The Album (Mixed by Lange) 2013 Ministry of Sound – Trance Nation (Mixed By Lange) 2014 In Search Of Sunrise 12 (Mixed By Richard Durand & Lange) Singles 1998 "The Root of Unhappiness / Obsession" 1999 "I Believe" (feat. Sarah Dwyer) 2000 "Follow Me" (feat. The Morrighan) 2000 "You Take My Breath Away" (as SuReal) 2001 "Reflections / Touched" (as Firewall) 2001 "Always On My Mind" (as SuReal) 2001 "The Way I Like It" (as S.L.) 2001 "Memory" (with DuMonde) 2002 "Drifting Away" (feat. Skye) 2002 "Atacama / Summer in Space" (with Pulser as The Bass Tarts) 2003 "Sincere" (as Firewall) 2002 "Frozen Beach" 2003 "Don't Think It (Feel It)" (feat. Leah) 2003 "I'm in Love Again" (as X-odus feat. Xan) 2003 "I Believe 2003 / Follow Me" (feat. The Morrighan) 2003 "Intercity" (as LNG) 2004 "Kilimanjaro" (as Firewall) 2004 "Sincere For You" (feat. Kirsty Hawkshaw) 2004 "In Control / Skimmer" (as Vercetti) 2005 "If I Ever See You Again" (as Offbeat) 2005 "Sincere 2005" (as Firewall) 2005 "This Is New York / X Equals 69" (with Gareth Emery) 2006 "Bermuda / Radar" (with Mike Koglin) 2006 "Looking Too Deep" (as Firewall feat. Jav D) 2006 "Back on Track / Three" (with Gareth Emery) 2006 "Dial Me Up" 2006 "Another You, Another Me" (with Gareth Emery) 2007 "Red October" 2007 "Angel Falls" 2008 "Songless" 2008 "Out of the Sky" (feat. Sarah Howells) 2009 "Stadium Four" (with Andy Moor) 2009 "Let It All Out" (feat. Sarah Howells) 2009 "Happiness Happening 2009" (feat. Tracey Carmen) 2009 "Wanderlust" (as Firewall) 2010 "Under Pressure" 2010 "Live Forever" (feat. Emma Hewitt) 2010 "Strong Believer" (feat. Alexander Klaus) 2010 "Harmonic Motion" 2010 "All Around Me" (feat. Betsie Larkin) 2011 "Electrify" (with Fabio XB and Yves Lacroix) 2011 "Brandalism" (as LNG) 2011 "Harmony Will Kick You in the Ass" (as LNG) 2011 "Hoover Damn" (as LNG) 2011 "Lange Remixed EP1: Touched (Dash Berlin's 'Sense of Touch' Remix) / Under Pressure (Steve Brian Remix) / Angel Falls (Signalrunners Fierce Remix)" 2011 "Songless (Mark Sherry's Outburst Remix)" (feat. Jennifer Karr) 2012 "Our Way Home" (feat. Audrey Gallagher) 2012 "Crossroads" (feat. Stine Grove) 2012 "We Are Lucky People" 2012 "Destination Anywhere" 2013 "Hold That Sucker Down" 2013 "Immersion" (with Genix) 2013 "Our Way Home (The Remixes)" (feat. 
Audrey Gallagher) 2013 "Our Brief Time in the Sun" 2013 "Risk Worth Taking" (feat. Susana) 2013 "Follow Me 2013" (feat. The Morrighan) 2013 "A Different Shade of Crazy" 2013 "Harmony Will Kick You in the Ass / Hoover Damn (Remixes)" (as LNG) 2013 "Imagineer" 2013 "Fireflies" (feat. Cate Kanell) 2014 "Crossroads (Remixed)" (feat. Stine Grove) 2014 "Insatiable" (feat. Betsie Larkin) 2014 "Unfamiliar Truth (Remixed)"(feat. Hysteria!) 2014 "Hey! While The Sun Shines" (as LNG) 2014 "Top Of The World" (with Andy Moor feat. Fenja) 2014 "Top Of The World (Remixes)" (with Andy Moor feat. Fenja) 2015 "Formula None" 2015 "Origin" 2015 "Formula None (Remixes)" 2015 "Wired To Be Inspired" 2015 "Weaponized" (with Stephen Kirkwood) 2015 "You Are Free" 2015 "On Your Side" (feat. Tom Tyler) 2016 "Airpocalypse" 2016 "Conspiracy" 2016 "Hacktivist" 2016 "On Your Side (Remixed)" (feat. Tom Tyler) 2016 "The First Rebirth" 2017 "Unity" (with Andy Moor as Stadium4) 2017 "The Great Silence" (as Lange presents Firewall) 2020 "Hybrid Origin" (with Andy Moor as Stadium4) Remixes 2013 Andy Moor "K Ta" 2013 Allure Feat. Emma Hewitt "No Goodbyes" 2013 Dennis Sheperd & Cold Blue "Fallen Angel" 2012 Dash Berlin Feat. Kate Walsh "When You Were Around" 2011 Super8 & Tab Feat. Betsie Larkin "Good Times" 2011 Gareth Emery "Into The Light" 2009 Ferry Corsten "We Belong" 2009 Above & Beyond presents OceanLab "I Am What I Am" 2009 Bartlett & Dyor "Floating Beyond" 2008 Kyau & Albert "Hide and Seek" 2008 Matt Cerf vs Evelio Feat Jaren "Walk Away" 2008 Martin Roth & Alex Bartlett "Off the World" 2007 DT8 "Perfect World" 2007 Jas Van Houten "Loco Love" 2005 Hemstock & Jennings "Mirage of Hope" 2004 The Thrillseekers "New Life" 2004 Empyreal Sun "From Dark To Light" 2003 Pulser "My Religion" 2003 Dario G "Feels Like Heaven" 2003 Ayumi Hamasaki "Hanabi" 2002 Ian Van Dahl "Reason" 2001 Ultra 5 feat. J Cee "Potion" 2001 SPX "Straight to the Point" 2001 Ian Van Dahl "Will I" 2001 Eye To Eye Feat. Taka Boom "Can't Get Enough" 2001 D. B. Boulevard "Point of View" 2001 Dumonde vs. Lange "Memory" 2000 Z2 "I Want You" 2000 Ruff Driverz Presents Arrola "Dreaming" 2000 DJ Sakin & Friends "Stay (Reminiscing)" 2000 Rhythm of Life "Put Me in Heaven" 2000 DuMonde "Tomorrow" 2000 Atlantis Vs Avatar "Fiji" 1999 TR Junior "Rock With Me" 1999 The Morrighan "Remember" 1999 Spacebrothers "Heaven Will Come" 1999 Smudge & Smith "Near Me" 1999 Pulp Victim "The World '99" 1999 Pet Shop Boys "New York City Boy" 1999 Lost Witness "Red Sun Rising" 1999 Lost Witness "Happiness Happening" 1999 Friends of Matthew "Out There" 1999 Faithless "Why Go?" 1999 DJ Sakin & Friends "Nomansland" 1999 DJ Manta "Holding On" 1999 Brainchild "Symmetry C" 1999 Agnelli & Nelson "Everyday" 1999 Agenda "Heaven" 1998 Sosa "The Wave" 1998 Sash! "Move Mania" 1998 Marc Et Claude "La" 1998 Golden Delicious "Ascension" 1998 DJ Sakin & Friends "Protect Your Mind" 1998 DJ Quicksilver "Timerider" 1998 Boccaccio Life "Secret Wish" 1998 Babe Instinct "Disco Babes From Outer Space" References External links Official website Living people 1974 births People from Shrewsbury British DJs British dance musicians British techno musicians British trance musicians Electronic dance music DJs Anjunabeats artists
1835851
https://en.wikipedia.org/wiki/GNU%20Readline
GNU Readline
GNU Readline is a software library that provides line-editing and history capabilities for interactive programs with a command-line interface, such as Bash. It is currently maintained by Chet Ramey as part of the GNU Project. It allows users to move the text cursor, search the command history, control a kill ring (a more flexible version of a copy/paste clipboard) and use tab completion on a text terminal. As a cross-platform library, readline allows applications on various systems to exhibit identical line-editing behavior. Editing modes Readline supports both Emacs and vi editing modes, which determine how keyboard input is interpreted as editor commands. See Editor war#Differences between vi and Emacs. Emacs keyboard shortcuts Emacs editing mode key bindings are taken from the text editor Emacs. On some systems, Esc must be used instead of Alt, because the Alt shortcut conflicts with another shortcut. For example, pressing Alt+f in Xfce's terminal emulator window does not move the cursor forward one word, but activates "File" in the menu of the terminal window, unless that is disabled in the emulator's settings. Tab: Autocompletes from the cursor position. Ctrl+a: Moves the cursor to the line start (equivalent to the key Home). Ctrl+b: Moves the cursor back one character (equivalent to the key ←). Ctrl+c: Sends the signal SIGINT via pseudoterminal to the current task, which aborts and closes it. Ctrl+d: Sends an EOF marker, which (unless disabled by an option) closes the current shell (equivalent to the command exit). (Only if there is no text on the current line) If there is text on the current line, deletes the current character (then equivalent to the key Delete). Ctrl+e: (end) moves the cursor to the line end (equivalent to the key End). Ctrl+f: Moves the cursor forward one character (equivalent to the key →). Ctrl+g: Abort the reverse search and restore the original line. Ctrl+h: Deletes the previous character (same as backspace). Ctrl+i: Equivalent to the tab key. Ctrl+j: Equivalent to the enter key. Ctrl+k: Clears the line content after the cursor and copies it into the clipboard. Ctrl+l: Clears the screen content (equivalent to the command clear). Ctrl+n: (next) recalls the next command (equivalent to the key ↓). Ctrl+o: Executes the found command from history, and fetches the next line relative to the current line from the history for editing. Ctrl+p: (previous) recalls the prior command (equivalent to the key ↑). Ctrl+r: (reverse search) recalls the last command including the specified characters. A second Ctrl+r recalls the next anterior command that corresponds to the search. Ctrl+s: Go back to the next more recent command of the reverse search (beware to not execute it from a terminal because this command also launches its XOFF). If you changed that XOFF setting, use Ctrl+q to return. Ctrl+t: Transpose the previous two characters. Ctrl+u: Clears the line content before the cursor and copies it into the clipboard. Ctrl+v: If the next input is also a control sequence, type it literally (e.g. Ctrl+v Ctrl+h types "^H", a literal backspace.) Ctrl+w: Clears the word before the cursor and copies it into the clipboard. Ctrl+x Ctrl+e: Edits the current line in the $EDITOR program, or vi if undefined. Ctrl+x Ctrl+r: Read in the contents of the inputrc file, and incorporate any bindings or variable assignments found there. Ctrl+x Ctrl+u: Incremental undo, separately remembered for each line. Ctrl+x Ctrl+v: Display version information about the current instance of Bash. Ctrl+x Ctrl+x: Alternates the cursor with its old position. (C-x, because x has a crossing shape). Ctrl+y: (yank) adds the clipboard content from the cursor position. Ctrl+z: Sends the signal SIGTSTP to the current task, which suspends it. To execute it in background one can enter bg. 
To bring it back from background or suspension fg ['process name or job id'] (foreground) can be issued. Ctrl+_: Incremental undo, separately remembered for each line. Alt+b: (backward) moves the cursor backward one word. Alt+c: Capitalizes the character under the cursor and moves to the end of the word. Alt+d: Cuts the word after the cursor. Alt+f: (forward) moves the cursor forward one word. Alt+l: Lowers the case of every character from the cursor's position to the end of the current word. Alt+r: Cancels the changes and puts back the line as it was in the history. Alt+u: Capitalizes every character from the cursor's position to the end of the current word. Alt+.: Inserts the last argument to the previous command (the last word of the previous history entry). Choice of the GPL as GNU Readline's license GNU Readline is notable for being a free software library which is licensed under the GNU General Public License (GPL). Free software libraries are far more often licensed under the GNU Lesser General Public License (LGPL), for example, the GNU C Library, GNU gettext and FLTK. A developer of an application who chooses to link to an LGPL licensed library can use any license for the application. But linking to a GPL licensed library such as Readline requires the entire combined resulting application to be licensed under the GPL when distributed, to comply with section 5 of the GPL. This licensing was chosen by the FSF in the hope that it would encourage software to switch to the GPL. An important example of an application changing its licensing to comply with the copyleft conditions of GNU Readline is CLISP, an implementation of Common Lisp. Originally released in 1987, it changed to the GPL license in 1992, after an email exchange between one of CLISP's original authors, Bruno Haible, and Richard Stallman, in which Stallman argued that the linking of readline in CLISP meant that Haible was required to re-license CLISP under the GPL if he wished to distribute the implementation of CLISP which used readline. Another response has been to avoid using Readline in some projects, making text input use the primitive Unix terminal driver for editing. Alternative libraries Alternative libraries have been created with other licenses so they can be used by software projects which want to implement command line editing functionality, but be released with a non-GPL license. Many BSD systems have a BSD-licensed libedit. MariaDB and PHP allow the user to select at build time whether to link with GNU Readline or with libedit. linenoise is a tiny C library that provides line editing functions. Haskeline is a readline-like library for Haskell. It is mainly written for the Glasgow Haskell Compiler, but is available to other Haskell projects which need line-editing services as well. Sample code The following code is in C and must be linked against the readline library by passing the -lreadline flag to the compiler:

#include <stdlib.h>
#include <stdio.h>
#include <readline/readline.h>
#include <readline/history.h>

int main()
{
    // Configure readline to auto-complete paths when the tab key is hit.
    rl_bind_key('\t', rl_complete);

    while (1) {
        // Display prompt and read input
        char* input = readline("prompt> ");

        // Check for EOF.
        if (!input)
            break;

        // Add input to readline history.
        add_history(input);

        // Do stuff...
        // Free buffer that was allocated by readline
        free(input);
    }

    return 0;
}

Bindings Non-C programming languages that provide language bindings for readline include Python's built-in readline module; Ruby's built-in Readline module; Perl's third-party (CPAN) Term::ReadLine::Gnu module, specifically for GNU ReadLine. Support for readline alternatives differs among these bindings. Notes References External links GNU readline homepage Things You Didn't Know About GNU Readline Free software programmed in C Readline Text user interface libraries
488056
https://en.wikipedia.org/wiki/Miner%202049er
Miner 2049er
Miner 2049er is a platform video game created by Bill Hogue that was released in 1982 by Big Five Software. It was developed for the Atari 8-bit family and widely ported to other systems. The title "Miner 2049er" evokes a 21st-century take on the California Gold Rush of around 1849, in which the gold miners and prospectors were nicknamed "49ers". A key selling point of the game was having ten different screens, which was a large number for a platform game at the time. For comparison, the Donkey Kong (1981) arcade game had four screens (and its console versions only two or three), which was more typical of the time. Unlike most of the home computer versions, Miner 2049er for the Atari 8-bit family was released on 16K ROM cartridge with the high price of . Plot Bounty Bob is a member of the Royal Canadian Mounted Police on a mission to search through all of Nuclear Ned's abandoned uranium mines for the treacherous Yukon Yohan. Bob must claim each section of each mine by running over it. There are a wide variety of futuristic obstacles that he must deal with such as matter transporters, hydraulic scaffolds, and jet-speed floaters; plus, he must also avoid radioactive creatures that have been left behind in the mines. Gameplay As Bounty Bob, the player's goal is to inspect every section of each mine in search of the evil Yukon Yohan while avoiding the radioactive creatures that inhabit the mine. As Bounty Bob walks over a section of flooring, it fills with color. To complete the level, every section of flooring must be colored. There are ten mines in total (eleven in the ColecoVision port). Each level is timed and must be completed before the player runs out of oxygen. Along the way, Bob encounters many objects left behind by past miners. By collecting these, bonus points are achieved and the radioactive creatures smile and change color. While in this state, Bob can collect them and earn extra points. Obstacles in each mine aid and hinder Bob's progress. Ladders allow him to climb up or down to the next platform; matter transporters teleport him to other transporters in that mine, chutes slide Bob off a platform (often against his will), and pulverizers crush Bob if he gets in their way. Most levels are defined by a unique element or obstacle. Development Big Five Software co-founders Bill Hogue and Jeff Konyu programmed computer games from 1980-82 for Radio Shack's TRS-80 Model I home computer. They created games patterned after arcade games, such as Super Nova (Asteroids), Attack Force (Targ), Cosmic Fighter (Astro Fighter), Galaxy Invasion (Galaxian), Meteor Mission II (Lunar Rescue), Robot Attack (Berzerk), and Defense Command (Missile Command). Hogue was originally going to write Miner 2049er for the TRS-80 Model I, but Radio Shack discontinued it in mid-1982, so he instead developed the game on the Atari 800. The game required 16k at a time when cartridges were normally 8k, so the company had to produce their own circuit boards holding two EPROMs. Early versions of the cartridges had a bug in the elevator code. Due to a production delay, it was first released on the Apple II. Ports Miner 2049er was ported to the Apple II, IBM PC (as a self-booting disk), Commodore 64, VIC-20, Atari 5200, Atari 2600, TI-99/4A, and ColecoVision. For the Atari 2600, two separate cartridges were published by Tigervision, each containing three selected levels: Miner 2049er and Miner 2049er Volume II. 
Reception ANALOG Computing in 1982 called Miner 2049er "one of those rare games which looks as if it were designed, not just thrown together", praising its animation and large number of levels, and concluded that it "is a must-play game for the Atari". Softline in 1983 stated that the Apple version was a good port of the Atari version, which "You already knew ... was a great game". The game reached #1 on the Softsel Hot List in 1983. The same year, Softline readers named the game the fourth most-popular Apple and sixth most-popular Atari program of 1983. Miner 2049er was awarded "1984 Electronic Game of the Year" at the 5th annual Arkie Awards, where the judges noted that the game was available on so many platforms that it had become "the most widely played home electronic game of all time", and that "no home-arcade title has had the impact" that the game had. It also won an Outstanding Software Award from Creative Computing that year. Computer and Video Games rated the ColecoVision version 82% in 1989. Legacy Miner 2049er has been cited as the inspiration behind the Miner Willy and Crystal Caves series of games produced by Bug-Byte / Software Projects and Apogee Software respectively. Sequels Development of two further games featuring Bounty Bob followed. The first, named Scraper Caper, had Bob acting as a fireman in a side-scrolling building, but that version was scrapped and the team started over with an unnamed new game in which Bob was chased by fireballs in 3D. Neither was considered appealing enough to release, and in 1985 an official sequel was released, Bounty Bob Strikes Back!. However, it never achieved the same level of success as its predecessor, and it was Hogue's last game. Re-releases A Game Boy version of the game with different graphics and levels was released by Mindscape in 1991. Miner 2049er was re-released in 2007 by Magmic for the mobile market. This release contains both a recreation of Hogue's original and a second, modernized version. The remake received an IGN Editor's Choice Award and won the Best Revival category in the Best of 2007 IGN awards. In 2011, Magmic added support for iOS devices. Also in 2007, Hogue released an emulator coded in C++ with both Bounty Bob games in one package for Windows. The emulator was made available free of charge on the Big Five website. Hogue states that, as neither game used any Atari ROM routines, they are not necessary for the emulator to run. An updated version of the game was announced in 2018 for the Intellivision Amico. See also Jumpman (1983) Mr. Robot and His Robot Factory (1984) References External links Big Five Software's official web site Miner 2049er for the Atari 8-bit family at Atari Mania The Miner 2049er Museum 1982 video games Apple II games Atari 2600 games Atari 5200 games Atari 8-bit family games Big Five Software games BlackBerry games ColecoVision games Commodore 64 games Commodore VIC-20 games Game Boy games IOS games FM-7 games NEC PC-8801 games Platform games Sharp X1 games Single-player video games Texas Instruments TI-99/4A games Video games about police officers Video games developed in the United States Video games set in the 2040s Windows Mobile games Mindscape games
4342215
https://en.wikipedia.org/wiki/MIKEY
MIKEY
Multimedia Internet KEYing (MIKEY) is a key management protocol that is intended for use with real-time applications. It can specifically be used to set up encryption keys for multimedia sessions that are secured using SRTP, the security protocol commonly used for securing real-time communications such as VoIP. MIKEY was first defined in RFC 3830. Additional MIKEY modes have been defined in RFC 4650, RFC 4738, RFC 6043, RFC 6267 and RFC 6509. Purpose of MIKEY As described in RFC 3830, the MIKEY protocol is intended to provide end-to-end security between users to support a communication. To do this, it shares a session key, known as the Traffic Encryption Key (TEK), between the participants of a communication session. The MIKEY protocol may also authenticate the participants of the communication. MIKEY provides many methods to share the session key and authenticate participants. Using MIKEY in practice MIKEY is used to perform key management for securing a multimedia communication protocol. As such, MIKEY exchanges generally occur within the signalling protocol which supports the communication. A common setup is for MIKEY to support Secure VoIP by providing the key management mechanism for the VoIP protocol (SRTP). Key management is performed by including MIKEY messages within the SDP content of SIP signalling messages. Use cases MIKEY considers how to secure the following use cases: One-to-one communications Conference communications Group Broadcast Call Divert Call Forking Delayed delivery (Voicemail) Not all MIKEY methods support each use case. Each MIKEY method also has its own advantages and disadvantages in terms of feature support, computational complexity and latency of communication setup. Key transport and exchange methods MIKEY supports eight different methods to set up a common secret (to be used as e.g. a session key or a session KEK): Pre-Shared Key (MIKEY-PSK): This is the most efficient way to handle the transport of the Common Secret, since only symmetric encryption is used and only a small amount of data has to be exchanged. However, an individual key has to be shared with every single peer, which leads to scalability problems for larger user groups. Public-Key (MIKEY-PK): The Common Secret is exchanged with the help of public key encryption. In larger systems, this requires a PKI to handle the secure distribution of public keys. Diffie–Hellman (MIKEY-DH): A Diffie–Hellman key exchange is used to set up the Common Secret. This method has a higher resource consumption (both computation time and bandwidth) than the previous ones, but has the advantage of providing perfect forward secrecy. Also, it can be used without any PKI. DH-HMAC (MIKEY-DHHMAC) (HMAC-Authenticated Diffie–Hellman): This is a light-weight version of Diffie–Hellman MIKEY: instead of certificates and RSA signatures it uses HMAC to authenticate the two parts to one another. DH-HMAC is defined in RFC 4650. RSA-R (MIKEY-RSA-R) (Reverse RSA): The Common Secret is exchanged with the help of public key encryption in a way that doesn't require any PKI: the initiator sends its public RSA key to the responder, which responds by selecting the Common Secret and then send it back to the initiator encrypted with the initiator's public key. RSA-R is defined in RFC 4738. TICKET (MIKEY-TICKET): Ticket-Based Modes of Key Distribution in Multimedia Internet KEYing (MIKEY). MIKEY-TICKET is defined in RFC 6043. 
IBAKE (MIKEY-IBAKE): Identity-Based Authenticated Key Exchange (IBAKE) Mode of Key Distribution in Multimedia Internet KEYing (MIKEY). MIKEY-IBAKE is defined in RFC 6267. SAKKE (MIKEY-SAKKE): Sakai-Kasahara Key Encryption in Multimedia Internet KEYing (MIKEY). This is an Identity-Based Authenticated Key Exchange method. MIKEY-SAKKE is defined in RFC 6509. MIKEY messages The majority of MIKEY methods require the initiator to send a message to participants (the I_MESSAGE), and the receivers to respond with another message (the R_MESSAGE). Once this exchange has completed, the session key can be generated by the participants. MIKEY-SAKKE does not require an R_MESSAGE. MIKEY message content MIKEY messages are made up of a number of payloads. Each payload describes the next payload in the MIKEY message. In this way the MIKEY protocol has proven flexible to extend and adapt. The first payload is always the Common Header (HDR). This identifies the version of the MIKEY protocol, the method used (data type), whether a response is required, and the cryptographic session that will be established via the exchange. Further payloads are defined by the MIKEY method in use. Frequently these will include information payloads such as: A timestamp payload (T) - this contains the time and hence helps protect against replay attacks. Identity Payloads (ID) - this identifies the participants. This payload type can also contain certificates (CERT). This was extended in RFC 6043 to include the 'role' of the user as part of the ID (IDR). A RAND payload (RAND) - this is random data used to salt the post-exchange key derivation. Security Policies (SP) - this contains a limited set of security policies to support the communication. Certificate Hash (CHASH) - a hash indicating a certificate used for public-key encryption. In addition to this, the MIKEY message will contain at least one payload which encapsulates key material. These include: Key data transport (KEMAC) - this encapsulates the key by encrypting it using a pre-shared secret. This is extended by RFC 4650 to support authenticated Diffie–Hellman (DHHMAC). Diffie–Hellman (DH) - this contains cryptographic information supporting the Diffie–Hellman protocol. Envelope Data (PKE) - this encapsulates the key using public key encryption. This is extended by RFC 4738 and RFC 6267. Sakai-Kasahara (SAKKE) - this encapsulates the key using the identity-based Sakai-Kasahara protocol. This is defined by RFC 6509. Ticket (TICKET) - provides a cryptographic token to request key material from an external server (KMS). This is defined by RFC 6043. Finally, the MIKEY message may contain an authentication payload. These include: Signature (SIGN) - a signature on the MIKEY message. Verification (V) - a MAC sent by the receiver to verify receipt. See also ZRTP - an alternative to MIKEY as cryptographic key-agreement protocol for SRTP SDES Session Description Protocol Security Descriptions for Media Streams Key-agreement protocol Internet Key Exchange (IKE): Another key management protocol wolfSSL: An SSL/TLS library that has integration with MIKEY SAKKE References Cryptographic protocols
56311950
https://en.wikipedia.org/wiki/Amikumu
Amikumu
Amikumu is a cross-platform app for smartphones (Android and iOS) which can be used to find people nearby who speak or are learning the same languages as the user. The app was launched for Esperanto speakers on 22 April 2017 and for speakers of all languages during LangFest in Montreal on 25 August 2017. On 9 August 2018, Amikumu had members in more than 130 countries speaking 588 languages. Architecture The Android app is written in Java and Kotlin, the iOS app in Swift, and the server in Ruby on Rails. Kickstarter campaign Amikumu was funded in part using Kickstarter. The Kickstarter campaign, organized by Esperanto speakers Chuck Smith and Richard "Evildea" Delamore, launched on 18 October 2016. More than 3000 euros were collected in the first 10 hours after the campaign started. The original goal of the campaign was 8500 euros, which was reached in 27 hours. The campaign ran until 16 November 2016, collecting a total of 26,671 euros, more than three times the original goal. References External links 2017 software Android (operating system) software Communication software Esperanto Geosocial networking iOS software Mobile social software Proprietary cross-platform software
14620714
https://en.wikipedia.org/wiki/Go%20software
Go software
There is an abundance of go software available to support players of the game of Go. This includes software programs that play Go themselves, programs that can be used to view and/or edit game records and diagrams, programs that allow the user to search for patterns in the games of strong players and programs that allow users to play against each other over the Internet. Go playing programs With the advent of AlphaGo in 2016, computer programs can beat top professional players on the standard 19x19 board. A more in depth look into Go playing programs and the research behind them can be found in the article on computer Go. Recording There are several file formats used to store game records, the most popular of which is the Smart Game Format (SGF). Programs used for editing game records allow the user to record not just the moves, but also variations, commentary and further information on the game Databases Electronic databases can be used to study life and death situations, joseki, fuseki and games by a particular player. Available programs give players pattern searching options, which allows a player to research positions by searching for high level games in which similar situations occur. Such software will generally list common follow up moves that have been played by professionals, and give statistics on win/loss ratio in opening situations. Internet servers and clients Many Internet-based Go servers allow access to competition with players all over the world. Such servers also allow easy access to professional teaching, with both teaching games and interactive game review being possible. The first Go server that started operating is the Internet Go Server (IGS), which began service in 1992 and is still active today. Several other servers, all with the same basic server-client architecture, followed. Such servers required players to download a client program, and many such programs were therefore developed for a wide range of platforms. Around 2000, Kiseido publishing started the Kiseido Go Server (KGS), which allowed players to play without downloading a client by utilizing a Java applet in the web browser. This server quickly became popular and still is today. IGS and KGS were the most popular real-time go servers for the English speaking audience. Online Go Server (OGS) and Dragon Go Server (DGS) were the most popular turn-based go servers. See also Hikarunix, a Linux distribution focused on Go Notes External links List of Go playing programs on Sensei's Library. List of game record editing programs on Sensei's Library. List of Database programs on Sensei's Library. List of game record editing programs on GoBase. List of internet Go servers on Sensei's Library. List of internet Go servers on the AGA website. List of teaching services on the BGA website. Free Go Software GoKnot, a Windows solution open for developing Alejo's Tenuki Video-reviews and analysis on database analysis programs. GoNote Browser-based go boards in a number of different sizes. GoChild Web2.0 site designed for learning how to play go efficiently. User Interfaces These programs provide graphical interfaces on a variety of platforms, performing various combinations of the services listed above. Qibago - Windows Phone 8 and 8.1 App containing more than 40,000 professional match records. 
CGoban - Linux / Unix Drago - Windows gGo - Java based glGo - Linux / Windows; a prototype for a 3D goban display Goban - Mac OS X GoGui - Java based GridMaster - Android Jago - Java based qGo - Linux / Windows / Mac OS X ; also an IGS client Quarry - Linux, GTK+-based Ruby Go - Linux, Unix, Windows, Tk-based Monkeyjump - Linux, Python/SDL-based Go Game Online Viewer - Internet browser. A Go game viewer with animation and Kifu printer.
48662
https://en.wikipedia.org/wiki/Computer%20number%20format
Computer number format
A computer number format is the internal representation of numeric values in digital device hardware and software, such as in programmable computers and calculators. Numerical values are stored as groupings of bits, such as bytes and words. The encoding between numerical values and bit patterns is chosen for convenience of the operation of the computer; the encoding used by the computer's instruction set generally requires conversion for external use, such as for printing and display. Different types of processors may have different internal representations of numerical values and different conventions are used for integer and real numbers. Most calculations are carried out with number formats that fit into a processor register, but some software systems allow representation of arbitrarily large numbers using multiple words of memory. Binary number representation Computers represent data in sets of binary digits. The representation is composed of bits, which in turn are grouped into larger sets such as bytes. A bit is a binary digit that represents one of two states. The concept of a bit can be understood as a value of either 1 or 0, on or off, yes or no, true or false, or encoded by a switch or toggle of some kind. While a single bit, on its own, is able to represent only two values, a string of bits may be used to represent larger values. For example, a string of three bits can represent up to eight distinct values as illustrated in Table 1. As the number of bits composing a string increases, the number of possible 0 and 1 combinations increases exponentially. A single bit allows only two value-combinations, two bits combined can make four separate values, three bits for eight, and so on, increasing with the formula 2^n. The amount of possible combinations doubles with each binary digit added as illustrated in Table 2. Groupings with a specific number of bits are used to represent varying things and have specific names. A byte is a bit string containing the number of bits needed to represent a character. On most modern computers, this is an eight bit string. Because the definition of a byte is related to the number of bits composing a character, some older computers have used a different bit length for their byte. In many computer architectures, the byte is the smallest addressable unit, the atom of addressability, say. For example, even though 64-bit processors may address memory sixty-four bits at a time, they may still split that memory into eight-bit pieces. This is called byte-addressable memory. Historically, many CPUs read data in some multiple of eight bits. Because the byte size of eight bits is so common, but the definition is not standardized, the term octet is sometimes used to explicitly describe an eight bit sequence. A nibble (sometimes nybble), is a number composed of four bits. Being a half-byte, the nibble was named as a play on words. A person may need several nibbles for one bite from something; similarly, a nybble is a part of a byte. Because four bits allow for sixteen values, a nibble is sometimes known as a hexadecimal digit. Octal and hexadecimal number display Octal and hexadecimal encoding are convenient ways to represent binary numbers, as used by computers. Computer engineers often need to write out binary quantities, but in practice writing out a binary number such as 1001001101010001 is tedious and prone to errors. Therefore, binary quantities are written in a base-8, or "octal", or, much more commonly, a base-16, "hexadecimal" (hex), number format. 
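The correspondence between the bases can be made concrete with a short C sketch. This is only an illustrative example, not drawn from any particular system or library; it prints the 16-bit pattern 1001001101010001 mentioned above using the standard printf conversions %o and %x for octal and hexadecimal:

#include <stdio.h>

/* Illustrative sketch: print the bits of a value, most significant bit first. */
static void print_binary(unsigned int value, int bits)
{
    for (int i = bits - 1; i >= 0; i--)
        putchar(((value >> i) & 1u) ? '1' : '0');
}

int main(void)
{
    /* The 16-bit pattern 1001001101010001 used as an example above. */
    unsigned int value = 0x9351;

    printf("binary : ");
    print_binary(value, 16);
    printf("\n");
    printf("octal  : %o\n", value);  /* base 8  */
    printf("hex    : %x\n", value);  /* base 16 */
    printf("decimal: %u\n", value);  /* base 10 */
    return 0;
}

The same bit pattern prints as 111521 in octal, 9351 in hexadecimal, and 37713 in decimal, which shows why hexadecimal digits map so neatly onto groups of four bits.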
In the decimal system, there are 10 digits, 0 through 9, which combine to form numbers. In an octal system, there are only 8 digits, 0 through 7. That is, the value of an octal "10" is the same as a decimal "8", an octal "20" is a decimal "16", and so on. In a hexadecimal system, there are 16 digits, 0 through 9 followed, by convention, by A through F. That is, a hexadecimal "10" is the same as a decimal "16" and a hexadecimal "20" is the same as a decimal "32". An example and comparison of numbers in different bases is described in the chart below. When typing numbers, formatting characters are used to describe the number system, for example 000_0000B or 0b000_00000 for binary and 0F8H or 0xf8 for hexadecimal numbers. Converting between bases Each of these number systems is a positional system, but while decimal weights are powers of 10, the octal weights are powers of 8 and the hexadecimal weights are powers of 16. To convert from hexadecimal or octal to decimal, for each digit one multiplies the value of the digit by the value of its position and then adds the results. For example, hexadecimal 1A is 1 × 16 + 10 × 1 = 26 in decimal, and octal 755 is 7 × 64 + 5 × 8 + 5 × 1 = 493 in decimal. Representing fractions in binary Fixed-point numbers Fixed-point formatting can be useful to represent fractions in binary. The number of bits needed for the precision and range desired must be chosen to store the fractional and integer parts of a number. For instance, using a 32-bit format, 16 bits may be used for the integer and 16 for the fraction. The eight's bit is followed by the four's bit, then the two's bit, then the one's bit. The fractional bits continue the pattern set by the integer bits. The next bit is the half's bit, then the quarter's bit, then the ⅛'s bit, and so on. For example, the binary value 11.01 represents 2 + 1 + 0.25 = 3.25 in decimal. This form of encoding cannot represent some values in binary. For example, for the fraction 1/5, 0.2 in decimal, the closest approximations are 0.0011 (0.1875), 0.00110011 (0.19921875), 0.001100110011 (0.199951171875), and so on. Even if more digits are used, an exact representation is impossible. The number 1/3, written in decimal as 0.333333333..., continues indefinitely. If prematurely terminated, the value would not represent 1/3 precisely. Floating-point numbers While both unsigned and signed integers are used in digital systems, even a 32-bit integer is not enough to handle all the range of numbers a calculator can handle, and that's not even including fractions. To approximate the greater range and precision of real numbers, we have to abandon signed integers and fixed-point numbers and go to a "floating-point" format. In the decimal system, we are familiar with floating-point numbers of the form (scientific notation): 1.1030402 × 10^5 = 1.1030402 × 100000 = 110304.02 or, more compactly: 1.1030402E5 which means "1.1030402 times 1 followed by 5 zeroes". We have a certain numeric value (1.1030402) known as a "significand", multiplied by a power of 10 (E5, meaning 10^5 or 100,000), known as an "exponent". If we have a negative exponent, that means the number is multiplied by a 1 that many places to the right of the decimal point. For example: 2.3434E−6 = 2.3434 × 10^−6 = 2.3434 × 0.000001 = 0.0000023434 The advantage of this scheme is that by using the exponent we can get a much wider range of numbers, even if the number of digits in the significand, or the "numeric precision", is much smaller than the range. Similar binary floating-point formats can be defined for computers. There are a number of such schemes; the most popular has been defined by the Institute of Electrical and Electronics Engineers (IEEE). 
The IEEE 754-2008 standard specification defines a 64-bit floating-point format with: an 11-bit binary exponent, using "excess-1023" format (excess-1023 means the exponent appears as an unsigned binary integer from 0 to 2047, and subtracting 1023 gives the actual signed value); a 52-bit significand, also an unsigned binary number, defining a fractional value with a leading implied "1"; and a sign bit, giving the sign of the number. Let's see what this format looks like by showing how such a number would be stored in 8 bytes of memory: S xxxxxxxxxxx mmmm mmmmmmmm mmmmmmmm mmmmmmmm mmmmmmmm mmmmmmmm mmmmmmmm where "S" denotes the sign bit, "x" denotes an exponent bit, and "m" denotes a significand bit. Once the bits here have been extracted, they are converted with the computation: <sign> × (1 + <fractional significand>) × 2^(<exponent> − 1023) This scheme provides numbers valid out to about 15 decimal digits, with the following range of numbers: from about ±4.9 × 10^−324 (the smallest subnormal magnitude) up to about ±1.8 × 10^308. The specification also defines several special values that are not defined numbers, and are known as NaNs, for "Not A Number". These are used by programs to designate invalid operations and the like. Some programs also use 32-bit floating-point numbers. The most common scheme uses a 23-bit significand with a sign bit, plus an 8-bit exponent in "excess-127" format, giving seven valid decimal digits. The bits are converted to a numeric value with the computation: <sign> × (1 + <fractional significand>) × 2^(<exponent> − 127) leading to the following range of numbers: from about ±1.4 × 10^−45 (the smallest subnormal magnitude) up to about ±3.4 × 10^38. Such floating-point numbers are known as "reals" or "floats" in general, but with a number of variations: A 32-bit float value is sometimes called a "real32" or a "single", meaning "single-precision floating-point value". A 64-bit float is sometimes called a "real64" or a "double", meaning "double-precision floating-point value". The relation between numbers and bit patterns is chosen for convenience in computer manipulation; eight bytes stored in computer memory may represent a 64-bit real, two 32-bit reals, or four signed or unsigned integers, or some other kind of data that fits into eight bytes. The only difference is how the computer interprets them. If the computer stored four unsigned integers and then read them back from memory as a 64-bit real, it almost always would be a perfectly valid real number, though it would be junk data. Only a finite range of real numbers can be represented with a given number of bits. Arithmetic operations can overflow or underflow, producing a value too large or too small to be represented. The representation has a limited precision. For example, only 15 decimal digits can be represented with a 64-bit real. If a very small floating-point number is added to a large one, the result is just the large one. The small number was too small to even show up in 15 or 16 digits of resolution, and the computer effectively discards it. Analyzing the effect of limited precision is a well-studied problem. Estimates of the magnitude of round-off errors and methods to limit their effect on large calculations are part of any large computation project. The precision limit is different from the range limit, as it affects the significand, not the exponent. The significand is a binary fraction that doesn't necessarily perfectly match a decimal fraction. In many cases a sum of reciprocal powers of 2 does not match a specific decimal fraction, and the results of computations will be slightly off. For example, the decimal fraction "0.1" is equivalent to an infinitely repeating binary fraction: 0.000110011 ... 
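To make the bit layout and rounding behaviour above concrete, the following C sketch (an illustrative example only, not part of any standard library) copies the bits of a double into a 64-bit integer and separates the sign, exponent, and significand fields; it also shows that 0.1 is stored inexactly and that a small addend is absorbed by a much larger one:

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
    double value = 0.1;
    uint64_t bits;

    /* Copy the 8 bytes of the double into an integer so the
       sign, exponent and significand fields can be examined. */
    memcpy(&bits, &value, sizeof bits);

    unsigned sign        = (unsigned)(bits >> 63);            /* 1 bit   */
    unsigned exponent    = (unsigned)((bits >> 52) & 0x7FF);  /* 11 bits */
    uint64_t significand = bits & 0xFFFFFFFFFFFFFULL;         /* 52 bits */

    printf("raw bits        : %016llx\n", (unsigned long long)bits);
    printf("sign            : %u\n", sign);
    printf("biased exponent : %u (actual %d)\n", exponent, (int)exponent - 1023);
    printf("significand     : %013llx\n", (unsigned long long)significand);

    /* 0.1 cannot be stored exactly, so printing extra digits exposes the error. */
    printf("0.1 stored as   : %.20f\n", value);

    /* A small number added to a much larger one is simply lost. */
    double large = 1.0e16;
    printf("(1e16 + 0.1) - 1e16 = %g\n", (large + 0.1) - large);

    return 0;
}

On a typical IEEE 754 machine this prints a biased exponent of 1019 (actual value −4) for 0.1 and a final difference of 0, illustrating both the representation error and the loss of small addends described above.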
Numbers in programming languages Programming in assembly language requires the programmer to keep track of the representation of numbers. Where the processor does not support a required mathematical operation, the programmer must work out a suitable algorithm and instruction sequence to carry out the operation; on some microprocessors, even integer multiplication must be done in software. High-level programming languages such as Ruby and Python offer an abstract number that may be an expanded type such as rational, bignum, or complex. Mathematical operations are carried out by library routines provided by the implementation of the language. A given mathematical symbol in the source code, by operator overloading, will invoke different object code appropriate to the representation of the numerical type; mathematical operations on any number—whether signed, unsigned, rational, floating-point, fixed-point, integral, or complex—are written exactly the same way. Some languages, such as REXX and Java, provide decimal floating-point operations, which provide rounding errors of a different form. See also Arbitrary-precision arithmetic Binary-coded decimal Binary numeral system Gray code Numeral system Notes and references Computer arithmetic Numeral systems
238766
https://en.wikipedia.org/wiki/C%20file%20input/output
C file input/output
The C programming language provides many standard library functions for file input and output. These functions make up the bulk of the C standard library header <stdio.h>. The functionality descends from a "portable I/O package" written by Mike Lesk at Bell Labs in the early 1970s, and officially became part of the Unix operating system in Version 7. The I/O functionality of C is fairly low-level by modern standards; C abstracts all file operations into operations on streams of bytes, which may be "input streams" or "output streams". Unlike some earlier programming languages, C has no direct support for random-access data files; to read from a record in the middle of a file, the programmer must create a stream, seek to the middle of the file, and then read bytes in sequence from the stream. The stream model of file I/O was popularized by Unix, which was developed concurrently with the C programming language itself. The vast majority of modern operating systems have inherited streams from Unix, and many languages in the C programming language family have inherited C's file I/O interface with few if any changes (for example, PHP). Overview This library uses what are called streams to operate with physical devices such as keyboards, printers, terminals or with any other type of file supported by the system. Streams are an abstraction to interact with these in a uniform way. All streams have similar properties independent of the individual characteristics of the physical media they are associated with. Functions Most of the C file input/output functions are defined in <stdio.h> (or in the C++ header <cstdio>, which contains the standard C functionality but in the std namespace). Constants Constants defined in the header include EOF, BUFSIZ, FILENAME_MAX, FOPEN_MAX, SEEK_SET, SEEK_CUR and SEEK_END. Variables Variables defined in the header include the standard streams stdin, stdout and stderr. Member types Data types defined in the header include: FILE – also known as a file handle, this is an opaque type containing the information about a file or text stream needed to perform input or output operations on it, including: platform-specific identifier of the associated I/O device, such as a file descriptor the buffer stream orientation indicator (unset, narrow, or wide) stream buffering state indicator (unbuffered, line buffered, fully buffered) I/O mode indicator (input stream, output stream, or update stream) binary/text mode indicator end-of-file indicator error indicator the current stream position and multibyte conversion state (an object of type mbstate_t) reentrant lock (required as of C11) fpos_t – a non-array type capable of uniquely identifying the position of every byte in a file and every conversion state that can occur in all supported multibyte character encodings size_t – an unsigned integer type which is the type of the result of the sizeof operator. Extensions The POSIX standard defines several extensions to <stdio.h> in its Base Definitions, among which are a function that allocates memory, the fdopen and fileno functions that establish the link between FILE objects and file descriptors, and a group of functions for creating FILE objects that refer to in-memory buffers. Example The following C program opens a binary file called myfile, reads five bytes from it, and then closes the file.

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    char buffer[5];
    FILE* fp = fopen("myfile", "rb");

    if (fp == NULL) {
        perror("Failed to open file \"myfile\"");
        return EXIT_FAILURE;
    }

    for (int i = 0; i < 5; i++) {
        int rc = getc(fp);

        if (rc == EOF) {
            fputs("An error occurred while reading the file.\n", stderr);
            return EXIT_FAILURE;
        }

        buffer[i] = rc;
    }

    fclose(fp);
    printf("The bytes read were... %x %x %x %x %x\n",
           buffer[0], buffer[1], buffer[2], buffer[3], buffer[4]);

    return EXIT_SUCCESS;
}

Alternatives to stdio Several alternatives to stdio have been developed. Among these is the C++ iostream library, part of the ISO C++ standard. ISO C++ still requires the stdio functionality. Other alternatives include the SFIO (A Safe/Fast I/O Library) library from AT&T Bell Laboratories. This library, introduced in 1991, aimed to avoid inconsistencies, unsafe practices and inefficiencies in the design of stdio. Among its features is the possibility to insert callback functions into a stream to customize the handling of data read from or written to the stream. It was released to the outside world in 1997, and the last release was 1 February 2005. See also printf format string scanf format string References External links C standard library Input/output Articles with example C code