19501790
https://en.wikipedia.org/wiki/Charles%20Leonard%20Hamblin
Charles Leonard Hamblin
Charles Leonard Hamblin (1922 – 14 May 1985) was an Australian philosopher, logician, and computer pioneer, as well as a professor of philosophy at the New South Wales University of Technology (now the University of New South Wales) in Sydney. Among his best-known achievements in computer science were the introduction of Reverse Polish Notation and, in 1957, the use of a push-down pop-up stack. This preceded the work of Friedrich Ludwig Bauer and Klaus Samelson on the use of a push-pop stack. The stack had been invented by Alan Turing in 1946 when he introduced such a stack in his design of the ACE computer. In philosophy, Hamblin is known for his book Fallacies, a standard work on fallacious arguments in logic. In formal semantics, Hamblin is known for his computational model of discourse as well as Hamblin semantics (or alternative semantics), an approach to the semantics of questions.

Career and life

Hamblin attended North Sydney Boys High School and Geelong Grammar. Interrupted by the Second World War and radar service in the Australian Air Force, Hamblin's studies included Arts (Philosophy and Mathematics), Science (Physics), and an MA in Philosophy (First Class Honours) at the University of Melbourne. He obtained a doctorate in 1957 at the London School of Economics on the topic Language and the Theory of Information, apparently under Karl Popper, critiquing Claude Shannon's information theory from a semantic perspective. From 1955 he was a lecturer, and later professor of philosophy, at the N.S.W. University of Technology, remaining there until his death in 1985, by which time the institution had been renamed the University of New South Wales.

In the second half of the 1950s, Hamblin worked with the third computer available in Australia, a DEUCE computer manufactured by the English Electric Company. For the DEUCE, he designed one of the first programming languages, later called GEORGE, which was based on Reverse Polish Notation (a brief illustration of stack-based evaluation of this notation appears below). His associated compiler (language translator) translated programs formulated in GEORGE into the machine language of the computer in 1957. Hamblin's work is considered to be the first to use Reverse Polish Notation in computing, which is why he is regarded as an inventor of the notation. Whether or not Hamblin independently invented the notation, he demonstrated the merits of the Reverse Polish way of writing programs, and of the algorithms for processing it, on programmable computers. The second direct result of his compiler work was the concept of the push-pop stack (previously invented by Alan M. Turing for the ACE in 1945), which Hamblin developed independently of Friedrich Ludwig Bauer and Klaus Samelson. In the same year, 1957, Hamblin presented his stack concept at the first Australian Computer Conference; the compiler was running before that conference. Hamblin's work influenced the development of stack-based computers, including their machine instructions, the passing of arguments on a stack, and reference addressing. The design was taken up by English Electric in their KDF9 computer, delivered in 1963.

In the 1960s, Hamblin again turned increasingly to philosophical questions. He wrote an influential book, Fallacies, which is today a standard work on the subject; it examined the treatment of fallacious arguments in traditional logic, brought formal dialectic into that treatment, and developed it further.
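The pairing of Reverse Polish Notation with a push-down ("push-pop") stack described above can be illustrated with a minimal sketch. It is written in Python purely for illustration; GEORGE compiled to DEUCE machine code, so this is not a reconstruction of Hamblin's compiler, only of the evaluation principle: operands are pushed, and each operator pops its arguments and pushes its result.

```python
def eval_rpn(tokens):
    """Evaluate a Reverse Polish Notation expression with a push-pop stack."""
    ops = {
        "+": lambda a, b: a + b,
        "-": lambda a, b: a - b,
        "*": lambda a, b: a * b,
        "/": lambda a, b: a / b,
    }
    stack = []
    for tok in tokens:
        if tok in ops:
            b = stack.pop()           # second operand is on top of the stack
            a = stack.pop()
            stack.append(ops[tok](a, b))
        else:
            stack.append(float(tok))  # operand: just push it
    return stack.pop()

# (1 + 2) * 4 written in Reverse Polish Notation:
print(eval_rpn("1 2 + 4 *".split()))  # 12.0
```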
As such, Hamblin is considered one of the founders of modern informal logic. Hamblin contributed to the development of modern temporal logic in two ways. In its very early period he corresponded with Arthur Prior between 1958 and 1965; this collaboration culminated in the so-called Hamblin implications. Later, in 1972, Hamblin independently rediscovered a form of duration calculus (interval logic), unaware of A. G. Walker's 1947 work on the topic, which was not concerned with the tense aspect. Hamblin's duration calculus is very similar to the one later developed by James Allen and Patrick J. Hayes in the mid 1980s. Hamblin was familiar with ancient Greek and several Asian and Pacific languages, and in 1984 published a polyglot phrasebook covering 25 of the latter, including "Burmese, Korean, Japanese, Fijian and Tahitian". A classical music lover who played the piano, Hamblin was setting words of Wittgenstein to music while hospitalized with the illness that proved fatal. He was married to Rita Hamblin. They had two daughters, Fiona Katherine and Julie Claire. Works Monographs Fallacies. Methuen London 1970, and (paperback), new edition of 2004 with Vale Press, (paperback) – even today a standard work on the topic Elementary Formal Logic: A Programmed Course. Methuen London 1967, Imperatives. Blackwell Oxford 1987, Language and the Theory of Information. PhD Thesis, Logic and Scientific Method Programme, University of London, London, UK. Supervised by Karl Popper, submitted October 1956, awarded 1957. Influential articles Translation to and from Polish notation. The Computer Journal 5/3, October 1962, pp. 210–213 An Addressless Coding Scheme based on Mathematical Notation. W.R.E. Conference on Computing, proceedings, Salisbury: Weapons Research Establishment 1957 GEORGE, an Addressless Coding Scheme for DEUCE. Australian National Committee on Computation and Automatic Control, Summarized Proceedings of First Conference, paper C6.1, 1960 Computer Languages. The Australian Journal of Science 20, pp. 135–139. Reprinted in The Australian Computer Journal 17/4, pp. 195–198 (November 1985) C. L. Hamblin [1973]: Questions in Montague English. Foundations of Language, 10: 41–53. Patents US2849706 "Electronic circuits for deriving a voltage proportional to the logarithm of the magnitude of a variable quantity". Applied 3 Feb. 1953 (applied in Great Britain 4 Feb. 1952), granted 21 Aug. 1958. US3008640 "Electric Computing Apparatus". Applied 11 Oct. 1954 (applied in Great Britain 13 Oct. 1953), granted 14 Nov. 1961. Publications Sources: C. L. Hamblin [1957]: An addressless coding scheme based on mathematical notation. Proceedings of the First Australian Conference on Computing and Data Processing, Salisbury, South Australia: Weapons Research Establishment, June 1957. C. L. Hamblin [1957]: Computer Languages. The Australian Journal of Science, 20: 135–139. Reprinted in The Australian Computer Journal, 17(4): 195–198 (November 1985). C. L. Hamblin [1957]: Review of: W. R. Ashby: Introduction to Cybernetics. Australasian Journal of Philosophy, 35. C. L. Hamblin [1958]: Questions. Australasian Journal of Philosophy, 36(3): 159–168. C. L. Hamblin [1958]: Review of: Time and Modality, by A. N. Prior. Australasian Journal of Philosophy, 36: 232–234. C. L. Hamblin [1958]: Surprises, innovations and probabilities. Proceedings of the ANU Symposium on Surprise, Canberra, July 1958. C. L. Hamblin [1958]: Review of: Formal Analysis of Normative Systems, by A. R. Anderson.
Australasian Journal of Philosophy, 36. C. L. Hamblin [1958]: GEORGE Programming Manual. Duplicated, 1958. Revised and enlarged, 1959. C. L. Hamblin [1959]: The Modal "Probably". Mind, New Series, 68: 234–240. C. L. Hamblin [1962]: Translation to and from Polish notation. Computer Journal, 5: 210–213. C. L. Hamblin [1963]: Questions aren't statements. Philosophy of Science, 30(1): 62–63. R. J. Gillings and C. L. Hamblin [1964]: Babylonian reciprocal tables on UTECOM. Technology, 9 (2): 41–42, August 1964. An expanded version appeared in Australian Journal of Science, 27, 1964. C. L. Hamblin [1964]: Has probability any foundations? Proceedings of the Symposium on Probability of the Statistical Society of New South Wales, May 1964. Reproduced in Science Yearbook, University of New South Wales, Sydney, 1964. C. L. Hamblin [1964]: Review of: Communication: A Logical Model, by D. Harrah. Australasian Journal of Philosophy, 42. C. L. Hamblin [1964]: Review of: Analysis of Questions, by N. D. Belnap. Australasian Journal of Philosophy, 42. C. L. Hamblin [1965]: Review of: A Preface to the Logic of Science, by P. Alexander. The British Journal for the Philosophy of Science, 15(60): 360–362. C. L. Hamblin [1966]: Elementary Formal Logic, a Programmed Course. (Sydney: Hicks Smith). Republished by Methuen, in London, UK, 1967. Also translated into Swedish by J. Mannerheim, under the title: Element"ar Logik, ein programmerad kurs. (Stockholm: Laromedelsf"orlagen, 1970). C. L. Hamblin [1967]: One-valued logic. Philosophical Quarterly, 17: 38–45. C. L. Hamblin [1967]: Questions, logic of. Encyclopedia of Philosophy. (New York: Collier Macmillan). C. L. Hamblin [1967]: An algorithm for polynomial operations. Computer Journal, 10. C. L. Hamblin [1967]: Review of: New Approaches to the Logical Theory of Interrogatives, by L. Aqvist. Australasian Journal of Philosophy, 44. C. L. Hamblin [1969]: Starting and stopping. The Monist, 53: 410–425. C. L. Hamblin [1970]: Fallacies. London, UK: Methuen. C. L. Hamblin [1970]: The effect of when it's said. Theoria, 36: 249–264. C. L. Hamblin [1971]: Mathematical models of dialogue. Theoria, 37: 130–155. C. L. Hamblin [1971]: Instants and intervals. Studium Generale, 24: 127–134. C. L. Hamblin [1972]: You and I. Analysis, 33: 1–4. C. L. Hamblin [1972]: Quandaries and the logic of rules. Journal of Philosophical Logic, 1: 74–85. C. L. Hamblin [1973]: Questions in Montague English. Foundations of Language, 10: 41–53. C. L. Hamblin [1973]: A felicitous fragment of the predicate calculus. Notre Dame Journal of Formal Logic. 14: 433–446. C. L. Hamblin [1974]: La logica dell'iniziare e del cessare. Italian translation by C. Pizzi of an unpublished article: The logic of starting and stopping. Pages 295–317 in: C. Pizzi (Editor): La Logica del Tempo. Torino: Bringhieri. C. L. Hamblin [1975]: Creswell's colleague TLM. Nous, 9(2): 205–210. C. L. Hamblin [1975]: Saccherian arguments and the self-application of logic. Australasian Journal of Philosophy, 53: 157–160. C. L. Hamblin [1976]: An improved "Pons Asinorum"? Journal of the History of Philosophy, 14: 131–136. C. L. Hamblin [1984]: Languages of Asia and the Pacific: A Phrasebook for Travellers and Students. (North Ryde, NSW: Angus and Robertson). C. L. Hamblin [1987]: Imperatives. Oxford, UK: Basil Blackwell. C. L. Hamblin and P. J. Staines [1992]: An extraordinarily simple theory of the syllogism. Logique et Analyse, 35: 81. 
References Further reading Graham Williams, "A shy blend of logic, maths and languages", in: The Sydney Morning Herald, 8 June 1985, p. 44. External links Allen, Murray W. (1985), "Charles Hamblin (1922–1985)", The Australian Computer Journal, 17(4): 194–195. Special Issue on Charles Hamblin, Informal Logic, Vol. 31, No. 4 (2011). McBurney, Peter, A salute to Charles Hamblin, vukutu.com, 10 January 2011. McBurney, Peter, Charles L. Hamblin, at University of Liverpool. Von Fintel, Kai, Charles Leonard Hamblin, 5 July 2013. C. L. Hamblin at PhilPapers 1922 births 1985 deaths Australian computer scientists Formal methods people Programming language designers Programming language researchers Australian logicians 20th-century Australian philosophers University of New South Wales faculty People educated at North Sydney Boys High School
33134040
https://en.wikipedia.org/wiki/Kerbal%20Space%20Program
Kerbal Space Program
Kerbal Space Program (KSP) is a space flight simulation video game developed by Mexican developer Squad for Microsoft Windows, macOS, Linux, PlayStation 4, and Xbox One. In the game, players direct a nascent space program, staffed and crewed by green humanoid aliens known as "Kerbals". The game features a realistic orbital physics engine, allowing for various real-life orbital maneuvers such as Hohmann transfer orbits and orbital rendezvous. The first public version was released digitally on Squad's Kerbal Space Program storefront on 24 June 2011, and joined Steam's early access program on 20 March 2013. The game was released out of beta on 27 April 2015. Kerbal Space Program has support for user-created mods that add new features. Popular mods have received support and inclusion in the game by Squad. People and agencies in the space industry have taken an interest in the game, including NASA, the European Space Agency, United Launch Alliance CEO Tory Bruno, and Peter Beck, CEO and CTO of Rocket Lab. In May 2017, Squad announced that the game was purchased by video game company Take-Two Interactive, who will help support Squad in keeping the console versions up-to-date alongside the personal computer versions. An Enhanced Edition was released for Xbox One and PlayStation 4 in January 2018, and for PlayStation 5 and Xbox Series X/S in September 2021 by Private Division, a publishing subsidiary of Take-Two Interactive. Two expansions for the game have been released as downloadable content: Making History in March 2018 and Breaking Ground in May 2019. A sequel, Kerbal Space Program 2, has been announced for a 2022 release. Gameplay The player administers a space program operated by Kerbals, a species of small green humanoids, who have constructed a spaceport called the Kerbal Space Center (KSC) on their home planet, Kerbin. From this space center players can create rockets, aircraft, spaceplanes, rovers, and other craft from a provided set of components. Once built, the craft can be launched by players from the KSC launch pad or runway in an attempt to complete player-set or game-directed missions while avoiding partial or catastrophic failure (such as lack of fuel or structural failure). Players control their spacecraft in three dimensions with little assistance other than a Stability Assist System (SAS) to keep their rocket oriented. Provided it maintains sufficient thrust and fuel, a spacecraft can enter orbit or even travel to other celestial bodies. To visualize vehicle trajectory, the player must switch into map mode; this displays the orbit or trajectory of the player vehicle, as well as the position and trajectory of other spacecraft and planetary bodies. These planets and other vehicles can be targeted to view information needed for rendezvous and docking, such as ascending and descending nodes, target direction, and relative velocity to the target. While in map mode, players can also access maneuver nodes to plan out trajectory changes in advance, which helps in accurately planning burns. Missions (either player-set or assigned "contracts") involve goals such as reaching a certain altitude, escaping the atmosphere, reaching a stable orbit, landing on a certain planetary body, rescuing stranded Kerbals, capturing asteroids, and creating space stations and surface bases. Players may also set challenges for each other on the game's forums, such as visiting all five moons of Jool (the in-game analog for Jupiter), or use mods to test each other's spacecraft in air combat tournaments. 
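As a rough sketch of the burn planning behind the Hohmann transfers and maneuver nodes mentioned above, the two burns of a Hohmann transfer between circular orbits can be computed from the vis-viva equation. This is a generic orbital-mechanics illustration; the gravitational parameter and orbit radii below are placeholder values, not figures taken from the game.

```python
import math

def hohmann_burns(mu, r1, r2):
    """Delta-v of the two burns of a Hohmann transfer between circular
    orbits of radius r1 and r2 around a body with gravitational parameter mu."""
    a_transfer = (r1 + r2) / 2.0                              # transfer ellipse semi-major axis
    v_circ1 = math.sqrt(mu / r1)                              # speed in the initial circular orbit
    v_circ2 = math.sqrt(mu / r2)                              # speed in the final circular orbit
    v_depart = math.sqrt(mu * (2.0 / r1 - 1.0 / a_transfer))  # vis-viva speed at departure burn
    v_arrive = math.sqrt(mu * (2.0 / r2 - 1.0 / a_transfer))  # vis-viva speed at arrival burn
    return abs(v_depart - v_circ1), abs(v_circ2 - v_arrive)

# Illustrative, roughly Earth-like numbers only:
dv1, dv2 = hohmann_burns(mu=3.986e14, r1=7.0e6, r2=4.2e7)
print(f"burn 1: {dv1:.0f} m/s, burn 2: {dv2:.0f} m/s")
```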
Players can control in-game astronauts, known as Kerbals, who can perform extravehicular activities (EVA). While on EVA, Kerbals may use their EVA suit propellant system to maneuver in space and around craft and space stations, similar to the use of NASA's Manned Maneuvering Unit. Actions that can be performed while on EVA include repairing landing legs, wheels, and deploying or repacking parachutes. Kerbals can also collect material from science experiments, allowing them to store data inside the ship's capsule. During an EVA on any solid planet or moon, a Kerbal can place a flag or take a surface sample. Historical spacecraft can be recreated and their accomplishments mimicked, such as the Apollo program, the Mars Science Laboratory rover, or the International Space Station. Players may install mods which implement destinations, weapons, additional rocket parts, and goals, such as attempting challenges in a real-scale solar system. Mods can also add informational displays showing craft and orbital statistics such as delta-v and orbital inclination, while a few can near-fully automate flight. Some mods have been added into the game, due to popularity. For example, resource mining, to get ore for refining into fuel, has been implemented from a popular mod. The major celestial bodies in the game in order of their proximity to the parent star, the Sun, are Moho, Eve, Kerbin, Duna, Dres, Jool, and Eeloo (respectively analogs of Mercury, Venus, Earth, Mars, Ceres, Jupiter, and Pluto). Community modifications are able to expand this planetary system, even being able to add exoplanets or other solar systems. Moons in the system include Gilly, a captured asteroid around Eve; the Mun and Minmus, the two moons of Kerbin; Ike, the single moon of Duna; and the five moons of Jool, which include Laythe, an ocean moon dotted with islands; Vall, an ice moon; Tylo, a large Kerbin-sized moon; and Bop and Pol, the two outermost captured asteroids. Laythe is the only moon to have an atmosphere or liquid water. Moho, Dres, and Eeloo do not possess natural satellites. Game modes The player starts a new game by choosing one of three game modes: sandbox, science, and career mode. In sandbox mode, players may attempt to construct a suitable vehicle for any desired project without penalties for failure and entirely user-assigned missions. Many players have constructed unrealistic spacecraft in this mode, such as impractically large, complicated, or expensive rockets. This mode is also frequently used to create replicas of real-life vehicles. In science mode, the initial selection of parts is limited. More complex parts can be unlocked in the Research and Development building by advancing "science" with various experiments on Kerbin and elsewhere throughout the solar system. This mode was designed to ease new players into the game and prevent them from getting overwhelmed. Career mode extends science mode by adding funds, reputation, and contracts. To build and launch new rockets, the players must complete contracts, earning funds to pay for the necessary parts and fuel. Reputation affects how many contracts are given to the player; less reputation leads to fewer, lower-quality contracts. Declining a contract will reduce the likelihood that a contract of the same type will appear later while also decreasing reputation. Players must upgrade buildings in the space center to unlock new features such as improved tracking, higher spacecraft mass limit, larger part count limit, and increased available contracts. 
Physics

While the game is not a perfect simulation of reality, it has been praised for its accurate orbital mechanics; all objects in the game except the celestial bodies are simulated using Newtonian dynamics. For instance, rocket thrust is applied to a vehicle's frame based on the placement of force-generating elements, and joints between parts have limited strength, allowing vehicles to be torn apart by excessive or misdirected forces. The game simulates trajectories and orbits using a patched conic approximation instead of a full n-body simulation; thus, it does not support Lagrange points, perturbations, Lissajous orbits, halo orbits or tidal forces (a small worked example of the sphere-of-influence approximation used in patched conic models appears further below). According to the developers, implementing full n-body physics would require the entire physics engine to be rewritten. The in-game astronauts, Kerbals, are physically simulated. Hitting an object with their feet will cause them to tumble. Some celestial bodies have atmospheres of varying heights and densities, affecting the impact of drag on wings and parachutes. The simulations are accurate enough that real-world techniques such as aerobraking are viable methods of navigating the solar system. Aerobraking, however, has become a much more difficult method of velocity reduction since the full 1.0 release due to improved aerodynamics and optional heating during atmospheric entry. In-game atmospheres thin out into space but have finite heights, unlike real atmospheres. Kerbal Space Program alters the scale of its solar system for gameplay purposes. For example, Kerbin (the game's analog of Earth) has a radius of only 600 kilometres, approximately one-tenth that of Earth's. To compensate for the gravitational consequences of this size difference, Kerbin's density is over 10 times that of Earth's. The planets themselves are also significantly closer together than the planets in the real-life Solar System. However, some mods port the real-world solar system into the game with accurate scaling, environments, and additional parts to make up for the extra power requirements.

Expansions

There are two downloadable content (DLC) expansions: Making History and Breaking Ground.

Making History

The Making History expansion, released in March 2018, adds additional elements to the game, some of which are historic parts from the Apollo program. These include a lunar lander, basic rocket parts of the Saturn V and more. A level editor allows players to create their own scenarios. New launch sites, including an island, are added.

Breaking Ground

Breaking Ground, released in May 2019, adds robotic parts, which can be used to build helicopters, propeller airplanes, suspension systems, and robots. The parts include pistons, hinges and rotors. New suits were added as well. Major additions are surface features and surface science: the player can find certain rocks on the surfaces of planets and analyze them using robotic arms. Science experiments such as active seismometers and weather stations can be deployed by the Kerbals and can be used to gather extra science points in science and career mode.

Development

Pre-development

Director Felipe Falanghe was hired by Squad in April 2010. At the time, the company did not develop software. According to Falanghe, the name "Kerbal" came from the names he gave small tin figurines he installed in modified fireworks as a teenager. In October 2010, development on Kerbal Space Program was authorized by co-founder Adrian Goya but deferred until Falanghe had completed his projects. Kerbal Space Program was first compiled on 17 January 2011.
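Returning to the patched conic approximation discussed in the Physics section above: such models attach the craft to one body at a time and switch at the edge of each body's sphere of influence, commonly approximated as r_SOI ≈ a(m/M)^(2/5). The sketch below uses placeholder, roughly Earth/Sun-like values rather than figures from the game.

```python
def sphere_of_influence(a, m_body, m_parent):
    """Approximate sphere-of-influence radius used in patched conic models:
    r_SOI = a * (m_body / m_parent) ** (2/5), where a is the body's orbital
    semi-major axis around its parent."""
    return a * (m_body / m_parent) ** 0.4

# Placeholder, roughly Earth/Sun-like values (not taken from the game):
a_orbit  = 1.496e11   # orbital semi-major axis, metres
m_planet = 5.97e24    # planet mass, kg
m_star   = 1.989e30   # star mass, kg
print(f"sphere of influence ~ {sphere_of_influence(a_orbit, m_planet, m_star):.3e} m")
```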
The game's first public release, version 0.7.3, was on 24 June 2011. The game entered beta on 14 December 2014, with version 0.90, and was released out of beta on 27 April 2015. Alpha Version 0.7.3 was the first public release of Kerbal Space Program, and was released on 24 June 2011. It was downloaded over 5,000 times. Compared to future versions of this game, 0.7.3 was quite rudimentary. There was no stability assist mode, Kerbin did not rotate and the Sun was simply a directional light source. There were no fuel flow mechanics, no control surfaces, and no other celestial bodies. Later versions added additional planets and moons, as well as the ability to load and save collections of parts, known as "subassemblies". Tutorials were also added at this stage. Version 0.24, titled First Contract and released on 17 July 2014, added the contracts and reputation system to the game's career mode; however, players were still able to play career mode without these features in the new science mode. Contracts reward the player with currency and reputation. Funds can be used to purchase rocket parts, and reputation results in better and more lucrative contracts. The final alpha release, 0.25, included a new economic system, and a major rework of aircraft components. Beta Version 0.90, nicknamed Beta Than Ever, was released on 15 December 2014. This was the only beta update for Kerbal Space Program. Featuring extensively rewritten code for the editor, it introduced the ability to sort parts by several characteristics and to assign parts to custom categories. Players could now offset parts, including into space. Career mode featured building upgrades; only the creation of small rockets with low mass and a part count is initially supported, but the player can upgrade each of the facilities to increase size limitations or unlock other capabilities. Release Version 1.0 was the first full release of Kerbal Space Program. It was nicknamed We Have Liftoff! and released on 27 April 2015. Version 1.0 completely overhauled the flight and drag model for a more realistic simulation, now ignoring drag on rocket parts which were occluded from the air flow. It also allowed for body lift, so that parts that were not specifically designed as wings (such as structural panels) could still generate lift. 1.0 added shock heating and heat shields, making atmospheric entry much more dangerous, as well as air brakes and procedurally generated fairings. All parts received internal modeling. Resource mining was added to refine into fuel or monopropellant. 1.0 also brought several improvements to Kerbals, who could now have various specializations. For example, "Engineer" Kerbals can repair wheels and landing legs. Female Kerbals were also added to the game. Version 1.1, nicknamed Turbo Charged, was released on 19 April 2016, almost one year after the last major update. The game engine was upgraded from Unity 4 to Unity 5, resulting in a massive increase in performance, as well as a stable 64-bit client, removing memory constraints caused by too many mods being installed. Much of the game was rewritten to accomplish this. Squad released Version 1.2, nicknamed Loud And Clear, to upgrade the game from Unity 5 to 5.4 and introduce performance and minor gameplay improvements. The patch entered experimental testing on 6 September 2016 and was officially released on 11 October 2016. Its main new features include communication satellites, relay systems, and KerbNet. Several updates have been released since. 
On 10 June 2021, Squad announced that update 1.12, On Final Approach, would be the last major planned release and that the Squad developers would join the Intercept Games team working on KSP 2.

Other updates, Take-Two Interactive ownership

On 27 January 2014, it was revealed that Squad was working on an education-themed version of the game entitled KerbalEdu in collaboration with TeacherGaming LLC, creators of MinecraftEdu. It has since been released and includes an improved user interface for easier data gathering and summary, pre-made lessons that focus on certain constructions, options to use the metric system, and a "robust pedagogy" that includes information outside of the game that ties into its content. Squad has also made an Asteroid Mission Pack, with full support from NASA. Released on 1 April 2014, it is based on the real-life initiative to send humans out to study asteroids. The majority of the game's music was provided by royalty-free composer Kevin MacLeod, with the rest of the soundtrack having been written by Squad's in-house composer Victor Machado. The game's main theme was composed by lead designer Felipe Falanghe and arranged by Machado. On 5 June 2015, it was announced that Kerbal Space Program was being ported to the PlayStation 4 by Flying Tiger Entertainment. In August 2015, it was announced that Xbox One and Wii U ports were also in development by Flying Tiger Entertainment. The game has since been released on the PlayStation 4 and Xbox One, but Squad has been quiet regarding the announced Wii U port. In January 2017, one of Squad's developers finally broke the silence on the official forums, admitting that, despite initial enthusiasm for releasing the game on the Wii U, various "external factors" had forced the studio to reevaluate supporting the console. They added that additional details would be announced at a later date. On 17 March 2017, Squad announced a full expansion for the game; called Making History, it would be paid and contain new features. These new features included the Mission Builder, which would allow players to create and edit their own missions, to be completed by launching and operating various rockets and ships in the game, and the History Pack, which would provide designed missions simulating important historical space endeavors that have been completed in real life. Squad announced on 7 February 2018 that the expansion would be released on 13 March 2018. The expansion contains many parts inspired by those used in various rockets such as the Soyuz spacecraft and the Saturn V. Squad announced in May 2017 that Kerbal Space Program had been acquired by publisher Take-Two Interactive; according to Squad, the acquisition would not affect its development of or plans for the game, early backers would still get free DLC, and Take-Two's help as a publisher would allow better support for Kerbal Space Program on consoles, keeping those versions up to date alongside the personal computer ones. Kerbal Space Program would be one of the first titles published under Take-Two Interactive's 2017-launched Private Division, a publishing label aimed at supporting mid-sized development studios. In late May 2019, Squad released the Breaking Ground expansion, which includes servos, pistons, new and redesigned space suits, and experiments which can be deployed to earn science over time. On 24 June 2021, the last version of Kerbal Space Program, version 1.12, was released. It was named "On Final Approach".

Reception

The public alpha and beta releases were well-received.
Many publications have spoken positively of the game, praising its replay value and creative aspects, including Kotaku, Rock, Paper, Shotgun, IGN, GameSpy, Eurogamer, Polygon, and Destructoid. In May 2015, PC Gamer awarded Kerbal Space Program 1.0 a score of 96 out of 100, their highest review score of 2015. They praised the "perfect blend of science and slapstick", as well as the sense of accomplishment felt upon reaching other planets and completing goals. IGN has praised Kerbal Space Program's ability to create fun out of failure, saying that "By the time I finally built a rocket that achieved successful orbit, I had failed so many times that in almost any other game I would have given up completely." In their review, Edge thought that "The magic of Kerbal Space Program is not just that it manages to be both a game and a simulation, a high-level educational tool and something that is fun to simply sit and tinker with. It's that, in combination, these qualities allow for a connection with real history and real human achievement... Its ultimate promise to the player is not that you'll crack a puzzle that has been set by a designer, but that you'll crack a puzzle set by reality."

Commercial

In the hours after its Steam early access release on 20 March 2013, Kerbal Space Program was one of the platform's top 5 best-selling games, as well as the best seller on Steam for Linux. Squad has released physical merchandise such as clothing and plush toys. In March 2015, Squad and 3D printing service Eucl3D announced a partnership that would allow players to order 3D printed models of their craft.

Scientific community

The game has crossed over into the scientific community, with scientists and members of the space industry displaying an interest in the game, including NASA, ESA, ULA's Tory Bruno, and SpaceX's Elon Musk. Squad has added a NASA-based Asteroid Redirect Mission pack to the game, allowing players to track and capture asteroids for mining and study. Squad has also developed an official mod for the game centered around observing and tracking threatening asteroids, named "Asteroid Day". The mod was developed in partnership with the B612 Foundation. Some parts from this mod beyond its core functionality were added as part of the 1.1 update, and the mod was fully integrated into the stock game in version 1.3. In collaboration with ESA, Squad added the BepiColombo and Rosetta missions along with several ESA-themed textures for in-game parts in version 1.10.

Sequel

A sequel, Kerbal Space Program 2, is to be released in 2022.

See also
Space flight simulation game
List of space flight simulation games
Planetarium software
List of observatory software
Apollo 11 in popular culture

References

External links

2015 video games Cancelled Wii U games Early access video games Linux games MacOS games Open-world video games PlayStation 4 games PlayStation 5 games Private Division games Single-player video games Space flight simulator games Take-Two Interactive franchises Video games about extraterrestrial life Video games developed in Mexico Video games set on fictional planets Video games with user-generated gameplay content Windows games Xbox One games Xbox Series X and Series S games Aviation video games
26710239
https://en.wikipedia.org/wiki/Universal%20Storage%20Platform
Universal Storage Platform
Universal Storage Platform (USP) was the brand name for a Hitachi Data Systems line of computer data storage disk arrays sold circa 2004 to 2010.

History

The Hitachi Universal Storage Platform was first introduced in 2004. An entry-level enterprise and high-end midrange model, the Network Storage Controller, was introduced in 2005. The Universal Storage Platform was one of the first disk arrays to virtualize other disk arrays in the appliance instead of in the network. The second-generation Universal Storage Platform V replaced the original Universal Storage Platform in 2007, with the Universal Storage Platform VM replacing the original Network Storage Controller the same year.

Architecture

At the core of the Universal Storage Platform V and VM is a fully fault-tolerant, high-performance, non-blocking, silicon-based switched architecture designed to provide the bandwidth needed to support infrastructure consolidation of enterprise file and block-based storage services on and behind a single platform. Notable features include:
Supports online local and distance replication and migration of data, non-disruptively, internally and between heterogeneous storage, without interrupting application I/O, through use of products such as Tiered Storage Manager, ShadowImage, TrueCopy and Universal Replicator.
Enables virtualization of external SAN storage from Hitachi and other vendors into one pool.
Storage partitioning provides the ability to host multiple applications on a single storage system without allowing the actions of one set of users to affect the Quality of Service of others.
Supports thin provisioning and storage reclamation on internal and external virtualized storage (a generic sketch of thin provisioning follows this list).
Provides encryption, WORM and data shredding services, data resilience and business continuity services, and content management services.
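The thin provisioning mentioned in the feature list can be sketched generically: physical pages are drawn from a shared pool only when a virtual volume is first written, not when it is created. This is an illustration of the general concept, not Hitachi's implementation.

```python
class ThinVolume:
    """Generic thin-provisioned volume: physical pages are allocated from a
    shared pool on first write, so unwritten virtual capacity consumes nothing."""
    def __init__(self, pool, virtual_pages):
        self.pool = pool            # shared list of free physical page numbers
        self.mapping = {}           # virtual page -> physical page
        self.virtual_pages = virtual_pages

    def write(self, vpage, data):
        if vpage not in self.mapping:          # allocate on first write only
            self.mapping[vpage] = self.pool.pop()
        # ... data would be stored at self.mapping[vpage] ...

    def allocated(self):
        return len(self.mapping)

pool = list(range(1000))                        # 1,000 physical pages shared by all volumes
vol = ThinVolume(pool, virtual_pages=10_000)    # a 10,000-page virtual volume
vol.write(42, b"x")
print(vol.allocated(), "of", vol.virtual_pages, "virtual pages physically allocated")
```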
Specifications

Universal Storage Platform V specifications:
Frames (cabinets) - integrated Control/Drive Group Frame and 1 to 4 optional Drive Group Frames
Universal Star Network crossbar switch - number of switches: 8
Aggregate bandwidth - 106 GB/sec
Aggregate IOPS - over 4 million
Cache memory - 1-32 cache modules of 8 or 16 GB each; maximum cache memory 512 GB
Control/shared memory - 1-8 control memory modules of 4 GB each; maximum control memory 28 GB
Front-end directors (connectivity) - 1-14 directors; 8 or 16 Fibre Channel host ports per director; Fibre Channel port performance 4 or 8 Gbit/s; maximum 224 Fibre Channel host ports; 1,024 virtual host ports per physical port; maximum 112 IBM FICON host ports; maximum 112 IBM ESCON host ports
Logical devices (LUNs), maximum supported - open systems 65,536; IBM z/OS 65,536
Disks - Flash: 73, 146, 200 and 400 GB; Fibre Channel: 146, 300, 450 and 600 GB; SATA II: 1 TB, 2 TB
Number of disks per system (min/max) - 4-1,152
Number of spare disks per system (min/max) - 1-40
Maximum internal raw capacity (2 TB disks) - 2,268 TB
Maximum usable capacity, RAID-5 - open systems (2 TB disks) 1,972 TB; z/OS-compatible (1 TB disks) 931 TB
Maximum usable capacity, RAID-6 - open systems (2 TB disks) 1,690 TB; z/OS-compatible (1 TB disks) 796 TB
Maximum usable capacity, RAID-1+ - open systems (2 TB disks) 1,130 TB; z/OS-compatible (1 TB disks) 527.4 TB
Other features - RAID 1, 10, 5, 6 support; maximum internal and external capacity 247 PB; up to 32 Virtual Storage Machines; 1-8 back-end directors
Operating system support - Mainframe: Fujitsu MSP; IBM z/OS, z/OS.e, z/VM, TPF; Red Hat Linux for IBM S/390 and zSeries; SUSE Linux Enterprise Server for System z. Open systems: HP HP-UX, Tru64 UNIX, OpenVMS; IBM AIX; Microsoft Windows Server 2000, 2003, 2008; Novell NetWare; SUSE Linux Enterprise Server; Red Hat Enterprise Linux; SGI IRIX; Sun Microsystems Solaris; VMware ESX and vSphere; Citrix XenServer

Storage management

The Hitachi Storage Command Suite (formerly the HiCommand Storage Management Suite) provides integrated storage resource management, tiered storage and business continuity software solutions allowing customers to align their storage with application requirements based upon metrics including Quality of Service, Service Level Objectives, Recovery Time Objectives and Recovery Point Objectives. Open-standards management interfaces such as SNMP and SMI-S are also supported.

Models

Universal Storage Platform family models.

External links
Hitachi Data Systems Home Page

References

Hitachi storage servers
Computer storage devices
41492679
https://en.wikipedia.org/wiki/DATANET-30
DATANET-30
The DATANET-30 was a computer manufactured by General Electric, designed in 1961-1963 to be used as a communications computer. It was later used as a front-end processor for data communications, becoming the first front-end communications computer. The names on the patent were Don Birmingham, Bob McKenzie, Bud Pine, and Bill Hill. The first free-standing installations, beginning in 1963, were Chrysler Corporation message switching systems replacing Teletype punched tape systems. In 1964, a DATANET-30 acting as a front-end processor, interfaced to a GE-225 computer, was used at Dartmouth College, where a professor developed the BASIC programming language; multiple Teletype units were attached, making it one of the first time-sharing systems. The DATANET-30 used magnetic core memory with a cycle time of 6.94 μs. The word size was 18 bits and memory was available in sizes of 4K, 8K, or 16K words. The system could attach up to 128 asynchronous terminals, nominally at speeds of up to "3000 bits per second" (bps), but usually limited to the 300 bps supported by standard common-carrier facilities of the time, such as the Bell 103 modem. The DATANET-30 could also operate in synchronous mode at speeds up to 2400 bps. A Computer Interface Unit allowed the DATANET-30 to communicate with a GE-200 series computer using direct memory access (DMA). It could also attach to the I/O channel of a GE-400 series or GE-600 series system. An optional attachment allowed the DATANET-30 to attach GE-200 series peripherals such as disk storage, magnetic tape, or a line printer. The system was also a general-purpose computer, with a number of special-purpose hardware registers. The instruction set contained 78 instructions. Assemblers were provided for the DATANET-30, one of which could run on the DATANET itself and one on the GE-225. References External links Photo of DATANET-30 at Computer History Museum Photos of historic GE computers General Electric mainframe computers Transistorized computers Networking hardware Computer-related introductions in 1965
13724289
https://en.wikipedia.org/wiki/MyPhoneExplorer
MyPhoneExplorer
MyPhoneExplorer is a proprietary freeware desktop application for managing Sony Ericsson and Android mobile phones. It was developed in Austria and has been translated into many languages, including English. It is available from multiple download sites and has been downloaded over 1.5 million times. Softpedia has given it their 100% Free award, and visitors to Softpedia have rated MyPhoneExplorer very highly. The Android client has been highly rated as well.

Features

MyPhoneExplorer can connect to a phone using a USB cable, Wi-Fi, Bluetooth or infrared connections. Once connected, address book entries can be synchronised between the phone and MyPhoneExplorer, Microsoft Outlook, Microsoft Outlook Express, Mozilla Thunderbird or Google Mail. Calendar entries can also be synchronised with many systems, including Google Calendar. As with the PC Suite software which is normally shipped with Sony Ericsson mobiles, files can be dragged and dropped to and from the phone's memory and memory stick. Notably, however, MyPhoneExplorer also allows calls to be managed (i.e. dialled and answered) from within the application, and allows SMS text messages to be saved, read, written, and sent directly from a PC. It also provides backup and restore features covering messages, contacts, calendar entries, and files.

Supported phones

MyPhoneExplorer was initially designed for use with Sony Ericsson K700, K750, and K800 mobiles, but FJ Software state that it works with all Sony Ericsson phones which are not Symbian-based. Later versions of the software have support for some Symbian-based Sony Ericsson phones, although some older models are unsupported or require workarounds. Since version 1.8, MyPhoneExplorer has supported all Android phones running Android 1.6 or higher. To establish a connection with an Android phone, the "MyPhoneExplorer Client" app must be installed on the phone from Google Play.

Bundled applications

In the past, MyPhoneExplorer was bundled with MixiDj Toolbar, DoNotTrackMe, and RegClean Pro. During installation of MyPhoneExplorer, users were given an option to opt out of the bundled applications. Since March 2015, the software no longer comes with bundled applications.

Language translation

Text displayed in MyPhoneExplorer (e.g. in column headings, message boxes, etc.) is read from installed language files (e.g. English.lng), thereby allowing translation to other languages. MyPhoneExplorer comes with instructions on how to create new language files, and there are approximately 40 languages for which such language files have been created (as of 14 October 2007).

References

External links
MyPhoneExplorer Download Page
FJ Software Development
FJ Software Support Forum
YouTube introductory video and new developments for MyPhoneExplorer/EccoPro bundle
Home and support base for EccoPro bundled with MyPhoneExplorer

Windows-only freeware
Mobile device management software
29462061
https://en.wikipedia.org/wiki/Virtual%20Storage%20Platform
Virtual Storage Platform
Virtual Storage Platform is the brand name for a Hitachi Data Systems line of computer data storage systems for data centers. Model numbers include G200, G400, G600, G800, G1000, G1500 and G5500 History Hitachi Virtual Storage Platform, also known as VSP was first introduced in September, 2010. This storage platform builds on the design of Universal Storage Platform V, originally released in 2007. Architecture At the heart of the system is the HiStar E-Network, a network crossbar switch matrix. This storage platform is made up of different technologies than USP and USP V. The connectivity to back-end disks is via 6Gbit/s SAS links instead of 4Gbit/s Fibre Channel loop. The internal processors are now Intel multi-core processors, and in addition to 3.5-inch drives support has been added for 2.5 inch small-form factor HDDs. The VSP supports SSD, SAS and SATA drives. Features included: The ability for growth in three ways: Scale up to meet increasing demands by dynamically adding processors, connectivity and capacity in a single unit. This enables tuning the configuration for optimal performance for both open systems and mainframe environments. Scale out to meet demands by dynamically combining multiple units into a single logical system with shared resources. Support increased needs in virtualized server environments and ensure safe multitenancy and quality of service through partitioning of cache and ports. Scale deep to extend the functions of Hitachi Virtual Storage Platform to multivendor storage through virtualization. Offload less-critical data to external tiers in order to optimize the availability of the tier one resources. Supports automated storage tiering, known as Dynamic Tiering, to automate the movement of data between tiers to optimize performance. Front to back cooling airflow for more efficient cooling Improved capacity per square foot and lower power consumption compared to the USP V. Enables virtualization of external SAN storage from Hitachi and other vendors into one pool Supports online local and distance replication and migration of data nondisruptively internally and between heterogeneous storage, without interrupting application I/O through use of products such as Tiered Storage Manager, ShadowImage, TrueCopy and Universal Replicator. Single image global cache accessible across all virtual storage directors for maximum performance. Automated wide-striping of data, which allows pool balancing and lets volume grow or shrink dynamically. The system can scale between one and six 19-inch rack cabinets. It can hold a maximum of 2,048 SAS high-density 2.5-inch drives for 1.2 petabytes of capacity, or 1,280 3.5-inch SATA drives for a maximum capacity of 2.5 petabytes. Supports thin provisioning and storage reclamation on internal and external virtual storage Provides encryption, WORM and data shredding services, data resilience and business continuity services and content management services. Delivering on enterprise demands for real-time customer engagement requires more than a fast storage array. It requires an end-to-end approach to data management that leverages both the storage operating system and flash media, to deliver low latency performance even as data levels grow exponentially. Current all-flash arrays (AFAs) rely on performance management to be handled in the array controller along with all other operations, such as data reduction. As data levels increase, they can cause controller bottlenecks, sporadic response times and a poor customer experience. 
To offset this issue, IT organizations have often limited the number of workloads or the amount of data placed on an individual AFA, which in turn means deploying and supporting more systems, raising costs and increasing management complexity. Hitachi addresses this with its Storage Virtualization Operating System RF (SVOS RF), which powers the Virtual Storage Platform family. SVOS RF, integrated with Hitachi Accelerated Flash (HAF), combines flash-specific optimizations in the operating system with a purpose-built solid-state hardware design, with the aim of providing predictable, sub-millisecond response times and improved data-center efficiency. Hitachi states that the SVOS RF optimizations, backed by more than 350 flash-related patents, accelerate the I/O path to flash devices, improve I/O processing and multithreading support, speed internal data movement, and considerably reduce response times. Claimed benefits include a flash-aware I/O stack, storage virtualization for consolidating investments, business continuity features, adaptive data reduction services, and direct cloud connectivity with predictable ongoing IT costs. Hitachi's FMD (Flash Module Drive) technology was its fastest storage medium before the arrival of NVMe, which provides faster and more reliable performance; it was predicted that 75% of the world's drive technology would move to NVMe by 2022. Running on Hitachi VSP family systems, HAF is marketed as enabling sub-millisecond delivery at petabyte scale.
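The automated tiering (Dynamic Tiering) described in the architecture section above generally works by tracking recent I/O activity per extent of storage and keeping the hottest extents on the fastest tier. The sketch below is a generic illustration of that idea, not Hitachi's actual algorithm.

```python
def assign_tiers(io_counts, tier_capacities):
    """Rank extents by recent I/O count and fill tiers in order, fastest first.
    io_counts: {extent_id: recent I/O count}
    tier_capacities: extent slots per tier, fastest tier first.
    Returns {extent_id: tier_index}."""
    ranked = sorted(io_counts, key=io_counts.get, reverse=True)
    placement, start = {}, 0
    for tier, capacity in enumerate(tier_capacities):
        for extent in ranked[start:start + capacity]:
            placement[extent] = tier
        start += capacity
    return placement

extents = {"e1": 9500, "e2": 120, "e3": 4300, "e4": 15}
print(assign_tiers(extents, tier_capacities=[1, 2, 1]))
# hottest extent lands on tier 0, coldest on tier 2
```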
Specifications

Virtual Storage Platform specifications in 2010 were:
Frames (19-inch racks) - integrated Control Chassis/Disk Chassis Frame (2) and up to 4 optional Disk Chassis Frames
HiStar-E Network - number of grid switches 4 pairs (8)
Aggregate bandwidth - 192 GB/sec
Aggregate IOPS - 5,600,000
Cache memory - 2-16 data cache adapters (DCA) of 2-8 GB each; maximum cache memory 1,024 GB
Control memory - 2-8 control memory modules of 2-4 GB each; maximum control memory 32 GB
Front-end directors (connectivity) - 2-24 directors; 8 or 16 Fibre Channel host ports per director; Fibre Channel port performance 2, 4 or 8 Gbit/s; maximum 192 Fibre Channel host ports; 1,024 virtual host ports per physical port; maximum 192 IBM FICON host ports; maximum 96 IBM FCoE host ports
Logical devices (LUNs), maximum supported - open systems 65,536; IBM z/OS 65,536
Disks - Flash: 200 GB (2.5"), 400 GB (3.5"); SAS: 146, 300, 600 GB (2.5"); SATA II: 2 TB
Number of disks per system (max) - 2.5": 2,048; 3.5": 1,280
Number of spare disks per system (min-max) - 1-256
Maximum internal raw capacity (2 TB disks) - 2.52 PB
RAID 1, 5, 6 support
Maximum internal and external capacity - 255 PB
Maximum usable internal capacity, RAID-5 (7D+1P) - OPEN-V 2,080.8 TB; z/OS 3390M 2,192.2 TB
Maximum usable internal capacity, RAID-6 (6D+2P) - OPEN-V 1,879 TB; z/OS 3390M 1,779.7 TB
Maximum usable internal capacity, RAID-1+0 (2D+2D) - OPEN-V 1,256.6 TB; z/OS 3390M 1,190.2 TB
Virtual Storage Machines - 32 max
Back-end directors - 2-8
Operating system support - Mainframe: IBM z/OS, z/OS.e, OS/390, z/VM, VM/ESA, zVSE, VSE/ESA, MVS/XA, MVS/ESA, TPF, Linux for IBM S/390 and zSeries. Open systems: HP HP-UX, Tru64 UNIX, OpenVMS; IBM AIX; Microsoft Windows Server 2000, 2003, 2008; Novell NetWare, SUSE Linux; Red Hat Enterprise Linux; Oracle Solaris; VMware ESX Server

Storage management

Hitachi Command Suite (formerly Hitachi Storage Command Suite) delivers management along three dimensions in support of the system, as it does for all of the company's systems. 3D management simplifies operations and lowers costs along three distinct dimensions: managing up to scale with the largest infrastructure deployments; managing out with the breadth to manage storage, servers and the IT infrastructure; and managing deep with the integration required to manage the multivendor resources of today's complex data centers. Command Suite provides integrated storage resource management, tiered storage and business continuity software solutions allowing customers to align their storage with application requirements based upon metrics including Quality of Service, Service Level Objectives, Recovery Time Objectives and Recovery Point Objectives. Hitachi Command Suite employs a use-case-driven, step-by-step, wizard-based approach that allows administrators to perform tasks such as new volume provisioning, configuration of external storage, and creation or expansion of storage pools on the fly. Hitachi Command Suite is composed of the following software products:
Hitachi Basic Operating System
Hitachi Dynamic Provisioning
Hitachi Device Manager
Hitachi Dynamic Link Manager Advanced
Hitachi Basic Operating System V
Hitachi Universal Volume Manager
Hitachi Dynamic Tiering
Hitachi Command Director
Hitachi Storage Capacity Reporter, powered by APTARE
Hitachi Tiered Storage Manager
Hitachi Tuning Manager
Hitachi Virtual Server Reporter, powered by APTARE
Hitachi Command Suite also supports management interfaces such as SNMP and SMI-S.
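The usable-capacity figures in the specifications above follow from the parity-group layouts noted in parentheses: a 7D+1P RAID-5 group keeps 7/8 of its raw capacity for data, a 6D+2P RAID-6 group keeps 6/8, and a 2D+2D RAID-1+0 group keeps half. A quick sketch of that arithmetic, using the maximum count of 1,280 two-terabyte disks as an illustration (the published figures come out somewhat lower once spares and formatting overhead are accounted for):

```python
def usable_capacity_tb(total_disks, disk_tb, data_drives, parity_drives):
    """Approximate usable capacity when all disks form identical parity groups
    (ignores spare disks and formatting/metadata overhead)."""
    group_size = data_drives + parity_drives
    groups = total_disks // group_size
    return groups * data_drives * disk_tb

for name, d, p in [("RAID-5 (7D+1P)", 7, 1), ("RAID-6 (6D+2P)", 6, 2), ("RAID-1+0 (2D+2D)", 2, 2)]:
    print(name, usable_capacity_tb(1280, 2, d, p), "TB usable (before overheads)")
```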
References Computer storage devices Hitachi storage servers
13698942
https://en.wikipedia.org/wiki/CNET
CNET
CNET (short for "Computer Network"), stylised C|net, is an American media website that publishes reviews, news, articles, blogs, podcasts, and videos on technology and consumer electronics globally, owned by Red Ventures since 2020. Founded in 1994 by Halsey Minor and Shelby Bonnie, it was the flagship brand of CNET Networks and became a brand of CBS Interactive through that unit's acquisition of CNET Networks in 2008, which was the previous owner prior to October 30, 2020. CNET originally produced content for radio and television in addition to its website and now uses new media distribution methods through its Internet television network, CNET Video, and its podcast and blog networks. In addition, CNET has region-specific and language-specific editions. These include Chinese, French, German, Japanese, Korean, and Spanish. History Origins After leaving PepsiCo, Halsey Minor and Shelby Bonnie launched CNET in 1994, after another website Yahoo! was launched. In 1994, with the help from Fox Network co-founder Kevin Wendle and former Disney creative associate Dan Baker, CNET produced four pilot television programs about computers, technology, and the Internet. CNET TV was composed of CNET Central, The Web, and The New Edge. CNET Central was created first and aired in syndication in the United States on the USA Network. Later, it began airing on USA's sister network Sci-Fi Channel along with The Web and The New Edge. These were later followed by TV.com in 1996. Current American Idol host Ryan Seacrest first came to national prominence at CNET, as the host of The New Edge and doing various voice-over work for CNET. In addition, CNET produced another television technology news program called News.com that aired on CNBC beginning in 1999. From 2001 to 2003, CNET operated CNET Radio on the Clear Channel-owned KNEW (910) in the San Francisco Bay Area, WBPS (890) in Boston, and XM Satellite Radio. CNET Radio offered technology-themed programming. After failing to attract a sufficient audience, CNET Radio ceased operating in January 2003 due to financial losses. Acquisitions and expansions CNET, Inc., the site's owner, made various acquisitions to expand its reach across various web platforms, regions, and markets. In July 1999, CNET, Inc. acquired the Swiss-based company GDT. GDT was later renamed to CNET Channel. In 1998, CNET, Inc. granted the right to Asiacontent.com to set up CNET Asia and the operation was brought back in December 2000. In January 2000, the same time CNET, Inc. became CNET Networks, they acquired comparison shopping site mySimon for $736 million. In October 2000, CNET Networks acquired ZDNet for approximately $1.6 billion. In January 2001, Ziff Davis reached an agreement with CNET Networks to regain the URLs lost in the 2000 sale of Ziff Davis. to SoftBank, a publicly traded Japanese media and technology company. In April 2001, CNET acquired TechRepublic, which provides content for IT professionals from Gartner, for $23 million in cash and stock. In May 2002, CNET Networks acquired Smartshop, an automated product catalog and feature comparison technology company, for an undisclosed amount. On July 14, 2004, CNET Networks announced that it would acquire Webshots, the leading photography website for $70 million ($60 million in cash, $10 million in deferred consideration), completing the acquisition that same month. In October 2007, they sold Webshots to American Greetings for $45 million. 
In August 2005, CNET Networks acquired Metacritic, a review aggregation website, for an undisclosed amount. In December 2006, James Kim, an editor at CNET, died in the Oregon wilderness. CNET hosted a memorial show and podcasts dedicated to him. On March 1, 2007, CNET announced the public launch of BNET, a website targeted towards business managers. BNET had been running under beta status since 2005. On May 15, 2008, it was announced that CBS Corporation would buy CNET Networks for US$1.8 billion. On June 30, 2008, the acquisition was completed. Former CNET Networks properties were managed under CBS Interactive at the time. CBS Interactive acquired many domain names originally created by CNET Networks, including download.com, downloads.com, upload.com, news.com, search.com, TV.com, mp3.com, chat.com, computers.com, shopper.com, com.com, and cnet.com. It also held radio.com until CBS Radio was sold to Entercom in 2017. On September 19, 2013, CBS Interactive launched a Spanish-language sister site under the name CNET en Español. It focused on topics of relevance primarily to Spanish-speaking technology enthusiasts. The site offered a "new perspective" on technology and was under the leadership of managing editor Gabriel Sama. The site not only offered news and tutorials, but also had a robust reviews section that was led by Juan Garzon. After Red Ventures' acquisition, the company announced the closing of CNET en Español on November 11, 2020, shutting down what had been the largest Spanish-language tech site in the US market. In March 2014, CNET refreshed its site, merging with CNET UK and vowing to fold all of its regional editions into a single unified outlet. This merge brought many changes, foremost of which were a new user interface and the renaming of CNET TV as CNET Video. On September 14, 2020, ViacomCBS announced that it would sell CNET to Red Ventures for $500 million. The transaction was completed on October 30, 2020.

In popular culture

In a season 1 episode of Modern Family, main characters Phil and Claire argue about technology, leading them to discuss a CNET article.

Gamecenter

CNET launched a website to cover video games, CNET Gamecenter, in the middle of 1996. According to the San Francisco Chronicle, it was "one of the first Web sites devoted to computer gaming news". It became a leading game-focused website; in 1999, PC Magazine named it one of the hundred-best websites in any field, alongside competitors IGN and GameSpot. According to Gamecenter head Michael Brown, the site received between 50,000 and 75,000 daily visitors by late 2000. In May 2000, CNET founded the Gamecenter Alliance network to bring Gamecenter and four partner websites, including Inside Mac Games, under one banner. Nielsen//NetRatings ranked Gamecenter the sixth-most-popular gaming website in the United States by mid-2000. On July 19, 2000, CNET, Inc. made public its plan to buy Ziff-Davis and its ZDNet Internet business for $1.6 billion. Because ZDNet had partnered with SpotMedia (parent company of GameSpot) in late 1996, the acquisition brought both GameSpot and Gamecenter under CNET, Inc.'s ownership. Later that year, The New York Times described the two publications as the "Time and Newsweek of gaming sites". The paper reported that Gamecenter "seem[ed] to be thriving" amid the dot-com crash, with its revenue distributed across online advertising and an affiliate sales program with CNET's Game Shopper website, launched in late 1999.
Following an almost $400 million loss at CNET as a result of the dot-com crash, the company ended the Gamecenter Alliance network in January 2001. On February 7, Gamecenter itself was closed in a redundancy reduction effort, as GameSpot was the more successful of the two sites. Around 190 jobs were cut from CNET during this period, including "at least 20" at Gamecenter, according to the San Francisco Chronicle. Discussing the situation, Tom Bramwell of Eurogamer reported, "It is thought[...] that very few if any of the website's staff will move sideways into jobs at GameSpot, now the company's other gaming asset." The Washington Post later noted that Gamecenter was among the "popular video-game news sites" to close in 2001, alongside Daily Radar. Malware infection in downloads With a catalog of more than 400,000 titles, the Downloads section of the website allows users to download popular software. CNETs download.com provides Windows, Macintosh, and mobile software for download. CNET claims that this software is free of spyware, but independent sources have confirmed that this is not the case. While Download.com is overall a safe place to download programs, precautions should be taken before downloading from the site, as some downloads do contain malware. Dispute with Snap Technologies In 1998, CNET, Inc. was sued by Snap Technologies, operators of the education service CollegeEdge, for trademark infringement relating to CNET, Inc.'s ownership of the domain name Snap.com, due to Snap Technologies already owning a trademark on its name. In 2005, Google representatives refused to be interviewed by all CNET reporters for a year after CNET published Google's CEO Eric Schmidt's salary and named the neighborhood where he lives, as well as some of his hobbies and political donations. All the information had been gleaned from Google searches. On October 10, 2006, Shelby Bonnie resigned as chairman and CEO, in addition to two other executives, as a result of a stock options backdating scandal that occurred between 1996 and 2003. This would also cause the firm to restate its financial earnings over 1996 to 2003 for over $105 million in resulting expenses. The Securities and Exchange Commission later dropped an investigation into the practice. Neil Ashe was named as the new CEO. In 2011, CNET and CBS Interactive were sued by a coalition of artists (led by FilmOn founder Alki David) for copyright infringement by promoting the download of LimeWire, a popular peer to peer downloading software. Although the original suit was voluntarily dropped by Alki David, he vowed to sue at a later date to bring "expanded" action against CBS Interactive. In November 2011, another lawsuit against CBS Interactive was introduced, claiming that CNET and CBS Interactive knowingly distributed LimeWire, the file sharing software. Hopper controversy In January 2013, CNET named Dish Network's "Hopper with Sling" digital video recorder as a nominee for the CES "Best in Show" award (which is decided by CNET on behalf of its organizers), and named it the winner in a vote by the site's staff. However, CBS abruptly disqualified the Hopper, and vetoed the results because the company was in active litigation with Dish Network. CNET also announced that it could no longer review any product or service provided by companies that CBS are in litigation with (which also includes Aereo). The new vote subsequently gave the Best in Show award to the Razer Edge tablet instead. 
Dish Network's CEO Joe Clayton said that the company was "saddened that CNETs staff is being denied its editorial independence because of CBS' heavy-handed tactics." On January 14, 2013, editor-in-chief Lindsey Turrentine addressed the situation, stating that CNETs staff were in an "impossible" situation due to the conflict of interest posed by the situation, and promised that she would do everything within her power to prevent a similar incident from occurring again. The conflict also prompted one CNET senior writer, Greg Sandoval, to resign. The decision also drew the ire of staff from the Consumer Electronics Association, the organizers of CES; CEO Gary J. Shapiro criticized the decision in a USA Today op-ed column and a statement by the CEA, stating that "making television easier to watch is not against the law. It is simply pro-innovation and pro-consumer." Shapiro felt that the decision also hurt the confidence of CNETs readers and staff, "destroying its reputation for editorial integrity in an attempt to eliminate a new market competitor." As a result of the controversy and fearing damage to the show's brand, the CEA announced on January 31, 2013 that CNET will no longer decide the CES Best in Show award winner due to the interference of CBS (the position has been offered to other technology publications), and the "Best in Show" award was jointly awarded to both the Hopper with Sling and Razer Edge. Sections Reviews The reviews section of the site is the largest part of the site, and generates over 4,300 product and software reviews per year. The Reviews section also features Editors' Choice Awards, which recognize products that are particularly innovative and of the highest quality. News CNET News (formerly known as News.com), launched in 1996, is a news website dedicated to technology. CNET News received the National Magazine Award for General Excellence Online. Content is created by both CNET and external media agencies as news articles and blogs. Video CNET Video, formerly called CNET TV, is CNET's Internet video channel offering a selection of on-demand video content including video reviews, "first looks," and special features. CNET editors such as Brian Cooley, Jeff Bakalar, and Bridget Carey host shows like Car Tech, The 404 Show, Quick Tips, CNET Top 5, Update, video prizefights, and others, as well as special reports and reviews. On April 12, 2007, CNET Video aired its first episode of CNET Live, hosted by Brian Cooley and Tom Merritt. The first episode featured Justin Kan of justin.tv. How To Officially launched August 2011, How To is the learning area of CNET providing tutorials, guides, and tips for technology users. Daily Charge CNET operates a weekday morning show called Daily Charge interviewing the authors of its articles and streams on Megaphone, iTunes, Spotify, Google Podcasts and Stitcher. CNET Forums CNET operated a discussion forum for discussion about technology and computers. After operating for over two decades, initially as "CNET Message Boards", it was made read-only in December 2020 and shut down in early 2021. Device specifications and user reviews CNET featured a repository of device specifications including monitors, computer parts, televisions, DVD/BluRay disc players, and other multimedia appliances. Users were able to submit comments. This section was spontaneously shut down in September 2021. 
See also ZDNet TechRepublic TechCrunch TechRadar Wired References External links American technology news websites Webby Award winners Former CBS Interactive websites Internet properties established in 1994 2008 mergers and acquisitions 2020 mergers and acquisitions Red Ventures
22751659
https://en.wikipedia.org/wiki/Fun%20School
Fun School
Fun School is a series of educational packages developed and published in the United Kingdom by Europress Software, initially as Database Educational Software. The original Fun School titles were sold mostly by mail order via off-the-page adverts in the magazines owned by Database Publications. A decision was made to create a new set of programs, call the range Fun School 2, and package them more professionally so they could be sold in computer stores around the UK. Every game comes as a set of three versions, each catering for a specific age range. Fun School 1 Fun School 1 is the first set of educational games, created in 1984 by Database Educational Software for the Acorn Electron and BBC Micro computers. The three individual games catered for children aged under 6 years, between 6 and 8 years and over 8 years respectively. They also include five children's nursery rhymes. The products were tested in classrooms and were educationally approved. Fun School 2 Fun School 2 is the second set of educational games, created in 1989 by Database Educational Software. It was released on more computers than its predecessor, including Acorn Electron, BBC Micro, ZX Spectrum, Commodore 64, Amstrad CPC, Atari ST, Amiga, MS-DOS and RISC OS. The three individual games catered for children aged under 6 years, between 6 and 8 years and over 8 years respectively. The Fun School 2 games were programmed using the STOS (derived from BASIC) programming language with the STOS Compiler Engine. Fun School 2 was reviewed as "The number one choice for our school" by Shelley Gibson. Fun School 2 was rated 3rd place in the "Gallup full-price software chart". Commodore Force rated Fun School 2 for Under 6 Years as #43, Fun School 2 Ages 6–8 as #36 and Fun School 2 Over 8 Years as number 10 in rankings of the top 100 Commodore 64 games of 1993. Despite its popularity among children, Fun School 2 was criticised by left-wing educationalists due to a competition element, and the matter was brought to British MP Kenneth Baker. Fun School 3 Fun School 3 is the third set of educational games, created in 1990 by Database Educational Software and released for the ZX Spectrum, BBC Micro, Commodore 64, Amstrad CPC, Amstrad PCW, Atari ST, Amiga, Amiga CD32, MS-DOS and RISC OS computers. The three individual games catered for children aged under 5 years, between 5 and 7 years and over 7 years respectively. The games and their age ranges took full account of the new National Curriculum and the school syllabus content at the time. The Fun School 3 games were developed using the STOS (derived from BASIC) programming language with the STOS Compiler Engine. The Amiga version was converted to AMOS using the AMOS Compiler by William Cochrane and Peter Hickman. The Amiga version was hosted on the "Commodore 1990 Christmas" talk show along with AMOS 3D. The Amstrad PCW version won the European Computer Leisure Award as "Best Home Education Package" and also received the 8000 Plus Seal of Approval. Fun School 4 Fun School 4 is the fourth set of educational games, created in 1992 by Europress Software (formerly called Database Educational Software) and released on the ZX Spectrum, Amstrad CPC, Commodore 64, Atari ST, Amiga, MS-DOS and RISC OS computers. The three individual games catered for children aged under 5 years, between 5 and 7 years and between 7 and 11 years respectively. The content of the games matched the educational material taught in schools of England and Wales in accordance with the National Curriculum. 
During the planning stages, an education competition was held by ST Format, in which the best entries were incorporated into the game. The Amiga versions of the Fun School 4 games were mostly written in AMOS using the AMOS Compiler engine. TimeTable and Exchange Rates were written in assembler, primarily due to the complexity of these two games and the need to keep performance at an acceptable level. Fun School Specials Fun School Specials is a set of educational games, created in 1993 by Europress Software, consisting of four different games. In response to demand, Europress designed each game around a single major topic (spelling, maths, creativity and science, respectively) to add depth and comply fully with the National Curriculum. Paint and Create Paint and Create was released on Commodore 64, Amiga and MS-DOS computers and has a simple interface divided into six activities that let younger audiences create their own artwork. Paint and Create received good review scores, including 91% from Commodore Format and 94% from CU Amiga magazine. It was also awarded the Screenstar from Amiga Reviews. Spelling Fair Spelling Fair was released on Commodore 64, Amiga and MS-DOS computers. Merlin's Maths Merlin's Maths was released on Amiga and MS-DOS computers. Merlin's Maths teaches mathematics on the topics of counting, decimals, fractions and volumes within six activities. Young Scientist Young Scientist was created in 1995 and released on CD for Windows and Macintosh to teach science in depth. The game stars the main character Ozzie Otter and has up to forty scientific experiments to try out. Fun School 5 Fun School 5 is the fifth set of educational games, released in 1995 by Europress Software on Windows. The games were originally planned to be released in 1993 with the age ranges 'Under 5s', '5s to 7s' and '7s to 11s'. However, there was a delay due to the development of the subject-specific Fun School Specials. The games were written using DOS 4GW, and early versions had problems with some video drivers, forcing Europress to recall the entire stock before releasing revised versions. The three individual games catered for children aged between 4 and 7 years, between 6 and 9 years and between 8 and 11 years respectively, and had their own specific themes with a goal to complete the game. The games introduced two children, Suki and Rik, and their pet purple dinosaur, Gloopy. The player has to assist Gloopy and the children in solving a number of challenges. Fun School 6 Fun School 6 is the sixth set of educational games, created in 1996 by Europress Software and released on Windows. The three individual games catered for children aged between 4 and 7 years, between 6 and 9 years and between 8 and 11 years respectively, and had their own specific themes, but each of the five topics remained in the same category with certain variations related to the age level. The games star Gloopy from Fun School 5, this time a pink dinosaur. Fun School 7 Fun School 7 is the seventh and final set of educational games, created in 1998 by CBL Technology and released on Windows. The three individual games catered for children aged between 4 and 7 years, between 6 and 9 years and between 8 and 11 years respectively. The game makes use of 3D graphics. Commercial performance Before 1989, the educational market was dwindling, and the release of "Fun School 2" was an outstanding success. The games sold over 60,000 copies by February, and by this time a German Amiga package was developed. 
By April, the games had sold over 100,000 copies. By August 1990, over 150,000 copies had been sold (including 30,000 Amstrad CPC copies). By December, during the development of "Fun School 3", 250,000 copies of the games had been sold. Before the BBC Micro and PC versions were released, "Fun School 3" had already sold 45,000 copies on other formats. By the time "Fun School 4" was in development, Europress had sold 300,000 copies of its Fun School products, rising to 400,000 copies by April. By 1992, over 500,000 copies of the Fun School range products had been sold. By 1993, over 650,000 Fun School packages had been sold. By the time Fun School 5 was released, over 800,000 Fun School packages had been sold and the series had become an international bestseller. By the release of "Fun School 6", around 1,500,000 copies of the Fun School range had been sold. When "Fun School 7" was released, 2 million copies of the Fun School range had been sold. References External links History of Fun School, Fun School 2 and Fun School 3 Children's educational video games Video game franchises Video games developed in the United Kingdom Video game franchises introduced in 1986 Acorn Archimedes games Amiga games Amstrad CPC games Amstrad PCW games Atari ST games BBC Micro and Acorn Electron games Amiga CD32 games Commodore 64 games DOS games Windows games ZX Spectrum games
38724925
https://en.wikipedia.org/wiki/Mir%20%28software%29
Mir (software)
Mir is a computer display server and, more recently, a Wayland compositor for the Linux operating system that is under development by Canonical Ltd. It was planned to replace the X Window System then used by Ubuntu; however, the plan changed and Mutter was adopted instead as part of GNOME Shell. Mir was announced by Canonical on 4 March 2013 as part of the development of Unity 8, intended as the next generation of the Unity user interface. Four years later, Unity 8 was dropped, although Mir's development continued for Internet of Things (IoT) applications. Software architecture Mir is built on EGL and uses some of the infrastructure originally developed for Wayland, such as Mesa's EGL implementation and Jolla's libhybris. The compatibility layer for X, XMir, is based on XWayland. Other parts of the infrastructure used by Mir originate from Android. These parts include Google's Protocol Buffers, and previously included Android's input stack, which was replaced by Wayland's libinput before the end of 2015. An implementation detail in memory management shared with Android is the use of server-allocated buffers, which Canonical employee Christopher Halse Rogers claims to be a requirement for "the ARM world and Android graphics stack". According to Ryan Paul of Ars Technica, it has basic Wayland support. Adoption The only announced desktop environment with native support for Mir was Canonical's Unity 8. No other Linux distribution announced plans to adopt Mir as its default display server. On 23 July 2013, Compiz developer Sam Spilsbury announced a proof-of-concept port of XBMC to Mir, based on the previous proof-of-concept port of XBMC to Wayland. On the same day, Canonical developer Oliver Ries confirmed that "this is the first native Mir client out in the wild". Among Ubuntu derivatives using a non-Unity environment, Xubuntu developers announced in early August 2013 that they would evaluate running Xfce via XMir, but three weeks later decided to refrain from adopting it. Ubuntu In June 2013, Canonical's publicly announced milestones for Mir development were to ship Unity 7 with XMir by default and a pure X11 fallback mode with Ubuntu 13.10, remove the X11 fallback with Ubuntu 14.04 LTS, and have Unity 8 running natively on Mir by Ubuntu 14.10. Later, Canonical announced a postponement of its Mir plans for desktop use and said it would not use XMir as the default in Ubuntu 13.10. Ubuntu Touch, however, was targeted to ship with Mir and a smartphone version of Unity 8. In May 2016, during his traditional video interview with the community held during the Ubuntu Online Summit, Mark Shuttleworth confirmed that "You will be able to get 16.10 with Unity 8, just like you can get 16.04 with MATE, or KDE, or GNOME. It'll be there, it'll be an option, and the team that's working on that is committed to making that a first-class option." On 5 April 2017, Canonical announced that with the release of Ubuntu 18.04 LTS, the Unity 8 interface would be abandoned in favor of GNOME. When asked if the decision would also mean the end of Mir development, Canonical's Michael Hall said that given the divergent development paths taken by Mir and its competitor, Wayland, "Using Mir simply isn't an option we have." However, Mark Shuttleworth clarified on 8 April 2017 that development would continue for Mir's use in Internet of Things (IoT) applications, stating: "we have lots of IoT projects using Mir as a compositor so that code continues to receive investment." 
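As a rough illustration of how the layers described above can coexist on one desktop, the sketch below shows how a launcher script might guess which display server a session is using by inspecting environment variables. WAYLAND_DISPLAY and DISPLAY are the conventional markers for Wayland and X11 sessions; treating MIR_SOCKET as the marker for a native Mir session is an assumption made for this example, and the snippet is not taken from Mir's own code.

```python
import os

def detect_display_server() -> str:
    """Best-effort guess at the active display server for this session."""
    if os.environ.get("MIR_SOCKET"):        # assumed marker for a native Mir session
        return "mir"
    if os.environ.get("WAYLAND_DISPLAY"):   # set for Wayland sessions
        return "wayland"
    if os.environ.get("DISPLAY"):           # set for X11, including XMir/XWayland bridges
        return "x11"
    return "unknown"

if __name__ == "__main__":
    print("display server:", detect_display_server())
```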
Toolkits SDL supported both Mir and Wayland starting with SDL 2.0.2 but it was disabled by default. Wayland and Mir support was enabled by default starting with SDL 2.0.4. With the release of 2.0.10, Mir support was dropped in favor of Wayland. GTK 3.16 included an experimental Mir backend, but was removed in GTK 4. Qt5 is the official and supported toolkit for Unity8 and Ubuntu Touch, included in the Ubuntu SDK. Controversy In March 2013, Canonical Ltd. announced Mir as the replacement display server for the X.Org Server in Ubuntu. Previously, in 2010, it had announced that it would use Wayland. Canonical stated that it could not meet Ubuntu's needs with Wayland. There were several posts made in objection or clarification, by people leading other similar or affected projects. When originally announcing Mir, Canonical made various claims about Wayland's input system, which the Wayland developers quickly rebutted. Official Canonical documentation in 2014 states, "our evaluation of the protocol definition revealed that the Wayland protocol does not meet our requirements. First, we are aiming for a more extensible input event handling that takes future developments like 3D input devices (e.g. Leap Motion) into account ... With respect to mobile use-cases, we think that the handling of input methods should be reflected in the display server protocol, too. As another example, we consider the shell integration parts of the protocol as privileged and we'd rather avoid having any sort of shell behavior defined in the client facing protocol." In late 2015 Mir switched from a custom Android-derived input stack to Wayland's libinput. Long-time Linux kernel developer Matthew Garrett criticized the choice of licensing for Canonical's software projects, particularly Mir. Unlike X.Org Server and Wayland, both under the MIT License, Mir is licensed under GPLv3 – "an odd [choice]" for "GPLv3-hostile markets" – but contributors are required to sign an agreement that "grants Canonical the right to relicense your contribution under their choice of license. This means that, despite not being the sole copyright holder, Canonical are free to relicense your code under a proprietary license." He concludes that this creates asymmetry where "you end up with a situation that looks awfully like Canonical wanting to squash competition by making it impossible for anyone else to sell modified versions of Canonical's software in the same market." Garrett's concerns were echoed by Bradley M. Kuhn, Executive Director of the Software Freedom Conservancy. Richard Stallman of the Free Software Foundation has stated on the similar case of MySQL that he supports dual-licensing of GPL software, as long as there are no proprietary extensions or proprietary versions of the free program, which was not the case for MySQL. In June 2013, Jonathan Riddell of Kubuntu announced that Kubuntu did not plan to switch to Mir. He stated "A few months ago Canonical announced their new graphics system for Ubuntu, Mir. It's a shame the Linux desktop market hasn't taken off as we all hoped at the turn of the millennium and they feel the need to follow a more Apple or Android style of approach making an OS which works in isolation rather than as part of a community development method. Here at Kubuntu we still want to work as part of the community development, taking the fine software from KDE and other upstream projects and putting it on computers worldwide. So when Ubuntu desktop gets switched to Mir we won't be following. 
We'll be staying with X on the images for our 13.10 release now in development and the 14.04 LTS release next year. After that we hope to switch to Wayland which is what KDE and every other Linux distro hopes to do." In September 2013, an Intel developer removed XMir support from their video driver and wrote "We do not condone or support Canonical in the course of action they have chosen, and will not carry XMir patches upstream." See also List of display servers References Canonical (company) Display servers Free software programmed in C++ Free windowing systems Ubuntu
1794544
https://en.wikipedia.org/wiki/Graphics%20software
Graphics software
In computer graphics, graphics software refers to a program or collection of programs that enable a person to manipulate images or models visually on a computer. Computer graphics can be classified into distinct categories: raster graphics and vector graphics, with further 2D and 3D variants. Many graphics programs focus exclusively on either vector or raster graphics, but there are a few that operate on both. It is simple to convert from vector graphics to raster graphics, but going the other way is harder; some software attempts to do this. In addition to software for static graphics, there are also animation and video editing programs. Different types of software are often designed to edit different types of graphics such as video, photos, and vector-based drawings. The exact sources of graphics may vary for different tasks, but most programs can read and write files. Most graphics programs have the ability to import and export one or more graphics file formats, including those formats written for a particular computer graphics program. A swatch is a palette of active colours that can be selected and rearranged according to the user's preference. A swatch may be used within a program or be part of the universal palette on an operating system. It is used to change the colour of text or images, and in video editing. Vector graphics animation can be described as a series of mathematical transformations that are applied in sequence to one or more shapes in a scene. Raster graphics animation works in a similar fashion to film-based animation, where a series of still images produces the illusion of continuous movement. This software enables the user to create illustrations, designs, logos, 3-dimensional images, animation and pictures. History SuperPaint (1973) was one of the earliest graphics software applications. Fauve Matisse (later Macromedia xRes) was a pioneering program of the early 1990s, notably introducing layers in consumer software. Currently, Adobe Photoshop is one of the most used and best-known graphics programs, particularly in the Americas; it drove the creation of custom hardware solutions in the early 1990s, but was initially subject to various litigation. GIMP is a popular open-source alternative to Adobe Photoshop. See also Comparison of raster graphics editors Comparison of vector graphics editors List of raster graphics editors Graphic art software Image morphing software Image conversion imc FAMOS (1987), graphical data analysis Raster graphics editor Vector graphics editor References
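To make the vector-animation idea described above concrete, the following minimal sketch treats an animation as a sequence of mathematical transformations applied to a shape's vertex coordinates; rasterising each resulting frame into pixels would then correspond to the raster side of the pipeline. The shape, frame count and rotation step are arbitrary values chosen only for illustration.

```python
import math

def rotate(points, angle):
    """Apply a 2D rotation (one of the 'mathematical transformations') to each vertex."""
    c, s = math.cos(angle), math.sin(angle)
    return [(c * x - s * y, s * x + c * y) for x, y in points]

# A unit square described as vector data (vertices), not pixels.
square = [(1.0, 1.0), (-1.0, 1.0), (-1.0, -1.0), (1.0, -1.0)]

# A vector animation: the same shape transformed a little more on every frame.
for frame in range(4):
    frame_shape = rotate(square, frame * math.pi / 8)
    print(f"frame {frame}:", [(round(x, 2), round(y, 2)) for x, y in frame_shape])
```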
28686400
https://en.wikipedia.org/wiki/Kill%20Pill
Kill Pill
In computing, kill pill is a term given to mechanisms and technologies designed to render systems useless either by user command, or under a predefined set of circumstances. Kill pill technology is most commonly used to disable lost or stolen devices for security purposes, but can also be used for the enforcement of rules and contractual obligations. Applications Lost and stolen devices Kill pill technology is used prominently in smartphones, especially in the disablement of lost or stolen devices. A notable example is Find My iPhone, a service that allows the user to password-protect or wipe their iDevice(s) remotely, aiding in the protection of private data. Similar applications exist for other smartphone operating systems, including Android, BlackBerry, and Windows Phone. Anti-piracy measure Kill pill technology has been notably used as an anti-piracy measure. Windows Vista was released with the ability to severely limit its own functionality if it was determined that the copy was obtained through piracy. The feature was later dropped after complaints that false positives caused genuine copies of Vista to act as though they were pirated. Removal of malicious software The concept of a kill pill is also applied to the remote removal by a server of malicious files or applications from a client's system. Such technology is a standard component of most handheld computing devices, mainly due to their generally more limited operating systems and means of obtaining applications. Such functionality is also reportedly available to applications downloaded from the Windows Store on Windows 8 operating systems. Vehicles Kill pill technology is used frequently in vehicles for a variety of reasons. Remote vehicle disablement can be used to prevent a vehicle from starting, to prevent it from moving, or to stop its continued operation. Non-remotely, vehicles can require driver recognition before starting or moving, such as asking for a password or some form of biometrics from the driver. Kill pill technology is often used by governments to prevent drunk driving by repeat offenders, as a punishment and deterrent. The installation of an ignition interlock device is a sentencing alternative for drunk drivers in almost all 50 US states. Such a device requires the driver to blow into a breathalyzer before starting the vehicle. If the driver is found to be over the legal blood alcohol content limit, the vehicle will not start. Other uses Kill pill technology can also be implemented to contextually disable certain aspects of a smartphone's functionality. A patent obtained by Apple claims the ability to disable the antenna, screen, or camera of a smartphone in settings like theaters, schools, and areas of high security sensitivity. Criticism Kill pill technology has been criticized for enabling the suppression of personal liberties. While a kill pill can be utilized in a school setting to prevent academic dishonesty, it has been suggested that governments may also use it to suppress their people, for example, by disabling a phone's camera or antenna in the area of a protest. The ability to remotely remove files and applications from a user's device has also come under fire. Apple's apparent ability to blacklist applications, rendering them unusable on any iDevice, has raised concerns about the user's rights when downloading from the App Store. As of July 2014, no applications appear on Apple's blacklist website. 
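The remote-disablement mechanisms described above generally reduce to a device-side agent that periodically checks a management server for a "kill" order and then locks or wipes the device. The sketch below is a minimal, hypothetical illustration of that pattern; the endpoint URL, JSON field and disable action are all invented for the example and do not correspond to any real vendor's API.

```python
import json
import time
import urllib.request

KILL_ENDPOINT = "https://mdm.example.com/v1/devices/{device_id}/kill"  # hypothetical endpoint

def kill_requested(device_id: str) -> bool:
    """Ask the (hypothetical) management server whether this device should be disabled."""
    url = KILL_ENDPOINT.format(device_id=device_id)
    with urllib.request.urlopen(url, timeout=10) as resp:
        return bool(json.load(resp).get("kill", False))

def disable_device() -> None:
    """Placeholder for the platform-specific action: remote lock, data wipe, etc."""
    print("Kill pill received: locking device and wiping user data")

def agent_loop(device_id: str, poll_seconds: int = 300) -> None:
    """Device-side agent: poll until a kill order arrives, then disable and stop."""
    while True:
        try:
            if kill_requested(device_id):
                disable_device()
                return
        except OSError:
            pass  # network failure: keep the device usable and retry on the next cycle
        time.sleep(poll_seconds)
```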
See also Kill switch References External links Apple's blacklist of applications Computer security
4016710
https://en.wikipedia.org/wiki/Software%20project%20management
Software project management
Software project management is an art and science of planning and leading software projects. It is a sub-discipline of project management in which software projects are planned, implemented, monitored and controlled. History In the 1970s and 1980s, the software industry grew very quickly, as computer companies quickly recognized the relatively low cost of software production compared to hardware production and circuitry. To manage new development efforts, companies applied the established project management methods, but project schedules slipped during test runs, especially when confusion occurred in the gray zone between the user specifications and the delivered software. To be able to avoid these problems, software project management methods focused on matching user requirements to delivered products, in a method known now as the waterfall model. As the industry has matured, analysis of software project management failures has shown that the following are the most common causes: Insufficient end-user involvement Poor communication among customers, developers, users and project managers Unrealistic or unarticulated project goals Inaccurate estimates of needed resources Badly defined or incomplete system requirements and specifications Poor reporting of the project's status Poorly managed risks Use of immature technology Inability to handle the project's complexity Sloppy development practices Stakeholder politics (e.g. absence of executive support, or politics between the customer and end-users) Commercial pressures The first five items in the list above show the difficulties articulating the needs of the client in such a way that proper resources can deliver the proper project goals. Specific software project management tools are useful and often necessary, but the true art in software project management is applying the correct method and then using tools to support the method. Without a method, tools are worthless. Since the 1960s, several proprietary software project management methods have been developed by software manufacturers for their own use, while computer consulting firms have also developed similar methods for their clients. Today software project management methods are still evolving, but the current trend leads away from the waterfall model to a more cyclic project delivery model that imitates a software development process. Software development process A software development process is concerned primarily with the production aspect of software development, as opposed to the technical aspect, such as software tools. These processes exist primarily for supporting the management of software development, and are generally skewed toward addressing business concerns. Many software development processes can be run in a similar way to general project management processes. Examples are: Interpersonal communication and conflict management and resolution. Active, frequent and honest communication is the most important factor in increasing the likelihood of project success and mitigating problematic projects. The development team should seek end-user involvement and encourage user input in the development process. Not having users involved can lead to misinterpretation of requirements, insensitivity to changing customer needs, and unrealistic expectations on the part of the client. Software developers, users, project managers, customers and project sponsors need to communicate regularly and frequently. 
The information gained from these discussions allows the project team to analyze the strengths, weaknesses, opportunities and threats (SWOT) and to act on that information to benefit from opportunities and to minimize threats. Even bad news may be good if it is communicated relatively early, because problems can be mitigated if they are not discovered too late. For example, casual conversation with users, team members, and other stakeholders may often surface potential problems sooner than formal meetings. All communications need to be intellectually honest and authentic, and regular, frequent, high-quality criticism of development work is necessary, as long as it is provided in a calm, respectful, constructive, non-accusatory, non-angry fashion. Frequent casual communications between developers and end-users, and between project managers and clients, are necessary to keep the project relevant, useful and effective for the end-users, and within the bounds of what can be completed. Effective interpersonal communication and conflict management and resolution are the key to software project management. No methodology or process improvement strategy can overcome serious problems in communication or mismanagement of interpersonal conflict. Moreover, outcomes associated with such methodologies and process improvement strategies are enhanced with better communication. The communication must focus on whether the team understands the project charter and whether the team is making progress towards that goal. End-users, software developers and project managers must frequently ask the elementary, simple questions that help identify problems before they fester into near-disasters. While end-user participation, effective communication and teamwork are not sufficient, they are necessary to ensure a good outcome, and their absence will almost surely lead to a bad outcome. Risk management is the process of measuring or assessing risk and then developing strategies to manage the risk. In general, the strategies employed include transferring the risk to another party, avoiding the risk, reducing the negative effect of the risk, and accepting some or all of the consequences of a particular risk. Risk management in software project management begins with the business case for starting the project, which includes a cost-benefit analysis as well as a list of fallback options for project failure, called a contingency plan. A subset of risk management is opportunity management, which means the same thing, except that the potential risk outcome will have a positive, rather than a negative, impact. Though theoretically handled in the same way, using the term "opportunity" rather than the somewhat negative term "risk" helps to keep a team focused on possible positive outcomes of any given risk register in their projects, such as spin-off projects, windfalls, and free extra resources. Requirements management is the process of identifying, eliciting, documenting, analyzing, tracing, prioritizing and agreeing on requirements for a new or altered computer system, and then controlling change and communicating to relevant stakeholders. Requirements management, which includes requirements analysis, is an important part of the software engineering process, whereby business analysts or software developers identify the needs or requirements of a client; having identified these requirements, they are then in a position to design a solution. 
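As a small illustration of the risk management process described above, the sketch below models a risk register in which each risk is scored by probability and impact and mapped to one of the four response strategies mentioned (transfer, avoid, reduce, accept). The scoring scale and the example entries are invented for illustration rather than drawn from any particular methodology.

```python
from dataclasses import dataclass

STRATEGIES = ("transfer", "avoid", "reduce", "accept")

@dataclass
class Risk:
    description: str
    probability: float   # likelihood of the risk materialising, 0.0 to 1.0
    impact: float        # cost in arbitrary units if it does materialise
    strategy: str        # chosen response, one of STRATEGIES

    @property
    def exposure(self) -> float:
        """Simple expected-loss score used to rank risks in the register."""
        return self.probability * self.impact

register = [
    Risk("Key developer leaves mid-project", 0.2, 80.0, "reduce"),
    Risk("Third-party API is discontinued", 0.1, 50.0, "transfer"),
    Risk("Requirements change late in the schedule", 0.5, 40.0, "accept"),
]

# Review the register with the highest-exposure risks first.
for risk in sorted(register, key=lambda r: r.exposure, reverse=True):
    assert risk.strategy in STRATEGIES
    print(f"{risk.exposure:5.1f}  {risk.strategy:8}  {risk.description}")
```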
Change management is the process of identifying, documenting, analyzing, prioritizing and agreeing on changes to project scope and then controlling changes and communicating to relevant stakeholders. Change impact analysis of new or altered scope, which includes requirements analysis at the change level, is an important part of the software engineering process, whereby business analysts or software developers identify the altered needs or requirements of a client; having identified these requirements, they are then in a position to re-design or modify a solution. Theoretically, each change can impact the timeline and budget of a software project, and therefore by definition must include a risk-benefit analysis before approval. Software configuration management is the process of identifying and documenting the scope itself, which is the software product underway, including all sub-products and changes, and enabling communication of these to relevant stakeholders. In general, the processes employed include version control, naming conventions, and software archival agreements. Release management is the process of identifying, documenting, prioritizing and agreeing on releases of software and then controlling the release schedule and communicating to relevant stakeholders. Most software projects have access to three software environments to which software can be released: Development, Test, and Production. In very large projects, where distributed teams need to integrate their work before releasing to users, there will often be more testing environments, used for unit testing, system testing, or integration testing, before release to user acceptance testing (UAT). A subset of release management that is gaining attention is data management, since users can only test based on data that they know, and "real" data is only in the software environment called "production". In order to test their work, programmers must therefore also often create "dummy data" or "data stubs". Traditionally, older versions of a production system were once used for this purpose, but as companies rely more and more on outside contributors for software development, company data may not be released to development teams. In complex environments, datasets may be created that are then migrated across test environments according to a test release schedule, much like the overall software release schedule. Maintenance and update is the process in which requirements and customer needs are always evolving. Users will undoubtedly find bugs, may request new features, and may ask for different functionality and further updates. All of these requests need to be checked and fulfilled to meet the customer's requirements and satisfaction. Project planning, execution, monitoring and control The purpose of project planning is to identify the scope of the project, estimate the work involved, and create a project schedule. Project planning begins with requirements that define the software to be developed. The project plan is then developed to describe the tasks that will lead to completion. The project execution is the process of completing the tasks defined in the project plan. The purpose of project monitoring and control is to keep the team and management up to date on the project's progress. If the project deviates from the plan, then the project manager can take action to correct the problem. Project monitoring and control involves status meetings to gather status from the team. 
When changes need to be made, change control is used to keep the products up to date. Issue In computing, the term "issue" refers to a unit of work to accomplish an improvement in a system. An issue could be a bug, a requested feature, a task, missing documentation, and so forth. For example, OpenOffice.org used to call their modified version of Bugzilla IssueZilla; they now call their system Issue Tracker. Severity levels Issues are often categorized in terms of severity levels. Different companies have different definitions of severities, but some of the most common ones are: High: The bug or issue affects a crucial part of a system, and must be fixed in order for it to resume normal operation. Medium: The bug or issue affects a minor part of a system, but has some impact on its operation. This severity level is assigned when a non-central requirement of a system is affected. Low / Fixed: The bug or issue affects a minor part of a system, and has very little impact on its operation. This severity level is assigned when a non-central requirement of a system (and one of lower importance) is affected. Trivial (cosmetic, aesthetic): The system works correctly, but the appearance does not match the expected one; for example, wrong colors, too much or too little spacing between contents, incorrect font sizes, or typos. This is the lowest severity of issue. Issue management In many software companies, issues are often investigated by quality assurance analysts when they verify a system for correctness, and then assigned to the developer(s) responsible for resolving them. They can also be assigned by system users during the User Acceptance Testing (UAT) phase. Issues are communicated using issue or defect tracking systems. In some other cases, emails or instant messengers are used. Philosophy As a subdiscipline of project management, some regard the management of software development as akin to the management of manufacturing, which can be performed by someone with management skills but no programming skills. John C. Reynolds rebuts this view, and argues that software development is entirely design work, and compares a manager who cannot program to the managing editor of a newspaper who cannot write. References General External links Resources on Software Project Management from Dan Galorath Project failure
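As a companion to the severity levels and issue management workflow described above, the following minimal sketch encodes the four common severity levels and sorts a small backlog for triage. The example issues and field names are invented for illustration and do not reflect any particular issue tracker.

```python
from dataclasses import dataclass
from enum import IntEnum
from typing import Optional

class Severity(IntEnum):
    TRIVIAL = 1   # cosmetic: wrong colours, spacing, typos
    LOW = 2       # minor part of the system, very little impact
    MEDIUM = 3    # non-central requirement affected, some impact on operation
    HIGH = 4      # crucial part of the system, must be fixed to resume normal operation

@dataclass
class Issue:
    title: str
    severity: Severity
    assignee: Optional[str] = None   # set by QA analysts or during UAT

backlog = [
    Issue("Typo on the login page", Severity.TRIVIAL),
    Issue("Report export drops the last column", Severity.MEDIUM),
    Issue("Payment service crashes on startup", Severity.HIGH),
]

# Triage: work through the backlog from the most to the least severe issue.
for issue in sorted(backlog, key=lambda i: i.severity, reverse=True):
    print(f"[{issue.severity.name:7}] {issue.title}")
```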
65411190
https://en.wikipedia.org/wiki/Alijah%20Vera-Tucker
Alijah Vera-Tucker
Solomon Alijah Lewis Vera-Tucker (born June 17, 1999) is an American football guard for the New York Jets of the National Football League (NFL). He played college football at USC, where he was awarded the Morris Trophy in 2020 and was a two-time All-Pac-12 selection. Vera-Tucker was drafted by the Jets in the first round of the 2021 NFL Draft. Early and high school Vera-Tucker grew up in Oakland, California and attended Bishop O'Dowd High School. He played offensive tackle and defensive end on the football team and was named an Under Armour All-American as a senior. Vera-Tucker was rated a four-star recruit and committed to play college football at USC over offers from Washington, Oregon, Arizona, Arizona State, California, UCLA and Washington State. College career Vera-Tucker redshirted his true freshman season. Vera-Tucker played in all 12 of the Trojans games as a reserve guard and on special teams during his redshirt freshman season. He was named first team All-Pac-12 Conference by the Associated Press. Vera-Tucker considered entering the 2020 NFL Draft, but opted to return to USC for his redshirt junior season. Following the announcement that Pac-12 would postpone the 2020 season, Vera-Tucker announced that he would opt-out of the season in order to focus on preparing for the 2021 NFL Draft. Vera-Tucker reversed his decision to opt-out after the Pac-12 announced that they would resume fall football. Professional career Vera-Tucker was selected by the New York Jets in the first round (14th overall) of the 2021 NFL Draft. He signed his four-year rookie contract with the Jets on July 20, 2021, worth $15.88 million. References External links New York Jets bio USC Trojans bio 1999 births Living people USC Trojans football players American football offensive guards American football offensive tackles Players of American football from Oakland, California New York Jets players
55476956
https://en.wikipedia.org/wiki/Assassination%20Nation
Assassination Nation
Assassination Nation is a 2018 American black comedy satire thriller film written and directed by Sam Levinson. It stars an ensemble cast led by Odessa Young, Suki Waterhouse, Hari Nef and Abra. The film takes place in the fictional town of Salem, which devolves into chaos and violence after a computer hacker discovers and leaks personal secrets about many of its residents. Development of the film began in October 2016, when it was announced as the independent label Foxtail Entertainment's first project. Casting announcements were made throughout 2017 and principal photography commenced in March 2017 and took place in New Orleans. Months later, Neon acquired the film rights with the Russo Brothers's AGBO. The film had its world premiere at the Sundance Film Festival on January 21, 2018, and was theatrically released in the United States on September 21, 2018, by Neon and AGBO in association with Refinery29. It has grossed $2.8 million worldwide and received mixed reviews from critics, who praised its "frenetic and visually stylish" action but criticized the thinly-written characters. Plot The film opens with a montage of trigger warnings that show brief scenes depicting topics including bullying, blood, abuse, classism, death, drinking, drug use, sexual content, toxic masculinity, the male gaze, homophobia, transphobia, guns, nationalism, racism, kidnapping, murder, attempted murder, swearing, torture, violence, gore, weapons, and fragile male egos. In the town of Salem, Lily Colson is a high school senior who regularly hangs out with her three best friends, Bex Warren and sisters Em and Sarah Lacey. The girls go to a party where Bex hooks up with her crush Diamond, while Lily hangs with her boyfriend Mark while simultaneously texting someone named "Daddy" behind his back. After sex, Diamond tells Bex to keep their hookup a secret, as Bex is transgender. Marty, a casual hacker, receives a message from an unknown hacker about Mayor Bartlett, a known anti-gay candidate. He reveals pictures of Bartlett engaging with male escorts and dressing up in women's clothing, which Marty forwards to the entire town. During the press conference in which he's supposed to address the facts, Bartlett publicly commits suicide. Principal Turrell, Lily's kindhearted and heart-warming school principal, is the next to be hacked, with pictures of his 6-year-old daughter in the bath making people view him as a pedophile. During the meeting with angry parents, he refuses to resign, intending to set things right for the students and do what is best for the school itself. As the police question Marty about the hacks, a massive data dump of half the people in Salem is posted. Lily's classmate Grace discovers that her best friend Reagan has sent nude pictures of her to her boyfriends, and bashes in Reagan's skull with a baseball bat during her cheerleading practice. "Daddy" is revealed to be Em and Sarah's neighbor Nick Mathers, who Lily used to babysit for. Lily's lewd pictures and videos that she sent to Nick are made public when his information is leaked. As a result, she is exposed and humiliated by Mark, and then disowned from her family. As she walks down her street, shunned, homeless, and miserable, she is harassed by a man in a truck who films and harasses her, before chasing her with a knife. Although she manages to incapacitate him with a shovel, she was forced to make a hasty retreat to Em and Sarah's house. 
A week later, most of the town have donned masks and taken up arms to get revenge on those they think have wronged them. A masked Nick and other men capture Marty, who they torture into admitting that the source of the hacks came from Lily's IP address. Before executing Marty, they upload a video of his forced confession. The masked assailants track Lily to Em and Sarah's house, where all four girls are staying, break in, and kidnap Em and Sarah. Their mother, Nance, intervenes, trying to keep the marauders at bay, but to no avail as she is shot and killed. Em and Sarah are put into the back of a police car. Bex manages to take out one of the attackers with a nail gun and escapes, while Lily hides in Nick's house. Nick at first pretends to help her but then reveals a knife in his hand, intending to execute her for exposing the town of their secrets, but not before revealing that her friends will share the same fate with her and Marty. Lily fights Nick off and slits his throat with a makeshift weapon, slaying him in the process. Downstairs she discovers a large cache of weapons Nick keeps in his parlor, which she uses to ambush and shoot Officer Richter who is holding Em and Sarah hostage, and free her friends. Meanwhile, after escaping the Lacey residence, Bex attempts to look for help only to be ignored by the residents of the town. She is kidnapped by Diamond's friend Johnny, who tries to force Diamond to publicly hang her in retribution for Diamond's humiliation, but Bex successfully convinces him to spare her, buying enough time for Lily, Em, and Sarah to arrive, rescuing Bex and executing her assailants. Johnny surrenders and Diamond joins the girls soon after. Lily makes a video exonerating herself and urging everyone in Salem to stand up and fight back against their tormentors. After the battle for Salem, the town is destroyed, along with most of its population eliminated. Eventually, Lily's younger brother, Donny, who is revealed to be the criminal mastermind behind the hacks, has been captured, convicted, and condemned for cyberterrorism, murder, and invasion of privacy. When he gets asked why he did it by his parents, Donny reveals that it's only for his amusement. The next morning, the Salem High marching band performs Miley Cyrus' "We Can't Stop" down a street littered with dead bodies and destroyed vehicles. Cast Production Development In October 2016, Matthew Malek and Anita Gou launched the independent label Foxtail Entertainment. The duo announced the film as their first project. David S. Goyer and Kevin Turen joined them to produce the film. It is also produced by Bron Studios and Phantom Four, in association with Creative Wealth Media. After the premiere at Sundance Film Festival, Neon acquired the film rights. AGBO signed a deal with 30West, a company who acquired a majority stake in Neon, to co-distribute the film with Neon. In July 2018, Refinery29 also signed with Neon to co-distribute the film with them and AGBO. Casting In December 2016, Odessa Young, Suki Waterhouse, Hari Nef and Abra joined the main cast of the film. In March 2017, Bella Thorne, Maude Apatow, Bill Skarsgård, Joel McHale, Colman Domingo and Noah Galvin joined the cast. In April 2017, Anika Noni Rose joined the cast for the role of Nance, an attractive woman with terrible taste in men, who has an unfortunate reputation in the conservative town of Salem. Filming Principal photography began in March 2017 in New Orleans. 
The sequence where the girls are attacked in Nance's home was shot in a single take using a crane. Release Theatrical The film had its world premiere at the Sundance Film Festival on January 21, 2018. It was released in the United States on September 21, 2018 by Neon and AGBO in association with Refinery29. Home media Assassination Nation was released digitally and on Blu-ray and DVD on December 18, 2018 by Universal Pictures Home Entertainment. Reception Box office , Assassination Nation has grossed $2 million in the United States and $847,617 in other territories, for a worldwide total of $2.9 million. In the United States, Assassination Nation was released alongside The House with a Clock in Its Walls, Life Itself and Fahrenheit 11/9, and did poorly in theaters. The film was projected to gross around $4 million in its opening weekend from 1,403 theaters. However it ended up debuting to just $1 million, finishing 15th at the box office. Internationally, the film was released in only five countries as a limited theatrical release. Neon's chief Tom Quinn acknowledged the film's unsatisfactory box office performance, saying "Sam Levinson has created a bold, visionary and ultimately cathartic response to the dumpster fire that is 2018. We're admittedly disappointed more people didn't come out this weekend, but those that did were loud and overwhelmingly positive. It's going to take more time for Assassination Nation to find its audience". Prior to the film release, analyst Jeff Bock compared the film to Heathers, saying "There's people out there who like these Heathers-type of films, but they tend to be more popular on home entertainment platforms" and "They're more likely to be cult favorites than big box office hits". Critical response On review aggregator website Rotten Tomatoes, the film holds an approval rating of based on reviews, with an average of . The website's critical consensus reads, "Assassination Nation juggles exploitation and socially aware elements with mixed results, but genre fans may find it too stylish and viscerally energetic to ignore." On Metacritic, the film has a weighted average score of 56 out of 100, based on 28 critics, indicating "mixed or average reviews". Audiences polled by PostTrak gave the film a 60% positive score and a 39% "definite recommend". Accolades References External links 2018 films 2018 black comedy films 2018 crime thriller films 2018 independent films 2018 LGBT-related films 2010s comedy thriller films 2010s coming-of-age comedy films 2010s crime comedy films 2010s feminist films 2010s high school films 2010s satirical films 2010s teen comedy films American black comedy films American comedy thriller films American coming-of-age comedy films American crime comedy films American crime thriller films American feminist films American films American films about revenge American high school films American independent films American satirical films American teen comedy films American teen LGBT-related films Battle royale Bron Studios films Films about bullying Films about murderers Films about school violence Films about social media Films about trans women Films directed by Sam Levinson Films produced by David S. Goyer Films set in fictional populated places Films set in the United States Films shot in New Orleans Girls with guns films LGBT-related black comedy films LGBT-related comedy thriller films LGBT-related satirical films Teen crime films Teen thriller films Teensploitation
17080244
https://en.wikipedia.org/wiki/The%20Fall%20of%20Colossus
The Fall of Colossus
The Fall of Colossus is a 1974 science fiction novel written by the British author Dennis Feltham Jones (writing as D. F. Jones). This is the second volume in "The Colossus Trilogy" and a sequel to Jones' 1966 novel Colossus. The trilogy concludes in 1977's Colossus and the Crab. Plot Five years have passed since the super computer called Colossus used its control over the world's nuclear weapons to take control of humanity. In our timeline, that would place this story in the 1990s or the early 2000s. All references in the novel, however, place it in the 22nd century, with the 20th and 21st being mentioned in the past. Colossus has been superseded by an even more advanced computer system built on the Isle of Wight, which has abolished war and poverty throughout the world. National competition and most sports have been replaced by the Sea War Game, where replicas of World War I dreadnoughts battle each other for viewing audiences. A group known as the Sect, which worships Colossus as a god, is growing in numbers and influence. Yet despite the seeming omnipresence of Colossus' secret police and the penalty of decapitation for anti-machine activities, a secret Fellowship exists that is dedicated to the computer's destruction. Charles Forbin, in his early 50s in this and the first novel, is the former head of the design team that built and activated the original Colossus. He now lives on the Isle of Wight with his wife and son, serving the computer as Director of Staff. Though contemptuous of the growing cult of personality around Colossus, he has reconciled himself to Colossus' rule. His wife Cleo, now 28 years old (35 in the previous novel), loathes Colossus and is a member of the Fellowship. One afternoon while taking her son to a secluded beach, she receives a radio transmission from the planet Mars. Identifying Cleo as a member of the Fellowship, the transmission offers help to destroy Colossus and asks her to return to the same spot the next day for further instructions. She returns with Edward Blake, Colossus' Director of Input and the head of the Fellowship. Together, they receive instructions to obtain a circuit diagram of one of Colossus' input terminals and a sample of the information that is fed into it, along with instructions to proceed to two locations — one in St. John's, Newfoundland, the other in New York's Central Park — to receive further transmissions. Though Blake passes the necessary information along to Cleo, she is quickly arrested by the Sect and sentenced by Colossus to spend three months at an "Emotional Study Center" on the island of Tahiti, where she is repeatedly raped as part of an experiment designed to help Colossus better understand human emotion. Now under suspicion, Blake approaches Forbin, who is devastated by his wife's arrest. Explaining the details of their plot, Blake convinces Forbin to help after explaining the details of Cleo's captivity. Forbin travels in disguise with the requested information, first to St. John's, then to New York City, where he receives an incomprehensible mathematical problem that the transmission claims will destroy Colossus once it is fed into the computer. Upon his return, Forbin slips the problem to Blake, who enters it into Colossus. While Forbin converses with the computer, Colossus begins to make verbal errors, then stops. Increasingly erratic, it attempts to warn Forbin of a threat from outer space that it was preparing to meet, but breaks down before it can complete the message. 
Now free of Colossus' rule, Blake moves to seize power, using the automated fleets of the Sea War Games to threaten the world's capitals. As Blake gloats, Forbin tells him of Colossus' warning. Requesting any reports of unusual astronomical activity, they learn that two contacts have been detected leaving Martian orbit and are now heading toward the Earth. The novel ends with the two men hearing a radio transmission repeating "Forbin, we are coming". Principal characters Professor Charles Forbin — The Director of Staff for Colossus and his chief human representative, in his early fifties. Doctor Cleopatra "Cleo" June Markham Forbin — Forbin's wife, twenty-eight years old. A former member of the Colossus design team, she is now an active member of the Fellowship seeking to destroy the computer. Doctor Edward Blake — Another former member of the design team, he is the Director of Input for Colossus and a leader of the Fellowship. Angela — Forbin's secretary. Galin — Formerly known as Alex Grey, he was an administrator who was one of the founding members of the Sect. Colossus — Central defense computer of the United States of North America and now the world. Guardian of Democratic Socialism, a.k.a. Guardian — Central defense computer of the Soviet Union, now integrated with Colossus. Continuity problems Doctor Cleopatra "Cleo" June Markham Forbin has lost seven years of age between books, even though The Fall of Colossus is supposed to be five years after Colossus. No accounting of her reverse aging is given. Colossus is set in the 20th century, Chapter 10 narrows that to the 1990s; however, numerous references in The Fall of Colossus set it in the 22nd century, while it is supposed to take place five years after Colossus. No accounting for this time difference is given. The Soviet defense computer is called "Guardian of the Socialist Soviet Republics" in Colossus, then "Guardian of Democratic Socialism" in The Fall of Colossus, with no explanation. Editions 1974, U.S. (hardcover), Putnam () 1975, U.S. (paperback), Berkley Books () 1977, U.S. (paperback), Berkley Books () See also Colossus Colossus and the Crab List of fictional computers External links 1974 science fiction novels 1974 British novels Novels about computing Novels by D. F. Jones G. P. Putnam's Sons books
293363
https://en.wikipedia.org/wiki/RSA%20SecurID
RSA SecurID
RSA SecurID, formerly referred to as SecurID, is a mechanism developed by RSA for performing two-factor authentication for a user to a network resource. Description The RSA SecurID authentication mechanism consists of a "token"—either hardware (e.g. a key fob) or software (a soft token)—which is assigned to a computer user and which creates an authentication code at fixed intervals (usually 60 seconds) using a built-in clock and the card's factory-encoded almost random key (known as the "seed"). The seed is different for each token, and is loaded into the corresponding RSA SecurID server (RSA Authentication Manager, formerly ACE/Server) as the tokens are purchased. On-demand tokens are also available, which provide a tokencode via email or SMS delivery, eliminating the need to provision a token to the user. The token hardware is designed to be tamper-resistant to deter reverse engineering. When software implementations of the same algorithm ("software tokens") appeared on the market, public code had been developed by the security community allowing a user to emulate RSA SecurID in software, but only if they have access to a current RSA SecurID code, and the original 64-bit RSA SecurID seed file introduced to the server. Later, the 128-bit RSA SecurID algorithm was published as part of an open source library. In the RSA SecurID authentication scheme, the seed record is the secret key used to generate one-time passwords. Newer versions also feature a USB connector, which allows the token to be used as a smart card-like device for securely storing certificates. A user authenticating to a network resource—say, a dial-in server or a firewall—needs to enter both a personal identification number and the number being displayed at that moment on their RSA SecurID token. Though increasingly rare, some systems using RSA SecurID disregard PIN implementation altogether, and rely on password/RSA SecurID code combinations. The server, which also has a real-time clock and a database of valid cards with the associated seed records, authenticates a user by computing what number the token is supposed to be showing at that moment in time and checking this against what the user entered. On older versions of SecurID, a "duress PIN" may be used—an alternate code which creates a security event log showing that a user was forced to enter their PIN, while still providing transparent authentication. Using the duress PIN would allow one successful authentication, after which the token will automatically be disabled. The "duress PIN" feature has been deprecated and is not available on currently supported versions. While the RSA SecurID system adds a layer of security to a network, difficulty can occur if the authentication server's clock becomes out of sync with the clock built into the authentication tokens. Normal token clock drift is accounted for automatically by the server by adjusting a stored "drift" value over time. If the out of sync condition is not a result of normal hardware token clock drift, correcting the synchronization of the Authentication Manager server clock with the out of sync token (or tokens) can be accomplished in several different ways. If the server clock had drifted and the administrator made a change to the system clock, the tokens can either be resynchronized one-by-one, or the stored drift values adjusted manually. The drift can be done on individual tokens or in bulk using a command line utility. 
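The validation step described above (the server recomputing the code it expects for the current time and tolerating a small amount of clock drift) can be sketched in a few lines. The actual SecurID algorithm is proprietary, so the sketch below uses a generic HMAC-based time code purely for illustration; the names generate_code and verify_code and the DRIFT_WINDOW value are hypothetical and do not come from RSA's implementation.

```python
# Illustrative sketch only: this is NOT RSA's proprietary SecurID algorithm.
# It mimics the general idea of a shared seed plus a synchronized clock
# producing a short numeric code, and a server that tolerates a small
# amount of clock drift, as described in the article.
import hmac
import hashlib
import struct
import time

INTERVAL = 60          # seconds per token code, as described above
DRIFT_WINDOW = 1       # how many intervals of clock drift the server tolerates

def generate_code(seed: bytes, timestep: int, digits: int = 6) -> str:
    """Derive a short numeric code from the shared seed and a time step."""
    msg = struct.pack(">Q", timestep)
    digest = hmac.new(seed, msg, hashlib.sha256).digest()
    # Reduce part of the digest to the desired number of decimal digits.
    value = int.from_bytes(digest[:8], "big") % (10 ** digits)
    return str(value).zfill(digits)

def verify_code(seed: bytes, submitted: str, stored_drift: int = 0) -> tuple[bool, int]:
    """Check a submitted code against the expected codes in a small drift window.

    Returns (accepted, new_drift) so the server can remember how far this
    token's clock appears to be from its own, as the article describes.
    """
    now_step = int(time.time()) // INTERVAL + stored_drift
    for offset in range(-DRIFT_WINDOW, DRIFT_WINDOW + 1):
        if hmac.compare_digest(generate_code(seed, now_step + offset), submitted):
            return True, stored_drift + offset
    return False, stored_drift

if __name__ == "__main__":
    seed = b"example-seed-loaded-into-both-token-and-server"
    code = generate_code(seed, int(time.time()) // INTERVAL)
    print(verify_code(seed, code))   # (True, 0) when the clocks agree
```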
RSA Security has pushed forth an initiative called "Ubiquitous Authentication", partnering with device manufacturers such as IronKey, SanDisk, Motorola, Freescale Semiconductor, Redcannon, Broadcom, and BlackBerry to embed the SecurID software into everyday devices such as USB flash drives and cell phones, to reduce cost and the number of objects that the user must carry. Theoretical vulnerabilities Token codes are easily stolen, because no mutual authentication exists (anything that can steal a password can also steal a token code). This is significant, since it is the principal threat most users believe they are solving with this technology. The simplest practical vulnerability with any password container is losing the special key device or the activated smart phone with the integrated key function. Such a vulnerability cannot be remedied with any single token container device within the preset time span of activation. All further consideration presumes loss prevention, e.g. by an additional electronic leash or a body sensor and alarm. While RSA SecurID tokens offer a level of protection against password replay attacks, they are not designed to offer protection against man-in-the-middle attacks when used alone. If the attacker manages to block the authorized user from authenticating to the server until the next token code becomes valid, he will be able to log in to the server. Risk-based analytics (RBA), a new feature in the latest version (8.0), provides significant protection against this type of attack if the user is enabled and authenticating on an agent enabled for RBA. RSA SecurID does not prevent man-in-the-browser (MitB) attacks. The SecurID authentication server tries to prevent password sniffing and simultaneous login by declining both authentication requests if two valid credentials are presented within a given time frame. This has been documented in an unverified post by John G. Brainard. If the attacker removes the user's ability to authenticate, however, the SecurID server will assume that it is the user who is actually authenticating and hence will allow the attacker's authentication through. Under this attack model, the system security can be improved using encryption/authentication mechanisms such as SSL. Although soft tokens may be more convenient, critics indicate that the tamper-resistant property of hard tokens is unmatched in soft token implementations, which could allow seed record secret keys to be duplicated and user impersonation to occur. Hard tokens, on the other hand, can be physically stolen (or acquired via social engineering) from end users. The small form factor makes hard token theft much more viable than laptop/desktop scanning. A user will typically wait more than one day before reporting the device as missing, giving the attacker plenty of time to breach the unprotected system. This could only occur, however, if the user's UserID and PIN are also known. Risk-based analytics can provide additional protection against the use of lost or stolen tokens, even if the user's UserID and PIN are known by the attackers. Batteries go flat periodically, requiring complicated replacement and re-enrollment procedures. Reception and competing products As of 2003, RSA SecurID commanded over 70% of the two-factor authentication market and 25 million devices had been produced to date. A number of competitors, such as VASCO, make similar security tokens, mostly based on the open OATH HOTP standard.
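Unlike SecurID's proprietary code generation, the OATH HOTP standard mentioned above (RFC 4226) is public and compact enough to show in full. The sketch below computes an HOTP value from a shared secret and an event counter; it illustrates the open standard used by competing tokens, not RSA's algorithm.

```python
# HOTP (RFC 4226), the open OATH standard mentioned above. Unlike SecurID's
# time-based, proprietary scheme, HOTP is event-based: the moving factor is a
# counter incremented on each use rather than a clock reading.
import hmac
import hashlib
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    # "Dynamic truncation": the low 4 bits of the last byte select a 4-byte slice.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

if __name__ == "__main__":
    # Test vector from RFC 4226, Appendix D: secret "12345678901234567890"
    # with counter 0 should yield "755224".
    print(hotp(b"12345678901234567890", 0))
```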
A study on OTP published by Gartner in 2010 mentions OATH and SecurID as the only competitors. Other network authentication systems, such as OPIE and S/Key (sometimes more generally known as OTP, as S/Key is a trademark of Telcordia Technologies, formerly Bellcore) attempt to provide the "something you have" level of authentication without requiring a hardware token. March 2011 system compromise On 17 March 2011, RSA announced that they had been victims of "an extremely sophisticated cyber attack". Concerns were raised specifically in reference to the SecurID system, saying that "this information could potentially be used to reduce the effectiveness of a current two-factor authentication implementation". However, their formal Form 8-K submission indicated that they did not believe the breach would have a "material impact on its financial results". The breach cost EMC, the parent company of RSA, $66.3 million, which was taken as a charge against second quarter earnings. It covered costs to investigate the attack, harden its IT systems and monitor transactions of corporate customers, according to EMC Executive Vice President and Chief Financial Officer David Goulden, in a conference call with analysts. The breach into RSA's network was carried out by hackers who sent phishing emails to two targeted, small groups of employees of RSA. Attached to the email was a Microsoft Excel file containing malware. When an RSA employee opened the Excel file, the malware exploited a vulnerability in Adobe Flash. The exploit allowed the hackers to use the Poison Ivy RAT to gain control of machines and access servers in RSA's network. There are some hints that the breach involved the theft of RSA's database mapping token serial numbers to the secret token "seeds" that were injected to make each one unique. Reports of RSA executives telling customers to "ensure that they protect the serial numbers on their tokens" lend credibility to this hypothesis. Barring a fatal weakness in the cryptographic implementation of the token code generation algorithm (which is unlikely, since it involves the simple and direct application of the extensively scrutinized AES-128 block cipher ), the only circumstance under which an attacker could mount a successful attack without physical possession of the token is if the token seed records themselves had been leaked. RSA stated it did not release details about the extent of the attack so as to not give potential attackers information they could use in figuring out how to attack the system. On 6 June 2011, RSA offered token replacements or free security monitoring services to any of its more than 30,000 SecurID customers, following an attempted cyber breach on defense customer Lockheed Martin that appeared to be related to the SecurID information stolen from RSA. In spite of the resulting attack on one of its defense customers, company chairman Art Coviello said that "We believe and still believe that the customers are protected". Resulting attacks In April 2011, unconfirmed rumors cited L-3 Communications as having been attacked as a result of the RSA compromise. In May 2011, this information was used to attack Lockheed Martin systems. However Lockheed Martin claims that due to "aggressive actions" by the company's information security team, "No customer, program or employee personal data" was compromised by this "significant and tenacious attack". The Department of Homeland Security and the US Defense Department offered help to determine the scope of the attack. 
References External links Official RSA SecurID website Technical details Sample SecurID Token Emulator with token Secret Import I.C.Wiener, Bugtraq post. Apparent Weaknesses in the Security Dynamics Client/Server Protocol Adam Shostack, 1996. Usenet thread discussing new SecurID details Vin McLellan, et al., comp.security.misc. Unofficial SecurID information and some reverse-engineering attempts Yahoo Groups securid-users. Analysis of possible risks from 2011 compromise Published attacks against the SecurID hash function Cryptanalysis of the Alleged SecurID Hash Function (PDF) Alex Biryukov, Joseph Lano, and Bart Preneel. Improved Cryptanalysis of SecurID (PDF) Scott Contini and Yiqun Lisa Yin. Fast Software-Based Attacks on SecurID (PDF) Scott Contini and Yiqun Lisa Yin. Password authentication Dell EMC Authentication methods
20031524
https://en.wikipedia.org/wiki/Mebroot
Mebroot
Mebroot is a master boot record (MBR) based rootkit used by botnets including Torpig. It is a sophisticated Trojan horse that uses stealth techniques to hide itself from the user. The Trojan opens a back door on the victim's computer which gives the attacker complete control over the computer. Payload The Trojan infects the MBR to allow itself to start even before the operating system starts. This allows it to bypass some safeguards and embed itself deep within the operating system. The Trojan can intercept read/write operations and embeds itself deep within the network drivers. This allows it to bypass some firewalls and communicate securely with its command and control server over a custom encrypted tunnel. Through this channel the attacker can install other malware, viruses, or other applications. The Trojan most commonly steals information from the victim's computer for financial gain. Mebroot is linked to Anserin, another Trojan that logs keystrokes and steals banking information, which further suggests that a financial motive is behind Mebroot. Detection/removal The Trojan tries to avoid detection by hooking itself into atapi.sys. It also embeds itself in Ntoskrnl.exe. Mebroot has no executable files, no registry keys, and no driver modules, which makes it harder to detect without antivirus software. In addition to running antivirus software, one can also remove the Trojan by wiping or repairing the master boot record, the hard drive, and the operating system. Distribution Three variants of Mebroot have been discovered. It was estimated that the first version was compiled in November 2007. In December 2007, Mebroot began spreading through drive-by downloads. In early 2008, a second wave of attacks arrived. In February 2008 a second variant was discovered, accompanied by a modified installer. In March 2008 a third variant was discovered, and attacks became more widespread. Since the third variant, the Trojan has been upgraded to try to outwit antivirus software. It is unknown whether Mebroot is still in the wild. Mebroot is known to be distributed through malicious websites or application exploits. It is estimated that over 1,500 websites have been compromised, mostly in the European region. Traffic to websites infected with Mebroot can reach 50,000 to 100,000 views per day. References External links MBR Rootkit, A New Breed of Malware - F-Secure Weblog, March 2008 Stealth MBR rootkit by GMER, January 2008 Trojan.Mebroot Technical Details | Symantec Rootkits Windows trojans
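The detection and removal notes above mention wiping or repairing the master boot record. As a minimal, generic illustration of what the MBR is at the byte level (this is not a Mebroot detector), the sketch below reads the first 512-byte sector of a disk or disk image, checks the 0x55AA boot signature, and hashes the boot-code area so it can be compared against a known-good copy. The device path is an example, and reading a raw device normally requires administrative privileges.

```python
# Minimal illustration of inspecting a master boot record; it does NOT detect
# Mebroot specifically. The MBR is the first 512-byte sector of a disk and ends
# with the boot signature 0x55 0xAA. Comparing the boot code against a
# known-good backup is one crude way to notice that it has been altered.
import hashlib
import sys

SECTOR_SIZE = 512

def read_mbr(path: str) -> bytes:
    # `path` might be a raw device such as /dev/sda (requires root on Linux),
    # \\.\PhysicalDrive0 on Windows, or simply a saved disk image file.
    with open(path, "rb") as disk:
        return disk.read(SECTOR_SIZE)

def describe_mbr(mbr: bytes) -> None:
    if len(mbr) < SECTOR_SIZE:
        sys.exit("could not read a full 512-byte sector")
    signature_ok = mbr[510:512] == b"\x55\xaa"
    print(f"boot signature present: {signature_ok}")
    # Hash of the boot code area (bytes 0-439), which MBR rootkits overwrite;
    # compare this against a hash taken when the system was known to be clean.
    print("boot code SHA-256:", hashlib.sha256(mbr[:440]).hexdigest())

if __name__ == "__main__":
    describe_mbr(read_mbr(sys.argv[1]))
```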
8758
https://en.wikipedia.org/wiki/Douglas%20Hofstadter
Douglas Hofstadter
Douglas Richard Hofstadter (born February 15, 1945) is an American scholar of cognitive science, physics, and comparative literature whose research includes concepts such as the sense of self in relation to the external world, consciousness, analogy-making, artistic creation, literary translation, and discovery in mathematics and physics. His 1979 book Gödel, Escher, Bach: An Eternal Golden Braid won both the Pulitzer Prize for general nonfiction and a National Book Award (at that time called The American Book Award) for Science. His 2007 book I Am a Strange Loop won the Los Angeles Times Book Prize for Science and Technology. Early life and education Hofstadter was born in New York City to Jewish parents: Nobel Prize-winning physicist Robert Hofstadter and Nancy Givan Hofstadter. He grew up on the campus of Stanford University, where his father was a professor, and attended the International School of Geneva in 1958–59. He graduated with distinction in mathematics from Stanford University in 1965, and received his Ph.D. in physics from the University of Oregon in 1975, where his study of the energy levels of Bloch electrons in a magnetic field led to his discovery of the fractal known as Hofstadter's butterfly. Academic career Since 1988, Hofstadter has been the College of Arts and Sciences Distinguished Professor of Cognitive Science and Comparative Literature at Indiana University in Bloomington, where he directs the Center for Research on Concepts and Cognition which consists of himself and his graduate students, forming the "Fluid Analogies Research Group" (FARG). He was initially appointed to the Indiana University's Computer Science Department faculty in 1977, and at that time he launched his research program in computer modeling of mental processes (which he called "artificial intelligence research", a label he has since dropped in favor of "cognitive science research"). In 1984, he moved to the University of Michigan in Ann Arbor, where he was hired as a professor of psychology and was also appointed to the Walgreen Chair for the Study of Human Understanding. In 1988 he returned to Bloomington as "College of Arts and Sciences Professor" in both cognitive science and computer science. He was also appointed adjunct professor of history and philosophy of science, philosophy, comparative literature, and psychology, but has said that his involvement with most of those departments is nominal. In 1988 Hofstadter received the In Praise of Reason award, the Committee for Skeptical Inquiry's highest honor. In April 2009 he was elected a Fellow of the American Academy of Arts and Sciences and a member of the American Philosophical Society. In 2010 he was elected a member of the Royal Society of Sciences in Uppsala, Sweden. At the University of Michigan and Indiana University, he and Melanie Mitchell coauthored a computational model of "high-level perception"—Copycat—and several other models of analogy-making and cognition, including the Tabletop project, co-developed with Robert M. French. The Letter Spirit project, implemented by Gary McGraw and John Rehling, aims to model artistic creativity by designing stylistically uniform "gridfonts" (typefaces limited to a grid). 
Other more recent models include Phaeaco (implemented by Harry Foundalis) and SeqSee (Abhijit Mahabal), which model high-level perception and analogy-making in the microdomains of Bongard problems and number sequences, respectively, as well as George (Francisco Lara-Dammer), which models the processes of perception and discovery in triangle geometry. Hofstadter has had several exhibitions of his artwork in various university galleries. These shows have featured large collections of his gridfonts, his ambigrams (pieces of calligraphy created with two readings, either of which is usually obtained from the other by rotating or reflecting the ambigram, but sometimes simply by "oscillation", like the Necker Cube or the rabbit/duck figure of Joseph Jastrow), and his "Whirly Art" (music-inspired visual patterns realized using shapes based on various alphabets from India). Hofstadter invented the term "ambigram" in 1984; many ambigrammists have since taken up the concept. Hofstadter collects and studies cognitive errors (largely, but not solely, speech errors), "bon mots", and analogies of all sorts, and his longtime observation of these diverse products of cognition. His theories about the mechanisms that underlie them have exerted a powerful influence on the architectures of the computational models he and FARG members have developed. Hofstadter's thesis about consciousness, first expressed in Gödel, Escher, Bach but also present in several of his later books, is that it is "an emergent consequence of seething lower-level activity in the brain". In Gödel, Escher, Bach he draws an analogy between the social organization of a colony of ants and the mind seen as a coherent "colony" of neurons. In particular, Hofstadter claims that our sense of having (or being) an "I" comes from the abstract pattern he terms a "strange loop", an abstract cousin of such concrete phenomena as audio and video feedback that Hofstadter has defined as "a level-crossing feedback loop". The prototypical example of a strange loop is the self-referential structure at the core of Gödel's incompleteness theorems. Hofstadter's 2007 book I Am a Strange Loop carries his vision of consciousness considerably further, including the idea that each human "I" is distributed over numerous brains, rather than being limited to one. Le Ton beau de Marot: In Praise of the Music of Language is a long book devoted to language and translation, especially poetry translation, and one of its leitmotifs is a set of 88 translations of "Ma Mignonne", a highly constrained poem by 16th-century French poet Clément Marot. In this book, Hofstadter jokingly describes himself as "pilingual" (meaning that the sum total of the varying degrees of mastery of all the languages that he has studied comes to 3.14159 ...), as well as an "oligoglot" (someone who speaks "a few" languages). In 1999, the bicentennial year of the Russian poet and writer Alexander Pushkin, Hofstadter published a verse translation of Pushkin's classic novel-in-verse Eugene Onegin. He has translated other poems and two novels: La Chamade (That Mad Ache) by Françoise Sagan, and La Scoperta dell'Alba (The Discovery of Dawn) by Walter Veltroni, the then-head of the Partito Democratico in Italy. The Discovery of Dawn was published in 2007, and That Mad Ache was published in 2009, bound together with Hofstadter's essay Translator, Trader: An Essay on the Pleasantly Pervasive Paradoxes of Translation. 
Hofstadter's Law Hofstadter's Law is "It always takes longer than you expect, even when you take into account Hofstadter's Law." The law is stated in Gödel, Escher, Bach. Students Hofstadter's former Ph.D. students include (with dissertation title): David Chalmers—Toward a Theory of Consciousness Bob French—Tabletop: An Emergent, Stochastic Model of Analogy-Making Melanie Mitchell—Copycat: A Computer Model of High-Level Perception and Conceptual Slippage in Analogy-making Public image Hofstadter has said that he feels "uncomfortable with the nerd culture that centers on computers". He admits that "a large fraction [of his audience] seems to be those who are fascinated by technology", but when it was suggested that his work "has inspired many students to begin careers in computing and artificial intelligence" he replied that he was pleased about that, but that he himself has "no interest in computers". In that interview he also mentioned a course he has twice given at Indiana University, in which he took a "skeptical look at a number of highly touted AI projects and overall approaches". For example, upon the defeat of Garry Kasparov by Deep Blue, he commented that "It was a watershed event, but it doesn't have to do with computers becoming intelligent". In his book Metamagical Themas, he says that "in this day and age, how can anyone fascinated by creativity and beauty fail to see in computers the ultimate tool for exploring their essence?". Provoked by predictions of a technological singularity (a hypothetical moment in the future of humanity when a self-reinforcing, runaway development of artificial intelligence causes a radical change in technology and culture), Hofstadter has both organized and participated in several public discussions of the topic. At Indiana University in 1999 he organized such a symposium, and in April 2000, he organized a larger symposium titled "Spiritual Robots" at Stanford University, in which he moderated a panel consisting of Ray Kurzweil, Hans Moravec, Kevin Kelly, Ralph Merkle, Bill Joy, Frank Drake, John Holland and John Koza. Hofstadter was also an invited panelist at the first Singularity Summit, held at Stanford in May 2006. Hofstadter expressed doubt that the singularity will occur in the foreseeable future. In 1988 Dutch director Piet Hoenderdos created a docudrama about Hofstadter and his ideas, Victim of the Brain, based on The Mind's I. It includes interviews with Hofstadter about his work. Columnist When Martin Gardner retired from writing his "Mathematical Games" column for Scientific American magazine, Hofstadter succeeded him in 1981–83 with a column titled Metamagical Themas (an anagram of "Mathematical Games"). An idea he introduced in one of these columns was the concept of "Reviews of This Book", a book containing nothing but cross-referenced reviews of itself that has an online implementation. One of Hofstadter's columns in Scientific American concerned the damaging effects of sexist language, and two chapters of his book Metamagical Themas are devoted to that topic, one of which is a biting analogy-based satire, "A Person Paper on Purity in Language" (1985), in which the reader's presumed revulsion at racism and racist language is used as a lever to motivate an analogous revulsion at sexism and sexist language; Hofstadter published it under the pseudonym William Satire, an allusion to William Safire. 
Another column reported on the discoveries made by University of Michigan professor Robert Axelrod in his computer tournament pitting many iterated prisoner's dilemma strategies against each other, and a follow-up column discussed a similar tournament that Hofstadter and his graduate student Marek Lugowski organized. The "Metamagical Themas" columns ranged over many themes, including patterns in Frédéric Chopin's piano music (particularly his études), the concept of superrationality (choosing to cooperate when the other party/adversary is assumed to be equally intelligent as oneself), and the self-modifying game of Nomic, based on the way the legal system modifies itself, and developed by philosopher Peter Suber. Personal life Hofstadter was married to Carol Ann Brush until her death. They met in Bloomington, and married in Ann Arbor in 1985. They had two children, Danny and Monica. Carol died in 1993 from the sudden onset of a brain tumor, glioblastoma multiforme, when their children were 5 and 2. The Carol Ann Brush Hofstadter Memorial Scholarship for Bologna-bound Indiana University students was established in 1996 in her name. Hofstadter's book Le Ton beau de Marot is dedicated to their two children and its dedication reads "To M. & D., living sparks of their Mommy's soul". In 2010, Hofstadter met Baofen Lin in a cha-cha-cha class, and they married in Bloomington in September 2012. Hofstadter has composed pieces for piano and for piano and voice. He created an audio CD, DRH/JJ, which includes all these compositions performed mostly by pianist Jane Jackson, with a few performed by Brian Jones, Dafna Barenboim, Gitanjali Mathur and Hofstadter. The dedication for I Am A Strange Loop is: "To my sister Laura, who can understand, and to our sister Molly, who cannot." Hofstadter explains in the preface that his younger sister Molly never developed the ability to speak or understand language. As a consequence of his attitudes about consciousness and empathy, Hofstadter became vegan in his teenage years, and has remained primarily a vegetarian since that time. In popular culture In the 1982 novel 2010: Odyssey Two, Arthur C. Clarke's first sequel to 2001: A Space Odyssey, HAL 9000 is described by the character "Dr. Chandra" as being caught in a "Hofstadter–Möbius loop". The movie uses the term "H. Möbius loop". On April 3, 1995, Hofstadter's book Fluid Concepts and Creative Analogies: Computer Models of the Fundamental Mechanisms of Thought was the first book sold by Amazon.com. Published works Books The books published by Hofstadter are (the ISBNs refer to paperback editions, where available): Gödel, Escher, Bach: an Eternal Golden Braid () (1979) Metamagical Themas () (collection of Scientific American columns and other essays, all with postscripts) Ambigrammi: un microcosmo ideale per lo studio della creatività () (in Italian only) Fluid Concepts and Creative Analogies (co-authored with several of Hofstadter's graduate students) () Rhapsody on a Theme by Clement Marot () (1995, published 1996; volume 16 of series The Grace A. Tanner Lecture in Human Values) Le Ton beau de Marot: In Praise of the Music of Language () I Am a Strange Loop () (2007) Surfaces and Essences: Analogy as the Fuel and Fire of Thinking, co-authored with Emmanuel Sander () (first published in French as L'Analogie. Cœur de la pensée; published in English in the U.S. 
in April 2013) Papers Hofstadter has written, among many others, the following papers: "Energy levels and wave functions of Bloch electrons in rational and irrational magnetic fields", Phys. Rev. B 14 (1976) 2239. "A non-deterministic approach to analogy, involving the Ising model of ferromagnetism", in Eduardo Caianiello (ed.), The Physics of Cognitive Processes. Teaneck, NJ: World Scientific, 1987. "To Err is Human; To Study Error-making is Cognitive Science" (co-authored by David J. Moser), Michigan Quarterly Review, Vol. XXVIII, No. 2, 1989, pp. 185–215. "Speechstuff and thoughtstuff: Musings on the resonances created by words and phrases via the subliminal perception of their buried parts", in Sture Allen (ed.), Of Thoughts and Words: The Relation between Language and Mind. Proceedings of the Nobel Symposium 92, London/New Jersey: World Scientific Publ., 1995, 217–267. "On seeing A's and seeing As", Stanford Humanities Review Vol. 4, No. 2 (1995) pp. 109–121. "Analogy as the Core of Cognition", in Dedre Gentner, Keith Holyoak, and Boicho Kokinov (eds.) The Analogical Mind: Perspectives from Cognitive Science, Cambridge, MA: The MIT Press/Bradford Book, 2001, pp. 499–538. Hofstadter has also written over 50 papers that were published through the Center for Research on Concepts and Cognition. Involvement in other books Hofstadter has written forewords for or edited the following books: The Mind's I: Fantasies and Reflections on Self and Soul (co-edited with Daniel Dennett), 1981. (, ) and () Alan Turing: The Enigma by Andrew Hodges, 1983. (Preface) Sparse Distributed Memory by Pentti Kanerva, Bradford Books/MIT Press, 1988. (Foreword) () Are Quanta Real? A Galilean Dialogue by J.M. Jauch, Indiana University Press, 1989. (Foreword) () Gödel's Proof (2002 revised edition) by Ernest Nagel and James R. Newman, edited by Hofstadter. In the foreword, Hofstadter explains that the book (originally published in 1958) exerted a profound influence on him when he was young. () Who Invented the Computer? The Legal Battle That Changed Computing History by Alice Rowe Burks, 2003. (Foreword) Alan Turing: Life and Legacy of a Great Thinker by Christof Teuscher, 2003. (editor) Brainstem Still Life by Jason Salavon, 2004. (Introduction) () Masters of Deception: Escher, Dalí & the Artists of Optical Illusion by Al Seckel, 2004. (Foreword) King of Infinite Space: Donald Coxeter, the Man Who Saved Geometry by Siobhan Roberts, Walker and Company, 2006. (Foreword) Exact Thinking in Demented Times: The Vienna Circle and the Epic Quest for the Foundations of Science by Karl Sigmund, Basic Books, 2017. Hofstadter wrote the foreword and helped with the translation. Translations Eugene Onegin: A Novel Versification from the Russian original of Alexander Pushkin), 1999. () The Discovery of Dawn from the Italian original of Walter Veltroni, 2007. () That Mad Ache, co-bound with Translator, Trader: An Essay on the Pleasantly Pervasive Paradoxes of Translation from the French original of Francoise Sagan), 2009. () See also American philosophy BlooP and FlooP Egbert B. 
Gebstadter Hofstadter points Hofstadter's butterfly Hofstadter's law List of American philosophers Meta Platonia dilemma Superrationality Notes References External links Stanford University Presidential Lecture – site dedicated to Hofstadter and his work "The Man Who Would Teach Machines to Think" by James Somers, The Atlantic, November 2013 issue Profile at Resonance Publications NF Reviews – bibliographic page with reviews of several of Hofstadter's books "Autoportrait with Constraint" – a short autobiography in the form of a lipogram Github repo of sourcecode & literature of Hofstadter's students work Douglas Hofstadter on the Literature Map 1945 births Living people 20th-century American writers 20th-century American philosophers 21st-century American poets 21st-century American philosophers 21st-century translators American people of Polish-Jewish descent American science writers Mathematics popularizers American skeptics Cognitive scientists Fellows of the American Academy of Arts and Sciences Indiana University faculty National Book Award winners Palo Alto High School alumni People from Palo Alto, California Philosophers of mind Jewish philosophers Jewish American writers Pulitzer Prize for General Non-Fiction winners Recreational mathematicians Stanford University alumni Translators of Alexander Pushkin University of Michigan faculty University of Oregon alumni Fellows of the Cognitive Science Society Center for Advanced Study in the Behavioral Sciences fellows 21st-century American non-fiction writers International School of Geneva alumni
48251812
https://en.wikipedia.org/wiki/Fldigi
Fldigi
Fldigi (short for Fast light digital) is a free and open-source program which allows an ordinary computer's sound card to be used as a simple two-way data modem. The software is mostly used by amateur radio operators who connect the microphone and headphone connections of an amateur radio SSB or FM transceiver to the computer's headphone and microphone connections, respectively. This interconnection creates a "sound card defined radio" whose available bandwidth is limited by the sound card's sample rate and the external radio's bandwidth. Such communications are normally done on the shortwave amateur radio bands in modes such as PSK31, MFSK, RTTY, Olivia, and CW (Morse code). Increasingly, the software is also being used for data on VHF and UHF frequencies using faster modes such as 8-PSK. Using this software, it is possible for amateur radio operators to communicate worldwide while using only a few watts of RF power. Fldigi software is also used for amateur radio emergency communications when other communication systems fail due to natural disaster or power outage. Transfer of files, emails, and FEMA ICS forms are possible using inexpensive radio hardware. Supported digital modes Portability Operating systems Fldigi is based on the lightweight portable graphics library FLTK and the C/C++ language. Because of this, the software can run on many different operating systems such as: Microsoft Windows (2000 or newer) macOS Linux, FreeBSD, OpenBSD, NetBSD, Solaris. Additionally, Fldigi is designed to compile and run on any POSIX compliant operating system that uses an X11 compatible window system / graphical user interface. Architectures The Fldigi software is written in highly portable C/C++ and can be used on many CPU architectures, including: amd64 i386 armhf/armel ia64 mips mipsel powerpc s390 s390x sparc Raspberry Pi. Sound systems Multiple sound systems are supported by Fldigi, allowing the program to abstract the Sound card hardware across differing hardware and operating systems. Open Sound System (OSS) Portaudio Pulseaudio Read / Write to WAV files (file I/O) Features NBEMS: The narrowband emergency messaging system Support for transmitting and receiving in all languages by using UTF-8 character encoding (some modes) Connection to external programs via TCP/IP port 7322 Ability to be used as a KISS modem via TCP/IP port 7342 Dual tone multi-frequency (DTMF) encoding and decoding Automatic switching of mode and frequency by use of Reed Solomon Identifier signal identification Inbuilt macro language and processor for programmable automated control Sound card oscillator frequency/skew correction Measure sound card oscillator's skew to atomic clock: WWV or WWVH Measure RF receiver frequency skew to atomic clock: WWV or WWVH Transmit a WWV-like time signal as a calibration reference Control of external transmit / receive radio hardware by using GPIO pins. (For embedded hardware) Simultaneous decoding of multiple morse code (CW) signals. Decoding of morse code (CW) by self-organizing map artificial neural network (trained artificial intelligence) The Fldigi Suite The "Fldigi Suite" consists of the Fldigi modem and all extension programs released by the same development group. Most of these extensions add more capabilities to Fldigi such as verified file transfer and message passing. Interconnection between these programs and the Fldigi modem is made over TCP/IP port 7322. 
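The "sound card as modem" idea described above can be illustrated with a rough sketch that is not Fldigi's own code and does not implement any real Fldigi mode: it simply encodes a bit string as a two-tone FSK audio signal (loosely RTTY-like) and writes it to a WAV file that could be played into a transceiver's audio input. The tone frequencies and baud rate are arbitrary illustrative values.

```python
# Rough sketch of the "sound card as modem" idea: encode bits as a two-tone
# audio (FSK) signal and write it to a WAV file. This is not Fldigi's code and
# does not implement any real Fldigi mode; tone choices and baud rate are
# illustrative only.
import math
import struct
import wave

SAMPLE_RATE = 8000             # samples per second
BAUD = 50                      # bits per second (roughly RTTY-like)
MARK, SPACE = 1615.0, 1785.0   # audio tone frequencies in Hz (illustrative)

def fsk_samples(bits: str):
    samples_per_bit = SAMPLE_RATE // BAUD
    phase = 0.0
    for bit in bits:
        freq = MARK if bit == "1" else SPACE
        step = 2 * math.pi * freq / SAMPLE_RATE
        for _ in range(samples_per_bit):
            yield int(32000 * math.sin(phase))
            phase += step   # continuous phase avoids clicks at bit boundaries

def write_wav(path: str, bits: str) -> None:
    with wave.open(path, "wb") as wav:
        wav.setnchannels(1)
        wav.setsampwidth(2)          # 16-bit samples
        wav.setframerate(SAMPLE_RATE)
        frames = b"".join(struct.pack("<h", s) for s in fsk_samples(bits))
        wav.writeframes(frames)

if __name__ == "__main__":
    # Encode the ASCII of "CQ" as a raw bit string (no real protocol framing).
    payload = "".join(f"{byte:08b}" for byte in b"CQ")
    write_wav("fsk_demo.wav", payload)
```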
Some programs in the Suite are however standalone tools used for utility or testing purposes only, with no connection to the main Fldigi modem. Flamp Flamp implements the Amateur Multicast Protocol by Dave Freese, W1HKJ, and is a tool for connectionless transfer of files to multiple users simultaneously without requiring any existing infrastructure. The program breaks a given file into multiple smaller pieces, checksums each piece, then transmits each piece one or more times. When all parts are correctly received, the sent file is re-assembled and can be saved by receiving stations. This program is useful for multicasting files over lossy connections such as those found on high frequency or during emergency communications. Flarq Flarq implements the ARQ specification developed by Paul Schmidt, K9PS, to transfer emails, text files, images, and binary files over radio. This protocol is unicast and connection-based. The software integrates with existing email clients such as Microsoft Outlook, Mozilla Thunderbird, and Sylpheed. Flmsg Flmsg allows users to send, receive, edit, and create pre-formatted forms. Such a system speeds the flow of information during emergency communications. The software has a number of built-in forms, including FEMA ICS forms, MARS reports & messages, Hospital ICS forms, Red Cross messages, and IARU and NTS messages. Flwrap Flwrap is a tool for sending files using a simplified drag-and-drop interface. Data compression is also available, which reduces data transfer times. FLNet FLNet assists net control operators in keeping track of multiple stations during digital amateur radio nets. FLLog FLLog is logging software that keeps a log of conversations between amateur radio operators in a database format known as ADIF. FLWkey FLWkey is a simple interface for controlling an external piece of hardware called a Winkeyer, a Morse code keyer that is adjustable via computer commands over USB. Flcluster Flcluster is a telnet client for remote DX cluster servers, which provide real-time reports of stations heard transmitting and their frequencies. It does not connect to Fldigi. Flaa Flaa is a control program for use with the RigExpert AA-xxxx series of antenna analyzers, and does not connect to Fldigi. Flrig FLRig is a component of the FLDigi suite of applications that enables computer-aided control of various radios using a serial or USB connection. Using FLRig in combination with FLDigi, settings such as frequency, power level, receiver gain and audio gain may be adjusted from the computer automatically or by user intervention. Test Tools The Fldigi development group also releases a number of open-source programs which assist in the testing, development, and comparison of different modes within Fldigi. LinSim CompText CompTTY RSID To identify the mode being transmitted, a signal called an RSID, or Reed-Solomon Identifier, can be transmitted before the data. Using this identifier, the receiving software can automatically switch to the proper mode for decoding. The assignment of these identifiers to new modes is coordinated to ensure interoperation between programs. Currently 7 sound card digital modem programs support this standard. PocketDigi FDMDV DM780 Multipsk Fldigi AndFlmsg TIVAR RSID operates by sending a short burst of a specific modulation before the data signal, which can be used to automatically identify over 272 digital modes. This burst consists of a 10.766 baud 16-tone MFSK modulation where 15 tones/symbols are sent.
The burst occupies 172 Hz of bandwidth and lasts for 1.4 seconds. Software Architecture For simple keyboard-to-keyboard communication Fldigi can be operated using just the main window. For more complex uses or file transfer external programs can be attached to the internal TCP/UDP ports 7322 (ARQ), 7342 (KISS), and 7362 (XML-RPC). The image below helps to illustrate the interconnections and signal-flow within the Fldigi architecture. Community-provided extensions Fldigi allows external programs to attach and send / receive data by connecting to port 7322/ARQ or 7342/KISS. When used this way, Fldigi and the computer's sound card are acting as a "softmodem" allowing text or data sent on one computer to be transferred using the wireless radio link in-between. Programs which have a history of use with Fldigi as the underlying modem include: D-Rats - easy to use chatrooms, email, and file transfer over-radio. PSKmail - send and receive on-internet e-mail over a remote radio connection. Fldigiattach - attach Fldigi as modem for Linux AX.25 and TCP/IP connections. UIChat - Java-based amateur radio chat program. LinkUP - Program for unattended operation and person to person chat. Linux - Fldigi can be used in Linux as a KISS (TNC) modem for AX.25 and TCP/IP connections. Awards and recognitions At the 2014 Dayton Hamvention the project lead, Dave Freese (W1HKJ), was recognized with the Technical Excellence Award "for his development and distribution of the Fast Light Digital Modem Application (fldigi) family of programs for use in amateur and emergency communications." Fldigi was selected as SourceForge's June 2017 Staff 'Project of the Month' Fldigi was one of SourceForge's 'Projects of the Week' for Oct 17, 2016 Fldigi was selected as SourceForge's December 2017 Community Choice 'Project of the Month' Notable users Disaster relief services The software is also utilized by some organizations for both routine and disaster/emergency relief services. Multiple state and county Emergency operations centers W1AW (ARRL) Amateur Radio Emergency Services (ARES) Radio Amateur Civil Emergency Service (RACES) Civil Air Patrol (CAP) SATERN, the Salvation Army Team Emergency Radio Network SKYWARN a program of the United States' National Weather Service (NWS) whose mission is to collect reports of localized severe weather. Shortwave broadcasters Following the successful tests by the Voice of America's VOA Radiogram program, international and government shortwave broadcasters began testing and experimenting with digital data over shortwave broadcast channels using the Fldigi software. These tests led to regular weekly digital broadcasts by the broadcasters listed below. VOA Radiogram, service terminated in 2017 and continuing as Shortwave Radiogram. In June 2017, following the demise of VOA Radiogram, Shortwave Radiogram began broadcasting digital data-streams using Fldigi via WRMI in Miami and Space Line in Bulgaria. Radio Havana Cuba Radio Moscow Radio Australia Radio Miami International Italian Broadcasting Corporation WBCQ (SW) Mighty KBC MARS The Fldigi suite of programs has become popular within the U.S. Army and U.S. Air Force Military Auxiliary Radio System. 
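As described above, external programs attach to Fldigi over local TCP ports, including an XML-RPC interface on port 7362. A minimal client can be written with Python's standard xmlrpc library; the specific method names used below (fldigi.version, modem.get_name) are assumptions that should be checked against the method list a running Fldigi instance reports via standard XML-RPC introspection.

```python
# Minimal sketch of attaching an external program to Fldigi's XML-RPC port
# (7362, as described above) using only Python's standard library. The method
# names fldigi.version and modem.get_name are assumptions to verify against
# your Fldigi version, e.g. by inspecting the list from system.listMethods.
import xmlrpc.client

FLDIGI_URL = "http://localhost:7362"   # default XML-RPC port mentioned above

def main() -> None:
    fldigi = xmlrpc.client.ServerProxy(FLDIGI_URL)
    # Standard XML-RPC introspection: enumerate whatever methods this
    # Fldigi build actually exposes before relying on any of them.
    for method in fldigi.system.listMethods():
        print(method)
    # Assumed method names; adjust to match the list printed above.
    print("version:", fldigi.fldigi.version())
    print("current modem:", fldigi.modem.get_name())

if __name__ == "__main__":
    main()
```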
Department of Homeland Security Fldigi is being used in-testing as part of the Department of Homeland Security Shares program, which utilizes "existing HF radio resources of government, critical infrastructure, and disaster response organizations to coordinate and transmit emergency messages" PSK Mail Fldigi is used as the underlying modem for the PSKmail project. PSK Mail allows users to retrieve and send normal emails over radio. AirChat In 2014 the group Anonymous released a communications tool named AirChat, which used Fldigi as the underlying modem. This provided a low speed yet reliable data connection using only moderate radio hardware. The AirChat software allows for anonymous transmissions of both encrypted and unencrypted messages over unencrypted channels. Decodeable broadcasts The broadcasts listed below are transmitted on a regular schedule and can be decoded using Fldigi. SITOR text forecasts and storm warnings WEFAX visual weather fax SYNOP surface synoptic observations NAVTEX warnings, forecasts, and safety information broadcasts VOA Radiogram Broadcasts W1AW Broadcasts See also Amateur Radio Shortwave Radio WSPR (amateur radio software) WSJT (amateur radio software) CW Skimmer Internet Radio Linking Project PSK31 RTTY American Radio Relay League References External links Quantized radio modulation modes Amateur radio software Amateur radio software for Linux Amateur radio software for Windows Free communication software Amateur radio software for macOS
44296721
https://en.wikipedia.org/wiki/ITerm2
ITerm2
iTerm2 is a terminal emulator for macOS, licensed under GPL-2.0-or-later. It was derived from and has mostly supplanted the earlier "iTerm" application. iTerm2 supports operating system features such as window transparency, full-screen mode, split panes, Exposé Tabs, Growl notifications, and standard keyboard shortcuts. Other features include customizable profiles and Instant Replay of past terminal input/output. See also List of terminal emulators Terminal (macOS), stock terminal emulator for macOS References External links Free software programmed in Objective-C Free terminal emulators MacOS-only free software Utilities for macOS
62113036
https://en.wikipedia.org/wiki/Microsoft%20Detours
Microsoft Detours
Microsoft Detours is an open-source library for intercepting, monitoring and instrumenting binary functions on Microsoft Windows. It is developed by Microsoft and is most commonly used to intercept Win32 API calls within Windows applications. Detours makes it possible to add debugging instrumentation and to attach arbitrary DLLs to any existing Win32 binary. Detours does not require other software frameworks as a dependency and works on ARM, x86, x64, and IA-64 systems. The interception code is applied dynamically at execution time. Detours is used by product teams at Microsoft and has also been used by ISVs. Prior to 2016, Detours was available in a free version limited to non-commercial, 32-bit use and a paid version for commercial use. Since 2016, the source code has been licensed under the MIT License and is available on GitHub. See also WinDbg Dr. Watson (debugger) Process Explorer ProcDump References Further reading External links Detours - Microsoft Research GitHub - microsoft/Detours API Hooking with MS Detours - CodeProject C++ libraries Formerly proprietary software Free and open-source software Microsoft development tools Microsoft free software Software using the MIT license 2002 software
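Detours performs interception by rewriting the first machine instructions of a target function so that calls jump to a user-supplied "detour", which can instrument the call and then continue to the preserved original code via a trampoline. That binary rewriting cannot be meaningfully reproduced in a few lines here; the sketch below is only a loose, language-level analogy of the same pattern, replacing a function with a wrapper that logs the call and forwards to the saved original, and it does not use the Detours API.

```python
# Loose, language-level analogy of what an interception library does: swap a
# target function for a wrapper that instruments each call and then forwards
# to the saved original. Detours itself does this for native Win32 code by
# rewriting machine code at run time; nothing below uses the Detours API.
import functools
import time

def detour(module, name):
    """Replace module.name with an instrumented wrapper; return an 'undo' hook."""
    original = getattr(module, name)

    @functools.wraps(original)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return original(*args, **kwargs)        # forward to the real function
        finally:
            elapsed = time.perf_counter() - start
            print(f"[detour] {name}{args!r} took {elapsed:.3f}s")

    setattr(module, name, wrapper)
    return lambda: setattr(module, name, original)  # call to remove the hook

if __name__ == "__main__":
    undo = detour(time, "sleep")
    time.sleep(0.1)     # this call is logged by the wrapper
    undo()
```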
24121950
https://en.wikipedia.org/wiki/Linoma%20Software
Linoma Software
Linoma Software was a developer of managed file transfer and encryption solutions. The company was acquired by HelpSystems in June 2016. Mid-sized companies, large enterprises and government entities use Linoma's (now HelpSystems's) solutions to protect sensitive data and comply with data security regulations such as PCI DSS, HIPAA/HITECH, SOX, GLBA and state privacy laws. Linoma's software runs on a variety of platforms including Windows, Linux, UNIX, IBM i, AIX, Solaris, HP-UX and Mac OS X. History Linoma Group, Inc. (the parent company of Linoma Software) was founded in 1994. The company was started in Lincoln, Nebraska by Robert and Christina Luebbe. Throughout most of the 1990s, the Linoma Group performed consulting and contract programming services for organizations in the Nebraska/Iowa area. Linoma Software was formed in 1998 to address the needs of the IBM AS/400 platform (now known as IBM i) by developing productivity tools to help IT departments and end users. These tools were sold throughout the world and helped Linoma establish itself as an innovative software company. In 2002, Linoma released Transfer Anywhere, which was a solution for automating and managing file transfers from the AS/400. Over the next 2–3 years, Linoma added encryption capabilities to Transfer Anywhere including support for Open PGP encryption, SFTP and FTPS. These encryption capabilities helped organizations protect sensitive data transmissions such as ACH Network payments, direct deposits, financial data, credit card authorizations, personally identifiable information (PII) and other confidential data. Linoma expanded into other platforms when it completely redesigned Transfer Anywhere into an open OS solution with a graphical browser-based interface, renaming it GoAnywhere Director. Released in early 2008, GoAnywhere Director included comprehensive security controls, key management, trading partner wizards and detailed audit trails for compliance requirements. In 2009, Linoma released GoAnywhere Services as a collection of secure file services including an FTP Server, FTPS Server, SFTP Server and HTTPS server. GoAnywhere Director and Services were merged in 2015 to become GoAnywhere MFT. GoAnywhere MFT merged the workflow automation capabilities (adapted from GoAnywhere Director) with secure FTP server and collaboration features (adapted from GoAnywhere Services). This provides a unified browser-based interface, centralized logging and reporting. GoAnywhere MFT is in the Managed File Transfer software category of products, but can also be used for ETL functions. GoAnywhere Gateway was released in 2010 as an enhanced reverse proxy to protect the DMZ and help organizations meet strict compliance requirements. GoAnywhere Gateway was enhanced in 2011 to provide forward proxy functions. Linoma Software also performs encryption of data at rest on the IBM i platform with its Crypto Complete product. This product also includes key management, security controls and audit trails for PCI compliance. In June 2016, Minneapolis, Minnesota-based HelpSystems acquired Linoma Software. In March 2019, GoAnywhere MFT, in combination with Clearswift’s ICAP and adaptive redaction product, was named a Security Solution of the Year finalist for the European IT & Software Excellence Awards 2019. 
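The Open PGP support mentioned above refers to the OpenPGP standard for encrypting and signing files before transfer. GoAnywhere's own implementation is not shown here; as a stand-in, the sketch below drives GnuPG (a widely used open-source implementation of the same standard) from Python to encrypt a file to a partner's public key and decrypt a received one. The presence of a gpg binary, an imported recipient key, and the file names are assumptions.

```python
# Illustration of the OpenPGP workflow referred to above, using GnuPG rather
# than GoAnywhere's own tooling. Assumes a `gpg` binary on PATH and an imported
# public key for the recipient; "partner@example.com" is a placeholder.
import subprocess

RECIPIENT = "partner@example.com"   # placeholder key ID / email

def pgp_encrypt(plain_path: str, cipher_path: str) -> None:
    """Encrypt a file to the recipient's public key before transfer."""
    subprocess.run(
        ["gpg", "--batch", "--yes",
         "--recipient", RECIPIENT,
         "--output", cipher_path,
         "--encrypt", plain_path],
        check=True,
    )

def pgp_decrypt(cipher_path: str, plain_path: str) -> None:
    """Decrypt a received file with the local private key."""
    subprocess.run(
        ["gpg", "--batch", "--yes",
         "--output", plain_path,
         "--decrypt", cipher_path],
        check=True,
    )

if __name__ == "__main__":
    pgp_encrypt("payroll.csv", "payroll.csv.pgp")              # example filenames
    pgp_decrypt("payroll.csv.pgp", "payroll_roundtrip.csv")
```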
Certifications VMware Ready Novell Ready for SUSE Linux Enterprise Server Works With Windows Server 2008 R2 IBM Ready for Power Systems Software – IBM Power Systems IBM Ready for Systems with Linux - IBM Chiphopper Associations Microsoft Partner Silver Independent Software Vendor (ISV) competency. Silver Application Integration competency. IBM Advanced Business Partner VMware Elite Partner Oracle Partner Network (OPN) PCI Security Standards COMMON Better Business Bureau Red Hat ISV Partner OpenPGP Alliance Apple Developer Novell ISV Partner GSA Advantage Schedule Software GoAnywhere MFT GoAnywhere MFT is a managed file transfer solution for the exchange of data between systems, employees, customers and trading partners. It provides a single point of control with security settings, detailed audit trails and reports. Data transfers are secured using protocols for FTP servers (FTPS, SFTP, and SCP) and Web servers (HTTPS and AS2). It supports popular encryption protocols and offers a NIST-certified FIPS 140-2 Validated Encryption module. GoAnywhere MFT's interface and workflow features help to eliminate the need for custom programs/scripts, single-function tools and manual processes that were traditionally needed. This improves the quality of file transfers and helps organizations to comply with data security policies and regulations. With integrated support for clustering, GoAnywhere MFT can process high volumes of file transfers for enterprises by load balancing processes across multiple systems. The clustering technology in GoAnywhere MFT also provides active-active automatic failover for disaster recovery. A secure email module is also available that allows users to send messages and files as secure packages. Recipients receive an email with a unique link to each package that allows them to view or download the files via a secure HTTPS connection. There is no limit on file size or type, and each package can be subject to password protection as well as other security features. GoAnywhere agents are lightweight applications that automate file transfers and workflows on remote and on-premises systems throughout the enterprise. Agents are managed by a central deployment of GoAnywhere MFT, allowing users to configure and schedule agent file transfers and business processes from the browser-based interface. For integration with cloud and web applications, GoAnywhere offers Cloud Connectors, built-in integrations that help users automatically move data to and from applications like SharePoint or Salesforce. A designer interface in the software also makes it possible for users to build custom connectors. GoDrive by GoAnywhere GoDrive is an on-premise solution that provides Enterprise File Sync and Sharing (EFSS) services for employees and partners. GoDrive files and folders can be easily shared between users with advanced collaboration features including file revision tracking, commenting, trash bin, media viewing and synchronization with Windows and Mac devices. GoDrive is an alternative to cloud-based file sharing services. It provides on-site file storage with localized control, end-to-end encryption, and detailed audit trails. If an end-user device is lost or stolen, its GoDrive data can be deactivated and wiped remotely. GoAnywhere Gateway GoAnywhere Gateway provides an additional layer of network security by masquerading server identities when exchanging data with trading partners. The application does not store user credentials or data in the DMZ / local network. 
When using a reverse proxy, inbound ports do not need to be opened into the private network, which is essential for compliance with PCI DSS, HIPAA, HITECH, SOX, GLBA and state privacy laws. The current version is 2.0.1. A reverse proxy is used by the application for the file-sharing services (for example, FTP/S, SFTP, HTTP/S servers) it front-ends in the DMZ. GoAnywhere Gateway's service broker binds file transfer requests to the appropriate service in the private network through a secure control channel. GoAnywhere Gateway makes connections to external systems on behalf of users and applications in the private network. Routing outbound requests through a centralized point helps manage file transfers through a firewall. This method keeps inbound ports closed. The forward proxy hides the identities and locations of internal systems for security purposes. GoAnywhere OpenPGP Studio GoAnywhere OpenPGP Studio is a free desktop tool that protects sensitive files using the OpenPGP encryption standard. Documents can be encrypted, decrypted, signed and verified from a PC or workstation using this tool. An integrated key manager allows users to create, import, export and manage OpenPGP keys needed to encrypt and decrypt files. GoAnywhere OpenPGP Studio will run on almost any operating system including Windows, Linux, Mac OS X, Solaris and UNIX. Crypto Complete Crypto Complete is a program for the IBM i that protects sensitive data using strong encryption, tokenization, integrated key management, and auditing. This software encrypts database fields, can automatically encrypt IFS files. The application also locates sensitive information that should be encrypted using the FNDDBFLD utility, which is available at no cost to IBM i users. The current version is 3.3.0. The key management system is integrated within the Crypto Complete policy controls, encryption functions and auditing facilities. Along with the integrated security native to the IBM i, access to key maintenance/usage activities is controlled to help meet compliance requirements. The backup encryption component encrypts the data written to tape devices. Crypto Complete encrypts the backups of any user data in IBM i libraries, objects, and IFS files. The field encryption registry works with IBM's Field Procedures and remembers which fields in a database should be encrypted. This process can be automated whenever any data is added to the field. When the data is decrypted, the returned values are masked or displayed based on the authority of the user. Tokenization is the process of replacing sensitive data with unique identification numbers (tokens) and storing the original data on a central server (typically in encrypted form). Tokenization can help thwart hackers and minimize the scope of compliance audits when it is stored in a single central location. Tokenization is used to protect sensitive data like credit card personal account numbers (PAN), bank account numbers, social security numbers, driver's license numbers and other personally identifiable information (PII). Surveyor/400 A productivity suite for working with IBM i data, files, libraries, and objects. Surveyor/400 operates in a GUI front-end, but provides options for either IBM 5250 or "Command Line" emulation. The current version is 4.0.4. RPG Toolbox RPG Toolbox was developed to help developers upgrade their older RPG and System/36 code to the new RPG IV or OS/400 standard. The program allows developers to save code "snippets" for re-use or testing. 
The current version is 4.06 Platforms The GoAnywhere applications are VMware Ready and operate in a virtualized or static environment on the following operating systems. Linux Novell SUSE Linux Enterprise Server (SLES) Red Hat Enterprise Linux Unix Mac OS X Windows HP-UX Solaris IBM System p (AIX) IBM i IBM System z Amazon EC2 Microsoft Azure See also Comparison of FTP server software Notes External Reviews/Links Business Wire - GoAnywhere Services Four Hundred Stuff - Crypto Complete 2.2 Secure communication Managed file transfer
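As a rough illustration of the OpenPGP operations described in the GoAnywhere OpenPGP Studio section above (encrypting and decrypting a file with managed keys), the following minimal Python sketch uses the third-party python-gnupg wrapper around a local GnuPG installation. It is not code from the GoAnywhere products, and the key directory, file names, recipient address and passphrase are hypothetical.

import gnupg

# Point the wrapper at a local GnuPG home directory (hypothetical path).
gpg = gnupg.GPG(gnupghome="/home/user/.gnupg")

# Encrypt a file for a recipient whose public key has already been imported.
with open("report.csv", "rb") as plaintext:
    enc = gpg.encrypt_file(plaintext, recipients=["partner@example.com"],
                           output="report.csv.pgp")
print("encrypted:", enc.ok, enc.status)

# Decrypt it again with the matching private key and its passphrase.
with open("report.csv.pgp", "rb") as ciphertext:
    dec = gpg.decrypt_file(ciphertext, passphrase="hypothetical-passphrase",
                           output="report.decrypted.csv")
print("decrypted:", dec.ok, dec.status)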
13902330
https://en.wikipedia.org/wiki/The%20Ambassador%20%281984%20American%20film%29
The Ambassador (1984 American film)
The Ambassador is a 1984 American political thriller film directed by J. Lee Thompson and starring Robert Mitchum, Ellen Burstyn, Rock Hudson and Allan Younger. It was the last theatrical release starring Rock Hudson before his death in October 1985. Plot U.S. Ambassador to Israel Peter Hacker (Robert Mitchum) and head of security Frank Stevenson (Rock Hudson) are en route to a secret location in the Judaean Desert to meet with representatives of the Palestine Liberation Organization (PLO). It is part of Hacker's secret plan to have young Jews and Muslims begin a peaceful dialogue. An armed Israeli helicopter locates and disrupts the meeting by firing on it, causing several deaths. Hacker and Stevenson survive and are apprehended by the Israeli military. Alex Hacker (Ellen Burstyn), the ambassador's troubled and lonely wife, is in Jerusalem, where she is secretly meeting her lover. However, she is followed and their tryst is caught on film by an unknown entity. Hacker and Stevenson are taken to the office of Israeli Defense Minister Eretz (Donald Pleasence), who confronts them for not informing him of the meeting and reiterates his opposition to Hacker's peace efforts. Upon returning to the American embassy, Stevenson makes contact with a secret superior, to whom he voices his concerns and his wish to see an end to Hacker's assignment as ambassador. At a diplomatic function later that night, Alex is drunk and making a scene. She leaves early by taxi to meet with her lover once again. While Alex calls her husband from a phone booth in front of his apartment, an explosion goes off, injuring her and killing several others. Hacker and Stevenson head back to the ambassador's residence, not knowing Alex's whereabouts. Hacker is telephoned by an unknown man telling him to make contact at a movie theater, alone. After his arrival, he enters the damaged building, where the film of his wife's infidelity plays on a movie screen. Stevenson, who is not far behind, shares in the discovery. Hacker is informed that his wife is safe and making a full recovery in a hospital. Hacker and Stevenson visit her, where she tells him that she wants to get out of Israel. Back in his office, Hacker is again contacted by the unknown man. Conditions are made that if one million dollars in hush money is not paid, the film will be released and a private copy will be made available to the President of the United States. Hacker refuses. The blackmailers also mention the name of Alex's lover, prompting him to have Stevenson investigate further. Hacker later confronts his wife that night and tells her about the scheme to blackmail him. Alex again visits her lover to find his true identity. He turns out to be Mustapha Hashimi (Fabio Testi), a wealthy businessman and PLO member. Minister Eretz is informed of the situation and finds the film was made by Mossad agents to keep tabs on the Hackers, although some prints of the film have since been stolen. Stevenson makes headway, finding the location where the film was developed, and visits the print shop looking for answers. After being duped and knocked out, he catches a woman from the shop and offers her protection. She then reveals the identity of the blackmailers. Hashimi is also blackmailed, for $500,000, and decides to pay. After learning of this, Hacker sets up a meeting with Hashimi and sees an opportunity to use Hashimi's influence within the PLO to have a peaceful meeting between Jewish and Muslim students.
Having learned the identity of the blackmailers from the print shop woman, Stevenson interrogates the blackmailers, who reveal that Hacker is being pursued by a KGB assassin named Stone. Hacker conducts the meeting with Israeli and Palestinian students at an ancient Roman ruin outside of Tel Aviv and it ends on a positive note with real progress being made between the two groups. However, Palestinian terrorists ambush the students, causing a bloodbath and Hashimi’s assassination. Israeli authorities, Alex and Stevenson arrive to find Hacker alive and head back to the residence where the KGB assassin (Stone) is waiting for Hacker. Just as Stone is about to make a clean shot from his car, Stevenson shoots Stone in the back of the head, leaving the ambassador unscathed. While sitting with his wife, Hacker tells her that he is thinking of resigning, but she disagrees and favors him staying on. He later walks outside onto his front porch only to see a group of young Israeli students holding a peace rally, bringing him to tears. Cast Robert Mitchum as U.S. Ambassador to Israel Peter Hacker Ellen Burstyn as Alex (née Douglas) Hacker Rock Hudson as Frank Stevenson: Head of Security Donald Pleasence as Israeli Defense Minister Eretz Fabio Testi as Mustapha Hashimi Chelli Goldenberg as Rachel Zachi Noy as Ze'ev Michal Bat-Adam as Tova Yosef Shiloach as Shimon Shmulik Kraus as Stone Production The political thriller was loosely based on the 1974 crime novel 52 Pick-Up by Elmore Leonard. Leonard is not credited on the final credits and says on his official site, "Monahem (sic) Golan hired me to adapt my novel, Fifty Two Pickup, and set it in Tel Aviv. I wrote two drafts and then told him to get another writer. He did and the result was The Ambassador which has nothing to do with Fifty-Two Pickup. It has none of my characters, none of my situations, nothing. But he still owed me for the screen rights and had to pay up before he could release the picture." The Ambassador is the first use of film rights to Leonard's novel; in 1986 the novel was adapted under its original title, 52 Pick-Up, by Cannon Films under the direction of John Frankenheimer. References External links 1984 films 1980s political thriller films Adultery in films American films English-language films Golan-Globus films American political thriller films Israeli–Palestinian conflict films Films based on American novels Films based on works by Elmore Leonard Films directed by J. Lee Thompson Films set in the Israeli Military Governorate Films about diplomats Films set in Jerusalem Films shot in Israel Films produced by Menahem Golan Films produced by Yoram Globus
57604974
https://en.wikipedia.org/wiki/Ujjwal%20Maulik
Ujjwal Maulik
Ujjwal Maulik is an Indian computer scientist and a professor. He is the former chair of the Department of Computer Science and Engineering at Jadavpur University, Kolkata, West Bengal, India. He also held the positions of principal-in-charge and head of the Department of Computer Science and Engineering at Kalyani Government Engineering College. Education Maulik did his schooling at Nabadwip Bakul Tala High School, Nabadwip, Nadia, and Rahara Ramkrishna Mission, Rahara, 24 Parganas (North), both in West Bengal. Subsequently, he completed a B.Sc. in Physics and a B.Tech. in Computer Science at Calcutta University, Kolkata, West Bengal, India, in 1986 and 1989 respectively. He received his M.Tech. in Computer Science and Ph.D. in Engineering in 1992 and 1997 respectively from Jadavpur University, Kolkata, West Bengal, India. Research Maulik did post-doctoral research at the University of New South Wales, Australia, in 1999 and the University of Texas at Arlington, U.S., in 2001. As an Alexander von Humboldt Experienced Researcher, he worked at the German Cancer Research Center and Ruprecht Karl University of Heidelberg, Germany, in 2010, 2011, and 2012. He was a senior associate of the International Centre for Theoretical Physics (ICTP), Italy, from 2012 to 2018. Maulik is a Fellow of the West Bengal Academy of Science and Technology, the Indian National Academy of Engineering (INAE), the National Academy of Sciences, India (NASI), the International Association for Pattern Recognition (IAPR), and the Institute of Electrical and Electronics Engineers (IEEE). He is also a Distinguished Member and Distinguished Speaker of the ACM and a Distinguished Lecturer of the IEEE Computational Intelligence Society. His research interests include data science, machine learning, bioinformatics, and the Internet of things. Awards and recognition Distinguished Member, ACM, 2021 Elected Fellow, National Academy of Sciences, India (NASI), 2021 Siksharatna Award, Government of West Bengal, 2021 Elected Fellow, Institute of Electrical and Electronics Engineers (IEEE), 2020 Elected Fellow, International Association for Pattern Recognition (IAPR), 2018 Elected Fellow, Indian National Academy of Engineering (FNAE), India, 2014 Senior Associate, International Centre for Theoretical Physics (ICTP), 2012–2018 Alexander von Humboldt Fellowship (AvH) for Experienced Researchers, Germany, 2010 References External links https://sites.google.com/site/drujjwalmaulik https://scholar.google.co.in/citations?user=CW6heYUAAAAJ&hl=en https://dblp.org/pid/93/3249.html http://www.guide2research.com/u/ujjwal-maulik https://orcid.org/0000-0003-1167-0774 Living people Jadavpur University faculty Indian computer scientists University of Calcutta alumni Jadavpur University alumni Fellows of the Indian National Academy of Engineering 1965 births
19971608
https://en.wikipedia.org/wiki/TestComplete
TestComplete
TestComplete is a functional automated testing platform developed by SmartBear Software. TestComplete gives testers the ability to create automated tests for Microsoft Windows, Web, Android, and iOS applications. Tests can be recorded, scripted or manually created with keyword-driven operations and used for automated playback and error logging. TestComplete contains three modules: Desktop Web Mobile Each module contains functionality for creating automated tests on that specified platform. TestComplete is used for testing many different application types including Web, Windows, Android, iOS, WPF, HTML5, Flash, Flex, Silverlight, .NET, VCL and Java. It automates functional testing and back-end testing such as database testing. Overview Uses TestComplete is used to create and automate many different software test types. Record and playback test creation records a tester performing a manual test and allows it to be played back and maintained over and over again as an automated test. Recorded tests can be modified later by testers to create new tests or enhance existing tests with more use cases. Main Features Keyword Testing: TestComplete has a built-in keyword-driven test editor that consists of keyword operations that correspond to automated testing actions. Scripted Testing: TestComplete has a built-in code editor that helps testers write scripts manually. It also includes a set of special plug-ins that assist with script writing. Test Record and Playback: TestComplete records the key actions necessary to replay the test and discards all unneeded actions. Distributed Testing: TestComplete can run several automated tests across separate workstations or virtual machines. Access to Methods and Properties of Internal Objects: TestComplete reads the names of the visible elements and many internal elements of Delphi, C++Builder, .NET, WPF, Java and Visual Basic applications and allows test scripts to access these values for verification or use in tests. Bug Tracking Integration: TestComplete includes issue-tracking templates that can be used to create or modify items stored in issue-tracking systems. TestComplete currently supports Microsoft Visual Studio 2005, 2008, 2010 Team System, Bugzilla, Jira and AutomatedQA AQdevTeam. Data-Driven Testing: Data-driven testing with TestComplete means using a single test to verify many different test cases by driving the test with input and expected values from an external data source instead of using the same hard-coded values each time the test runs. COM-based, Open Architecture: TestComplete's engine is based on an open API and a COM interface. It is source-language independent, and can read debugger information and use it at runtime through the TestComplete Debug Info Agent. Test Visualizer: TestComplete automatically captures screenshots during test recording and playback. This enables quick comparisons between expected and actual screens during a test. Extensions and SDK: Everything visible in TestComplete — panels, project items, specific scripting objects, and others — is implemented as a plug-in. These plug-ins are included in the product and installed along with the other TestComplete modules. Users can create their own plug-ins to extend TestComplete and provide specific functionality for their own needs.
For example, plug-ins can be created, or third-party plug-ins used, for: Support for custom controls Custom keyword test operations New scripting objects Custom checkpoints Commands for test result processing Panels Project items Menu and toolbar items Supported testing types Functional (or GUI) Testing Regression testing Unit testing Keyword testing Web Testing Mobile application testing Distributed Testing Functional and load testing of web services Coverage Testing Data-Driven Testing Manual Testing Supported scripting languages JavaScript Python VBScript JScript C++Script (specific dialect based on JScript supported by TestComplete - deprecated in version 12) C#Script (specific dialect based on JScript supported by TestComplete - deprecated in version 12) DelphiScript VB Supported applications Support for all 32-bit and 64-bit Windows applications. Extended support, with access to internal objects, methods and properties, for the following: .NET (C#, VB.NET, JScript.NET, VCL.NET, C#Builder, Python .NET, Perl .NET etc.) WPF Java (AWT, SWT, Swing, WFC) Android iOS Xamarin (with the implementation of the Falafel Software bridge) Sybase PowerBuilder, Microsoft FoxPro, Microsoft Access, Microsoft InfoPath Web browsers (Internet Explorer, Firefox, Google Chrome, Opera, Safari) Visual C++ Visual Basic Visual FoxPro Delphi C++Builder Adobe Flash Adobe Flex Adobe AIR Microsoft Silverlight HTML5 Chromium PhoneGap Awards The World of Software Development - Dr. Dobb's Jolt Awards: 2005, 2007, 2008, 2010, 2013, 2014 ATI Automation Honors: 2010, 2014 (Overall subcategory; Java subcategory) asp.netPRO Readers' Choice Awards: 2004, 2005, 2006, 2007, 2009 Windows IT Pro Editors' Best and Community Choice Awards: 2009 Delphi Informant Readers Choice Awards as the Best in the Testing/QA Tool category: 2003, 2004 See also Selenium (software) Test automation GUI software testing List of GUI testing tools References External links Software testing tools Graphical user interface testing Unit testing
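The data-driven testing approach listed among the supported test types above can be illustrated outside of TestComplete with a short, generic Python sketch: a single test is driven by rows read from an external CSV file instead of hard-coded values. This is a minimal illustration of the concept only, not TestComplete's scripting API, and the function under test, the file name and the column names are hypothetical.

import csv
import unittest

def discount(price, percent):
    # Function under test: apply a percentage discount to a price.
    return round(price * (1 - percent / 100.0), 2)

class DiscountDataDrivenTest(unittest.TestCase):
    def test_discount_from_csv(self):
        # Each row of the external data source supplies the inputs and the expected value.
        with open("discount_cases.csv", newline="") as f:
            for row in csv.DictReader(f):
                with self.subTest(case=row):
                    actual = discount(float(row["price"]), float(row["percent"]))
                    self.assertEqual(actual, float(row["expected"]))

if __name__ == "__main__":
    unittest.main()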
3459315
https://en.wikipedia.org/wiki/Blended%20threat
Blended threat
A blended threat (also known as a blended attack) is a software exploit that involves a combination of attacks against different vulnerabilities. Blended threats can be any software that exploits techniques to attack and propagate threats, for example worms, trojan horses, and computer viruses. Description Complex threats consist of two or more attacks, such as multiple attacks of the same kind. Examples of complex threats include a series of coordinated physical hostilities, such as the Paris terrorist attacks in 2015, or a combination of threats, such as a cyberattack combined with a distinct physical attack, which may be coordinated. In more recent years, cyber attacks such as the Stuxnet and Triton (Trisis) malware have demonstrated an increased ability to affect physical systems, and ransomware attacks such as WannaCry and NetWalker have caused widespread disruption. Recognizing that threats to computer systems can also arise from physical hazards, the term "blended threat" has also been defined as a natural, accidental, or purposeful physical or virtual danger that has the potential for crossover impacts or to harm life, information, operations, environment, and property. This is an adaptation based on terminology from the 2010 US Department of Homeland Security's Risk Lexicon. Illustrating how rapidly and dangerously this can play out, Sarah Coble (writing in Infosecurity Magazine on 12 June 2020) reported that the life of Jessica Hatch, a Houston business owner, was "threatened after cyber-criminals hacked into her company's social media account and posted racist messages". The founder and CEO of Infinity Diagnostics Center said that her company's Instagram account was compromised by an unknown malicious hacker and that, after gaining access to the account, the threat actor uploaded multiple stories designed to paint Hatch and her business as racist. In the post "Blended Threats: Protests! Hacking? Death Threats!?!", Gate 15 highlighted that risk management processes need to account for a complex and blended threat environment. On 6 September 2020, Argentina's official immigration agency, Dirección Nacional de Migraciones, suffered a NetWalker ransomware attack that temporarily halted border crossings into and out of the country. Blended threats, in the form of cyber attacks, have evolved to cause loss of life. On 10 September 2020, German authorities said a hacker attack caused the failure of IT systems at University Hospital Düsseldorf (UKD), and a woman who needed urgent admission died after she had to be taken to another city for treatment. According to The Guardian, in a worst-case scenario, crackers could potentially carry out "cyber-physical attacks by turning satellite antennas into weapons that can operate like microwave ovens." On September 10, 2019, the Cyber Threat Alliance (CTA) released a new joint analysis product titled "The Illicit Cryptocurrency Threat" that said illicit cryptocurrency mining had overtaken ransomware as the biggest cyber threat to businesses. The CTA said mining attacks had become one of the most common attacks their clients encounter. Blended threats may also compromise healthcare systems, many of which need an Internet connection to operate, as do numerous other medical devices such as pacemakers, making the latter part of the Internet of Things (IoT), a growing network of connected devices which are potentially vulnerable to cyber attack. By 2020, threats had already been reported in medical devices.
A crucial flaw that could expose users to an attack has been discovered in 500,000 pacemakers. Additionally, security researchers revealed a chain of vulnerabilities in one brand of pacemaker that an attacker could exploit to control implanted pacemakers remotely and cause physical harm to patients. On 16 July 2019, a mother delivered her baby at the Springhill Medical Center in Mobile, Alabama. The mother, Kidd, was not informed that Springhill was struggling with a cyberattack when she went in to deliver her daughter, and doctors and nurses then missed a number of key tests that would have shown that the umbilical cord was wrapped around the baby's neck, leading to brain damage and the baby's death nine months later. On February 5, 2021, unidentified cyber actors accessed the supervisory control and data acquisition (SCADA) system of a drinking water treatment plant in Oldsmar, Florida. Once the system was accessed, the intruders manipulated the level of sodium hydroxide, also known as lye or caustic soda, from a setting of 100 parts per million to 11,100 parts per million. At high levels, sodium hydroxide can severely damage human tissue. It is the main ingredient in liquid drain cleaners, but at low levels it is used to control water acidity and remove metals from drinking water. On May 7, 2021, Colonial Pipeline, an American oil pipeline system that originates in Houston, Texas, and carries gasoline and jet fuel mainly to the Southeastern United States, suffered a ransomware cyberattack that impacted computerized equipment managing the pipeline. The ransomware attack crippled delivery of about 3 million barrels of fuel per day between Texas and New York. The attack caused fuel shortages up and down the East Coast of the United States. On May 30, 2021, meat supplier JBS suffered a ransomware attack. All JBS-owned beef facilities in the United States were rendered temporarily inoperative. The attack caused a spillover effect into the farming and restaurant industries. On 21 September 2021, Iowa-based provider of agriculture services NEW Cooperative Inc. was hit by a ransomware attack, forcing it to take its systems offline. The BlackMatter group behind the attack put forth a $5.9 million ransom demand. NEW Cooperative Inc., a farming cooperative, said the attack could significantly impact the public supply of grain, pork, and chicken if it could not bring its systems back online. On 26 October 2021, Schreiber Foods, a Wisconsin-based milk distributor, was victimized by hackers demanding a rumored $2.5 million ransom to unlock its computer systems. Wisconsin milk handlers and haulers reported getting calls from Schreiber on Saturday (Oct. 23) saying that the company's computer systems were down and that its plants could not take the milk that had been contracted to go there. Haulers and schedulers were forced to find alternative destinations for the milk. See also Timeline of computer viruses and worms Comparison of computer viruses List of trojan horses References External links McAfee whitepaper on blended threats Computer security exploits Types of malware
17424405
https://en.wikipedia.org/wiki/Vishwakarma%20Institute%20of%20Information%20Technology
Vishwakarma Institute of Information Technology
The Vishwakarma Institute of Information Technology is an autonomous institute of engineering in Pune, India. Established in 2002, it is affiliated to the Savitribai Phule Pune University. The college is run by the Bansilal Ramanath Agarwal Charitable Trust. In 2017 the University Grants Commission granted it autonomous status. It is consistently ranked as one of the top colleges in Pune, with the recognition and approval of the All India Council for Technical Education, New Delhi. The institute has been accredited by NAAC with an A grade and a CGPA of 3.14. In 2016, it was named 'Outstanding Engineering Institute - West' by the Vijayavani National Education Leadership Awards. Courses The Vishwakarma Institute offers the following courses. Ph.D. Program Ph.D. in Electronics and Telecommunication Engineering Ph.D. in Civil Engineering Undergraduate Bachelor of Technology (B.Tech) courses are for a duration of four years: Computer Engineering Electronics Engineering Electronics and Telecommunication Engineering Civil Engineering Information Technology Mechanical Engineering Artificial Intelligence & Data Science College activities Gandharva (Annual Techno-Cultural Fest) Every year since 2002, the college has hosted Gandharva, an annual techno-cultural fest. It is considered one of Pune's most anticipated techno-cultural fests and is attended by students from various colleges in and outside the city. Gandharva includes various events such as sports days, a funfair, quizzes, workshops and technical competitions. In January 2012, the event was accompanied by the launch of a website, a logo and merchandise. It also includes cultural activities such as singing, music and dancing. Each year, Gandharva is celebrated for three to five days. National Service Scheme The college has established a strong group under the National Service Scheme for contributing to society. As part of this, the college has adopted the village of Jamgaon Disli, approximately 35 km from Pune near Mulshi, for various social development activities. As a result of students' efforts, the village of Jamgaon acquired the status of Nirmal Gram in January 2010. Sci-Tech (tech fest) Sci-Tech is an annual science exhibition in which students from FE (first year of engineering) to BE (final year of engineering) participate with their projects, new ideas and engineering research. Robocon Robocon is Asia's largest robotics competition. A student-organized team is sponsored by the college. Firodiya and Purushottam Karandak Team "Avishkar" is a cultural team of the college that participates in the prestigious acting and cultural competitions held in Pune. Student teams work on their acting and oratory skills. Over one lakh rupees is spent by the college on each of the two events. In the past few years, the Vishwakarma Institute has performed well in the Firodiya and Purushottam Karandak competitions. 2nd position in Firodiya Karandak 'Jayaprabha' in 2016. 3rd position in Firodiya Karandak 'Handle with care' in 2014. 2nd position in Firodiya Karandak 'Pazar' in 2013. 1st position in Firodiya Karandak 'Positive' in 2011. Finalist in Firodiya Karandak 'Time Please' in 2008. 4th position in Firodiya Karandak 'Daav' in 2017. Best Organizing Team in Purushottam Karandak 'To Mazi Vaat Pahat Asel' in 2010. Best Organizing Team in Purushottam Karandak 'Andhalyaa-Khidkyaa' in 2009. Entrepreneurship Development Cell The Entrepreneurship Development Cell is a student body organization that develops entrepreneurial skills.
It aims to bridge the gap between technical knowledge and management skills, and hosts an annual national entrepreneur's summit called "Vishwapreneur" that attracts entrepreneurs and government authorities to share their knowledge and experience. "CorpStrata", an annual event for first-year (FE) students, is also hosted. National-level business competitions, industrial visits and guest lectures are conducted throughout the year for students and professionals. As of 2018, the cell is headed by President Snehit Kumar, supported by a dedicated team of third-year engineering students under the mentorship of Professors Ravindra S. Acharya and Kirti Wanjale. The cell has also incubated multiple start-ups and encourages students to pursue their own. Sister institutes The BRACT runs the following institutes: Vishwakarma Institute of Information Technology, Kondhwa BK, Pune, Maharashtra Vishwakarma Institute of Technology, Bibvewadi, Pune, Maharashtra Sandipani Technical Campus (VIT), Kolpa, Nanded Road, Latur, Maharashtra References External links Official website Information technology institutes Savitribai Phule Pune University Engineering colleges in Pune Educational institutions established in 2002 2002 establishments in Maharashtra
32297447
https://en.wikipedia.org/wiki/Supercomputing%20in%20Japan
Supercomputing in Japan
Japan operates a number of centers for supercomputing which hold world records in speed. The K computer became the world's fastest in June 2011, and Fugaku took the lead in June 2020, extending it, as of November 2020, to three times the speed of the number two computer. According to Professor Jack Dongarra, who maintains the TOP500 list of supercomputers, the K computer's performance was impressive; it surpassed its next five competitors combined. The K computer cost US$10 million a year to operate. Previous records Japan's entry into supercomputing began in the early 1980s. In 1982, Osaka University's LINKS-1 Computer Graphics System used a massively parallel processing architecture, with 514 microprocessors, including 257 Zilog Z8001 control processors and 257 iAPX 86/20 floating-point processors. It was mainly used for rendering realistic 3D computer graphics. It was the world's most powerful computer as of 1984. The SX-3 supercomputer family was developed by NEC Corporation and announced in April 1989. The SX-3/44R became the fastest supercomputer in the world in 1990. Fujitsu's Numerical Wind Tunnel supercomputer gained the top spot in 1993. Japanese supercomputers continued to top the TOP500 lists up until 1997. The K computer took the top spot seven years after Japan last held the title in 2004, when the Earth Simulator, built by NEC at the Japan Agency for Marine-Earth Science and Technology (JAMSTEC), was the fastest in the world. It used 5,120 NEC SX-6i processors, generating a performance of 28,293,540 MIPS (million instructions per second). It also had a peak performance of 131 TFLOPS (131 trillion floating-point operations per second), using proprietary vector processing chips. The K computer used over 60,000 commercial scalar SPARC64 VIIIfx processors housed in over 600 cabinets. The fact that the K computer was over 60 times faster than the Earth Simulator, and that the Earth Simulator ranked as the 68th system in the world seven years after holding the top spot, demonstrates both the rapid increase in top performance in Japan and the widespread growth of supercomputing technology worldwide. Supercomputing centers The GSIC Center at the Tokyo Institute of Technology houses the Tsubame 2.0 supercomputer, which has a peak of 2,288 TFLOPS and in June 2011 ranked 5th in the world. It was developed at the Tokyo Institute of Technology in collaboration with NEC and HP, and has 1,400 nodes using both HP ProLiant and NVIDIA Tesla processors. The RIKEN MDGRAPE-3 for molecular dynamics simulations of proteins is a special-purpose petascale supercomputer at the Advanced Center for Computing and Communication, RIKEN, in Wakō, Saitama, just outside Tokyo. It uses over 4,800 custom MDGRAPE-3 chips, as well as Intel Xeon processors. However, given that it is a special-purpose computer, it cannot appear on the TOP500 list, which requires Linpack benchmarking. The next significant system is the Japan Atomic Energy Agency's Fujitsu PRIMERGY BX900 supercomputer. It is significantly slower, reaching 200 TFLOPS and ranking as the 38th in the world in 2011. Historically, the Gravity Pipe (GRAPE) system for astrophysics at the University of Tokyo was distinguished not by its top speed of 64 TFLOPS, but by its cost and energy efficiency, having won the Gordon Bell Prize in 1999, at about $7 per megaflops, using special-purpose processing elements. DEGIMA is a highly cost- and energy-efficient computer cluster at the Nagasaki Advanced Computing Center, Nagasaki University.
It is used for hierarchical N-body simulations and has a peak performance of 111 TFLOPS with an energy efficiency of 1376 MFLOPS/watt. The overall cost of the hardware was approximately US$500,000. The Computational Simulation Centre, International Fusion Energy Research Centre of the ITER Broader Approach/Japan Atomic Energy Agency operates a 1.52 PFLOPS supercomputer (currently operating at 442 TFLOPS) in Rokkasho, Aomori. The system, called Helios (Roku-chan in Japanese), consists of 4,410 Groupe Bull bullx B510 compute blades, and is used for fusion simulation projects. The University of Tokyo's Information Technology Center in Kashiwa, Chiba, began operating Oakleaf-FX in April 2012. This supercomputer is a Fujitsu PRIMEHPC FX10 (a commercial version of the K computer) configured with 4,800 compute nodes for a peak performance of 1.13 PFLOPS. Each of the compute nodes is a SPARC64 IXfx processor connected to other nodes via a six-dimensional mesh/torus interconnect. In June 2012, the Numerical Prediction Division, Forecast Department of the Japan Meteorological Agency deployed an 847 TFLOPS Hitachi SR16000/M1 supercomputer, which is based on the IBM Power 775, at the Office of Computer Systems Operations and the Meteorological Satellite Center in Kiyose, Tokyo. The system consists of two SR16000/M1s, each a cluster of 432-logical nodes. Each node consists of four 3.83 GHz IBM POWER7 processors and 128 GB of memory. The system is used to run a high-resolution (2 km horizontally and 60 layers vertically, up to 9-hour forecast) local weather forecast model every hour. Grid computing Starting in 2003, Japan used grid computing in the National Research Grid Initiative (NAREGI) project to develop high-performance, scalable grids over very high-speed networks as a future computational infrastructure for scientific and engineering research. See also Computer science Computing Fifth generation computer History of supercomputing Personal supercomputer Supercomputer architecture Supercomputing in China Supercomputing in Europe Supercomputing in India Supercomputing in Pakistan TOP500 References External links GSIC Center, Tokyo Institute of Technology The GRAPE site at the University of Tokyo Supercomputer sites Supercomputing Science and technology in Japan
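As a rough worked example based on the DEGIMA figures quoted above (and assuming the peak-performance and energy-efficiency numbers refer to the same configuration), 111 TFLOPS at 1376 MFLOPS/watt implies a total power draw of roughly 80 kW:

# Rough arithmetic check using the DEGIMA figures quoted above.
peak_mflops = 111e6            # 111 TFLOPS expressed in MFLOPS
mflops_per_watt = 1376         # quoted energy efficiency
implied_power_w = peak_mflops / mflops_per_watt
print(f"Implied power draw: {implied_power_w / 1000:.0f} kW")  # about 81 kW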
35862485
https://en.wikipedia.org/wiki/Microsoft%20SmartScreen
Microsoft SmartScreen
SmartScreen (officially called Windows SmartScreen, Windows Defender SmartScreen and SmartScreen Filter in different places) is a cloud-based anti-phishing and anti-malware component included in several Microsoft products, including Windows 8 and later, Internet Explorer, Microsoft Edge and Outlook.com. It is designed to help protect users against attacks that utilize social engineering and drive-by downloads to infect a system by scanning URLs accessed by a user against a denylist of websites containing known threats. With the Windows 10 Creators Update, Microsoft placed the SmartScreen settings into the Windows Defender Security Center. SmartScreen in Internet Explorer Internet Explorer 7: Phishing Filter SmartScreen was first introduced in Internet Explorer 7, then known as the Phishing Filter. Phishing Filter does not check every website visited by the user, only those that are known to be suspicious. Internet Explorer 8: SmartScreen Filter With the release of Internet Explorer 8, the Phishing Filter was renamed to SmartScreen and extended to include protection from socially engineered malware. Every website and download is checked against a local list of popular legitimate websites; if the site is not listed, the entire address is sent to Microsoft for further checks. If it has been labeled as an impostor or harmful, Internet Explorer 8 will show a screen prompting that the site is reported harmful and shouldn't be visited. From there the user can either visit their homepage, visit the previous site, or continue to the unsafe page. If a user attempts to download a file from a location reported harmful, then the download is cancelled. The effectiveness of SmartScreen filtering has been reported to be superior to socially engineered malware protection in other browsers. According to Microsoft, the SmartScreen technology used by Internet Explorer 8 was successful against phishing or other malicious sites and in blocking of socially engineered malware. Beginning with Internet Explorer 8, SmartScreen can be enforced using Group Policy. Internet Explorer 9: Application Reputation Building on top of the SmartScreen Filter introduced in Internet Explorer 8, Internet Explorer 9's protection against malware downloads is extended with SmartScreen Application Reputation that detects untrustworthy executables. This warns a person if they are downloading an executable program without a safe reputation, from a site that does not have a safe reputation. Internet Explorer Mobile 10 Internet Explorer Mobile 10 was the first release of Internet Explorer Mobile to support the SmartScreen Filter. Windows SmartScreen filtering at the desktop level, performing reputation checks by default on any file or application downloaded from the Internet, was introduced in Windows 8. Similar to the way SmartScreen works in Internet Explorer 9, if the program does not have an established good reputation, the user is alerted that running the program may harm their computer. When SmartScreen is left at its default settings, the administrator needs to launch and run the program. Microsoft faced concerns surrounding the privacy, legality and effectiveness of the new system, suggesting that the automatic analysis of files (which involves sending a cryptographic hash of the file and the user's IP address to a server) could be used to build a database of users' downloads online, and that the use of the outdated SSL 2.0 protocol for communication could allow an attacker to eavesdrop on the data. 
In response, Microsoft later issued a statement noting that IP addresses were only being collected as part of the normal operation of the service and would be periodically deleted, that SmartScreen on Windows 8 would only use SSL 3.0 for security reasons, and that information gathered via SmartScreen would not be used for advertising purposes or sold to third parties. Outlook Outlook.com uses SmartScreen to protect users from unsolicited e-mail messages (spam), fraudulent emails (phishing) and malware spread via e-mail. After its initial review of the body text, the system focuses on the hyperlinks and attachments. Junk mail (spam) To filter spam, SmartScreen Filter uses machine learning from Microsoft Research, which learns from known spam threats and from user feedback when emails are marked as "Spam" by the user. Over time, this feedback helps SmartScreen Filter to distinguish between the characteristics of unwanted and legitimate e-mail and to determine the reputation of senders based on the number of emails that have been checked. Using these algorithms and the reputation of the sender, a Spam Confidence Level (SCL) score is assigned to each e-mail message (the lower the score, the more desirable). A score of -1, 0, or 1 is considered not spam, and the message is delivered to the recipient's inbox. A score of 5, 6, 7, 8, or 9 is considered spam and is delivered to the recipient's Junk Folder. Scores of 5 or 6 are considered to be suspected spam, while a score of 9 is considered certainly spam. The SCL score of an email can be found in the various x-headers of the received email. Phishing SmartScreen Filter also analyses email messages for fraudulent and suspicious Web links. If such suspicious characteristics are found in an email, the message is either sent directly to the Spam folder or marked with a red information bar at the top of the message that warns of the suspect properties. SmartScreen also protects against spoofed domain names (spoofing), verifying whether an email is sent by the domain from which it claims to be sent. For this, it uses the Sender ID and DomainKeys Identified Mail (DKIM) technologies. SmartScreen Filter also makes email from authenticated senders easier to distinguish by placing a green-shield icon next to the subject line of these emails. Code Signing Certificates SmartScreen builds reputation based on code signing certificates that identify the author of the software. This means that once a reputation has been built, new versions of an application can be signed with the same certificate and maintain the same reputation. However, code signing certificates need to be renewed every two years. SmartScreen does not relate a renewed certificate to an expired one. This means that reputations need to be rebuilt every two years, with users seeing alarming warning messages in the meantime. Extended Validation (EV) certificates seem to avoid this issue, but they are expensive and difficult to obtain for small developers. Effectiveness In late 2010, the results of browser malware testing undertaken by NSS Labs were published. The study looked at the browsers' capability to prevent users from following socially engineered links of a malicious nature and downloading malicious software. It did not test the browsers' ability to block malicious web pages or code.
According to NSS Labs, Internet Explorer 9 blocked 99% of malware downloads, compared to 90% for Internet Explorer 8, which does not have the SmartScreen Application Reputation feature, and 13% for Firefox, Chrome, and Safari, which all use a Google Safe Browsing malware filter. Opera 11 was found to block just 5% of malware. SmartScreen Filter was also noted for adding malicious sites to its blocklists almost instantaneously, as opposed to the several hours it took for blocklists to be updated on other browsers. In early 2010, similar tests had given Internet Explorer 8 an 85% passing grade, the 5% improvement being attributed to "continued investments in improved data intelligence". By comparison, the same research showed that Chrome 6, Firefox 3.6 and Safari 5 scored 6%, 19% and 11%, respectively. Opera 10 scored 0%, failing to "detect any of the socially engineered malware samples". Manufacturers of other browsers criticized the test, focusing upon the lack of transparency of URLs tested and the lack of consideration of layered security additional to the browser, with Google commenting that "The report itself clearly states that it does not evaluate browser security related to vulnerabilities in plug-ins or the browsers themselves", and Opera commenting that it seemed "odd that they received no results from our data providers" and that "social malware protection is not an indicator of overall browser security". In July 2010, Microsoft claimed that SmartScreen on Internet Explorer had blocked over a billion attempts to access sites containing security risks. According to Microsoft, the SmartScreen Filter included in Outlook.com blocks 4.5 billion unwanted e-mails daily from reaching users. Microsoft also claims that only 3% of incoming email is junk mail, but a test by Cascade Insights says that just under half of all junk mail still arrives in the inbox of users. In a September 2011 blog post, Microsoft stated that 1.5 billion attempted malware attacks and over 150 million attempted phishing attacks had been stopped. Criticism In October 2017, criticisms regarding URL submission methods were addressed with the creation of the Report unsafe site URL submission page. Microsoft now supports URL submissions from this form, as opposed to the previous requirement that a user visit the site and use in-product reporting methods. SmartScreen Filter can be bypassed. Some phishing attacks use a phishing email linking to a front-end URL not in the Microsoft database; clicking this URL in the email redirects the user to the malicious site. The "report this website" option in Internet Explorer only reports the currently open page; the front-end URL in the phishing attack cannot be reported to Microsoft and remains accessible. SmartScreen Filter creates a problem for small software vendors when they distribute an updated version of installation or binary files over the internet. Whenever an updated version is released, SmartScreen responds by stating that the file is not commonly downloaded and could therefore be harmful to the user's system. This can be fixed by the author digitally signing the distributed software. Reputation is then based not only on a file's hash but on the signing certificate as well. A common distribution method for authors to bypass SmartScreen warnings is to pack their installation program (for example Setup.exe) into a ZIP archive and distribute it that way, though this can confuse novice users.
Another criticism is that SmartScreen increases the cost of non-commercial and small-scale software development. Developers have to purchase either standard code signing certificates or more expensive extended validation certificates. Extended validation certificates allow the developer to immediately establish reputation with SmartScreen but are often unaffordable for people developing software either for free or not for immediate profit. Standard code signing certificates, however, pose a "catch-22" for developers: SmartScreen warnings make people reluctant to download software, so getting downloads requires first passing SmartScreen, passing SmartScreen requires building reputation, and building reputation depends on downloads. See also Anti-phishing software Google Safe Browsing macOS Gatekeeper References External links A detailed FAQ by Microsoft on SmartScreen Filter SmartScreen Computer network security
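The Spam Confidence Level thresholds described in the Outlook section above can be summarised in a short routing sketch. This is only an illustration of the documented score ranges (-1 to 1 delivered to the inbox, 5 to 9 delivered to the Junk folder), not Microsoft code, and the handling of any other score is a hypothetical placeholder.

def route_message(scl: int) -> str:
    # An SCL of -1, 0 or 1 is considered not spam and is delivered to the inbox.
    if scl in (-1, 0, 1):
        return "inbox"
    # An SCL of 5 through 9 is considered spam and is delivered to the Junk folder
    # (5 or 6 is suspected spam, 9 is certainly spam).
    if 5 <= scl <= 9:
        return "junk"
    # Scores outside these ranges are not covered by the description above;
    # treat them according to local policy (placeholder).
    return "policy-defined"

print(route_message(0))   # inbox
print(route_message(6))   # junk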
50029920
https://en.wikipedia.org/wiki/Usability%20of%20web%20authentication%20systems
Usability of web authentication systems
Usability of web authentication systems refers to the efficiency and user acceptance of online authentication systems. Examples of web authentication systems are passwords, federated identity systems (e.g. Google OAuth 2.0, Facebook Connect, Mozilla Persona), email-based single sign-on (SSO) systems (e.g. SAW, Hatchet), QR code-based systems (e.g. Snap2Pass, WebTicket) or any other system used to authenticate a user's identity on the web. Even though the usability of web authentication systems should be a key consideration in selecting a system, very few web authentication systems (other than passwords) have been subjected to formal usability studies or analysis. Usability and users A web authentication system needs to be as usable as possible whilst not compromising the security that it needs to ensure. The system needs to restrict access by malicious users whilst allowing access to authorised users. If the authentication system does not have sufficient security, malicious users could easily gain access to the system. On the other hand, if the authentication system is too complicated and restrictive, an authorised user would not be able to (or want to) use it. Strong security is achievable in any system, but even the most secure authentication system can be undermined by the users of the system, often referred to as the "weak links" in computer security. Users tend to inadvertently increase or decrease the security of a system. If a system is not usable, security could suffer as users will try to minimize the effort required to provide input for authentication, such as writing down their passwords on paper. A more usable system could prevent this from happening. Users are more likely to comply with authentication requests from systems that are important (e.g. online banking), as opposed to less important systems (e.g. a forum that the user visits infrequently) where these mechanisms might just be ignored. Users accept security measures only up to a certain point before becoming annoyed by complicated authentication mechanisms. An important factor in the usability of a web authentication system is thus its convenience for the user. Usability and web applications The preferred web authentication system for web applications is the password, despite its poor usability and several security concerns. This widely used system usually contains mechanisms that were intended to increase security (e.g. requiring users to have high-entropy passwords) but lead to password systems being less usable and inadvertently less secure. This is because users find these high-entropy passwords harder to remember. Application creators need to make a paradigm shift to develop more usable authentication systems that take the user's needs into account. Replacing the ubiquitous password-based systems with more usable (and possibly more secure) systems could lead to major benefits for both the owners of the application and its users. Measurement To measure the usability of a web authentication system, one can use the "usability–deployability–security" or "UDS" framework or a standard metric, such as the System Usability Scale. The UDS framework looks at three broad categories, namely the usability, deployability and security of a web authentication system, and then rates the tested system as either offering or not offering a specific benefit linked to one (or more) of the categories.
Measuring the usability of web authentication systems allows for the formal evaluation of a web authentication system and determines its ranking relative to others. While a lot of research regarding web authentication systems is currently being done, it tends to focus on security and not usability. Future research should be evaluated formally for usability using a comparable metric or technique. This will enable the comparison of various authentication systems, as well as determining whether an authentication system meets a minimum usability benchmark. Which web authentication system to choose It has been found that security experts tend to focus more on security and less on the usability aspects of web authentication systems. This is problematic, as there needs to be a balance between the security of a system and its ease of use. A study conducted in 2015 found that users tend to prefer single sign-on based systems (like those provided by Google and Facebook). Users preferred these systems because they found them fast and convenient to use. Single sign-on based systems have resulted in substantial improvements in both usability and security. SSO reduces the need for users to remember many usernames and passwords as well as the time needed to authenticate themselves, thereby improving the usability of the system. Other important considerations Users prefer systems that are not complicated and require minimal effort to use and understand. Users enjoy using biometrics and phone-based authentication systems. However, these types of systems require external devices to function, demand a higher level of interaction from users, and need a fallback mechanism if the device is unavailable or fails, which could lead to lower usability. The current password system used by many web applications could be extended for better usability by using: memorable mnemonics instead of passwords; graphical or mnemonic passwords to make authentication more usable. Future work Usability will become more and more important as more applications move online and require robust and reliable authentication systems that are both usable and secure. The use of brainwaves in authentication systems has been proposed as a possible way to achieve this. However, more research and usability studies are required. See also Authentication Authorization Information security Internet security OpenID OpenID Connect Password System usability scale (SUS) Usability Usability testing WebFinger WebID References Further reading Usability Computer access control
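The System Usability Scale mentioned in the Measurement section above reduces a ten-item questionnaire (each item answered on a 1-5 scale, with alternating positively and negatively worded items) to a single score between 0 and 100. The following minimal Python sketch shows the standard SUS scoring rule, independent of any particular authentication system; the example responses are hypothetical.

def sus_score(responses):
    # responses: ten answers on a 1-5 Likert scale, in questionnaire order.
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses, start=1):
        # Odd-numbered items are positively worded (contribute r - 1),
        # even-numbered items are negatively worded (contribute 5 - r).
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5  # scale the 0-40 raw total to 0-100

# Hypothetical responses from one participant.
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # 85.0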
21235214
https://en.wikipedia.org/wiki/Symposium%20on%20Parallelism%20in%20Algorithms%20and%20Architectures
Symposium on Parallelism in Algorithms and Architectures
SPAA, the ACM Symposium on Parallelism in Algorithms and Architectures, is an academic conference in the fields of parallel computing and distributed computing. It is sponsored by the Association for Computing Machinery special interest groups SIGACT and SIGARCH, and it is organized in cooperation with the European Association for Theoretical Computer Science (EATCS). History SPAA was first organized on 18–21 June 1989, in Santa Fe, New Mexico, United States. From 1989 to 2002, SPAA was known as the Symposium on Parallel Algorithms and Architectures. In 2003, the name changed to Symposium on Parallelism in Algorithms and Architectures to reflect the extended scope of the conference. In 2003 and 2007, SPAA was part of the Federated Computing Research Conference (FCRC), and in 1998, 2005, and 2009, SPAA was co-located with the ACM Symposium on Principles of Distributed Computing (PODC). See also The list of distributed computing conferences contains other academic conferences in parallel and distributed computing. The list of computer science conferences contains other academic conferences in computer science. Notes External links SPAA proceedings in ACM Digital Library. SPAA proceedings information in DBLP. Distributed computing conferences Theoretical computer science conferences Association for Computing Machinery conferences
5452698
https://en.wikipedia.org/wiki/SURBL
SURBL
SURBL (which previously stood for Spam URI RBL) is a collection of URI DNSBL lists of Uniform Resource Identifier (URI) hosts, typically web site domains, that appear in unsolicited messages. SURBL can be used to search incoming e-mail message bodies for spam payload links to help evaluate whether the messages are unsolicited. For example, if http://www.example.com is listed, then e-mail messages with a message body containing this URI may be classified as unsolicited. URI DNSBLs differ from prior DNSBLs, which commonly list mail-sending IP addresses. SURBL is a specific instance of the general URI DNSBL list type. Lists SURBL provides lists of different types: ABUSE - spam and abuse sites PH - phishing sites MW - malware sites CR - cracked sites All lists are gathered into multi.surbl.org. Usage A DNS query of a domain or IP address taken from a URI can be sent in the form of spamdomain.example.multi.surbl.org or 4.3.2.1.multi.surbl.org. The return records from the multi DNS zone contain codes that indicate which list contains the queried domain or IP address. Many spam filters support the use of SURBL. Small sites can use SURBL through public DNS queries, and an rsync data feed is available to professional users. SURBL data are also available in Response Policy Zone and CSV formats. History SURBL was created in 2004 to replace formatted text-based lists such as sa-blacklist that were previously used in SpamAssassin and distributed through web sites. The announcement of SURBL as a URI DNSBL was made on April 8, 2004, to the SpamAssassin user community. SURBL was the first major list of the URI DNSBL type, later followed by uribl.com, IvmURI and Spamhaus DBL. See also DNSBL, a spam prevention method in which e-mail messages are accepted or rejected depending on the IP address of the mail server from which the message is received. References External links Spamming
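The query form described in the Usage section above can be illustrated with a short Python sketch that looks up a domain under the combined multi.surbl.org zone using the standard library resolver. A listed name resolves to a 127.0.0.x address whose last octet is a bitmask identifying the SURBL list(s) involved, while an unlisted name fails to resolve. The example domain is a placeholder; real deployments extract candidate domains from message bodies and should respect SURBL's usage policies.

import socket

def surbl_lookup(domain):
    # Build the query name described above: <domain>.multi.surbl.org
    query = f"{domain}.multi.surbl.org"
    try:
        # A successful lookup means the domain is listed; the returned
        # 127.0.0.x address encodes which list(s) it appears on.
        return socket.gethostbyname(query)
    except socket.gaierror:
        return None  # not listed, or the DNS lookup failed

# Hypothetical example; a domain taken from an incoming message would be used here.
print(surbl_lookup("example.com"))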
4318570
https://en.wikipedia.org/wiki/MicroVAX
MicroVAX
The MicroVAX is a discontinued family of low-cost minicomputers developed and manufactured by Digital Equipment Corporation (DEC). The first model, the MicroVAX I, was introduced in 1983. They used processors that implemented the VAX instruction set architecture (ISA) and were succeeded by the VAX 4000. Many members of the MicroVAX family had corresponding VAXstation variants, which primarily differ by the addition of graphics hardware. The MicroVAX family supports Digital's VMS and ULTRIX operating systems. Prior to VMS V5.0, MicroVAX hardware required a dedicated version of VMS named MicroVMS. MicroVAX I The MicroVAX I, code named "Seahorse", introduced in October 1984, was one of DEC's first VAX computers to use very-large-scale integration (VLSI) technology. The KA610 CPU module (also known as the KD32) contained two custom chips which implemented the ALU and FPU while TTL chips were used for everything else. Two variants of the floating point chips were supported, with the chips differing by the type of floating point instructions supported, F and G, or F and D. The system was implemented on two quad-height Q-bus cards - a Data Path Module (DAP) and Memory Controller (MCT). The MicroVAX I used Q-bus memory cards, which limited the maximum memory to 4MiB. The performance of the MicroVAX I was rated at 0.3 VUPs, equivalent to the earlier VAX-11/730. MicroVAX II The MicroVAX II, code named "Mayflower", was a mid-range MicroVAX introduced in May 1985 and shipped shortly thereafter. It ran VAX/VMS or, alternatively, ULTRIX, the DEC native Unix operating system. At least one non-DEC operating system was available, BSD Unix from MtXinu. It used the KA630-AA CPU module, a quad-height Q22-Bus module, which featured a MicroVAX 78032 microprocessor and a MicroVAX 78132 floating-point coprocessor operating at 5 MHz (200 ns cycle time). Two gate arrays on the module implemented the external interface for the microprocessor, Q22-bus interface and the scatter-gather map for DMA transfers over the Q22-Bus. The module also contained 1 MB of memory, an interval timer, two ROMs for the boot and diagnostic facility, a DZ console serial line unit and a time-of-year clock. A 50-pin connector for a ribbon cable near the top left corner of the module provided the means by which more memory was added to the system. The MicroVAX II supported 1 to 16 MB of memory through zero, one or two memory expansion modules. The MS630 memory expansion module was used for expanding memory capacity. Four variants of the MS630 existed: the 1 MB MS630-AA, 2 MB MS630-BA, 4 MB MS630-BB and the 8MB MS630-CA. The MS630-AA was a dual-height module, whereas the MS630-BA, MS630-BB and MS630-CA were quad-height modules. These modules used 256 Kb DRAMs and were protected by byte-parity, with the parity logic located on the module. The modules connected to the CPU module via the backplane through the C and D rows and a 50-conductor ribbon cable. The backplane served as the address bus and the ribbon cable as the data bus. The MicroVAX II came in three models of enclosure: BA23 BA123 630QE - A deskside enclosure. KA620 KA620 referred to a single-board MicroVAX II designed for automatic test equipment and manufacturing applications which only ran DEC's real-time VAXELN operating system. A KA620 with 1 MB of memory bundled with the VAXELN Run-Time Package 2.3 was priced at US$5,000. Mira Mira referred to a fault-tolerant configuration of the MicroVAX II developed by DEC's European Centre for Special Systems located in Annecy in France. 
The system consisted of two MicroVAX 78032 microprocessors, an active and standby microprocessor in a single box, connected by Ethernet and controlled by a software switch. When a fault was detected in the active microprocessor, the workload was switched over to the standby microprocessor. Industrial VAX 630 A MicroVAX II in BA213 enclosure. MicroVAX III BA23- or BA123-enclosure MicroVAX upgraded with KA650 CPU module containing a CVAX chip set. MicroVAX III+ BA23- or BA123-enclosure MicroVAX upgraded with KA655 CPU module. VAX 4 BA23- or BA123-enclosure MicroVAX upgraded with KA660 CPU module. MicroVAX 2000 The MicroVAX 2000, code named "TeamMate", was a low-cost MicroVAX introduced on 10 February 1987. In January 1987, the MicroVAX 2000 was the first VAX system targeted at both universities and VAX programmers who wanted to work from remote locations. The MicroVAX 2000 used the same microprocessor and floating-point coprocessor as the MicroVAX II, but was feature reduced in order to lower the cost. Limitations were a reduced maximum memory capacity, 14 MB versus 16 MB in MicroVAX II systems and the lack of Q-Bus or any expansion bus. The system could have a Shugart-based harddrive with ST412 interface and MFM encoding and had a built in 5.25-inch floppy drive (named RX33 in DEC jargon) for software distribution and backup. Supported operating systems were VMS and ULTRIX. It was packaged in a desktop form factor. MicroVAX 3100 Series The MicroVAX 3100 Series was introduced in 1987. These systems were all packaged in desktop enclosures. MicroVAX 3100 Model 10 Teammate II KA41-A, CVAX, 11.11 MHz (90 ns) MicroVAX 3100 Model 10e Teammate II KA41-D, CVAX+, 16.67 MHz (60 ns) 32 MB of memory maximum. MicroVAX 3100 Model 20 Teammate II KA41-A, CVAX, 11.11 MHz (90 ns) A Model 10 in larger enclosure. MicroVAX 3100 Model 20e Teammate II KA41-D, CVAX+, 16.67 MHz (60 ns) A Model 10e in larger enclosure. MicroVAX 3100 Model 30 Waverley/S Entry-level model, developed in Ayr, Scotland Introduced: 12 October 1993 KA45, SOC, 25 MHz (40 ns) 32 MB of memory maximum. MicroVAX 3100 Model 40 Waverley/S Entry-level model, developed in Ayr, Scotland Introduced: 12 October 1993 KA45, SOC, 25 MHz (40 ns) 8 to 32 MB of memory A Model 30 in larger enclosure. MicroVAX 3100 Model 80 Waverley/M Entry-level model, developed in Ayr, Scotland Introduced: 12 October 1993 KA47, Mariah, 50 MHz (20 ns), 256 KB external cache 72 MB of memory maximum. MicroVAX 3100 Model 85 Waverley/M+ Introduced: August 1994 KA55, NVAX, 62.5 MHz (16 ns), 128 KB external cache 16 to 128 MB of memory. MicroVAX 3100 Model 88 Waverley/M+ Introduced: 8 October 1996 Last order date: 30 September 2000 Last ship date: 31 December 2000 KA58, NVAX, 62.5 MHz (16 ns), 128 KB external cache 64 to 512 MB of memory. MicroVAX 3100 Model 90 Cheetah Introduced: 12 October 1993 Identical to the VAX 4000 Model 100, but uses SCSI instead of DSSI KA50, NVAX, 72 MHz (14 ns), 128 KB external cache 128 MB of memory maximum. MicroVAX 3100 Model 95 Cheetah+ Introduced: 12 April 1994 Processor: KA51, NVAX, 83.34 MHz (12 ns), 512 KB external cache. MicroVAX 3100 Model 96 Cheetah++ KA56, NVAX, 100 MHz (10 ns) 16 to 128 MB of memory. MicroVAX 3100 Model 98 Cheetah++ Introduced: 8 October 1996 Last order date: 30 September 2000 Last ship date: 31 December 2000 KA59, NVAX, 100 MHz (10 ns), 512 KB external cache. InfoServer 100/150/1000 General purpose storage server (disk, CD-ROM, tape and MOP boot server) related to MicroVAX 3100 Model 10, running custom firmware, KA41-C CPU. 
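The clock ratings quoted for these CPU modules pair a frequency in megahertz with a cycle time in nanoseconds; the two figures are simply reciprocals (cycle time in ns = 1000 / frequency in MHz). The short Python sketch below is illustrative only and is not drawn from any DEC documentation; it reproduces, to rounding, the pairs quoted above:

    # Cycle time (ns) from clock frequency (MHz): t_ns = 1000 / f_MHz,
    # since 1 MHz = 10^6 cycles per second and 1 ns = 10^-9 s.
    def cycle_time_ns(freq_mhz: float) -> float:
        return 1000.0 / freq_mhz

    # Frequencies quoted for MicroVAX-family CPU modules in the text above.
    for mhz in (5.0, 11.11, 16.67, 25.0, 50.0, 62.5, 72.0, 83.34, 100.0):
        print(f"{mhz:6.2f} MHz -> {cycle_time_ns(mhz):5.1f} ns")

Running this prints 200 ns for 5 MHz, 90 ns for 11.11 MHz, 60 ns for 16.67 MHz, and so on, matching the figures given for the various KA-series modules.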
Mayfair MicroVAX 3500 and MicroVAX 3600 The MicroVAX 3500 and MicroVAX 3600, code named "Mayfair", were introduced in September 1987 and were meant to be the higher-end complement of the MicroVAX family. These new machines featured more than three times the performance of the MicroVAX II and supported 32 MB of ECC main memory (twice that of the MicroVAX II). The performance improvements over the MicroVAX II resulted from the increased clock rate of the CVAX chip set, which operated at 11.11 MHz (90 ns cycle time), along with a two-level, write-through caching architecture. They used the KA650 CPU module. MicroVAX 3300 and MicroVAX 3400 The MicroVAX 3300 and MicroVAX 3400, code named "Mayfair II", were entry-level to mid-range server computers introduced on 19 October 1988, intended to compete with the IBM AS/400. They used the KA640 CPU module. MicroVAX 3800 and MicroVAX 3900 The MicroVAX 3800 and MicroVAX 3900, code named "Mayfair III", were introduced in April 1989. They were high-end models in the MicroVAX family, replacing the MicroVAX 3500 and MicroVAX 3600, and were intended to compete with the IBM AS/400. At introduction, the starting price of the MicroVAX 3800 was US$81,000 and that of the MicroVAX 3900 was US$120,200. A variant of the MicroVAX 3800, the rtVAX 3800, was intended for real-time computing (RTC) applications such as computer-aided manufacturing (CAM). These systems used the KA655 CPU module, which contained a 16.67 MHz (60 ns cycle time) CVAX chip set. They supported up to 64 MB of memory.
2438634
https://en.wikipedia.org/wiki/Sage%2050cloud
Sage 50cloud
Sage 50cloud is a set of accountancy and payroll products developed by the Sage Group and aimed at small and medium enterprises. Sage offers different products under the Sage 50 name in different regions. The product name originally derives from the UK and Ireland version of the product, where the number 50 indicated that it was aimed at companies with up to 50 employees. UK/Ireland version In the UK and Ireland there are currently four products under the Sage 50 banner: Accounts, Payroll, HR and P11D. Sage 50cloud Accounts was the market-leading accounting solution for many years. The product currently known as Sage 50cloud Accounts has its origins in some of the earliest solutions that Sage produced. A direct relative of the current product is the Sage Sterling range, which became available in September 1989 as a replacement for Sage's successful Businesswise Accounts range. Sage Sterling was available for DOS and, in the early 1990s, for Microsoft Windows. The product was re-branded as Sage Sterling +2, and in 1993 a version of the product became available for the Apple Macintosh. By 1993 Sage Sterling was the market leader, accounting for 62.5% of the integrated accounting software market. In the late 1990s, Sage Instant, a cut-down version of the product line, was introduced. Later, the product was rebranded as Sage Line 50, a reference to the target market of the product, and in the 2000s it was rebranded simply as Sage 50. In the 2010s cloud-connected functionality was added to the product line and the current 50cloud name began to be used. The UK/Ireland Sage 50cloud products are developed in Newcastle upon Tyne, England. US version The US version of the product was previously called Peachtree Accounting. A conversion to the Peachtree/Sage 50 data format was made available when Simply Accounting was taken off the market. In 2013 it was brought under the Sage 50 banner. Peachtree Accounting was originally sold by a software publisher founded in Atlanta in 1978 by Ben Dyer, Ron Roberts, Steve Mann, and John Hayes. The company was carved out of The Computersystem Center, an early Altair dealer founded by Roberts, Mann, Jim Dunion, and Rich Stafford, which Dyer had joined as the manager and where its first software was published in 1977. Peachtree was the first successful business software made for microcomputers, supplanting the General Ledger program written in CBASIC and distributed by Structured Systems Group. It is the oldest microcomputer business program in current use. The company expanded its offerings with its acquisition of Layered, an accounting program designed for use on the Macintosh. The company's products were included in the initial launch of the IBM Personal Computer, and it was acquired by Management Science America (MSA) in June 1981. By early 1984 InfoWorld estimated that Peachtree was the world's seventh-largest microcomputer-software company, with $21.7 million in 1983 sales. After several subsequent changes of ownership ending with ADP, Peachtree was eventually acquired by the Sage Group in 1998 for million. In the US, many schools use this program in their accounting classes. Canadian version The Canadian version of Sage 50 was previously known as Bedford Accounting and was later renamed Simply Accounting. In 2013 it was brought under the Sage 50 banner. Bedford Software developed Bedford Integrated Accounting for DOS in 1985 and for the Macintosh in 1988, later naming it Simply Accounting. Bedford Software was acquired by Computer Associates in 1989. 
Simply Accounting became an Independent Business Unit of Computer Associates in 1996 and was subsequently incorporated as ACCPAC International, Inc. in 1998. ACCPAC was acquired by The Sage Group in 2004 for integration with its ERP products. It is developed in Richmond, British Columbia. French version The French version of the product, known as Sage 50cloud Ciel, was originally developed by Ciel, a French software business founded in 1986 that Sage acquired in 1992. German/Austrian version The Austrian and German versions of the product were formerly known as Sage GS-Office. They came under the Sage 50 banner in 2015. Polish version The Polish version of the product was known as Sage Symfonia 50cloud until it was acquired by Mid Europa Partners in 2021 and rebranded as Symfonia. Spanish version The Spanish version of the product was formerly known as Sage ContaPlus. First offered in the early 1980s by Grupo SP, it gained considerable popularity in 1990 by using news stands as a point of sale and being offered at very low prices at a time when professional accounting was very expensive. ContaPlus also took advantage of the Spanish accounting reform of 1990. Nowadays, ContaPlus is the "accounting standard" in Spain, with more than one million customers. Grupo SP was purchased by Sage in 2003. South African version The South African version, Sage 50cloud Pastel, was formerly known as Pastel Accounting and has been available since 1989. The product was initially developed by Pastel Software, which was purchased by Softline in 1999. The product then became known as Softline Pastel. Sage acquired Softline in 2003 and the product eventually became known as Sage Pastel and later Sage 50cloud Pastel. It is widely used in industry, with job advertisements frequently requiring proficiency in the software, and training courses are available from third-party providers.
1584544
https://en.wikipedia.org/wiki/Open%20educational%20resources
Open educational resources
Open educational resources (OER) are freely accessible, openly licensed instructional materials such as text, media, and other digital assets that are useful for teaching, learning, and assessing, as well as for research purposes. The term OER describes publicly accessible materials and resources for any user to use, re-mix, improve, and redistribute under some licenses. These are designed to reduce accessibility barriers by implementing best practices in teaching and to be adapted to unique local contexts. The development and promotion of open educational resources is often motivated by a desire to provide an alternate or enhanced educational paradigm. Definition and scope The idea of open educational resources (OER) has numerous working definitions. The term was first coined at UNESCO's 2002 Forum on Open Courseware and designates "teaching, learning and research materials in any medium, digital or otherwise, that reside in the public domain or have been released under an open license that permits no-cost access, use, adaptation and redistribution by others with no or limited restrictions. Open licensing is built within the existing framework of intellectual property rights as defined by relevant international conventions and respects the authorship of the work". Often cited is the William and Flora Hewlett Foundation's earlier definition, which described OER as: OER are teaching, learning, and research resources that reside in the public domain or have been released under an intellectual property license that permits their free use and re-purposing by others. Open educational resources include full courses, course materials, modules, textbooks, streaming videos, tests, software, and any other tools, materials, or techniques used to support access to knowledge. The Hewlett Foundation updated its definition to: "Open Educational Resources are teaching, learning and research materials in any medium – digital or otherwise – that reside in the public domain or have been released under an open license that permits no-cost access, use, adaptation and redistribution by others with no or limited restrictions". The new definition explicitly states that OER can include both digital and non-digital resources. It also lists several types of use that OER permit, inspired by the 5R activities of OER. The 5R activities/permissions, proposed by David Wiley, include: Retain – the right to make, own, and control copies of the content (e.g., download, duplicate, store, and manage) Reuse – the right to use the content in a wide range of ways (e.g., in a class, in a study group, on a website, in a video) Revise – the right to adapt, adjust, modify, or alter the content itself (e.g., translate the content into another language) Remix – the right to combine the original or revised content with other material to create something new (e.g., incorporate the content into a mashup) Redistribute – the right to share copies of the original content, your revisions, or your remixes with others (e.g., give a copy of the content to a friend) Users of OER are allowed to engage in any of these 5R activities, permitted by the use of an open license. The Organisation for Economic Co-operation and Development (OECD) defines OER as: "digitised materials offered freely and openly for educators, students, and self-learners to use and reuse for teaching, learning, and research. OER includes learning content, software tools to develop, use, and distribute content, and implementation resources such as open licences". 
(This is the definition cited by Wikipedia's sister project, Wikiversity.) By way of comparison, the Commonwealth of Learning "has adopted the widest definition of Open Educational Resources (OER) as 'materials offered freely and openly to use and adapt for teaching, learning, development and research'". The WikiEducator project suggests that OER refers "to educational resources (lesson plans, quizzes, syllabi, instructional modules, simulations, etc.) that are freely available for use, reuse, adaptation, and sharing". The above definitions expose some of the tensions that exist with OER: Nature of the resource: Several of the definitions above limit the definition of OER to digital resources, while others consider that any educational resource can be included in the definition. Source of the resource: While some of the definitions require a resource to be produced with an explicit educational aim in mind, others broaden this to include any resource which may potentially be used for learning. Level of openness: Most definitions require that a resource be placed in the public domain or under a fully open license. Others require only that free use be granted for educational purposes, possibly excluding commercial uses. These definitions also have common elements, namely they all: cover use and reuse, repurposing, and modification of the resources; include free use for educational purposes by teachers and learners; and encompass all types of digital media. Given the diversity of users, creators and sponsors of open educational resources, it is not surprising to find a variety of use cases and requirements. For this reason, it may be as helpful to consider the differences between descriptions of open educational resources as it is to consider the descriptions themselves. One of several tensions in reaching a consensus description of OER (as found in the above definitions) is whether there should be explicit emphasis placed on specific technologies. For example, a video can be openly licensed and freely used without being a streaming video. A book can be openly licensed and freely used without being an electronic document. This technologically driven tension is deeply bound up with the discourse of open-source licensing. For more, see Licensing and Types of OER later in this article. There is also a tension between entities which find value in quantifying usage of OER and those which see such metrics as themselves being irrelevant to free and open resources. Those requiring metrics associated with OER are often those with economic investment in the technologies needed to access or provide electronic OER, those with economic interests potentially threatened by OER, or those requiring justification for the costs of implementing and maintaining the infrastructure or access to the freely available OER. While a semantic distinction can be made delineating the technologies used to access and host learning content from the content itself, these technologies are generally accepted as part of the collective of open educational resources. Since OER are intended to be available for a variety of educational purposes, most organizations using OER neither award degrees nor provide academic or administrative support to students seeking college credits towards a diploma from a degree-granting accredited institution. In open education, there is an emerging effort by some accredited institutions to offer free certifications, or achievement badges, to document and acknowledge the accomplishments of participants. 
In order for educational resources to be OER, they must have an open license. Many educational resources made available on the Internet are geared to allowing online access to digitised educational content, but the materials themselves are restrictively licensed. Thus, they are not OER. Often, this is not intentional. Most educators are not familiar with copyright law in their own jurisdictions, never mind internationally. International law and national laws of nearly all nations, and certainly of those who have signed onto the World Intellectual Property Organization (WIPO), restrict all content under strict copyright (unless the copyright owner specifically releases it under an open license). The Creative Commons license is the most widely used licensing framework internationally used for OER. History The term learning object was coined in 1994 by Wayne Hodgins and quickly gained currency among educators and instructional designers, popularizing the idea that digital materials can be designed to allow easy reuse in a wide range of teaching and learning situations. The OER movement originated from developments in open and distance learning (ODL) and in the wider context of a culture of open knowledge, open source, free sharing and peer collaboration, which emerged in the late 20th century. OER and Free/Libre Open Source Software (FLOSS), for instance, have many aspects in common, a connection first established in 1998 by David Wiley who coined the term open content and introduced the concept by analogy with open source. Richard Baraniuk made the same connection independently in 1999 with the founding of Connexions (now called OpenStax CNX). The MIT OpenCourseWare project is credited for having sparked a global Open Educational Resources Movement after announcing in 2001 that it was going to put MIT's entire course catalog online and launching this project in 2002. Other contemporaneous OER projects include Connexions, which was launched by Richard Baraniuk in 1999 and showcased with MIT OpenCourseWare at the launch of the Creative Commons open licenses in 2002. In a first manifestation of this movement, MIT entered a partnership with Utah State University, where assistant professor of instructional technology David Wiley set up a distributed peer support network for the OCW's content through voluntary, self-organizing communities of interest. The term "open educational resources" was first adopted at UNESCO's 2002 Forum on the Impact of Open Courseware for Higher Education in Developing Countries. In 2005 OECD's Centre for Educational Research and Innovation (CERI) launched a 20-month study to analyse and map the scale and scope of initiatives regarding "open educational resources" in terms of their purpose, content, and funding. The report "Giving Knowledge for Free: The Emergence of Open Educational Resources", published in May 2007, is the main output of the project, which involved a number of expert meetings in 2006. In September 2007, the Open Society Institute and the Shuttleworth Foundation convened a meeting in Cape Town to which thirty leading proponents of open education were invited to collaborate on the text of a manifesto. The Cape Town Open Education Declaration was released on 22 January 2008, urging governments and publishers to make publicly funded educational materials available at no charge via the internet. The global movement for OER culminated at the 1st World OER Congress convened in Paris on 20–22 June 2012 by UNESCO, COL and other partners. 
The resulting Paris OER Declaration (2012) reaffirmed the shared commitment of international organizations, governments, and institutions to promoting the open licensing and free sharing of publicly funded content, the development of national policies and strategies on OER, capacity-building, and open research. In 2017, the 2nd World OER Congress in Ljubljana, Slovenia, was co-organized by UNESCO and the Government of Slovenia. The 500 experts and national delegates from 111 countries adopted the Ljubljana OER Action Plan. It recommends 41 actions to mainstream open-licensed resources to achieve the 2030 Sustainable Development Goal 4 on "quality and lifelong education". An historical antecedent to consider is the pedagogy of artist Joseph Beuys and the founding of the Free International University for Creativity and Interdisciplinary Research in 1973. After co-creating the German Student Party with his students in 1967, Beuys was dismissed from his teaching post at the Staatliche Kunstakademie Düsseldorf in 1972. The institution did not approve of the fact that he permitted 50 students who had been rejected from admission to study with him. The Free University became increasingly involved in political and radical actions calling for a revitalization and restructuring of educational systems. Advantages and disadvantages Advantages of using OER include: Expanded access to learning: materials can be accessed anywhere at any time; Ability to modify course materials: content can be narrowed down to the topics that are relevant to the course; Enhancement of course material: texts, images and videos can be used to support different approaches to learning; Rapid dissemination of information: textbooks can be put forward online more quickly than a textbook can be published; Cost savings for students: all readings are available online, which saves students hundreds of dollars; Cost savings for educators: lectures and lesson plans are available online, saving educators time, effort and money while learning new knowledge. Disadvantages of using OER include: Quality/reliability concerns: some online material can be edited by anyone at any time, which can result in irrelevant or inaccurate information; Limitation of copyright protection: OER licenses change "All rights reserved" into "Some rights reserved", so that content creators must be careful about what materials they make available; Technology issues: some students may have difficulty accessing online resources because of slow internet connections, or may not have access to the software required to use the materials. Licensing and types Open educational resources often involve issues relating to intellectual property rights. Traditional educational materials, such as textbooks, are protected under conventional copyright terms. However, alternative and more flexible licensing options have become available as a result of the work of Creative Commons, a non-profit organization that provides ready-made licensing agreements that are less restrictive than the "all rights reserved" terms of standard international copyright. These new options have become a "critical infrastructure service for the OER movement." Another license, typically used by developers of OER software, is the GNU General Public License from the free and open-source software (FOSS) community. Open licensing allows uses of the materials that would not be easily permitted under copyright alone.
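To make the licensing discussion above concrete, the following minimal Python sketch builds a human-readable attribution line and a simple HTML snippet with a rel="license" link of the kind commonly used to mark openly licensed material. It is illustrative only; the resource title, author and URL are hypothetical placeholders, not items from this article, and the attribution elements follow the commonly recommended "TASL" pattern (Title, Author, Source, License):

    # Illustrative sketch: assemble attribution for an openly licensed resource
    # from the TASL elements (Title, Author, Source, License).
    # All values passed in below are hypothetical placeholders.
    def attribution_text(title, author, source_url, license_name, license_url):
        return (f'"{title}" by {author} ({source_url}) is licensed under '
                f'{license_name} ({license_url}).')

    def attribution_html(title, author, source_url, license_name, license_url):
        # rel="license" is a standard HTML link relation used for license marking.
        return (f'<p><a href="{source_url}">{title}</a> by {author} is licensed '
                f'under <a rel="license" href="{license_url}">{license_name}</a>.</p>')

    print(attribution_text(
        "Introduction to Open Education",               # hypothetical title
        "A. Author",                                    # hypothetical author
        "https://example.org/open-education",           # hypothetical source URL
        "CC BY 4.0",
        "https://creativecommons.org/licenses/by/4.0/",
    ))

Marking licenses in a machine-readable way like this is part of what allows repositories and search tools to filter resources by the permissions they carry.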
MOOCs: MOOC stands for Massive Open Online Course. These courses are free online courses available to any individual who would like to enroll. MOOCs offer a wide range of courses in many different subjects for individuals to develop their knowledge and education in an affordable and easy manner. Types of open educational resources include full courses, course materials, modules, learning objects, open textbooks, openly licensed (often streamed) videos, tests, software, and other tools, materials, or techniques used to support access to knowledge. OER may be freely and openly available static resources, dynamic resources which change over time in the course of having knowledge seekers interacting with and updating them (such as this Wikipedia article), or a course or module with a combination of these resources. OER policy OER policies (also sometimes known as laws, regulations, strategies, guidelines, principles or tenets) are adopted by governments, institutions or organisations in support of the creation and use of open content, specifically open educational resources (OER), and related open educational practices. Research The growing movement of OER has also fostered research activities on OER across the world. Generally, research on OER is grouped into four categories based on its focus (Cost, Outcomes, Usage, and Perceptions), together known as the COUP framework. Hilton (2016, 2019) reviewed studies on OER with a focus on Cost, Outcomes, and Perceptions, finding that most of the studies (e.g. Fischer, Hilton, Robinson, & Wiley, 2015; Lovett, Meyer, & Thille, 2008; Petrides, Jimes, Middleton-Detzner, Walling, & Wiess, 2011) had found that OER improve student learning while significantly reducing the cost of their educational resources (e.g. textbooks). He also found that perceptions of OER by faculty and students are generally positive (e.g. Allen & Seaman, 2014; Bliss, Hilton, Wiley, & Thanos, 2013). Few studies have investigated the usage of OER, so it is still not very clear how faculty and student use of OER (enabled by the permissions given by an open license) would contribute to student learning. For example, research from the Czech Republic found that most students said they use OER as often as or more often than classical materials; Wikipedia was the most used resource, and availability, amount of information and ease of orientation were the most valued benefits of OER usage (Petiška, 2018). The approaches proposed in the COUP framework have also been used internationally (e.g. Pandra & Santosh, 2017; Afolabi, 2017), although contexts and OER use types vary across countries. A 2018 Charles University study found that Wikipedia is the most used OER among students of environmental studies (used by 95% of students) and argues that educational institutions should focus their attention on it (e.g. by supporting a Wikipedian in residence). To encourage more researchers to join the field of OER, the Open Education Group has created an "OER Research Fellowship" program, which selects 15-30 doctoral students and early career researchers in North America (US and Canada). To date, more than 50 researchers have joined the program and conducted research on OER. The Open University in the UK runs another program, the GO-GN network (Global OER Graduate Network), aimed at supporting doctoral students researching OER from any country in the world. GO-GN provides its members with funding and networking opportunities as well as research support. Currently, more than 60 students are listed as its members. 
At every institute and university, students and research scholars should be made aware of open educational resources and of how to apply open licenses, and should be given hands-on training sessions. Open Educational Practices OER have been used in educational contexts in a variety of ways, and researchers and practitioners have proposed different names for such practices. According to Wiley & Hilton (2018), the two popular terms used are "open pedagogy" and "open educational practices". What the two terms refer to is closely related and often indistinguishable. For example, Weller (2013) defines open pedagogy as follows: "Open pedagogy makes use of this abundant, open content (such as open educational resources, videos, podcasts), but also places an emphasis on the network and the learner's connections within this". Open educational practices are defined, for example, as "a set of activities around instructional design and implementation of events and processes intended to support learning. They also include the creation, use and repurposing of Open Educational Resources (OER) and their adaptation to the contextual setting" (The Open Educational Quality Initiative). Wiley & Hilton (2018) proposed a new term called "OER-enabled pedagogy", which is defined as "the set of teaching and learning practices that are only possible or practical in the context of the 5R permissions which are characteristic of OER", emphasizing the 5R permissions enabled by the use of open licenses. Costs One of the most frequently cited benefits of OER is their potential to reduce costs. While OER seem well placed to bring down total expenditures, they are not cost-free. New OER can be assembled or simply reused or repurposed from existing open resources. This is a primary strength of OER and, as such, can produce major cost savings. OER need not be created from scratch. On the other hand, there are some costs in the assembly and adaptation process, and some OER must be created and produced originally at some time. While OER must be hosted and disseminated, and some require funding, OER development can take different routes, such as creation, adoption, adaptation and curation. Each of these models provides a different cost structure and degree of cost-efficiency. Upfront costs, such as building the OER infrastructure, can be high. Butcher and Hoosen noted that "a key argument put forward by those who have written about the potential benefits of OER relates to its potential for saving cost or, at least, creating significant economic efficiencies. However, to date there has been limited presentation of concrete data to back up this assertion, which reduces the effectiveness of such arguments and opens the OER movement to justified academic criticism." Institutional support A large part of the early work on open educational resources was funded by universities and foundations such as the William and Flora Hewlett Foundation, which was the main financial supporter of open educational resources in the early years and spent more than $110 million in the 2002 to 2010 period, of which more than $14 million went to MIT. The Shuttleworth Foundation, which focuses on projects concerning collaborative content creation, has contributed as well. With the British government contributing £5.7m, institutional support has also been provided by the UK funding bodies JISC and HEFCE. 
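To make the cost argument above concrete, a small and purely illustrative calculation follows; the figures are hypothetical and are not drawn from the article. If adapting and hosting an open textbook for a course carries a one-time cost, and each enrolled student saves the price of a commercial textbook, the adoption breaks even once cumulative enrolment exceeds the ratio of the two:

    # Illustrative only: hypothetical figures, not data from the article.
    def break_even_enrolment(one_time_cost: float, savings_per_student: float) -> float:
        """Enrolment at which a one-time OER adoption cost is offset by savings."""
        return one_time_cost / savings_per_student

    # Hypothetical example: $15,000 to adapt and host; $100 saved per student.
    print(break_even_enrolment(15_000, 100))  # -> 150.0

Beyond that point each additional enrolment is a net saving, which is the arithmetic behind the per-student savings figures reported by the institutional OER programs described later in this article.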
The JISC/HEFCE UKOER Programme (Phase 3 from October 2011 – October 2012) was meant to build on sustainable procedure indicated in the first two phases eventually expanding in new directions that connect Open Educational Resources to other fields of work. United Nations Educational, Scientific and Cultural Organization (UNESCO) is taking a leading role in "making countries aware of the potential of OER." The organisation has instigated debate on how to apply OERs in practice and chaired vivid discussions on this matter through its International Institute of Educational Planning (IIEP). Believing that OERs can widen access to quality education, particularly when shared by many countries and higher education institutions, UNESCO also champions OERs as a means of promoting access, equity and quality in the spirit of the Universal Declaration of Human Rights. In 2012 the Paris OER Declaration was approved during the 2012 OER World Congress held at UNESCO's headquarters in Paris. Initiatives SkillsCommons was developed in 2012 under the California State University Chancellor's Office and funded through the $2 billion U.S. Department of Labor's TAACCCT initiative. Led by Assistant Vice Chancellor, Gerard Hanley, and modeled after sister project, MERLOT, SkillsCommons open workforce development content was developed and vetted by 700 community colleges and other TAACCCT institutions across the United States. The SkillsCommons content exceeded two million downloads in September 2019 and at that time was considered to be the world's largest repository of open educational and workforce training materials. A parallel initiative, OpenStax CNX (formerly Connexions), came out of Rice University starting in 1999. In the beginning, the Connexions project focused on creating an open repository of user-generated content. In contrast to the OCW projects, content licenses are required to be open under a Creative Commons Attribution International 4.0 (CC BY) license. The hallmark of Connexions is the use of a custom XML format CNXML, designed to aid and enable mixing and reuse of the content. In 2012, OpenStax was created from the basis of the Connexions project. In contrast to user-generated content libraries, OpenStax hires subject matter experts to create college-level textbooks that are peer-reviewed, openly licensed, and available online for free. Like the content in OpenStax CNX, OpenStax books are available under Creative Commons CC BY licenses that allow users to reuse, remix, and redistribute content as long as they provide attribution. OpenStax's stated mission is to create professional grade textbooks for the highest-enrollment undergraduate college courses that are the same quality as traditional textbooks, but are adaptable and available free to students. Other initiatives derived from MIT OpenCourseWare are China Open Resources for Education and OpenCourseWare in Japan. The OpenCourseWare Consortium, founded in 2005 to extend the reach and impact of open course materials and foster new open course materials, counted more than 200 member institutions from around the world in 2009. OER Africa, an initiative established by the South African Institute for Distance Education (Saide) to play a leading role in driving the development and use of OER across all education sectors on the African continent. The OER4Schools project focusses on the use of Open Educational Resources in teacher education in sub-Saharan Africa. 
Wikiwijs (the Netherlands) was a program intended to promote the use of open educational resources (OER) in the Dutch education sector. The Open educational resources programme (phases one and two) in the United Kingdom, funded by HEFCE, the UK Higher Education Academy and the Joint Information Systems Committee (JISC), has supported pilot projects and activities around the open release of learning resources for free use and repurposing worldwide. In 2003, the ownership of the Wikipedia and Wiktionary projects was transferred to the Wikimedia Foundation, a non-profit charitable organization whose goal is to collect and develop free educational content and to disseminate it effectively and globally. Wikipedia has ranked among the ten most visited websites worldwide since 2007. OER Commons was spearheaded in 2007 by the Institute for the Study of Knowledge Management in Education (ISKME), a nonprofit education research institute dedicated to innovation in open education content and practices, as a way to aggregate, share, and promote open educational resources to educators, administrators, parents, and students. OER Commons also provides educators with tools to align OER to the Common Core State Standards, to evaluate the quality of OER against OER rubrics, and to contribute and share OERs with other teachers and learners worldwide. To further promote the sharing of these resources among educators, in 2008 ISKME launched the OER Commons Teacher Training Initiative, which focuses on advancing open educational practices and on building opportunities for systemic change in teaching and learning. One of the first OER resources for K-12 education is Curriki. A nonprofit organization, Curriki provides an Internet site for open source curriculum (OSC) development, to provide universal access to free curricula and instructional materials for students up to the age of 18 (K-12). By applying the open source process to education, Curriki empowers educational professionals to become an active community in the creation of good curricula. Kim Jones serves as Curriki's Executive Director. In August 2006 WikiEducator was launched to provide a venue for planning education projects built on OER, creating and promoting open education resources (OERs), and networking towards funding proposals. Its Learning4Content project builds skills in the use of MediaWiki and related free software technologies for mass collaboration in the authoring of free content, and claims to be the world's largest wiki training project for education. By 30 June 2009 the project had facilitated 86 workshops, training 3,001 educators from 113 different countries. Between 2006 and 2007, as a Transversal Action under the European eLearning Programme, the Open e-Learning Content Observatory Services (OLCOS) project carried out a set of activities aimed at fostering the creation, sharing and re-use of Open Educational Resources (OER) in Europe and beyond. The main result of OLCOS was a roadmap intended to provide decision makers with an overview of current and likely future developments in OER and recommendations on how various challenges could be addressed. Peer production has also been utilized in producing collaborative open education resources (OERs). Writing Commons, an international open textbook spearheaded by Joe Moxley at the University of South Florida, has evolved from a print textbook into a crowd-sourced resource for college writers around the world. 
Massive open online course (MOOC) platforms have also generated interest in building online eBooks. The Cultivating Change Community (CCMOOC) at the University of Minnesota is one such project founded entirely on a grassroots model to generate content. In 10 weeks, 150 authors contributed more than 50 chapters to the CCMOOC eBook and companion site. In 2011–12, academics from the University of Mumbai, India, created an OER portal with free resources on microeconomics, macroeconomics, and soft skills available to global learners. Another project is the Free Education Initiative from the Saylor Foundation, which is currently more than 80% of the way towards its initial goal of providing 241 college-level courses across 13 subject areas. The Saylor Foundation makes use of university and college faculty members and subject experts to assist in this process, as well as to provide peer review of each course to ensure its quality. The foundation also supports the creation of new openly licensed materials where they are not already available, including through its Open Textbook Challenge. In 2010 the University of Birmingham and the London School of Economics worked together on the HEA- and JISC-funded DELILA project, whose main aim was to release a small sample of open educational resources to support embedding digital and information literacy education into institutional teacher training courses accredited by the HEA, including PGCerts and other CPD courses. One of the main barriers to sharing resources in information literacy that the project found was copyright belonging to commercial database providers. In 2006, the African Virtual University (AVU) released 73 modules of its Teacher Education Programs as open education resources to make the courses freely available for all. In 2010, the AVU developed an OER repository which has contributed to increasing the number of Africans who use, contextualize, share and disseminate existing as well as future academic content. The online portal serves as a platform where the 219 modules of mathematics, physics, chemistry, biology, ICT in education, and teacher education professional courses are published. The modules are available in three languages, English, French, and Portuguese, making the AVU the leading African institution in providing and using open education resources. In August 2013, Tidewater Community College became the first college in the U.S. to create an Associate of Science degree based entirely on openly licensed content, the "Z-Degree". The combined efforts of a 13-member faculty team, college staff and administration culminated when students enrolled in the first "z-courses", which are based solely on OER. The goals of this initiative were twofold: 1) to improve student success, and 2) to increase instructor effectiveness. Courses were stripped down to their learning outcomes and rebuilt using openly licensed content, reviewed and selected by the faculty developer based on its ability to facilitate student achievement of the objectives. The 21 z-courses that make up an associate of science degree in business administration were launched simultaneously across four campus locations. TCC is the 11th largest public two-year college in the nation, enrolling nearly 47,000 students annually. During the same period, from 2013 to 2014, Northern Virginia Community College (NOVA) also created two zero-cost OER degree pathways: one an associate degree in General Studies, the other an associate degree in Social Science. 
One of the largest community colleges in the nation, NOVA serves around 75,000 students across six campuses. NOVA Online (formerly known as the Extended Learning Institute or ELI) is the centralized online learning hub for NOVA, and it was through ELI that NOVA launched its OER-Based General Education Project. Dr. Wm. Preston Davis, Director of Instructional Services at NOVA Online, led the ELI team of faculty, instructional designers and librarians on the project to create what NOVA calls "digital open" courses. During the planning phase, the team was careful to select core, high-enrollment courses that could impact as many students as possible, regardless of specific course of study. At the same time, the team looked beyond individual courses to create depth and quality around full pathways for students to earn an entire degree. From Fall 2013 to Fall 2016, more than 15,000 students enrolled in NOVA OER courses, yielding textbook cost savings of over $2 million over the three-year period. Currently, NOVA is working to add a third OER degree pathway in Liberal Arts. Nordic OER is a Nordic network to promote open education and collaboration amongst stakeholders in all educational sectors. The network has members from all Nordic countries and facilitates discourse and dialogue on open education, and also participates in projects and development programs. The network is supported by the Nordic OER project co-funded by Nordplus. In Norway the Norwegian Digital Learning Arena (NDLA) is a joint county enterprise offering open digital learning resources for upper secondary education. In addition to being a compilation of open educational resources, NDLA provides a range of other online tools for sharing and cooperation. At project startup in 2006, increased volume and diversity were seen as significant conditions for the introduction of free learning material in upper secondary education. The incentive was an amendment requiring the counties to provide free educational material, in print as well as digital form, including digital hardware. In Sweden there is a growing interest in open publication and the sharing of educational resources, but the pace of development is still slow. There are many questions to be dealt with in this area for universities, academic management and teaching staff. Teachers in all educational sectors require support and guidance to be able to use OER pedagogically and with quality in focus. To realize the full potential of OER for students' learning, it is not enough to make patchwork use of OER; resources have to be put into context. Valuable teacher time should be used for contextual work and not simply for the creation of content. The aim of the OER for Learning (OERSweden) project is to stimulate an open discussion about collaboration on infrastructural questions regarding open online knowledge sharing. A network of ten universities led by Karlstad University will arrange a series of open webinars during the project period focusing on the use and production of open educational resources. A virtual platform for Swedish OER initiatives and resources will also be developed. The project intends to focus in particular on how OER affects teacher trainers and decision makers. 
The objectives of the project are: To increase the level of national collaboration between universities and educational organisations in the use and production of OER, To find effective online methods to support teachers and students, in terms of quality, technology and retrievability of OER, To raise awareness for the potential of webinars as a tool for open online learning, To increase the level of collaboration between universities' support functions and foster national resource sharing, with a base in modern library and educational technology units, and To contribute to the creation of a national university structure for tagging, distribution and storage of OER. Founded in 2007, the CK-12 Foundation is a California-based non-profit organization whose stated mission is to reduce the cost of, and increase access to, K-12 education in the United States and worldwide. CK-12 provides free and fully customizable K-12 open educational resources aligned to state curriculum standards and tailored to meet student and teacher needs. The foundation's tools are used by 38,000 schools in the US, and additional international schools. LATIn Project brings a Collaborative Open Textbook Initiative for Higher Education tailored specifically for Latin America. This initiative encourages and supports local professors and authors to contribute with individual sections or chapters that could be assembled into customized books by the whole community. The created books are freely available to the students in an electronic format or could be legally printed at low cost because there is no license or fees to be paid for their distribution, since they are all released as OER with a Creative Commons CC-BY-SA license. This solution also contributes to the creation of customized textbooks where each professor could select the sections appropriate for their courses or could freely adapt existing sections to their needs. Also, the local professors will be the sink and source of the knowledge, contextualized to the Latin American Higher Education system. In 2014, the William and Flora Hewlett Foundation started funding the establishment of an OER World Map that documents OER initiatives around the world. Since 2015, the hbz and graphthinking GmbH develop the service with funding by the Hewlett Foundation. The first version of the website was launched in March 2015 and the website is continuously developing. The OER World Map invites people to enter a personal profile as well to add their organization, OER project or service to the database. In March 2015, Eliademy.com launched the crowdsourcing of OER courses under CC licence. The platform expects to collect 5000 courses during the first year that can be reused by teachers worldwide. In 2015, the University of Idaho Doceo Center launched open course content for K-12 schools, with the purpose of improving awareness of OER among K-12 educators. This was shortly followed by an Open Textbook Crash Course, which provides K-12 educators with basic knowledge about copyright, open licensing, and attribution. Results of these projects have been used to inform research into how to support K-12 educator OER adoption literacies and the diffusion of open practices. In 2015, the MGH Institute of Health Professions, with help from an Institute of Museum and Library Services Grant (#SP-02-14-0), launched the Open Access Course Reserves (OACR). 
With the idea that many college-level courses rely on more than a single textbook to deliver information to students, the OACR is inspired by library course reserves in that it supplies entire reading lists for typical courses. Faculty can find, create, and share reading lists of open access materials. Today, OER initiatives across the United States rely on individual college and university librarians to curate resources into lists on library content management systems called LibGuides. OER repositories can be found by discipline through an individualized LibGuide, such as the one from Indian River State College. In response to COVID-19, the Principal Institute partnered with Fieth Consulting, LLC, California State University's SkillsCommons and MERLOT to create a free online resource hub designed to help administrators, teachers, students, and families more effectively support teaching and learning online, with sections of resources for leaders, teachers, learners, and families of online learners. Several higher education institutions have initiated OER programs; notable OER sites include Open Michigan, the BCcampus Open Textbook collection, RMIT, Open Access at Oxford University Press, Maryland Open Source Textbook (M.O.S.T.), OpenEd@UCL, and the OER initiative of the University of Edinburgh. Several initiatives have also been taken by higher education faculty, such as Affordability Counts, run by faculty across Florida's state universities and colleges, and Affordable Learning Georgia, which spans Georgia's public institutions; individual faculty members have also made free textbooks available through efforts such as Green Tea Press. The North Dakota University System received appropriated funding from the North Dakota state legislature to train instructors to adopt OER, and maintains a repository of OER. International programs High hopes have been voiced for OERs to alleviate the digital divide between the global North and the global South, and to make a contribution to the development of less advanced economies. Europe The Learning Resource Exchange for schools (LRE) is a service launched by European Schoolnet in 2004 enabling educators to find multilingual open educational resources from many different countries and providers. Currently, more than 200,000 learning resources are searchable in one portal based on language, subject, resource type and age range. India The National Council of Educational Research and Training (NCERT) digitized all its textbooks from the 1st standard to the 12th standard. The textbooks are available online for free. The Central Institute of Educational Technology (CIET), a constituent unit of NCERT, digitized more than a thousand audio and video programmes. All the educational AV material developed by CIET is presently available at the Sakshat Portal, an initiative of the Ministry of Human Resource Development. In addition, the National Repository of Open Educational Resources (NROER) houses a variety of e-content. US Washington State's Open Course Library Project is a collection of expertly developed educational materials including textbooks, syllabi, course activities, readings, and assessments for 81 high-enrolling college courses. All courses have now been released and are providing faculty with a high-quality option that will cost students no more than $30 per course. 
However, a study found that very few classes were actually using these materials. Japan Since its launch in 2005, the Japan OpenCourseWare Consortium (JOCW) has been actively promoting the OER movement in Japan, with more than 20 institutional members. Dominica The Free Curricula Centre at New World University expands the utility of existing OER textbooks by creating and curating supplemental videos to accompany them, and by converting them to the EPUB format for better display on smartphones and tablets. Bangladesh is the first country to digitize a complete set of textbooks for grades 1–12. Distribution is free to all. Uruguay sought up to 1,000 digital learning resources in a Request For Proposals (RFP) in June 2011. In 2011, South Korea announced a plan to digitize all of its textbooks and to provide all students with computers and digitized textbooks by 2015. The California Learning Resources Network's Free Digital Textbook Initiative, aimed at the high school level, was initiated by former Gov. Arnold Schwarzenegger. The Michigan Department of Education provided $600,000 to create the Michigan Open Book Project in 2014. The initial selection of OER textbooks in history, economics, geography and social studies was issued in August 2015. There has been significant negative reaction to the materials' inaccuracies, design flaws and confusing distribution. The Shuttleworth Foundation's Free High School Science Texts for South Africa. Saudi Arabia had a comprehensive project in 2008 to digitize and improve the math and science textbooks in all K-12 grades. Saudi Arabia started a project in 2011 to digitize all textbooks other than math and science. The Arab League Educational, Cultural and Scientific Organization (ALECSO) and the U.S. State Department launched an Open Book Project in 2013, supporting "the creation of Arabic-language open educational resources (OERs)". With the advent of growing international awareness and implementation of open educational resources, a global OER logo was adopted for use in multiple languages by UNESCO. The design of the Global OER logo creates a common global visual idea, representing "subtle and explicit representations of the subjects and goals of OER". Its full explanation and recommendation of use is available from UNESCO. Major academic conferences Open Education Conference: held annually in North America (US and Canada). OER Conference: held annually in Europe. OE Global Conference: run by Open Education Global and held annually in a variety of locations across the world. Creative Commons Global Summit: Creative Commons hosts its global summit annually, and one of its main topics is open education and OER. Critical discourse about OER as a movement External discourse The OER movement has been accused of insularity and failure to connect globally: "OERs will not be able to help countries reach their educational goals unless awareness of their power and potential can rapidly be expanded beyond the communities of interest that they have already attracted." More fundamentally, doubts have been cast on the altruistic motives typically claimed for OER initiatives. The project itself was accused of imperialism because the economic, political, and cultural preferences of highly developed countries determine the creation and dissemination of knowledge that can be used by less-developed countries, and may be a self-serving imposition. 
To counter the general dominance of OER from developed countries, the Research on OER for Development (ROER4D) project aims to study how OER can be produced in the Global South (developing countries) in ways that meet the local needs of institutions and people. It seeks to understand in what ways, and under what circumstances, the adoption of OER can address the increasing demand for accessible, relevant, high-quality and affordable post-secondary education in the Global South. One of the ROER4D sub-projects aimed to work with teachers from government schools in Karnataka to collaboratively create OER, including in the Kannada language spoken in the state. The aim was to create a model where teachers in public education systems (who number hundreds of thousands in most countries) can collaborate to create and publish OER. Internal discourse Within the open educational resources movement, the concept of OER is itself actively debated. Consider, for example, the conceptions of gratis versus libre knowledge as found in the discourse about massive open online courses, which may offer free courses but charge for end-of-course awards or course verification certificates from commercial entities. A second example of essentially contested ideas in OER can be found in the usage of different OER logos, which can be interpreted as indicating more or less allegiance to the notion of OER as a global movement. Stephen Downes has argued that, from a connectivist perspective, the production of OER is ironic because "in the final analysis, we cannot produce knowledge for people. Period. The people who are benefiting from these open education resource initiatives are the people who are producing these resources." See also Bookboon (adware) Connectivism Distance education Educational research Educational technology Flexbook (CC BY-NC) Free education Free High School Science Texts Free and open-source software George Siemens Gooru Internet Archive Joint Information Systems Committee Khan Academy Language MOOC Libre knowledge MERLOT North Carolina Learning Object Repository OER4Schools Open access Open content Open Course Library OpenCourseWare OpenEd OpenLearn Open Library Open source curriculum OpenStax Outline of open educational resources PhET Interactive Simulations Project Gutenberg Question and Test Interoperability specification Stephen Downes Virginia Open Education Foundation Writing Commons Online credentials for learning Open educational resources in Canada Open educational practices in Australia
5396155
https://en.wikipedia.org/wiki/Joseph%20C.%20Porter
Joseph C. Porter
Joseph Chrisman Porter (12 September 1809 – 18 February 1863) was a Confederate officer in the American Civil War, a key leader in the guerrilla campaigns in northern Missouri, and a figure of controversy. The main source for his history, Joseph A. Mudd (see below), is clearly an apologist; his opponents take a less charitable view of him, and his chief adversary, Union Colonel John McNeil, regarded him simply as a bushwhacker and traitor, though his service under General John S. Marmaduke in the Springfield campaign ("Marmaduke's First Raid") and afterwards clearly shows he was regarded as a regular officer by the Confederacy. Early life Joseph C. Porter was born in Jessamine County, Kentucky, to James and Rebecca Chrisman Porter. The family moved to Marion County, Missouri, in 1828 or 1829, where Porter attended Marion College in Philadelphia, Missouri, and was a member of the Presbyterian Church. About 1844, Porter married Mary Ann E. Marshall (d. DeWitt, AR "about two years after the war closed," according to Porter's sister). They subsequently moved to Knox County, remaining there until 1857, when they moved to Lewis County and settled five miles east of Newark. Family members assert that only one photograph of Porter was known to exist, and it was destroyed when his home was burned by Union soldiers. Porter had strong Southern sympathies, and was subject to harassment by pro-Union neighbors, since he lived in an area where loyalties were sharply divided. His brother, James William Porter (b. 1827, m. Carolina Marshall, sister to Joseph's wife Mary Ann, 1853), was also a Confederate officer and Joseph's trusted subordinate, reaching the rank of major. The brothers went to California during the Gold Rush of 1849, then returned to Missouri and farmed together before the war. Civil War The Porter brothers enrolled with Colonel Martin E. Green's Missouri State Guard regiment and participated in the attack on the Union Home Guard at Athens; they later participated in the Confederate attack on Lexington in September 1861. Joseph Porter had no prior military experience, but proved to be a natural leader and was elected a lieutenant colonel (an official commission would come later) in the Missouri State Guard. Following his participation in the Battle of Pea Ridge in March 1862, Porter returned home on the orders of General Sterling Price to raise recruits throughout northeast Missouri. His duties included the establishment of supply drops, weapons caches and a network of pro-Southern informants. As a colonel he commanded the 1st Northeast Missouri Cavalry. Throughout Porter's brief military career, his status as a regular army officer was not fully recognized by his adversaries, particularly Colonel John McNeil. Those serving behind Union lines were not recognized as legal combatants and were threatened with execution if captured. Though most of his activities were guerrilla operations or harassment, a few battles were fought. On June 17, 1862, near Warren or New Market, in Warren Township, Marion County, he and 43 mounted men captured four men of the Union regiment they found there. The prisoners' weapons and horses were taken, and the men were then paroled on their oath not to take up arms against the Confederacy until exchanged. Cherry Grove Moving northward through the western part of Marion, the eastern portion of Knox, and the western border of Lewis counties, Porter approached Sulphur Springs, near Colony, in Knox County. Along his route he collected perhaps 200 recruits.
From Sulphur Springs he moved north, threatened the Union Home Guards at Memphis, picked up additional recruits in Scotland County, and moved westward into Schuyler County to get a company known to be there under Captain Bill Dunn. Union forces under Colonel Henry S. Lipscomb and others responded with a march on Colony. They overtook Porter at Cherry Grove, in the northeastern part of Schuyler County, near the Iowa line, where, with a superior force, they attacked and defeated him, routing his forces and driving them southward. Losses on both sides were minor. Porter retreated rapidly, pursued by Lipscomb, until his forces dispersed at a point about 10 miles west of Newark. Porter, with perhaps 75 men, remained in the vicinity of his home for some days, gathering recruits all the time, and getting ready to strike again. Memphis On Sunday, July 13, Porter approached Memphis, Missouri in four converging columns totalling 125–169 men and captured it with little or no resistance. They first raided the Federal armory, seizing about a hundred muskets with cartridge boxes and ammunition, and several uniforms (Mudd, see below, was among those who would wear the Union uniform, as he claimed, for its superior comfort in the heat, a fact which would later draw friendly fire and aggravate the view of Porter's troops as bushwhackers, neither obeying nor protected by the rules of war). They rounded up all adult males, who were taken to the court house to swear not to divulge any information about the raiders for forty-eight hours. Porter freed all militiamen or suspected militiamen to await parole, a fact noted by champions of his character. Citizens expressed their sympathies variously; Porter gave safe passage to a physician, an admitted supporter of the Union, who was anxious to return to his seriously ill wife. A verbally abusive woman was threatened with a pistol by one of Porter's troops, perhaps as a bluff; Mudd intervened to prevent bloodshed. Porter's troops entered the courthouse and destroyed all indictments for horse-theft; the act is variously understood as simple lawlessness, intervention on behalf of criminal associates, or interference with politically motivated, fraudulent charges. At Memphis, a key incident occurred which would darken Porter's reputation, and which his detractors see as part of a consistent behavioral pattern which put him and his men beyond the norms of warfare. According to the "History of Shelby County," which is generally sympathetic to Porter, "Most conceded that Col. Porter's purpose for capturing Memphis, MO. was to seize Dr. Wm. Aylward, a prominent Union man of the community." Aylward was captured during the day by Captain Tom Stacy's men and confined to a house. After rousing him overnight and removing him, ostensibly to see Porter, guards claimed that he escaped. However, witnesses reported hearing the sounds of a strangling, and his body was found the next day, with marks consistent with hanging or strangulation. At Memphis, Porter had been joined by Tom Stacy, generally regarded as a genuine bushwhacker – even the sympathetic Mudd says of him "if one of his men were captured and killed he murdered the man who did it if he could catch him, or, failing him, the nearest man he could catch to the one who did it." Stacy's company was called "the chain gang" by the other members of Porter's command. Supporters of Porter attribute the murder of Aylward to Stacy (who would be mortally wounded at Vassar Hill.) 
However, a Union gentleman who came to inquire about Aylward and a captured officer before the discovery of the body stated that when he asked Porter about Aylward, the response was, "He is where he will never disturb anybody else." Vassar Hill Union Col. (later General) John McNeil pursued Porter, who planned an ambush with perhaps 125 men according to participant Mudd (though Federal estimates of Porter's strength ran from 400 to 600 men). The battle is called "Vassar Hill" in the History of Scotland County; Porter himself called it "Oak Ridge," and Federal forces called it "Pierce's Mill," after a location 1.5 miles northwest of the battlefield. A detachment of three companies (C, H, I), about 300 men of Merrill's Horse, under Major John Y. Clopper, was dispatched by McNeil from Newark against Porter, and attacked him at 2 p.m. on Friday, July 18, on the south fork of the Middle Fabius River, ten miles southwest of Memphis. Porter's men were concealed in brush and stayed low when the Federals stopped to fire prior to each charge. Porter's men held their fire until the range was very short, increasing the lethality of the volley. Clopper was in the Federal front, and out of 21 men of his advance guard, all but one were killed or wounded. The Federals made at least seven mounted charges according to Mudd, doing little but adding to the body count. A battalion of roughly 100 men of the 11th Missouri State Militia Cavalry under Major Rogers arrived and dismounted. While Clopper claimed to have driven the enemy from the field after this, Mudd indicates that the Federals instead fell back and ended the engagement, leaving Porter in possession of the field until he withdrew. Clopper's reputation suffered as a result of his poor tactics. Before the final charge one company officer angrily asked, "Why don't you dismount those men and stop murdering them?" On page 86 of "With Porter in North Missouri", Mudd describes "One of our boys, down the line out of my sight, losing his head fired too soon and when the Federal was about to ride him down, had an empty gun in his hand. This he clubbed and striking his assailant a powerful blow on the neck, killed him." In Joseph Budd's pension records, his death is described as occurring due to "a stroke of a weapon breaking his neck". Union casualties were about 24 killed and mortally wounded (10 from Merrill's Horse and 14 from the 11th MSM Cavalry), and perhaps 59 wounded (24 from Merrill's Horse and 35 from the 11th MSM Cavalry). Porter's loss was as little as three killed and five wounded according to Mudd, or six killed, three mortally wounded, and 10 wounded left on the field according to the Shelby County History. The Union dead were originally buried on the Jacob Maggard farm, which served as a temporary hospital. After the fight, Porter moved westward a few miles, then south through Paulville, in the eastern part of Adair County; thence south-east into Knox County, passing through Novelty, four miles east of Locust Hill, at noon on Saturday, July 19, having fought a battle and made a march of sixty-five miles in less than twenty-four hours. Florida July 22: Detachments of F & G Companies (60 men total) of the 3rd Iowa Volunteer Cavalry under Major Henry Clay Caldwell encountered Porter with 300 rebels at Florida in Monroe County, Missouri. The detachment fought outnumbered for one hour and fell back upon the post of Paris, Missouri, with 22 wounded and 2 captured.
Santa Fe July 24: Major Caldwell and 100 men of his 3rd Iowa Volunteer Cavalry pursued Porter and his 400 men into dense brush near Botts' farm, near Santa Fe, Missouri. Porter fled and was pursued into Callaway County, Missouri. The Second Battalion suffered one killed and ten wounded. Moore's Mill July 28: Union forces under Colonel (later General) Odon Guitar engaged Porter near Moore's Mill (now the village of Calwood) in Callaway County. The Union losses were 19 killed, 21 wounded. Guerrilla losses were 36–60 killed, 100 wounded. This was one of Porter's most aggressive actions, involving a daring charge and disabling the Federal artillery, until he was forced to retreat by the arrival of Union reinforcements and the exhaustion of his ammunition. Newark August 1: McNeil had dispatched Lair to Newark. Porter headed westward from Midway, putting his brother Jim Porter in charge of one column, himself at the head of another, approaching the town from east and south simultaneously, and closing the trap on the completely surprised Federals at 5 p.m. on July 31. Porter forced a company of 75 Federals to take refuge in a brick schoolhouse; when they refused terms, he had a loaded haywagon fired and threatened to run it into the building. The Federals surrendered, were paroled and permitted to keep their sidearms. The Federal loss in the Newark fight was 4 killed, 6 wounded, and 72 prisoners. The Confederate loss was reported at from 10 to 20 killed, and 30 severely wounded. Union soldiers were treated well, but the Union-sympathizing storekeepers had their businesses gutted, and citizens were subjected to abuse. Some claim this was in spite of Porter's orders and that he bore his old neighbors no malice, while others view this action as Porter's revenge for previous ill-treatment. Despite the victory at Newark, the high casualties on the winning side, attributed to chaotic advance and undisciplined exposure of Porter's troops to hostile fire, suggest growing disorder in his ranks. From here, records of his activities—and even the degree to which he can be said to have a unified command—are unclear. Various forces with varying degrees of official relation to Porter's command are credited with capturing Paris and Canton, and with bringing in new supplies and recruits. Porter's numbers had swelled to a size likely to be unmanageable, particularly considering the lack of trained officers and that not more than a quarter of his 2000 or so troops had regulation equipment. Perhaps another quarter had squirrel-guns or shotguns, while the rest had no arms at all. Porter's objective was now to get south to Arkansas with his recruits, in order that they might be properly trained and equipped. Kirksville August 6, 1862: At Kirksville, Porter made a serious mistake in engaging Union forces under Col. John McNeil, whom he knew to have cannon – perhaps in overconfidence, as a result of his sharpshooters' ability to pick off the Federal artillerymen at Santa Fe. Traveling light had been Porter's great advantage – "His troops lived off the country, and every man was his own quartermaster and commissary," in contrast to the elaborate baggage and supply trains of McNeil ("History of Shelby County"). Here Porter suffered unequivocal defeat, from which he would not recover. Dispersal of forces At Clem's Mills, five miles west of Kirksville, Porter crossed the Chariton River, seeking to link up with Col. John A.
Poindexter in Chariton County, known to have 1,200 or 1,500 recruits; their combined forces would be able to force a passage of the Missouri River at Glasgow or Brunswick, and open a line to the Confederacy. Three miles north of Stockton (now New Cambria), in western Macon County, Porter encountered 250 men of the First Missouri State Militia, under Lieut. Col. Alexander Woolfolk, coming up to unite with McNeil. There was a brief fight at Panther Creek, Friday, August 8. Porter was turned from his course and retreated toward the northeast, away from his intended line of march and ultimate goal. The next day, Col. James McFerran, of the First Missouri State Militia, joined Woolfolk with 250 more men and took command. He caught up with Porter at Walnut Creek, in Adair County and drove him eastward to the Chariton. At See's Ford, where he recrossed the Chariton, Porter set up an ambush on the east bank with 125 men. Porter's forces opened fire at short range. Only two Federals were killed outright and 15 wounded, but the action seemed to have caused McFerran to break off pursuit. Porter passed on to Wilsonville, in the south-east part of Adair. Here, a mass desertion took place among his discouraged troops; in a few hours, 500 had drifted away. Capture of Palmyra and the Allsman incident Porter wandered around the wilderness, his desertion-diminished troops feeding off the land, although there were some new recruits as well. On Friday, September 12, Porter, with 400 men, captured Palmyra, with 20 of its garrison, and held the place two hours, losing one man killed and one wounded. One Union citizen was killed and three Federals wounded. Porter's objectives were to liberate Confederates held in the jail there, and to draw Federal forces away from the Missouri River, so as to open it to southward crossing by rebels seeking to join Confederate units. The Confederates carried away an elderly Union citizen named Andrew Allsman. The fate of Allsman remains something of a mystery, and there is disagreement as well about his character and his legitimacy as a target (see Palmyra Massacre). Porter quickly abandoned Palmyra to McNeil, and another period of wandering ensued, in the general direction of his own home near Newark. There were further desertions, and a number of bands of organized rebels refused to place themselves under Porter's command, clearly indicating that he had lost public confidence. At Whaley's Mill, his men were definitively scattered, almost without a fight. Death After his rout by McNeil at Whaley's Mill, and the dispersion of his troops at Bragg's school house, Col. Porter kept himself hidden for a few days. He abandoned the idea of raising a militarily significant force, and entered Shelby County on a line of march to the South with fewer than 100 men remaining. He made his way safely through Monroe, Audrain, Callaway and Boone counties, and crossed the Missouri River in a skiff, continuing into Arkansas. Here he organized, from the men who had accompanied him and others whom he found in Arkansas, a regiment of Missouri Confederate cavalry. From Pocahontas, Arkansas, in the latter part of December 1862, as acting brigadier, he moved with his command and the battalions of Cols. Colton Greene and J. Q. A. Burbridge, to cooperate with Gen. John S. Marmaduke in his attack on Springfield. Through a mistake of Gen. Marmaduke, Col. Porter's command did not participate in this attack. It moved on a line far to the east. 
After the expedition had failed, the commands of Marmaduke and Porter united east of Marshfield, and started to retreat into Arkansas. At the Battle of Hartville, in Wright County on January 11, 1863, a small Federal force was encountered and defeated, although at severe loss to the Confederates, who had many valuable officers killed and mortally wounded. Among the latter was Colonel Porter, commanding a brigade, shot from his horse with wounds to the leg from an artillery shell. In Oates's account (pp. 118–119), Porter died an hour later. According to Mudd, however, Porter was shot from his horse with wounds to the leg and the hand while leading a charge; in this account, Porter managed to accompany the army on a difficult trek into Arkansas, arriving at Camp Sallado on January 20, and at Batesville on January 25, where he died from his wounds on February 18, 1863. The early date is refuted by Porter's own report, dated February 3, referencing the journey after the battle, as well as eyewitness Major G.W.C. Bennett's reference to "Porter's column" on the march several days after and dozens of miles away from the battle, and finally by Marmaduke's noting Porter among the wounded, in contrast to the listing of officers killed; additional near-contemporary sources also affirm Porter's survival of the journey to Arkansas. The January 11 date seems to originate with General Fitz Henry Warren, who reported as fact the speculation that a burial observed by a recently paroled Lieutenant Brown was that of Porter. The location of Col. Porter's grave remains unknown. Oral traditions suggest that he was at some point buried on the farm of his cousin Ezekiel Porter (said to be a volunteer ambulance driver during the war), just north of Hartville, in what is now known as Porter's Cemetery, near Competition, Missouri. Legacy and evaluation Porter is variously credited with five or nine children, only two of whom were living at the time of Mudd's book: his daughter, Mrs. O.M. White, and his son, Joseph I. Porter of Stuttgart, AR, who wrote: "I know but little about the war and have been trying to forget what I do know about it. I hope never to read a history of it." Porter's daughter O. M. White wrote that the family did not have a picture of their father, "the only one we ever had was destroyed when our home was burned by the soldiers during the war." Porter's character is hard to estimate: clearly he possessed considerable personal courage, but he was also a prudent tactician, often declining battle when he could not choose his ground and when he thought the potential for casualties disproportionate to projected gains. When he declined to pursue the retreating Union force at Santa Fe, Mudd has him say, "I can't see that anything would be accomplished by pursuing the enemy. We might give them a drive and kill a dozen of them and we might lose a man or two, and I wouldn't give them one of my men for a dozen dead federals unless to gain some particular purpose." A number of atrocities are attributed to him, but the partisanship of accounts makes it difficult to ascertain his responsibility for the killings of Dr. Aylward, Andrew Allsman, James Dye at Kirksville, a wounded Federal at Botts' Farm, and others, though it must be concluded that he failed to communicate the unacceptability of such actions to his subordinates. There is reliable eyewitness testimony to his intervening to prevent the lynching of two captured Federals in retaliation for the execution of a Confederate prisoner at the Battle of Florida.
References Further reading Oates, Stephen B., Confederate Cavalry West of the River: Raiding Federal Missouri, U-TX, 1961, rpt 1992. House, Grant, "Colonel Joseph C. Porter's 1862 Campaign in Northeast Missouri." M.A. thesis. Western Illinois University, 1989. Mudd, Joseph A., With Porter in North Missouri. Washington, DC: National Publishing Co., 1909. 452p. Roth, Dave and Sallee, Scott E., "Porter's Campaign in Northeast Missouri and the Palmyra Massacre." Blue & Gray Magazine 17 (February 2000): 52-60. A tour of modern-day Northeast Missouri sites involved in Porter's campaign of 1862. Illus. History of Shelby County, Chapter 8. (1884). Shelby County Historical Society. The War of the Rebellion: a Compilation of the Official Records of the Union and Confederate armies, Volume XXII, Part 1, pages 205-207 contain Porter's report. The header is: "HDQRS. PORTER'S BRIG., MISSOURI CAV., C. S. ARMY, Camp Allen, February 3, 1863." People from Jessamine County, Kentucky Confederate States Army officers People of Missouri in the American Civil War Bushwhackers Confederate States military personnel killed in the American Civil War Missouri State Guard 1819 births 1863 deaths People from Memphis, Missouri
58767019
https://en.wikipedia.org/wiki/Oppo%20F5/F5%20Youth
Oppo F5/F5 Youth
The Oppo F5 and the Oppo F5 Youth are both smartphones manufactured by the Chinese tech company Oppo Electronics Corporation (known as Oppo). The F5 was released in October 2017, whilst the F5 Youth was released in November 2017, under Oppo's F Series. The series began in January 2016 with the release of the Oppo F1, delivering succeeding smartphones at roughly six-month intervals along with alternative variants such as the F3 Plus and F5 Youth. These phones place a heavy emphasis on the capabilities of their front-facing cameras. They are the successors of the Oppo F3 and Oppo F3 Plus, released in May 2017 and April 2017, respectively. The F5 and F5 Youth have since become the predecessors of the Oppo F7 and Oppo F7 Youth, as the company continues its production and development of the F series. Specifications Hardware The Oppo F5 and Oppo F5 Youth possess similar hardware specifications. Both phones are touchscreen operated, measure 156.5 x 76 x 7.5 mm, and weigh 152 grams. Both are lighter than their predecessor, the Oppo F3, which weighs 153 grams. The Oppo F5 has a plastic backing, whilst the F5 Youth uses an aluminium back. Both front panels are LCD glass touchscreens that use Corning's Gorilla Glass 5. Both are capable of running on GSM (2G), HSPA (3G), and LTE (4G) networks. The phones are also equipped with internal GPS and Bluetooth 4.2, and support dual-SIM functions, just as their predecessors, the Oppo F3 and F3 Plus, did. Both the F5 and F5 Youth come with a 3200 mAh battery, which remains unchanged from the F3. Despite the unchanged battery size, users and critics have said that Oppo successfully optimized the power consumption of the phones, as there has been a significant increase in screen-on time. Oppo has continued to keep the batteries on the series non-removable. Both phones have 6-inch FHD+ multi-touch screens with a resolution of 1080 x 2160 pixels in an 18:9 aspect ratio. This is an incremental change from the F3, which had a 5.5-inch 1080 x 1920 display in a 16:9 aspect ratio. The F5 and F5 Youth are powered by the same octa-core 2.5 GHz Cortex-A53 processor and the same Mali-G71 MP2 graphics processor. The Oppo F5 offers either 4GB or 6GB of RAM, whilst the F5 Youth variant is available with either 3GB or 4GB of RAM, depending on the country of purchase. The internal storage of both phones depends on their variant. The 4GB RAM Oppo F5 comes with 32GB of internal storage, whilst the 6GB variant comes with 64GB of internal storage. The Oppo F5 Youth is available with only 32GB of internal storage, regardless of model. Both phones have a microSD slot that supports cards of up to 256GB. The cameras on these phones differ: the F5's rear camera is 16MP, with features such as LED flash, HDR, panorama, and 1080p/30fps video recording. Its front-facing camera, which is strongly marketed by Oppo, has a 20MP lens. The F5 Youth shares similar camera specifications with the F3, as both carry 13MP rear cameras offering LED flash, HDR, panorama, and 1080p/30fps video recording. Their front-facing cameras shoot at 16MP. Both the F5 and F5 Youth have a 3.5mm headphone jack, and also have a loudspeaker located on the bottom of the phone, next to the microUSB 2.0 On-The-Go charging port. Oppo has kept the main sensors found in the F3 and F3 Plus: the accelerometer, proximity sensor, light sensor, and magnetometer.
Oppo has moved the fingerprint sensor from the front of the phone to the back on both the F5 and F5 Youth. Software The default operating system of the Oppo F5 and F5 Youth is Android 7.1 (Nougat), though this may vary up to 7.1.1 depending on the variant distributed in each region. The OS of these phones can be updated until the phones are no longer supported by newer Android versions. Both phones shipped with ColorOS 3.2, which is based on Android 7.1.1, whilst the F3 and F3 Plus ran a version of ColorOS based on Android 6.0. Reception GSM Arena provided a detailed review of the qualities they liked and disliked about the Oppo F5. They categorized the phone as 'trendy', as it possessed the main features of most modern 2017 phones, such as trimmed bezels, face unlock, and an ultra-wide aspect ratio. The reviewer found the display of the phone to have excellent color contrast along with sharp and precise images because of its 402ppi density. GSM Arena tested the battery life and gave the phone a 91-hour endurance rating in their daily-use tests, which they described as a good score by their standards. Overall, GSM Arena recommended the phone, but believed it should have been priced more competitively for a mid-range device. GSM Arena did not provide their own review for the Oppo F5 Youth. Andrew Williams from Trusted Reviews published his review of the Oppo F5 on 19 December 2017. He praised the phone's widescreen display with thin bezels, which is uncommon on a mid-tier phone. In terms of performance, Williams found that the phone ran decently but struggled to compete with the processors of rival phones. He noted, however, that it maintained decent frame rates for games and navigated the interface smoothly without many issues. Williams applauded the front-facing camera, which captured detailed and accurate selfies. The rear camera did not receive the same positive feedback: he noted that it was heavily dependent on the lighting of the scene, though it was occasionally capable of capturing detail-rich images. The reviewer also stated that battery life was not an issue, although it did not last long enough to count as one of the phone's strengths. Williams concluded that the phone would be a solid choice as a budget Android phone, especially for those who enjoy taking selfies. The Oppo F5 was also reviewed by Abhishek Baxi from Android Authority on 26 August 2018 in an article titled "Great selfies come at a premium". After spending two weeks with the phone, Baxi wrote that the phone did not fit well with its 'selfie expert' claim. According to the review, the phone's overall performance was enough to satisfy most people, but there were better and cheaper alternatives on the market. Baxi criticized the phone's plastic-covered back and heavy weight, but commended the ergonomic build of the phone, noting the rounded edges and the ease of single-handed use. He claimed that the display had great viewing angles, showing clear and crisp color accuracy, even under direct sunlight. Baxi said that the processor and chipset of the phone performed decently "if you don't stretch it too much". The reviewer again criticized the micro USB charging port, as it did not support fast charging.
His review of the camera also found it heavily dependent on lighting, with a noticeable quality drop in low-light conditions, though it was otherwise more than capable of taking high-quality images. The Oppo F5 Youth received a strongly positive review from Criss Galit of Gadget Pilipinas, who labelled the phone "Great Value for Less". Galit started off by complimenting the generous screen-to-body ratio, adding that "it doesn't feel like you're holding a 6-inch smartphone", as the phone is comfortable to hold in use. The reviewer described the display as crisp, vibrant, and color-accurate, even when placed under direct sunlight. He wrote that the phone's audio is loud and crisp, with present bass, but can suffer slightly at full volume. Galit applauded the operating system of the phone, as it is simple, clean, and pleasant to the eye. Both the front-facing and rear cameras were commended by the reviewer, as they are capable of capturing sharp and accurate images with "true-to-life" color reproduction, even under low-light conditions. The chipset of the phone was also praised by Galit, as it was powerful enough to run high-end games smoothly. He also briefly added that the battery life of this phone was not an issue, as one full charge was sufficient for a full day's use. Galit concluded that the Oppo F5 Youth was simply a more affordable version of the F5: the two possess very similar specifications, but the F5 Youth is significantly cheaper for the performance it delivers. Issues Battery One of the main concerns with the battery on the Oppo F5 was that the phone did not support any type of fast charging, as it continued to use the older micro USB port. Performance Given that both phones were positioned as mid-tier devices, their hardware and specifications involved some cutbacks, and their chipsets struggled to run the latest demanding high-end games. Moreover, the Oppo F5 Youth heated up quickly when attempting to run powerful applications. Availability Oppo has created two variants of the Oppo F5 and Oppo F5 Youth. The Oppo F5 comes with either 4GB of RAM and 32GB of internal storage, or 6GB of RAM and 64GB of internal storage. The Oppo F5 Youth offers either 3GB or 4GB of RAM, both with 32GB of internal storage. The Oppo F5 comes in three color options: black, red, or gold. The Oppo F5 Youth offers two color choices: black or gold. Oppo announced that it would sell the Oppo F5 in the Philippines, India, Russia, Indonesia, Malaysia, Myanmar, Thailand, Vietnam, Pakistan, and Nepal. The Oppo F5 was unveiled in the Philippines on 27 October 2017, making the phone available for pre-order until 3 November 2017. Its official release began on 4 November 2017, with a starting price of 15,990 PHP. In India, the 4GB variant of the Oppo F5 launched on 2 November 2017, with a starting price of 19,990 INR. Malaysia likewise released the phone on 2 November 2017 at 1,298 MYR, and later released the 6GB variant of the Oppo F5 on 6 December 2017, available for pre-order at 1,698 MYR. Other countries that released the phone priced it in similar ranges: Pakistan's price started at 39,999 PKR, while Nepal's settled at 33,990 NPR. Russia launched the Oppo F5 slightly later, on 10 November 2017.
References Android (operating system) devices F5/F5 Youth Mobile phones introduced in 2017 Smartphones Discontinued smartphones
10106070
https://en.wikipedia.org/wiki/IBM%20JX
IBM JX
The IBM JX (or JXPC) was a personal computer released in 1984 into the Japanese, Australian and New Zealand markets. Designed in Japan, it was based on the technology of the IBM PCjr and was designated the IBM 5511. It was targeted in the Australasian market towards the public education sector rather than at consumers, and was sold in three levels: JX (64 KiB), JX2 (128 KiB) and JX3 (256 KiB). Upgrades were available to both 384 KiB and 512 KiB. The JX was the first IBM PC to use 3.5" floppy drives. IBM Japan expected to sell 200,000 units of the JX, but only 40,000 units were produced. The JX was discontinued in 1987, and IBM Japan gave 15,000 units of the JX to its employees in honor of the company's 50th anniversary. General The IBM JX's main differences from the PCjr were a professional keyboard (rather than the PCjr's disparaged chiclet keyboard), dual 3.5" floppy drives, and options for a 5.25" floppy drive and a hard drive, both of which sat atop the main unit. The JX did not support PCjr-like "sidecar" add-ons for hardware expansion. In common with the PCjr, however, it had no DMA controller. It also supported the otherwise unique-in-the-IBM-PC-world ECGA (Enhanced Color Graphics Adapter—16 simultaneous colors, but only at 320×200 resolution) and the PCjr's 4-channel sound. Support for these two features was utilised by only a handful of software developers—Sierra On-line being the best known. Configuration It had several innovative features: single or twin 3.5" 720 KB (initially only 360 KB) diskette drives, a wireless infra-red keyboard, 16-color video output, stackable expansion, joystick ports, and cartridge slots. In Japan, both white and dark gray units were available, but elsewhere all IBM JXs were dark gray—very unusual in the days when the IBM "beige box" was the standard color. All models sold in Japan have a Japanese font stored in 128 KB of ROM, but the basic system can only display 40×11 Japanese text. The Extended Display Cartridge provides a 40×25 Japanese text mode, and its display resolution is 720×512 pixels, like the 16-pixel-font models of the IBM 5550. This cartridge contains a BASIC interpreter compatible with the 5550's. However, one disadvantage it shared with the PCjr was that it could not use the standard ISA bus cards of the IBM PC. The system ran PC DOS 2.11 as well as Microsoft Disk BASIC and Microsoft Advanced BASIC. As on the PC, if the system was left to boot without a diskette in one of the drives, the Microsoft Cassette BASIC interpreter would be loaded; it was compatible with IBM PCjr BASIC, including Cartridge BASIC. PC DOS 2.11 could only use half of the tracks of a 3.5" drive, however, since it did not really understand what a 3.5" drive was. The PCjx's BIOS could only address the first 40 tracks, like a 5.25" drive. The PCjx later had a BIOS upgrade chip, sold together with PC DOS 3.21, which could use the full 720 KB capacity of the diskette drives. Some popular options for the PCjx were a 5.25" 360 KB diskette drive and a 10 MB external hard disk (both stackable units the same size as the JX itself), and a joystick. IBM never released a 3270 emulation adapter for the PCjx, in order to steer enterprise customers to more-expensive IBM PCs and XTs. Reception BYTE in 1985 called the JX "a Japanese product for the Japanese; its price and capabilities reflect its target market".
The magazine stated that its compatibility with PCjr peripherals rather than the PC's, and its joystick ports and audio, "suggests that IBM Japan is hedging its bets by pursuing a share of the easily saturated video-game sector". BYTE concluded that "the JX will enjoy, at best, a modest and short-lived success—it's too little, too late" against more-sophisticated rival computers. IBM Japan advertised the JX as a home computer, but its sales did not grow even in 1986. According to the Nikkei Personal Computing journal, a distributor revealed that the number of units sold was "around 2,000 units in Japan alone", and an industry insider expected "Sales to retail stores, overseas stores, IBM's employees, their family, and direct sales to large customers. Including all of these, about 10,000 units". One computer store declared that customers would not buy it even at one quarter of the list price. The Japanese home computer market was much smaller relative to its video game console market than in Western countries, yet NEC sold 75,000 units of the PC-88 in the four months from November 1985. Many people attributed the problem to the adoption of the 8088 processor. In Japan, the mainstream of Intel microprocessors was moving from the 8086 to the 80286, and computer enthusiasts regarded the 8088 as an inferior offshoot of that line. Also, a novice personal computer user generally chose a new machine on the advice of their closest acquaintance familiar with personal computers; such advice spread and damaged the JX's reputation among buyers. A developer of the JX insisted it was designed to run Western PCjr software without modification, but few Japanese users wanted such software. Moreover, there were not enough software titles for the JX. An independent software company said IBM Japan was uncooperative in supporting the development of JX software. Another company complained, "Some software for the JX have a brand called IBM, don't they? Even if another company creates better software, it can never beat them". The JX was the first IBM PC compatible computer sold by IBM Japan, but the company started selling the PC/XT and PC/AT in November 1985. The IBM 5550 sold well to Japanese companies that used IBM mainframe computers. The JX, which provided a Japanese text mode and a word processor, had the potential to expand into the small-business sector. However, in February 1985, IBM Japan released the IBM 5540 as the entry-level line of the IBM 5550. The IBM 5540 offered full compatibility with the 5550 at a price between the 5550 and the JX. A sales manager at IBM Japan expected it to expand the company's lineup of computers, but the announcement confused IBM users. A businessman who used the 5550 in his office and the JX at his home complained, "If I had known the 5540 would be released four months later, I wouldn't have purchased the JX". Both the JX and the 5540 took a long time to develop. The product manager explained that they spent 70% of their effort on compatibility with older machines every time they developed new machines. The Nikkei Personal Computing journal pointed out that both were developed at the same time at the Fujisawa Development Laboratory, and suspected that IBM Japan had been pressed by its parent company, IBM, to release the JX first. Masahiko Hatori, who developed the BIOS and DOS for the JX, recalled that the development staff were anxious the JX would be too late to compete with other Japanese machines, although management assumed that whatever IBM made would sell well. The staff themselves were using NEC and other companies' computers at home.
He also revealed that the JX used the 8088 processor because both the development and sales teams thought a consumer-class JX must not surpass the business-class 5550. The JX was designed to be inexpensive for personal use, but it did not suit consumers who wanted fast response times for gaming. IBM Japan did not disclose its unit sales, but the Nikkei Computer journal reported in 1987 that, according to an insider, only 40,000 units were produced. The company planned a 100% PC/AT-compatible machine, the "JX II", which could handle Japanese text as the JX did, but it was cancelled in 1986. IBM Japan did not develop another consumer product until it entered the education sector with the PS/55Z in 1988. References External links IBM JX, The As-Yet Unnamed Computer Museum!! IBM JX Information Page, IBM JX Information Page IBM PC JX, OLD-COMPUTERS.COM Museum 1984 (month unknown), Chronology of IBM Personal Computers (1983-1986), archive from the original on March 15, 2012. Photo:Vintage IBM 5511 No flag waving for the excellent IBM JX, Sunday Times Magazine (Australia), November 10, 1985 Photo:IBM JX joystick Jx 8086-based home computers Computer-related introductions in 1984
27592349
https://en.wikipedia.org/wiki/List%20of%20repetitive%20strain%20injury%20software
List of repetitive strain injury software
Repetitive strain injuries (RSIs) are injuries to the body's muscles, joints, tendons, ligaments, bones, or nerves caused by repetitive movements. Such injuries are more likely if the movements require force or are accompanied by vibration, compression, or the maintenance of sustained or awkward positions. Prolonged use of computer equipment can result in upper limb disorders, notably in the wrist or the back. RSIs are a subset of musculoskeletal disorders. This article discusses and lists some specialized software that is available to help individuals avoid injury or manage existing discomfort or injury associated with computer use. Software categories Software for RSIs generally addresses these functional categories: Break reminder – Some tools are reminders to take breaks based on factors like elapsed time, how much or how intensely a person is working, natural rest patterns, and times of day. Activity mitigation – Some tools reduce the amount of typing or mouse clicking (e.g. speech recognition tools, automatic clicking tools, hotkey/macro tools). Tracking – Some tools track information, such as time spent working each day, break-taking patterns, and repetitions (e.g. keystrokes, mouse clicks). Some tools have much more sophisticated statistics, including predictive risk assessments based on fairly sophisticated and research-based methodologies. Some tools also include discomfort assessments and reporting tools to help in finding associative patterns between objectively collected statistics and subjectively reported discomfort information. Networking – Some tools are able to handle multiple-computer use (e.g. for profile settings or for aggregating usage statistics) via networked data, including the ability to handle intermittent connectivity. Training – Some tools include a training component with information on topics including workstation setup, body positioning, work-efficiency tips, and psycho-social information. Break reminders This can be an important component for many users. Considerations for selecting a tool include the mechanism the tool uses to decide when an alert to take a break is needed, how breaks are taken, and how flexible the tool is. Many tools are simple timers (e.g. reminders to rest every 60 minutes). That may work well if a job requires constant and consistent computer work, but can be distracting if work is not constantly on the computer. Other tools consider natural rests and delay break suggestions accordingly. Some tools also consider patterns in activity and will suggest breaks sooner or later depending on activity. These tools can be less frustrating to people whose computer work is interspersed with other activities throughout the day. The various mechanisms for reminding you to take a break can include visual and audio indicators, workflow limiters (e.g. popup windows, screen dimmers/blankers), and much more. The best tools allow you to select which of these mechanisms you want to use. Flexibility is important since each person has different needs. Some tools have extensive customization capability that allows you to configure exactly how and when breaks will be suggested. Features to enforce breaks can also be helpful to people who want to take breaks but whose personalities are such that they have a hard time stopping work. Some tools have advanced features like the ability to block break suggestions during some activities (e.g. when showing a presentation, or in full-screen mode).
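The idle-aware decision logic described above can be illustrated with a short sketch. This is a minimal illustration only, not code from any of the listed products; the class name, the 50-minute work limit, and the 5-minute natural-rest threshold are assumptions chosen for the example. Work time accumulates while activity events arrive, a sufficiently long idle gap counts as a natural break and resets the counter, and a reminder becomes due once accumulated work exceeds the limit.

import time

class BreakReminder:
    """Sketch of an idle-aware break reminder (hypothetical parameters)."""

    def __init__(self, work_limit=50 * 60, natural_rest=5 * 60):
        self.work_limit = work_limit      # seconds of work before a break is suggested
        self.natural_rest = natural_rest  # an idle gap this long counts as a break
        self.accumulated = 0.0            # seconds of work since the last break
        self.last_event = None            # time of the most recent activity event

    def record_activity(self, now=None):
        """Call whenever a keystroke or mouse event is observed."""
        now = time.monotonic() if now is None else now
        if self.last_event is not None:
            gap = now - self.last_event
            if gap >= self.natural_rest:
                self.accumulated = 0.0    # the user already rested naturally
            else:
                self.accumulated += gap   # short pauses count as continuous work
        self.last_event = now

    def take_break(self):
        """Call when the user takes (or finishes) a suggested break."""
        self.accumulated = 0.0

    def break_due(self):
        return self.accumulated >= self.work_limit

In a real tool, record_activity would be driven by an operating-system keyboard/mouse hook, and break_due would be polled periodically to trigger whichever reminder mechanism (popup, screen dimmer, sound) the user has selected.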
Activity mitigation Applications with these tools seek to mitigate the impact of particular activities by either changing or reducing the associated exposure. This could involve changing or reducing input device use, improving a user interface to reduce stress, speeding up a process to reduce the time a user needs to be at the computer, etc. An example of a tool that changes the impact would be speech recognition. Speech recognition replaces keyboard (and sometimes mouse) input with vocal input. This type of solution can be very helpful at reducing some types of strain, but it's important to recognize that another significant strain may be created. An example of a tool that reduces the impact would be a hotkey tool or automatic clicking tool. These tools ideally reduce the number of keystrokes and mouse clicks that a user needs to perform to accomplish a particular task. A list of such software can be found in the automation software category. Another example of a tool that reduces the impact is breathing scrolling, which requires no mouse or keyboard for scrolling; it uses a microphone to scroll websites. One tip for using the mouse less often in software menus is to learn keyboard shortcuts. Partial list of solutions This is an alphabetical list; it does not rank application quality, nor is it complete. Many other applications exist. A "pages-of-Google-hits" score is provided with the reference to each program's home page. AntiRSI AntiRSI is a program for OS X that helps prevent RSI (repetitive strain injury) and other computer-related stress. It does so by forcing you to take regular breaks, yet without getting in the way. It also detects natural breaks. Auto Mouse AutoMouse software lets users click their mouse using keyboard hotkeys. By eliminating the need to click the mouse altogether, the strain associated with clicking is also eliminated. For highly repetitive form-based computer work, AutoMouse can also click the mouse when the cursor stops moving (e.g. resting the cursor on a button will click that button). As a result of its patent-pending design, there are no restrictions on mouse gestures such as drag-and-drop and repetitive clicking. The mouse remains fully functional to the user but without the need to physically click the mouse. AT Mouse AT Mouse allows PC users to move the mouse pointer efficiently using the keyboard keys. This avoids the use of a regular or dedicated mouse device, and the typing posture may be optimized for keyboard usage; as a result, RSI symptoms can be avoided or reduced. The solution also targets users who simply want to boost productivity, as well as users with reduced dexterity. Buddymove Buddymove is a macOS app which constantly analyses how long you've been working on your computer and chooses the perfect moment for a break. During the break, it shows you a set of exercises to do. CIP - Computer Injury Prevention Program CIP software provides a holistic injury prevention approach. CIP is a coati character which pops up on your computer screen every hour (serving as a break reminder) and takes the computer user through a series of short but effective injury prevention exercises. The exercises are for the user's eyes, hands, wrists, shoulders and legs. The program targets eye strain, repetitive strain injury and deep vein thrombosis. Clickless Mouse An open-source Windows application that allows the mouse to be used without clicking, by moving it only.
By reacting to user mouse movements, this application simulates left/right mouse clicks, double left clicks, and mouse button holding. It can be used with a virtual keyboard (e.g. Free Virtual Keyboard) to type by moving the mouse. CtrlWORK - Efficiency Software CtrlWORK helps employees perform computer tasks faster, better and with less effort. CtrlWORK prevents both physical and mental fatigue, thereby improving health and demonstrably improving performance. The software is succeeded by Work & Move. Dragon NaturallySpeaking, speech recognition software Speech recognition software also used by many users with disabilities other than RSI. Actions may be dictated verbally or controlled by a mouse. EyeLeo EyeLeo reminds you to take breaks regularly, shows you simple eye exercises, and prevents you from using the computer during break times. Healthy Hints Software that detects your periods of computer usage, recommends when a rest break is due, and gives you a 5-star achievement rating. Also displays information on other factors that can affect your wellbeing as a computer user, such as lighting and posture. Mouse keys Some operating systems allow using the numpad as a mouse. NoClick Open-source Linux/Windows application for using the mouse without clicking; only mouse motion is required. NoClick simulates left, right, double click and drag. It has a configurable hot key for activating/deactivating the program. No Pain (RSI Away) RSI Away software sends you a Windows 10 notification after a specified number of minutes to remind you to take a rest/stretch. Features 6 medical infographic pages that suggest several exercises to perform on each break (from 28 exercises total). PastTense Software for which a number of different configuration options can be defined, each of which contains one or more timers to remind the user to have a rest. Timers can be set to remind the user of anything from taking a short walk around the office every hour to stretching the user's wrists every 30 seconds – the user defines the reminder message. Rest A commercial app for OS X and Windows that supports many features, including preventing the user from working, showing exercise suggestions, synchronization with a mobile (iOS) app for reminders on the go, and automatic break detection for completely distraction-free use. RSI-Shield RSI-Shield provides breaks and can operate the computer. The user can record frequently performed operations so they can be replayed. RSI Guard Software that suggests breaks based on work intensity as well as natural rest patterns, ensuring that breaks are recommended when they're really needed. During breaks, RSIGuard shows stretch suggestions via 31 clear video demonstrations. Includes an automatic clicking tool to eliminate mouse clicks, and a hot-key tool to reduce mousing/typing. Tracks your work patterns to help you improve and identify risk areas. RSIStopWatch Windows software that allows work and rest periods to be configured, with idle periods taken into consideration. Includes the option of reduced work periods late in the day. Includes easy-to-read countdown timers. During rest periods, the display is covered to enforce the rest period and RSIStopWatch suggests activities. This rest display can be customized with your own posture/stretch reminder, or just your favorite photo. Includes mini rest posture and stretch reminders. Includes a hot-key tool to reduce mousing. Logs work and rest periods. Stretch Stretch is an app that prompts people to take a break and stretch, with an easy-to-follow and attractive user interface.
StretchClock StretchClock is a customizable stretch reminder for office professionals and computer users. The time between breaks is configurable (one hour is recommended). During each break StretchClock shows a quick and easy stretch video, with directions that can usually be followed in an office environment. The no-sweat exercises are specifically targeted to prevent the problems that computer users most commonly develop. Voice Finger Software that uses speech recognition to control the keyboard and mouse by voice commands. Voice Finger was made by a developer with repetitive strain injury, and was designed to eliminate the need to touch the computer at any time. VisionProtect Software that tracks the blink rate of the user and notifies the user when the blink rate is too low. VisionProtect includes a training component as well, allowing users to build healthier habits for the long term. VisionProtect was developed by a person who had computer vision syndrome, and was designed to eliminate dry and red eyes, which are often the result of excessive time in front of a screen. WorkPace Software that helps avoid repetitive strain injury at the computer by educating users about muscle fatigue and recovery, providing basic and work/rest-ratio timers to alert you to take micro-pauses and breaks, and monitoring the user's exposure to and intensity of computer use while providing feedback on progress. Workrave An open-source free program that assists in the recovery and prevention of repetitive strain injury. The program frequently alerts the user to take micro-pauses and rest breaks, and restricts the user to a predefined daily limit. Break reminders See also List of speech recognition software Notes References Repetitive strain injury software Musculoskeletal disorders Overuse injuries Automation software
62192408
https://en.wikipedia.org/wiki/Carlisle%20Canal
Carlisle Canal
The Carlisle Canal opened in 1823, to link Carlisle to the Solway Firth, to facilitate the transport of goods to and from the city. It was a short-lived venture, being replaced by a railway which used the canal bed for most of its route in 1854. History The River Eden flows through the city of Carlisle, and into the Solway Firth. There were coal mines at Maryport, a little further down the coast, and prior to 1720, places along the river were supplied with coal by boats. However, this trade ended in 1720, when duties were levied on all goods carried around the coast by sea, and it became cheaper to transport the coal by land. Three traders from Carlisle, John Hicks, Henry Orme and Thomas Pattinson, sought an Act of Parliament which would waive the coastwise duties between Ellen Foot, as Maryport was then known, and Bank End, which was locate on the river close to Carlisle. The Act enabled them to build wharves, cranes and warehouses, and to dredge the river. They obtained the Act in 1721, and could charge tolls on goods for 31 years, but there were no powers to make cuts or build locks to improve the river. On the other side of the country, there was a scheme to extend navigation on the River Tyne westwards from Newburn to Hexham, which was not actioned, but from 1794 there were various schemes to extend or bypass the River Tyne, most of which were described as being part of a coast to coast canal which might end at Carlisle or Maryport. The aspirations of the traders of Carlisle were more local, as they wanted improved facilities for ships bringing goods to Carlisle from Liverpool, Ireland, and Scottish ports. In 1795, three ships carrying around 35 tons each were regularly running between Liverpool and Sandsfield, about from Carlisle, from where some 1,000 tons of goods per year were carried by road to the city. In 1807, Mersey flats regularly made the journey from the River Mersey, carrying timber, and in 1818, six boats were engaged in the trade from Liverpool. Access along the river was not ideal, as the boats often had to wait in the estuary until the tide was suitable. The idea of a canal gained support locally when a public meeting was held on 21 May 1807. The principal aim was to provide the city with a better and cheaper supply of coal, and a committee was appointed to push the plan forwards. They asked the engineer William Chapman to advise them, and he proposed a route from Carlisle to Maryport, which he had also promoted in 1795 as part of a coast to coast route. He estimated that it would cost between £90,000 and £100,000 to build, but conceded that a terminus near Bowness on the Solway Firth would be cheaper. £40,000 would pay for a canal suitable for 45-ton boats, but a larger canal, suitable for 90-ton boats that could cross the Irish Sea or reach the Forth and Clyde Canal, would cost between £55,000 and £60,000. The larger canal could still be part of a coast to coast route. The options as to the size and destination of the canal were put to subscribers by the committee. In August 1807 Chapman suggested that a ship canal for the Irish, Scottish and Liverpool trade, and a 50-ton canal to Maryport for the coal trade could both be built, with both finding support in the newspapers. The Committee sought a second opinion from Thomas Telford, who produced a report on 6 February 1808. 
He described a Cumberland Canal, which would allow sea-going vessels to reach Carlisle, but would also be part of a grander plan to link Carlisle to other parts of the country, and could be incorporated into the coast to coast waterway. He suggested that locks should be at least as big as those on the Forth and Clyde Canal, with a width of and a depth of water of over the lock cills. His canal would leave the Solway Firth about upstream of Bowness-on-Solway to reach Carlisle, and would cost £109,393. In order to provide a water supply, a navigable feeder would continue onwards to Wigton, which would be suitable for wide narrow boats, and would cost an additional £38,139. He also quoted two other prices for narrower canals, but noted that these would require goods to be transferred to smaller boats, with the inherent costs and inconvenience. Chapman suggested that a steam pump would be a better way to supply water, unless the Wigton route was likely to be commercially profitable, and also suggested somewhat smaller locks, at long, wide with of water over the cills. This would enable Mersey flats to reach Carlisle, without resorting to transhipment. However, no further progress was made at that time. After eight and a half years, another meeting was held at Carlisle on 7 October 1817, and Chapman was asked to produce a survey for a canal suitable for vessels of at least 70 tons. He was to ensure that it could become part of the coast to coast link. His canal started at Fisher's Cross, subsequently named Port Carlisle, although this name had also been applied to Sandsfield in earlier days. It would feature locks , while the channel would be wide by deep, and would cost £75,392. A link to Newcastle upon Tyne could be built on a smaller scale, and another link could be built along the valley of the Eden to serve slate quarries near Ullswater. His plan was accepted, money was raised locally, and an Act of Parliament was obtained in 1819, which authorised the Carlisle Canal to raise £80,000 in capital, and an extra £40,000 if required. The chairman of the committee, Dr John Heysham, suggested they look at other canals before starting work, and visits were made to the Lancaster Canal and the Forth and Clyde Canal. Construction The committee appointed Chapman as consulting engineer, but who held the position of resident engineer is less clear. Richard Buck had helped Chapman with the initial surveys, and it appears that his brother Henry fulfilled that role at the start of the project. Contracts to build the entire canal had been awarded by early 1820, but relationships between Chapman, Buck and the committee were not good, and the committee asked Thomas Ferrier from the Forth and Clyde Canal to oversee the works in March. Henry Buck was not happy with this and resigned in July, but Richard Buck stayed on, effectively working for Ferrier. Chapman was not happy with this situation, and in November 1822, when most of the work had been completed, criticised Ferrier's workmanship, and recommended that Buck should be allowed to complete the canal. The committee took exception to this and dismissed him. Two months later, just before the canal was due to open, they dismissed Buck as well, although he did not leave and stayed on until May 1823. The canal was long, had a surface width of and was deep. At Fisher's Cross, a basin had been built, which was connected to the Solway Firth by a sea lock with a long timber jetty.
Seven more locks raised the level of the canal by , and at Carlisle there was a second basin, , complete with wharves and a warehouse. The locks were long and wide, and water supply was provided by a reservoir on Mill Beck near Grinsdale. The sea lock was built so that its top was at the same level as high tides on the lowest neap tides, and there was a second lock nearby, which maintained the level of the canal at above the level of the highest tides. Beyond the two locks, the canal ran on a level for , and then the remaining six locks were grouped together in the next , after which the canal ran level again to reach Carlisle. Facilities at Carlisle were improved in 1838 by the construction of a timber pond below the basin. There were no fixed bridges on the route, so that it could be used by coastal vessels, and where crossings were required, they were built using two-leaved drawbridges, similar in style to those on the Forth and Clyde Canal. While Paget-Tomlinson and Hadfield & Biddle agree on the number of locks, Priestley, writing in 1831, suggests that there were two locks immediately above the sea lock, each with a rise of , and that the top level was above the level of the sea lock, this being the level reached by an abnormally high tide which was recorded in January 1796. He also says that there were two basins between the locks, known as the Upper and Lower Solway basins, and that the six locks further along raised the level by . The committee had succeeded in raising £70,600 of the authorised capital, most of it coming from local people. To complete the project, they had borrowed around £10,000, so the total cost was just over the estimated £80,000. The committee consisted of nine proprietors, each of whom was required to hold at least ten shares, and they were to be elected annually. Operation The committee set about encouraging trade on the canal, and built a timber yard at Carlisle. Shortly afterwards, the Treasury altered the rules on coastal taxes, and repealed the duties on coal, stone and slate carried between Whitehaven and Carlisle. However, no one could be found to run a trade in coal, so in June the committee sent a boat called Mary to Harrington to fetch some, which they then sold, but decided not to run the coal trade directly. Towing of boats on the canal was organised by a group of men called Trackers, and by the end of the year, tolls of £928 had been collected. In 1824, they kick-started a trade in bricks, by importing two boat loads, which they sold from the quays at Carlisle. Suspecting that the reservoir might not be adequate for the number of boats using the canal, they built a feeder from the river, with a water wheel to raise the water to the level of the canal. There were attempts to avoid the tolls on the canal, with some ships carrying timber waiting for favourable tides, and using the river to reach Rockcliffe, where the timber was loaded into carts. In 1825 the Carlisle & Liverpool Steam Navigation Company were looking to start a passenger service from Liverpool, and asked for an exclusive berth for their ship. The committee paid for a new berth, the cost to be repaid by the Navigation Company over the next ten years, and also bought a second-hand packet boat called Bailie Nicol Jarvie, to ferry the passengers from Port Carlisle to Carlisle. They leased it to a local innkeeper, Alexander Cockburn, for £30 per year, and the service began on 1 July 1826.
The steamer service to Liverpool began at about the same time, although the packet boat only ran in the summer months to begin with. As well as passengers, the steamer also carried goods, and these were carried along the canal by lighters. The Solway Hotel opened in Port Carlisle soon afterwards. In August 1824, there were public meetings in Newcastle, to consider again the idea of a canal to Carlisle, or possibly a railway. William Chapman, who had surveyed a route for a canal in 1796, suggested that the route was also suitable for a railway, and was asked to cost both options. He quoted £888,000 for a canal and £252,488 for a railway. A company was created to build a railway, although they did not obtain an Act of Parliament until 1829. There was support in Carlisle, and an agreement was reached that the railway would terminate at the canal basin. It opened in stages from 19 July 1836, reaching Redheugh, Gateshead on 18 June 1838, and Newcastle the following year. Through traffic boosted the profits of the canal. Tolls had averaged £2,905 for the three years to 1835, but by 1840, they had reached £6,605. Receipts from the packet boat had also climbed steadily, to £829 in 1850, and the company had been able to pay dividends to shareholders, starting at 1 per cent in 1833, and rising to 4 per cent by 1839. Further progress was made. In early 1832, several shipowners had placed buoys in the Solway Firth, to mark the channel, and started collecting funds from ships to cover their costs. They asked the canal company to take over responsibility for this, and they did so. In 1833, the Carlisle and Annan Navigation Company asked for a berth at Port Carlisle for their new service to Annan and Liverpool, and one was built. With the arrival of the railway imminent, the committee asked William Houston, of the Glasgow, Paisley and Ardrossan Canal, to arrange the construction of a faster packet boat, and the Arrow entered service in 1834. The company purchased an ice-breaker, which enabled the packet service to run all year from the winter of 1836-37. The old packet boat, Bailie Nicol Jarvie, was sold for £7 12 shillings (£7.60), and the company also started an omnibus service between Carlisle basin and the town centre. To improve the water supply, William Fairbairn was paid £1,391 to construct a new waterwheel and pumps, and these were commissioned in 1835. However, the pumps did not work as well as anticipated, and Harvey's of Hayle provided a steam engine and pumps in 1838, at a cost of £3,700. These supplemented the waterwheel, being used when river levels were too low to drive it, or when the reservoir needed filling. The Admiralty surveyed the Solway Firth in 1835, and were asked for advice on buoys to mark the channel. The berths at Port Carlisle were dredged, and plans for an inner and outer dock were formulated. John Hartley of Liverpool had produced designs by November 1835, and an Act of Parliament was obtained in 1836. It allowed the company to borrow another £40,000, and included powers to light and buoy the Solway Firth. Hartley's plans to start enclosing the dock area in June 1836 were delayed due to objections from Lord Lonsdale, who had rights over the foreshore, but work eventually started in August 1838. The purchase of eighteen new buoys was begun in May 1837, and they were installed during 1838. A lightship was built in 1840, and a lighthouse was constructed at Lees Scar near Silloth. Railways Traffic on the canal increased with the arrival of the railway at Carlisle basin.
This included coal from Lord Carlisle's mines, and also from the Blenkinsopp Coal Company, who were based at Greenhead. The company decided to carry coal in barges, which were towed by a tug when operating on the Solway Firth, although they had initially considered using boats or rafts onto which the loaded railway wagons would be shunted. A second packet boat was obtained from Paisley in July 1838, and tolls on the canal and railway were reduced in 1838 and 1839, to encourage through traffic. The increase in traffic was sufficient that the men who worked as bridge and lock keepers were paid extra amounts in view of their increased workload. An intermittent traffic was carrying railway locomotives, notably ones built in Newcastle destined to be exported to the USA and some destined for the new Liverpool and Manchester Railway. Stephenson's Rocket was a pioneer made famous at the Rainhill Trials, but by 1837 it had been overtaken by more powerful designs. It was moved to the Brampton Railway near Carlisle to end its career in colliery use. Its journey to Brampton included being shipped from Liverpool then along the Carlisle Canal. However, the boom did not last long, and the company found that it was in competition with the railways. The Lancaster and Carlisle Railway was authorised in 1844, and was a direct threat to the steamer service and canal. The Maryport and Carlisle Railway had been authorised in 1837, but opening was delayed until 1845 by financial difficulties. It was extended to Whitehaven in early 1847 by the opening of the Whitehaven Junction Railway, and at the end of the year the Lancaster and Carlisle Railway opened. The Caledonian Railway opened in February 1848, running northwards from Carlisle to Scotland. By the autumn of 1846, the company was seriously considering converting the canal into a railway, and commissioned a report, which was produced in February 1847, and suggested the idea was feasible. They entertained the directors of the Newcastle and Carlisle Railway in May, and in July were instructed by the shareholders to begin negotiations with that company, or another of the railway companies with lines to Carlisle. Little progress was made, and the canal company and steamboat companies looked at ways to reduce costs, and thus lower their tolls. Despite the sale of the packet boat Clarence in 1847, and the withdrawal of the steamer service from Port Carlisle to Annan, passenger traffic remained good, but in April 1850 was affected by the introduction of cheaper fares to Liverpool, using the railway from Carlisle to Whitehaven, and a much shorter sea voyage from there to Liverpool. In March 1852 the company decided that the best option was to convert the canal into a railway, raised some money from shareholders and loan holders, and sought an Act of Parliament. Work began in June 1853, although the Act was not obtained until 3 August. An omnibus service was used to ferry passengers between Carlisle and the steamers at Port Carlisle, and the canal closed on 1 August 1853. The Act wound up the canal company, and created the Port Carlisle Dock and Railway Company. In less than a year, construction was completed, with the line opening for goods traffic on 12 May 1854. Passenger services followed on 22 June. A second Act of Parliament was obtained on 16 July 1855, to authorise the Carlisle and Silloth Bay Railway and Dock Company, with a working capital of £165,000. Their railway left the Port Carlisle line at and ran to , where a dock to rival Maryport was constructed. 
The North British Railway leased both lines in 1862, and they all merged in 1880. The stub from Drumburgh to became known as the Port Carlisle Branch, and lasted until 1 June 1932, when it closed. Bibliography References External links images & map of 2 mile markers from the Carlisle canal Canals in England Canals in Cumbria Canals opened in 1823
55891
https://en.wikipedia.org/wiki/GNUstep
GNUstep
GNUstep is a free software implementation of the Cocoa (formerly OpenStep) Objective-C frameworks, widget toolkit, and application development tools for Unix-like operating systems and Microsoft Windows. It is part of the GNU Project. GNUstep features a cross-platform, object-oriented IDE. Apart from the default Objective-C interface, GNUstep also has bindings for Java, Ruby, GNU Guile and Scheme. The GNUstep developers track some additions to Apple's Cocoa to remain compatible. The roots of the GNUstep application interface are the same as the roots of Cocoa: NeXTSTEP and OpenStep. GNUstep thus predates Cocoa, which emerged when Apple acquired NeXT's technology and incorporated it into the development of the original Mac OS X, while GNUstep was initially an effort by GNU developers to replicate the technically ambitious NeXTSTEP's programmer-friendly features. History GNUstep began when Paul Kunz and others at Stanford Linear Accelerator Center wanted to port HippoDraw from NeXTSTEP to another platform. Instead of rewriting HippoDraw from scratch and reusing only the application design, they decided to rewrite the NeXTSTEP object layer on which the application depended. This was the first version of libobjcX. It enabled them to port HippoDraw to Unix systems running the X Window System without changing a single line of their application source. After the OpenStep specification was released to the public in 1994, they decided to write a new objcX which would adhere to the new APIs. The software would become known as "GNUstep". Software architecture Rendering GNUstep contains a set of graphical control elements written in the Objective-C programming language. The graphical user interface (GUI) of GNUMail, for example, is composed of these graphical control elements. GNUMail has to interact with the windowing system, e.g. X11 or Wayland, and its graphical user interface has to be rendered. GNUstep's backend provides a small set of functions used by the user interface library to interface to the actual windowing system. It also has a rendering engine which emulates common PostScript functions. The package gnustep-back provides the following backends: cairo – default backend using the Cairo 2D graphics library. winlib – default backend on Microsoft Windows systems. Cairo and Windows API variants. art – old (deprecated) backend on unix-like systems. Uses the vector-based, PostScript-like 2D graphics library Libart. xlib – old (deprecated) X11 backend. Paradigms GNUstep inherits some design principles proposed in OPENSTEP (GNUstep predates Cocoa, but Cocoa is based on OPENSTEP) as well as the Objective-C language. Model–view–controller paradigm Target–action Drag-and-drop Delegation Message forwarding (through NSInvocation) Other interfaces In addition to the Objective-C interface, some small projects under the GNUstep umbrella implement other APIs from Apple: The Boron library aims to implement the Carbon API. It is very incomplete. The CoreBase library is designed to be compatible with Core Foundation. It is not complete enough for the Base (Foundation Kit) component to simply be a wrapper around it. The QuartzCore library implements Core Animation APIs. The Opal library implements Quartz 2D. There are no projects that build the Swift programming language against the GNUstep Objective-C environment. Applications Here are some examples of applications written for or ported to GNUstep.
Written from scratch Addresses, an address/contacts manager Étoilé, a desktop environment GNUMail, an e-mail client GNUstep Database Library 2, an Enterprise Objects Framework clone GNUstepWeb, an application server compatible with WebObjects 4.x Gorm, an interface builder GWorkspace, a workspace and file manager Grr, an RSS feed reader Oolite, a clone of Elite, a space simulation game with trading components PRICE, imaging application ProjectCenter, the Project Builder or Xcode equivalent. TalkSoup, an IRC client Terminal Zipper, a file archiver tool Ported from NeXTSTEP, OPENSTEP, or macOS Adun BioCocoa Chess Cenon EdenMath Eggplant Emacs Fortunate Gomoku NeXTGO PikoPixel TextEdit TimeMon DoomEd Forks of GNUstep Universal Windows Platform, which includes a WinObjC suite consisting of various parts of GNUstep and Microsoft's own implementations of things like the Cocoa Touch API. Class capabilities Foundation Kit The Foundation Kit provides basic classes such as wrapper classes and data structure classes. strings collections (arrays, sets, dictionaries) and enumerators file management object archiving advanced date manipulation distributed objects and inter-process communication URL handling notifications (and distributed notifications) easy multi-threading timers locks exception handling Application Kit The Application Kit provides classes oriented around graphical user interface capabilities. user interface elements (table views, browsers, matrices, scroll views) graphics (WYSIWYG, postscript-like graphics, bezier paths, image handling with multiple representations, graphical contexts) color management (calibrated vs. device colors; CMYK, RGB, HSB, gray and named color representations; alpha transparency) text system features: rich text format, text attachments, layout manager, typesetter, rules, paragraph styles, font management, spelling document management printing features: print operations, print panel and page layout help manager pasteboard (aka clip board) services spell checker workspace bindings for applications drag and drop operations services sharing among applications See also Darling (software), a compatibility layer that relies on GNUstep GNUstep Renaissance, framework for XML description of portable GNUstep/Mac OS X user interfaces Miller Columns, the method of file tree browsing the GWorkspace File Viewer uses Property list, often used file format to store user settings StepTalk, Scripting framework Window Maker, a window manager designed to emulate the NeXT GUI as part of the wider GNUstep project References External links GNUstep.org project homepage GNUstep Applications and Developer Tutorials The GNUstep Application Project A 2003 interview with GNUstep developer Nicola Pero FLOSS Weekly Interview with Gregory Casamento and Riccardo Mottola from GNUstep GNUstep on Debian, FreeBSD, MacPorts NEXTSPACE desktop environment, based on GNUstep Compatibility layers Cross-platform free software Free software programmed in Objective-C GNU Project software NeXT Software that uses Cairo (graphics) Widget toolkits X Window System
39535253
https://en.wikipedia.org/wiki/The%20Binding%20of%20Isaac%3A%20Rebirth
The Binding of Isaac: Rebirth
The Binding of Isaac: Rebirth is an indie roguelike video game designed by Edmund McMillen and developed and published by Nicalis. Rebirth was released for Linux, Microsoft Windows, OS X, PlayStation 4 and PlayStation Vita in November 2014, for Xbox One, New Nintendo 3DS and Wii U in July 2015, for iOS in January 2017 and for Nintendo Switch in March 2017. The PlayStation 5 and Xbox Series X/S versions were released in November 2021. Rebirth is a remake of The Binding of Isaac, which was developed by McMillen and Florian Himsl and released in 2011 as an Adobe Flash application. That platform's limitations led McMillen to work with Nicalis to produce Rebirth with a more advanced game engine, which in turn enabled the substantial addition of new content and gameplay features. Three expansions have been released: Afterbirth and Afterbirth+, in October 2015 and January 2017 respectively, added more game content and gameplay modes; Afterbirth+ also added support for user-created content. The third and final expansion, Repentance, was released in March 2021. Similar to the original Binding of Isaac, the plot is based on the biblical story of the same name and was inspired by McMillen's religious upbringing. The player controls the eponymous Isaac, a young boy whose mother, convinced that she is doing God's work, strips him of everything and locks him in his room. When Isaac's mother is about to sacrifice him, he escapes to the basement and fights through random, roguelike dungeons. The player defeats monsters, using Isaac's tears as projectiles, and collects items which modify his appearance, attributes, and abilities, potentially creating powerful combinations. Unlike the game's predecessor, Rebirth has a limited multiplayer mode, allowing an additional player in Rebirth, later increased to three additional players in Afterbirth and Afterbirth+. Full co-op support was added to Repentance, where up to four players are able to play as any of the playable characters. Rebirth was released to critical acclaim. Reviewers praised its gameplay and improvements compared to the original Binding of Isaac, but criticized its graphic imagery. Afterbirth, Afterbirth+ and Repentance also had a generally-favorable reception, with reviewers criticizing their difficulty but praising their added content. Tools for modding Afterbirth+ were criticized by users. By July 2015, Rebirth and The Binding of Isaac had sold over five million copies combined. The game is widely regarded as one of the best roguelike games of all time. Gameplay The Binding of Isaac: Rebirth (like the original) is a top-down 2D game where the player controls the character Isaac, alongside sixteen other unlockable characters, as he traverses his mother's basement, fighting off monsters and collecting items. The gameplay is presented in a roguelike style; the dungeon levels are procedurally generated through a randomly generated seed into a number of self-contained rooms, including at least one boss battle. Like most roguelike games, it has permadeath; when the chosen character dies from too much damage, the game is over. Rebirth allows a play-through to be saved at any point. Map seeds can be shared, allowing for multiple people to try the same dungeon layout. The game is controlled similarly to a multidirectional shooter. The player moves their character around the screen, shooting their tears in other directions; the tears are bullets which defeat enemies. The player-character's health is tracked by a number of hearts.
The character can find items which replenish hearts; other items give the character additional hearts, extending their health. Throughout the dungeons, the player will find bombs to damage foes and destroy obstacles; keys to open doors and treasure chests; and coins to buy items. Many items impact the character's attributes (such as speed and the damage and range of their tears) and other gameplay effects, including a character who floats behind the player-character and aids in combat. Some items are passive; some are active and reusable (requiring the player to wait a number of rooms before they can reuse them), and others are single-use items which then disappear. The player can collect any number of passive items, whose effects build on previous ones (creating potentially powerful combinations). A player can only carry one reusable item or one single-use item, replacing it with another if found. Other rooms in the dungeons include special challenges and mini-boss fights. In addition to expanding The Binding of Isaac's number of items, monsters, and room types (including those spanning multiple screens), Rebirth provides integrated controller support and allows a second local player to join in with a drop-in-drop-out mechanic. The second player controls a follower of the first player-character with the same attributes and abilities as that character, costing the first player-character one heart. The second character cannot plant bombs or carry items. Plot The Binding of Isaac: Rebirth's plot follows the biblical story of the same name, similar to the original game. Isaac, a child, and his mother live in a small house on a hill, both happily keeping to themselves, with Isaac drawing pictures and playing with his toys, and his mother watching Christian broadcasts on television. Isaac's mother then hears "a voice from above", stating her son is corrupted with sin, and needs to be saved. She removes all his possessions (including toys and clothing), believing they were the corrupting agents, and later locks him in his room to protect him from the evil outside. When she receives instructions to sacrifice her son to prove her devotion to her faith, Isaac flees through a trap door in his room, leading to "the unknown depths below". After venturing through various floors and reaching The Depths, Isaac battles his mother. After defeating her, the game cuts back to Isaac in his room, where his mother attempts to kill him, grasping a butcher's knife. A Bible is knocked off a shelf, striking Isaac's mother in the head, killing her. Isaac celebrates, before the game cuts again to a smiling Isaac, where his mother once again opens his door, holding a knife. The game, including all DLCs, features 23 possible endings. The base game contains 17 endings, with the ending obtained for defeating Isaac's mother being called the Epilogue. The first ten endings are unlocked by defeating the Mom's Heart boss and serve as introductions to newly unlocked items, mechanics and characters. The 11th ending permanently replaces Mom's Heart with a harder version, called "It Lives." Endings 12 and 13 are unlocked by defeating the bosses Satan and Isaac respectively. Ending 12 shows Isaac climbing into his toy chest. Endings 14 and 15 are unlocked by defeating the bosses The Lamb and ???, who in turn are unlocked by defeating Satan and Isaac 6 times. Ending 15 shows a missing poster for Isaac attached to a telephone pole, with Isaac's mother seen looking for her son.
The final ending in the base game, ending 16, is unlocked by defeating the "true" final boss, Mega Satan. It shows Isaac curled up in his toy chest from Ending 12, suffocating. The Binding of Isaac: Afterbirth The Binding of Isaac: Afterbirth expansion contains 2 new endings, both of which involve the expansion's new content. Ending 17 is unlocked by defeating the newly added boss, Hush. It opens with the missing poster from Ending 15 and zooms in on Isaac's house. The scene cuts to Isaac's mother opening his toy chest, revealing Isaac's skeletal remains. Isaac is seen in a dull-colored landscape, where a shadow forms behind him. Ending 18 is unlocked by defeating the boss Ultra Greed at the end of the new Greed Mode. It shows Isaac in a small cave, which caves in on him. The scene then changes to show one of the shopkeepers found throughout the game. It smiles, and the scene ends. The Binding of Isaac: Afterbirth+ The Binding of Isaac: Afterbirth+, like the expansion before it, contains 2 new endings involving the new content added in the expansion. Ending 19 is unlocked by defeating the boss Ultra Greed at the end of the new Greedier mode. It is almost exactly the same as Ending 18, with the major difference being that after the shopkeeper smiles, its head falls off and the body begins to spew out a geyser of spiders. Ending 20, called "Final Ending" prior to The Binding of Isaac: Repentance, is unlocked after defeating the boss Delirium, who is Isaac's representation of the "light at the end of the tunnel". It builds upon events seen in Endings 12, 15, and 17, and shows Isaac recalling his memories in his final moments before dying. Isaac is shown lying in his toy chest, breathing heavily. He recalls a memory of him overhearing a fight between his parents while he is drawing, and looks at a drawing he made of his house, with the caption "WE LIVED HERE". He then recalls a memory of his mother crying in front of a TV. His breathing gets faster as he recalls another memory, this one of him looking at a burnt family photo with his father removed. His breathing slows down, with Isaac visibly turning blue. Another memory is then shown, showing a wall covered in drawings Isaac made. As the scene pans to the left, the drawings get more disturbing, and the noises of an argument in the background become more audible, culminating in a drawing of a demon towering over Isaac's dead and bloody parents, and Isaac's father saying "I'm outta here!" After a cut to black, the scene changes to Isaac's skeleton lying in the chest, covered in cobwebs. The chest is opened like in Ending 12, and the missing poster from Ending 15 is shown flying off the telephone pole. Isaac is then seen roaming the dull-colored landscape from Ending 17. The Binding of Isaac: Repentance The Binding of Isaac: Repentance contains 2 new endings involving the new content added. Ending 21 is unlocked by defeating the newly added Mother boss. It shows Isaac drawing a picture of the boss. His mother walks in, and Isaac tries to hide the drawing from her. His mother reacts by throwing him into a closet, saying "You think I'm a monster, Isaac? I'll show you a monster!" The scene cuts to Isaac in the closet, hyperventilating. His mother then begins to recite the Lord's Prayer, with the closet growing darker and darker. A statue of Satan materializes behind Isaac, and the scene ends. The 23rd ending is the Final Ending, and is unlocked by defeating The Beast, the true final boss of the game.
This ending completes the story of the game and its expansions. Normally, after defeating the Mom boss on Depths II (or its equivalent alternate floor), the player is locked out of leaving the room. If the player brings an item to teleport out of the room, they can enter a special door. Isaac finds a note left from his dad, and the game begins the Ascent sequence. Isaac begins to go up through the floors he visited throughout the game in reverse order. Arguments can be heard that explain what happened to Isaac's family prior to the game. Isaac, a child, lives with his parents in a small house, on a hill. His parents become unhappy with each other, commonly fighting late at night while Isaac watches from a crack in the living room door. Isaac's father starts stealing money from his mother, who slowly develops into a religious fanatic to cope with her domestic abuse, prompting Isaac's father to leave and divorce Isaac's mother. Without Isaac's father around, her mental health worsens as she begins watching Christian broadcasts on the television, which in turn causes her to abuse Isaac. Isaac throws himself into his toy chest from Ending 12 to hide from both his mother and himself, believing he is the reason his parents fought. The chest locks, preventing Isaac from leaving. Isaac slowly suffocates to death, with the events of the game being his final delusions. His mother realizes Isaac has gone missing, and puts up missing posters around the neighborhood in an attempt to find him, but no word comes. After an indeterminate amount of time, she opens the locked chest Isaac was in. Seeing that her son is dead, she cries over his body, mourning him. In-game, upon venturing back from the depths of the basement to the surface, Isaac finds himself in a memory of his house. He sleeps in his mother's bed, and is awoken by a nightmare. Isaac enters the living room, where he fights Dogma, an embodiment of the Christian broadcasts watched by his mother. After defeating Dogma, Isaac fights the Four Horsemen of the Apocalypse, followed by The Beast (who is wearing his mother's dress), the true final boss of the game. Isaac then finally ascends up to the sky, and his life flashes back before him once more, before he sees nothing. In a final hallucination, Isaac's dad interjects, asking if Isaac really wants the story to end like this, and changes it to have a happy ending, with the opening narration changed to "Isaac and his parents lived in a small house, on the top of a hill...". Isaac then dies. Development The Binding of Isaac was developed by Edmund McMillen and Florian Himsl in 2011 during a game jam after the completion of Super Meat Boy, McMillen's previous game. Since Super Meat Boy was successful, McMillen was not concerned about making a popular game; he wanted to craft a game which melded The Legend of Zelda's top-down dungeon approach with the roguelike genre, wrapping it in religious allegory inspired by his upbringing. They used Adobe Flash, since it enabled them to develop the game quickly. McMillen quietly released the game to Steam for PC, where it became very popular. Wanting to expand the game, McMillen and Himsl discovered limitations in Flash which made an expansion difficult. Although they could incorporate more content with the Wrath of the Lamb expansion, McMillen had to abandon a second expansion due to the limitations. 
After The Binding of Isaac release, McMillen was approached by Tyrone Rodriguez of Nicalis (a development and publishing studio which had helped bring the PC games Cave Story and VVVVVV to consoles). Rodriguez offered Nicalis' services to help port The Binding of Isaac to consoles. McMillen was interested, but required they recreate the game outside Flash to incorporate the additional content he had to forego and fix additional bugs found since release. He also asked to be left out of the business side of the game's release (after his negative experiences dealing with business matters with Super Meat Boy), and Rodriguez agreed. Rebirth was announced in November 2012 as a console version of The Binding of Isaac, with plans to improve its graphics to 16-bit colors and incorporate the new content and material originally planned for the second expansion. Local cooperative play would also be added to the game, but McMillen said that they could not add online cooperative play because it would drastically lengthen development time. McMillen wanted to overhaul the entire game, particularly its graphics (which he called an "eyesore"). After polling players about which art style to use for the remake, McMillen and Nicalis brought in artists to improve the original assets in the new style and began working on the new content. McMillen commissioned a new soundtrack for the remake from Matthias Bossi and Jon Evans. Release McMillen and Rodriguez initially wanted to develop The Binding of Isaac: Rebirth for the Nintendo 3DS as a tribute to its roots in Nintendo's Legend of Zelda series. Nintendo, however, did not authorize the game's release for the 3DS in 2012 for content reasons. Although they had spent some time creating the 3DS version, McMillen and Rodriguez decided to focus on PC and PlayStation versions instead; those platforms allowed them to increase the game's capabilities. In addition to the PlayStation 3 and Vita consoles, Nicalis was in discussions with Microsoft for a release on the Xbox systems and McMillen had also considered a future iOS release. McMillen and Nicalis opted to move development from the PlayStation 3 to the new PlayStation 4 in August 2013, announcing its release at Sony's Gamescom presentation. The PlayStation 4 and Vita versions were released with the PC versions on November 4, 2014. During development, three senior Nintendo employees—Steve Singer, vice president of licensing; Mark Griffin, a senior manager in licensing, and indie development head Dan Adelman—championed the game within the company. They continued to work within Nintendo, and secured approval of Rebirth release for the 3DS and Wii U in 2014. McMillen and Nicalis, after tailoring the game to run on more powerful systems, worked to keep it intact for the 3DS port. They spent about a year on the conversion and, although they got the game to work on the original 3DS, its performance was sub-optimal. They were one of the first developers (with Nintendo help) to obtain a development kit for the New Nintendo 3DS, which had more powerful hardware and memory to run the game at a speed matching that of the other platforms. The announcement of the New 3DS and Wii U versions was made with plans for an Xbox One version, and the game was released for all three systems on July 23, 2015. In January 2016, Nicalis reported that it was working on an iOS port of the game. 
The company reported the following month that Apple had rejected its submission to the App Store, citing "violence towards children" as violating its content policies. Nicalis has worked with Apple to obtain preapproval and will release a universal iOS version of Rebirth (including the Afterbirth+ expansion) with improvements for that platform, including the use of iCloud for ease of play on multiple devices. Although Nicalis wants to add this to the Vita port, the company said it was a low priority due to the Vita's limited ability to handle many weapon combos. The initial iOS version of the core game, without expansions, was released on January 11, 2017. After hinting at a release on the upcoming Nintendo Switch console, Nicalis confirmed in January 2017 that Rebirth (with both expansions) would be released for the Switch in March 2017 as retail and digital titles. Although it was scheduled for release on March 3 as a launch title, last-minute adjustments required the company to delay it until March 17. Because of the existing relationship with Nintendo for the Wii U and New Nintendo 3DS versions, Rodriguez said that they could obtain developer-prototype hardware for the Switch to port the game to that system. McMillen said that they could get Rebirth working on the Switch easily due to their approach to developing the game (with hooking integrated into respective system features, such as achievements, to simplify porting) and the ease of the Switch's development platform. The game was released for Switch on March 17, 2017. The version allows up to four players in a drop-in/drop-out cooperative mode, with the other three players using Joy-Con to control one of Isaac's "buddies" (similar to the two-player cooperative mode for PC). The physical version of the Switch game includes a manual similar to the manual which shipped with The Legend of Zelda for the Nintendo Entertainment System. Expansions Afterbirth McMillen announced The Binding of Isaac: Afterbirth, the first expansion for Rebirth, in February 2015. Afterbirth added items, enemies, alternate floors and bosses, and endings (including Greed Mode, which differs from the main game and is reportedly more difficult). Afterbirth was released on October 30, 2015, for Windows, OS X, and Linux computers. The expansion was released for the PlayStation 4 and Xbox One versions on May 10, 2016. The expansion is unlikely to be released on any other platforms due to limitations in the platforms' hardware capabilities and Afterbirth's more complex mechanics. McMillen had programmed a number of hidden secrets into The Binding of Isaac (which fans were discovering and discussing on a Reddit subforum), and took additional care to hide them in patches and updates. He knew that players would be looking for hidden secrets in Rebirth, and took steps to completely hide the Lost (a new playable character). Unlocking it required a number of steps (including having the player-character repeatedly die in specific circumstances), and hints for what needed to be done were scattered among the game's assets; therefore, McMillen and his team anticipated that it would take a long time before players would discover the Lost. However, players on the Reddit subforum went to the game's executable files to search for clues to secrets and discovered the Lost (and how to unlock it) within 109 hours of the game's release.
McMillen said that he was disappointed with the community because his team hid the secrets for discovery in gameplay and clues in the game; although he still planned to release Afterbirth, he said that he would not rush its release. McMillen wanted to hide the Keeper (another character) and elements already hinted at in the game about Isaac's father in Afterbirth, but knew that players would data-mine its program files to find them; instead, he planned an alternate reality game (ARG) which would require players to discover real-world clues. Since he expected the birth of his daughter at the end of September 2015 and the expansion was planned for release in October, he arranged the ARG to continue without him. When Afterbirth was released, players found what they thought were bugs (such as missing new items which had been promised on the game's store page); some accused McMillen of deceiving them. Although some of these omissions were planned as part of the ARG, McMillen discovered that the released game accidentally lacked some new items because it used a different build than originally planned. His team raced to patch the game and tried to provide support (and hints) about the Keeper, using the number 109. McMillen later said that the items missing from the released game distracted players from the secrets he had hidden. With the release of the patch, players began discovering in-game hints about the Keeper and engaged in McMillen's ARG as planned. Clues included calling a special phone number and identifying actual locations in the Santa Cruz area (where McMillen lives) which were related to the game. Following additional clues (including locating a buried figure of one of the game's mini-bosses), they unlocked the Keeper and additional in-game items to collect. Although McMillen thought that the ARG ultimately worked out, he would not engage the community in a similar manner again to avoid seeming egotistical. Afterbirth+ Nicalis announced in December 2015 that a second expansion, Afterbirth+, was in development. In addition to adding monsters, bosses, items and a playable character called Apollyon to the game, the expansion includes a bestiary which tracks how many of each type of creature (and boss) the player has defeated and modding support to allow players to craft room types, import graphics, and script events with Lua. The expansion was released for Windows on January 3, 2017, and for PlayStation 4 on September 19, 2017. The expansion later released to Xbox One as downloadable content on October 24, 2019. The Switch version of the game was released in North America on March 17, 2017, and in Europe and Australasia on September 7 of that year. This version includes Afterbirth and Afterbirth+; limited-time launch editions of the game are available physically and digitally, making it the first Nicalis-published game to be released physically. Some of the best community mods were added to the game in "booster packs" (initially planned monthly, becoming less frequent), with the first release in March 2017 and the fifth (and final) release on May 1, 2018. The last two packs include material developed by players who created the Antibirth fan expansion and whom McMillen enlisted. Repentance Before the release of Afterbirth+, The Binding of Isaac: Antibirth (a fan-made mod of Rebirth) was released in December 2016. 
Similar to the official expansions, Antibirth adds playable characters, bosses, power-ups and other content, and returns some gameplay aspects (which had been changed in the Afterbirth expansion) to the original Rebirth version. Alice O'Connor of Rock, Paper, Shotgun called the mod "more difficult than [The Binding of Isaac]" and a new challenge compatible with the official game expansions. At McMillen's request, the group reworked some Antibirth content (which was incorporated into the Afterbirth+ booster packs). McMillen said at PAX West in September 2018 that Antibirth would be made into Repentance (official DLC for Rebirth), and he was working with some of the mod's creators on balance tweaking and ensuring that its narrative was consistent with Isaac. The expansion was released for PC on March 31, 2021. Repentance was released for Nintendo Switch, PlayStation 4, PlayStation 5, Xbox One, and Xbox Series X/S on November 4, 2021. Future development Although McMillen wanted to support the modding community and its expansions as part of The Binding of Isaac: Rebirth, he found that several ideas began overlapping with his own thoughts about what a sequel to The Binding of Isaac should be; in addition, further expansion of the game would require him to rework the base game engine. With the last booster packs (containing Antibirth content), he considered The Binding of Isaac complete. The addition of the Antibirth content somewhat extends the game, but McMillen does not plan any more updates. He plans to continue to develop The Binding of Isaac franchise; a prequel, The Legend of Bum-bo, was released on November 12, 2019. During an investigation by Kotaku exploring questionable business practices and behavior from Nicalis, McMillen announced that he would be severing his working relationship with the company, with Repentance being their final planned collaboration. Reception According to review aggregator Metacritic, The Binding of Isaac: Rebirth received "generally favorable" reviews; the iOS version received "universal acclaim". Dan Stapleton of IGN praised Rebirth for the seemingly-endless variation in gameplay created by each run-through, giving him "plenty of motivation" to continue playing; his only criticism was its lack of in-game information on available power-ups. GameSpot's Brent Todd wrote that while the game's story and imagery may be initially disturbing, Rebirth has "speedy, varied gameplay and seemingly neverending new features" which would keep the player entertained for a long time. Simon Parkin of Eurogamer said that Rebirth "feels like the product of the psychotherapeutic process", but is "the most accessible Rogue-like [game] yet made" due to its easy control scheme and randomization of each run. Nic Rowen of Destructoid said that Rebirth was a great improvement on The Binding of Isaac, "an incredible experience that can't be missed". Afterbirth+ received generally-favorable reviews from critics. Jose Otero of IGN praised its variety: "The unpredictable items and varied enemies make it one of the most wacky and replayable games I've ever experienced." Although Peter Glagowski of Destructoid gave the DLC a positive review, calling it an "impressive effort", he wrote that the DLC's base content has little to offer newcomers to the series. Rock, Paper, Shotgun was critical of the DLC's difficulty, which it thought was largely derived from random, untelegraphed enemy behavior.
Regarding Afterbirth+'s design cohesion, reviewer Adam Smith characterized the DLC as "mashing together existing parts of the game and producing either a weak cover version or a clumsy remix". Review website Beastby criticized Afterbirth+'s fairness: "The question isn't always 'Will I enjoy the gameplay loop?' but rather 'How many unfair runs will it take for me to have one in which I stand a chance?'" The expansion's modding application programming interface was called "a disappointment" by members of the Team Alpha modding group, who expressed frustration with the API's "massive shortcomings" and Nicalis' lack of support. By July 2015, The Binding of Isaac and Rebirth had combined sales of over five million copies; three million copies of the former had been sold by July 2014. Notes External links References 2014 video games Child abuse in fiction Cooperative video games Criticism of Christianity Four Horsemen of the Apocalypse in popular culture IOS games Linux games Multiplayer and single-player video games Nicalis games New Nintendo 3DS games Nintendo 3DS eShop games Nintendo Switch games MacOS games PlayStation 4 games PlayStation 5 games PlayStation Network games PlayStation Vita games Roguelike video games Seven deadly sins in popular culture Video game remakes Video games about children Video games about religion Video games based on mythology Video games developed in the United States Video games using procedural generation Video games with downloadable content Video games with expansion packs Wii U eShop games Windows games Xbox One games Xbox Series X and Series S games
4163064
https://en.wikipedia.org/wiki/Glossary%20of%20machine%20vision
Glossary of machine vision
The following are common definitions related to the machine vision field. General related fields Machine vision Computer vision Image processing Signal processing 0-9 1394. FireWire is Apple Inc.'s brand name for the IEEE 1394 interface. It is also known as i.Link (Sony's name) or IEEE 1394 (although the 1394 standard also defines a backplane interface). It is a personal computer (and digital audio/digital video) serial bus interface standard, offering high-speed communications and isochronous real-time data services. 1D. One-dimensional. 2D computer graphics. The computer-based generation of digital images—mostly from two-dimensional models (such as 2D geometric models, text, and digital images) and by techniques specific to them. 3D computer graphics. 3D computer graphics are different from 2D computer graphics in that a three-dimensional representation of geometric data is stored in the computer for the purposes of performing calculations and rendering 2D images. Such images may be for later display or for real-time viewing. Despite these differences, 3D computer graphics rely on many of the same algorithms as 2D computer vector graphics in the wire frame model and 2D computer raster graphics in the final rendered display. In computer graphics software, the distinction between 2D and 3D is occasionally blurred; 2D applications may use 3D techniques to achieve effects such as lighting, and primarily 3D may use 2D rendering techniques. 3D scanner. This is a device that analyzes a real-world object or environment to collect data on its shape and possibly color. The collected data can then be used to construct digital, three dimensional models useful for a wide variety of applications. A Aberration. Optically, defocus refers to a translation along the optical axis away from the plane or surface of best focus. In general, defocus reduces the sharpness and contrast of the image. What should be sharp, high-contrast edges in a scene become gradual transitions. Aperture. In context of photography or machine vision, aperture refers to the diameter of the aperture stop of a photographic lens. The aperture stop can be adjusted to control the amount of light reaching the film or image sensor. aspect ratio (image). The aspect ratio of an image is its displayed width divided by its height (usually expressed as "x:y"). Angular resolution. Describes the resolving power of any image forming device such as an optical or radio telescope, a microscope, a camera, or an eye. Automated optical inspection. B Barcode. A barcode (also bar code) is a machine-readable representation of information in a visual format on a surface. Blob discovery. Inspecting an image for discrete blobs of connected pixels (e.g. a black hole in a grey object) as image landmarks. These blobs frequently represent optical targets for machining, robotic capture, or manufacturing failure. Bitmap. A raster graphics image, digital image, or bitmap, is a data file or structure representing a generally rectangular grid of pixels, or points of color, on a computer monitor, paper, or other display device. C Camera. A camera is a device used to take pictures, either singly or in sequence. A camera that takes pictures singly is sometimes called a photo camera to distinguish it from a video camera. Camera Link. Camera Link is a serial communication protocol designed for computer vision applications based on the National Semiconductor interface Channel-link. 
It was designed for the purpose of standardizing scientific and industrial video products including cameras, cables and frame grabbers. The standard is maintained and administered by the Automated Imaging Association, or AIA, the global machine vision industry's trade group. Charge-coupled device. A charge-coupled device (CCD) is a sensor for recording images, consisting of an integrated circuit containing an array of linked, or coupled, capacitors. CCD sensors and cameras tend to be more sensitive, less noisy, and more expensive than CMOS sensors and cameras. CIE 1931 Color Space. In the study of the perception of color, one of the first mathematically defined color spaces was the CIE XYZ color space (also known as CIE 1931 color space), created by the International Commission on Illumination (CIE) in 1931. CMOS. CMOS ("see-moss") stands for complementary metal-oxide semiconductor, a major class of integrated circuits. CMOS imaging sensors for machine vision are cheaper than CCD sensors but more noisy. CoaXPress. CoaXPress (CXP) is an asymmetric high speed serial communication standard over coaxial cable. CoaXPress combines high speed image data, low speed camera control and power over a single coaxial cable. The standard is maintained by JIIA, the Japan Industrial Imaging Association. Color. The perception of the frequency (or wavelength) of light; it can be compared to how pitch (or a musical note) is the perception of the frequency or wavelength of sound. Color blindness. Also known as color vision deficiency, the inability in humans to perceive differences between some or all colors that other people can distinguish. Color temperature. "White light" is commonly described by its color temperature. A traditional incandescent light source's color temperature is determined by comparing its hue with a theoretical, heated black-body radiator. The lamp's color temperature is the temperature in kelvins at which the heated black-body radiator matches the hue of the lamp. Color vision. CV is the capacity of an organism or machine to distinguish objects based on the wavelengths (or frequencies) of the light they reflect or emit. computer vision. The study and application of methods which allow computers to "understand" image content. Contrast. In visual perception, contrast is the difference in visual properties that makes an object (or its representation in an image) distinguishable from other objects and the background. C-Mount. Standardized adapter for optical lenses on CCD cameras. C-Mount lenses have a back focal distance of 17.5 mm vs. 12.5 mm for "CS-mount" lenses. A C-Mount lens can be used on a CS-Mount camera through the use of a 5 mm extension adapter. C-mount is a 1" diameter, 32 threads per inch mounting thread (1"-32UN-2A.) CS-Mount. Same as C-Mount but the focal point is 5 mm shorter. A CS-Mount lens will not work on a C-Mount camera. CS-mount is a 1" diameter, 32 threads per inch mounting thread. D Data matrix. A two dimensional Barcode. Depth of field. In optics, particularly photography and machine vision, the depth of field (DOF) is the distance in front of and behind the subject which appears to be in focus. Depth perception. DP is the visual ability to perceive the world in three dimensions. It is a trait common to many higher animals. Depth perception allows the beholder to accurately gauge the distance to an object. Diaphragm. In optics, a diaphragm is a thin opaque structure with an opening (aperture) at its centre.
The role of the diaphragm is to stop the passage of light, except for the light passing through the aperture. E Edge detection. ED marks the points in a digital image at which the luminous intensity changes sharply. It also marks the points of luminous intensity changes of an object or spatial-taxon silhouette. Electromagnetic interference. Radio Frequency Interference (RFI) is electromagnetic radiation which is emitted by electrical circuits carrying rapidly changing signals, as a by-product of their normal operation, and which causes unwanted signals (interference or noise) to be induced in other circuits. F FireWire. FireWire (also known as i. Link or IEEE 1394) is a personal computer (and digital audio/video) serial bus interface standard, offering high-speed communications. It is often used as an interface for industrial cameras. Fixed-pattern noise. Flat-field correction. Frame grabber. An electronic device that captures individual, digital still frames from an analog video signal or a digital video stream. Fringe Projection Technique. 3D data acquisition technique employing projector displaying fringe pattern on a surface of measured piece, and one or more cameras recording image(s). Field of view. The field of view (FOV) is the part which can be seen by the machine vision system at one moment. The field of view depends from the lens of the system and from the working distance between object and camera. Focus. An image, or image point or region, is said to be in focus if light from object points is converged about as well as possible in the image; conversely, it is out of focus if light is not well converged. The border between these conditions is sometimes defined via a circle of confusion criterion. G Gamut. In color reproduction, including computer graphics and photography, the gamut, or color gamut , is a certain complete subset of colors. Grayscale. A grayscale digital image is an image in which the value of each pixel is a single sample. Displayed images of this sort are typically composed of shades of gray, varying from black at the weakest intensity to white at the strongest, though in principle the samples could be displayed as shades of any color, or even coded with various colors for different intensities. GUI. A graphical user interface (or GUI, sometimes pronounced "gooey") is a method of interacting with a computer through a metaphor of direct manipulation of graphical images and widgets in addition to text. H Histogram. In statistics, a histogram is a graphical display of tabulated frequencies. A histogram is the graphical version of a table which shows what proportion of cases fall into each of several or many specified categories. The histogram differs from a bar chart in that it is the area of the bar that denotes the value, not the height, a crucial distinction when the categories are not of uniform width (Lancaster, 1974). The categories are usually specified as non-overlapping intervals of some variable. The categories (bars) must be adjacent. Histogram (Color). In computer graphics and photography, a color histogram is a representation of the distribution of colors in an image, derived by counting the number of pixels of each of given set of color ranges in a typically two-dimensional (2D) or three-dimensional (3D) color space. A histogram is a standard statistical description of a distribution in terms of occurrence frequencies of different event classes; for color, the event classes are regions in color space. HSV color space. 
The HSV (Hue, Saturation, Value) model, also called HSB (Hue, Saturation, Brightness), defines a color space in terms of three constituent components: Hue, the color type (such as red, blue, or yellow) Saturation, the "vibrancy" of the color and colorimetric purity Value, the brightness of the color I Image file formats. Image file formats provide a standardized method of organizing and storing image data. This article deals with digital image formats used to store photographic and other image information. Image files are made up of either pixel or vector (geometric) data, which is rasterized to pixels in the display process, with a few exceptions in vector graphic display. The pixels that make up an image are in the form of a grid of columns and rows. Each of the pixels in an image stores digital numbers representing brightness and color. Image segmentation. Infrared imaging. See Thermographic camera. Incandescent light bulb. An incandescent light bulb generates light using a glowing filament heated to white-hot by an electric current. J JPEG. JPEG (pronounced jay-peg) is a most commonly used standard method of lossy compression for photographic images. K Kell factor. It is a parameter used to determine the effective resolution of a discrete display device. L Laser. In physics, a laser is a device that emits light through a specific mechanism for which the term laser is an acronym: light amplification by stimulated emission of radiation. Lens. A lens is a device that causes light to either converge and concentrate or to diverge, usually formed from a piece of shaped glass. Lenses may be combined to form more complex optical systems as a Normal lens or a Telephoto lens. Lens Controller. A lens controller is a device used to control a motorized (ZFI) lens. Lens controllers may be internal to a camera, a set of switches used manually, or a sophisticated device that allows control of a lens with a computer. Lighting. Lighting refers to either artificial light sources such as lamps or to natural illumination. M Metrology. Metrology is the science of measurement. There are many applications for machine vision in metrology. machine vision. MV is the application of computer vision to industry and manufacturing. Motion perception. MP is the process of inferring the speed and direction of objects and surfaces that move in a visual scene given some visual input. N Neural network. A NN is an interconnected group of artificial neurons that uses a mathematical or computational model for information processing based on a connectionist approach to computation. In most cases an ANN is an adaptive system that changes its structure based on external or internal information that flows through the network. Normal lens. In machine vision a normal or entrocentric lens is a lens that generates images that are generally held to have a "natural" perspective compared with lenses with longer or shorter focal lengths. Lenses of shorter focal length are called wide-angle lenses, while longer focal length lenses are called telephoto lenses. O Optical character recognition. Usually abbreviated to OCR, involves computer software designed to translate images of typewritten text (usually captured by a scanner) into machine-editable text, or to translate pictures of characters into a standard encoding scheme representing them in (ASCII or Unicode). Optical resolution. Describes the ability of a system to distinguish, detect, and/or record physical details by electromagnetic means. 
The system may be imaging (e.g., a camera) or non-imaging (e.g., a quad-cell laser detector). Optical transfer function. P Pattern recognition. This is a field within the area of machine learning. Alternatively, it can be defined as the act of taking in raw data and taking an action based on the category of the data. It is a collection of methods for supervised learning. Pixel. A pixel is one of the many tiny dots that make up the representation of a picture in a computer's memory or screen. Pixelation. In computer graphics, pixelation is an effect caused by displaying a bitmap or a section of a bitmap at such a large size that individual pixels, small single-colored square display elements that comprise the bitmap, are visible. Prime lens. Mechanical assembly of lenses whose focal length is fixed, as opposed to a zoom lens, which has a variable focal length. Q Q-Factor (Optics). In optics, the Q factor of a resonant cavity is given by Q = 2π·f0·E/P, where f0 is the resonant frequency, E is the stored energy in the cavity, and P is the power dissipated. The optical Q is equal to the ratio of the resonant frequency to the bandwidth of the cavity resonance. The average lifetime of a resonant photon in the cavity is proportional to the cavity's Q. If the Q factor of a laser's cavity is abruptly changed from a low value to a high one, the laser will emit a pulse of light that is much more intense than the laser's normal continuous output. This technique is known as Q-switching. R Region of interest. A Region of Interest, often abbreviated ROI, is a selected subset of samples within a dataset identified for a particular purpose (a short worked sketch at the end of this glossary illustrates ROI selection together with histograms and blob discovery). RGB. The RGB color model utilizes the additive model in which red, green, and blue light are combined in various ways to create other colors. ROI. See Region of Interest. Foreground, figure and objects. See also spatial-taxon. S S-video. Separate video, abbreviated S-Video and also known as Y/C (or erroneously, S-VHS and "super video") is an analog video signal that carries the video data as two separate signals (brightness and color), unlike composite video which carries the entire set of signals in one signal line. S-Video, as most commonly implemented, carries high-bandwidth 480i or 576i resolution video, i.e. standard definition video. It does not carry audio on the same cable. Scheimpflug principle. Shutter. A shutter is a device that allows light to pass for a determined period of time, for the purpose of exposing the image sensor to the right amount of light to create a permanent image of a view. Shutter speed. In machine vision the shutter speed is the time for which the shutter is held open during the taking of an image to allow light to reach the imaging sensor. In combination with variation of the lens aperture, this regulates how much light the imaging sensor in a digital camera will receive. Smart camera. A smart camera is an integrated machine vision system which, in addition to image capture circuitry, includes a processor, which can extract information from images without need for an external processing unit, and interface devices used to make results available to other devices. Spatial-Taxon. Spatial-taxons are information granules, composed of non-mutually exclusive pixel regions, within scene architecture. They are similar to the Gestalt psychological designation of figure-ground, but are extended to include foreground, object groups, objects and salient object parts. Structured-light 3D scanner.
The process of projecting a known pattern of illumination (often grids or horizontal bars) on to a scene. The way that these patterns appear to deform when striking surfaces allows vision systems to calculate the depth and surface information of the objects in the scene. SVGA. Super Video Graphics Array, almost always abbreviated to Super VGA or just SVGA is a broad term that covers a wide range of computer display standards. T Telecentric lens. Compound lens with an unusual property concerning its geometry of image-forming rays. In machine vision systems telecentric lenses are usually employed in order to achieve dimensional and geometric invariance of images within a range of different distances from the lens and across the whole field of view. Telephoto lens. Lens whose focal length is significantly longer than the focal length of a normal lens. Thermography. Thermal imaging, a type of Infrared imaging. TIFF. Tagged Image File Format (abbreviated TIFF) is a file format for mainly storing images, including photographs and line art. U USB. Universal Serial Bus (USB) provides a serial bus standard for connecting devices, usually to computers such as PCs, but is also becoming commonplace on cameras. V VESA. The Video Electronics Standards Association (VESA) is an international body, founded in the late 1980s by NEC Home Electronics and eight other video display adapter manufacturers. The initial goal was to produce a standard for 800×600 SVGA resolution video displays. Since then VESA has issued a number of standards, mostly relating to the function of video peripherals in IBM PC compatible computers. VGA. Video Graphics Array (VGA) is a computer display standard first marketed in 1987 by IBM. Vision processing unit. A class of microprocessors aimed at accelerating machine vision tasks. W Wide-angle lens. In photography and cinematography, a wide-angle lens is a lens whose focal length is shorter than the focal length of a normal lens. X X-rays. A form of electromagnetic radiation with a wavelength in the range of 10 to 0.01 nanometers, corresponding to frequencies in the range 30 to 3000 PHz (1015 hertz). X-rays are primarily used for diagnostic medical and industrial imaging as well as crystallography. X-rays are a form of ionizing radiation and as such can be dangerous. Y Y-cable. A Y-cable or Y cable is an electrical cable containing three ends of which one is a common end that in turn leads to a split into the remaining two ends, resembling the letter "Y". Y-cables are typically, but not necessarily, short (less than 12 inches), and often the ends connect to other cables. Uses may be as simple as splitting one audio or video channel into two, to more complex uses such as splicing signals from a high density computer connector to its appropriate peripheral . Z Zoom lens. A mechanical assembly of lenses whose focal length can be changed, as opposed to a prime lens, which has a fixed focal length. See an animation of the zoom principle below. See also Glossary of artificial intelligence Frame grabber Google Goggles Machine vision glossary Morphological image processing OpenCV Smart camera Computer vision Machine vision
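Several entries in this glossary (grayscale, histogram, region of interest, blob discovery) can be tied together in a few lines of code. The sketch below is a minimal illustration rather than production inspection code: it assumes OpenCV and NumPy are available, and the image path, ROI coordinates, threshold choice and minimum blob area are placeholder values.

import cv2
import numpy as np

# Load an inspection image as grayscale; "part.png" is a placeholder path.
gray = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)

# Region of interest: a rectangular subset of the pixel grid (coordinates are arbitrary examples).
roi = gray[100:300, 50:400]

# Histogram: tabulated frequencies of the 256 possible gray levels within the ROI.
hist = np.bincount(roi.ravel(), minlength=256)

# Blob discovery: threshold the ROI and label connected dark regions (e.g. holes in a bright part).
_, binary = cv2.threshold(roi, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
num_labels, labels, stats, centroids = cv2.connectedComponentsWithStats(binary, connectivity=8)

for label in range(1, num_labels):            # label 0 is the background
    x, y, w, h, area = stats[label]
    if area > 50:                              # ignore tiny specks; 50 px is an arbitrary cut-off
        print(f"blob {label}: bbox=({x},{y},{w},{h}) area={area} centroid={centroids[label]}")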
6225625
https://en.wikipedia.org/wiki/MAX%20%28operating%20system%29
MAX (operating system)
MaX is a Linux distribution sponsored by the Office of Education of the autonomous Community of Madrid, Spain. MaX stands for Madrid LinuX. It was based on Ubuntu; its last release, MaX 10, is based on Ubuntu 16.04 LTS "Xenial Xerus". Since 2003 MaX has been installed on all the computers of the schools in the Community of Madrid. MaX is an educational Linux distribution with a large set of instructive programs in addition to the usual desktop programs. MaX supports the most common desktop environments but, in MaX 10, MATE is installed by default. Its main features are simplicity, stability and a large collection of software. MaX is available as a Live DVD, as an installable system, and as a USB version. The changelog is available at: http://ftp.rediris.es/mirror/MaX-Linux/MaXdesktop/MAX7.5/cambios.txt External links Homepage (in Spanish) Distrowatch page ISO images of MaX References EducaMadrid web site Educational operating systems Spanish-language Linux distributions State-sponsored Linux distributions Ubuntu derivatives Linux distributions
719065
https://en.wikipedia.org/wiki/MacGuffin%20%28cipher%29
MacGuffin (cipher)
In cryptography, MacGuffin is a block cipher created in 1994 by Bruce Schneier and Matt Blaze at a Fast Software Encryption workshop. It was intended as a catalyst for analysis of a new cipher structure, known as Generalized Unbalanced Feistel Networks (GUFNs). The cryptanalysis proceeded very quickly, so quickly that the cipher was broken at the same workshop by Vincent Rijmen and Bart Preneel. The algorithm Schneier and Blaze based MacGuffin on DES, their main change being that the data block is not split into equal halves in the Feistel network. Instead, 48 bits of the 64-bit data block are fed through the round function, whose output is XORed with the other 16 bits of the data block. The algorithm was experimental, intended to explore the security properties of unbalanced Feistel networks. The adjacent diagram shows one round of MacGuffin. The 64-bit data block is broken into four 16-bit words (each represented by one line). The rightmost three are XORed with subkey bits derived from the secret key. They are then fed through eight S-boxes, each of which takes six bits of input and produces two bits of output. The output (a total of 16 bits) is then recombined and XORed with the leftmost word of the data block. The new leftmost block is then rotated into the rightmost position of the resulting data block. The algorithm then continues with more rounds. MacGuffin's key schedule is a modified version of the encryption algorithm itself. Since MacGuffin is a Feistel network, decryption is easy; simply run the encryption algorithm in reverse. Schneier and Blaze recommended using 32 rounds, and specified MacGuffin with a 128-bit key. Cryptanalysis of MacGuffin At the same workshop where MacGuffin was introduced, Rijmen and Preneel showed that it was vulnerable to differential cryptanalysis. They showed that 32 rounds of MacGuffin is weaker than 16 rounds of DES, since it took "a few hours" to get good differential characteristics for DES with good starting values, and the same time to get good differential characteristics for MacGuffin with no starting values. They found that it is possible to get the last round key with differential cryptanalysis, and from that reverse the last round and repeat the attack for the rest of the rounds. Rijmen and Preneel tried attacking MacGuffin with different S-boxes, taken directly from DES. This version proved to be slightly stronger, but they warn that designing an algorithm to resist only known attacks is generally not a good design principle. References Broken block ciphers 1994 introductions
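The round structure described above can be sketched in a few lines of Python. This is an illustration of the data flow only: the placeholder substitution, the way bits are routed into the eight S-boxes, and the subkey values below are invented for the example and are not the published MacGuffin tables, so the sketch is not interoperable with the real cipher.

# One MacGuffin-style round: an unbalanced Feistel step over four 16-bit words.
# The substitution and bit routing are placeholders, NOT the real MacGuffin S-boxes.

def toy_f(right_words, subkey):
    """Placeholder round function: 48 input bits -> 16 output bits."""
    a, b, c = (w ^ k for w, k in zip(right_words, subkey))  # mix in 48 subkey bits
    x = (a << 32) | (b << 16) | c                            # concatenate to 48 bits
    out = 0
    for i in range(8):                 # eight S-boxes, 6 bits in, 2 bits out each
        six = (x >> (6 * i)) & 0x3F
        two = (six * 37 + i) & 0x3     # toy substitution standing in for a real S-box
        out |= two << (2 * i)
    return out                         # 16 bits

def macguffin_like_round(block, subkey):
    """XOR the leftmost word with f(rightmost three words), then rotate it to the rightmost slot."""
    left, r1, r2, r3 = block
    return (r1, r2, r3, left ^ toy_f((r1, r2, r3), subkey))

state = (0x0123, 0x4567, 0x89AB, 0xCDEF)   # a 64-bit block as four 16-bit words
subkey = (0x1111, 0x2222, 0x3333)          # arbitrary 48-bit round key for the example
for _ in range(4):                          # the real cipher uses 32 rounds
    state = macguffin_like_round(state, subkey)
print([hex(w) for w in state])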
5720797
https://en.wikipedia.org/wiki/Sierra%20Wireless
Sierra Wireless
Sierra Wireless is a Canadian multinational wireless communications equipment designer, manufacturer and services provider headquartered in Richmond, British Columbia, Canada. It also maintains offices and operations in California, Sweden, Korea, Japan, China, Taiwan, France, Australia, New Zealand and Hong Kong. Sierra Wireless is an Internet of Things (IoT) solutions provider. The company sells mobile computing and machine-to-machine (M2M) communications products that work over cellular networks, 2G, 3G, 4G and 5G mobile broadband wireless routers and gateways, modules, as well as software, tools, and services. Sierra Wireless products and technologies are used in a variety of markets and industries, including automotive and transportation, energy, field service, healthcare, industrial and infrastructure, mobile computing and consumers, networking, sales and payment, and security. It also maintains a network of experts in mobile broadband and M2M integration to support customers worldwide. The company's products are sold directly to OEMs, as well as indirectly through distributors and resellers. History Sierra Wireless was founded in 1993 in Vancouver, Canada. In August 2003, it completed an acquisition of privately held, high-speed CDMA wireless modules supplier, AirPrime, issuing approximately 3.7 million shares to AirPrime shareholders. On March 6, 2007, the company announced its purchase of Hayward, California-based AirLink Communications, a privately held developer of high-value fixed, portable, and mobile wireless data solutions. Prior to the May 2007 completion of its sale to Sierra Wireless for a total of $27 million in cash and stock, AirLink had reported $24.8 million in revenues and gross margin of 44 percent. On August 5, 2008, Sierra Wireless announced its purchase of the assets of Junxion, a Seattle based producer of Linux-based mobile wireless access points and network routers for enterprise and government customers. Terms of the acquisition were not made public. In December 2008, Sierra Wireless made a friendly all-cash bid to acquire Wavecom, a French M2M wireless technology developer. The sale was completed for $275 million. On August 1, 2012, the company announced it had completed its acquisition of SAGEMCOM's M2M business in a cash transaction of EUR44.9 million plus assumed liabilities. In November 2012, Sierra Wireless was recognized as an M2M market leader by market research and intelligence firm, ABI Research. Based on the company's aggregated revenues, ABI found that Sierra Wireless held a 34 percent share of the M2M module market. On January 28, 2013, it was announced that Sierra Wireless had entered a definitive agreement to sell all assets and operations related to its AirCard business to NETGEAR, Inc. for $138 million in cash plus approximately $6.5 million in assumed liabilities as of December 31, 2012. On April 3, 2017, Sierra Wireless announced its acquisition of GlobalTop Technology's GNSS embedded modules business for $3.2 million. In 2020, Sierra Wireless completed the acquisition of M2M Group in Australia for USD18.4 million and M2M One in New Zealand for USD3.5 million. In 2020 the company also sold its automotive business for USD144 million. In April 2020, Sierra Wireless appointed Samuel Cochrane as Chief Financial Officer. On November 18, 2020, Sierra Wireless announced completion of the sale of its automotive modules division to Rolling Wireless. In July 2021, Sierra Wireless appointed Phil Brace as CEO. 
Patents and technology Sierra Wireless has been granted more than 550 unique patents for an array of technologies ranging from built-in antennas and product form factors, to battery power usage and management and network efficiency improvements. The company currently has patents pending in the US, Europe, Asia, Australia, Mexico, and South Africa. Products Sierra Wireless offers IoT products and services including the following: AirLink, wireless gateways used in industrial, enterprise, and automotive applications. AirLink intelligent gateways and routers use embedded intelligence and the ALEOS Application Framework to support remote management, control, and configuration, and application services for vertical market solutions. Embedded modules providing cellular connectivity for wireless mobile computing and M2M often used by device manufacturers to bring new 3G - 5G, GSM, and CDMA devices to market via PCI Express MiniCards, surface-mount modules, modem-only functionality, and programmable devices. AirVantage, a cloud-based application facilitating M2M service delivery consisting of the AirVantage Management Service M2M device management application, and AirVantage Enterprise Platform for collecting, sharing, and integration of machine data using API standards, as well as development and deployment of M2M applications. Octave, The All-in-One Edge-to-Cloud Solution for connecting industrial assets. Global Connectivity Services. Managed Network Services, providing internet management under a single vendor for a complete solution. Managed IoT Solutions, for agencies and enterprises that require tracking and monitoring. Available as fully-integrated managed services that includes asset/cargo tracking, satellite tracking, fleet tracking, offender monitoring, remote monitoring and cellular alarm communicators. Until the 2013 sale of all assets and operations, Sierra Wireless also manufactured and marketed AirCard, mobile broadband devices permitting users to connect notebooks, netbooks, and other electronics to the Internet over 3G and 4G mobile broadband networks via PC or express card slots, USB ports, or mobile WiFi hotspots. Community support Sierra Wireless is an active United Way of the Lower Mainland supporter. For its efforts, the company was recognized with United Way Employee Gold Awards and a 2002 Award of Excellence. In 2003, Canadian public university Simon Fraser University announced the creation of the Sierra Wireless Chair in Wireless Communication, which was funded through annual donations for a three-year period. Upon completion of the original three-year term, the company extended its partnership with the university by establishing an endowed Sierra Wireless Professorship in Mobile Communication. In partnership with the university's School of Engineering Science in the Faculty of Applied Sciences, the Sierra Wireless Mobile Communications Laboratory was opened in November 2012. See also Mobile data terminal References Companies listed on the Toronto Stock Exchange Companies based in Richmond, British Columbia Electronics companies established in 1993 Canadian companies established in 1993 Companies listed on the Nasdaq Electronics companies of Canada Networking companies of Canada Networking hardware companies Routers (computing) Wireless networking Canadian brands 1993 establishments in British Columbia 1999 initial public offerings
45060803
https://en.wikipedia.org/wiki/ECU-TEST
ECU-TEST
ECU-TEST is a software tool developed by TraceTronic GmbH, based in Dresden, Germany, for the test and validation of embedded systems. Since the first release of ECU-TEST in 2003, the software has been used as a standard tool in the development of automotive ECUs and increasingly in the development of heavy machinery as well as in factory automation. The development of the software started within a research project on systematic testing of control units and laid the foundation for the spin-off of TraceTronic GmbH from TU Dresden. ECU-TEST aims at the specification, implementation, documentation, execution and assessment of test cases. Its various test automation methods support an efficient implementation of all activities needed for the creation, execution and assessment of test cases. Functionality Methodology ECU-TEST automates the control of the whole test environment and supports a broad range of test tools. Various abstraction layers for measured quantities allow its application on different testing levels, e.g. within the context of model in the loop, software in the loop and hardware in the loop, as well as in real systems (vehicle and driver in the loop). Test cases are created graphically in ECU-TEST, which does not require programming skills. Test-case descriptions have a generic form which, together with extensive parameterization and configuration options, allows uniform access to all test tools and thereby simplifies the re-use of existing tests over multiple development phases. Structure ECU-TEST is organized into four parts: Editor and Project manager Configurator Test engine Analyzer and Protocol generator In order to create a test case, one or more sequences of test steps and their parameterizations are specified using the editor. Test steps comprise reading and evaluating measured quantities of the test object, manipulating the test environment, and executing diagnostic functions and control structures. Multiple test cases can be organized using the project manager. Additional settings for the test object and the test environment can be made using the configurator. The execution of test cases is performed by a multi-stage test engine. The generated log data serve as the basis for the creation of test reports. Subsequent to the test execution, optional checks of recorded measured quantities are performed in the analyzer. From the results of test execution and subsequent checks, the protocol generator produces a detailed test report, which is displayed interactively and can be archived in files and databases. Interfaces ECU-TEST provides clear interfaces for extensions and for integration into existing test and validation processes. A large range of test hardware and software is supported by default. Using user-defined test steps, plug-ins and Python scripts, additional tools can be integrated with little effort. Via a dedicated client-server architecture, software tools on multiple test-bench computers in distributed test environments can be addressed. Using a COM interface, further tools, e.g. for requirements management, revision control and model-based testing, can be integrated.
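The specify-execute-assess flow described above can be illustrated with a small, generic sketch. The class and method names below are invented for the illustration and are not the ECU-TEST API or file format; the measured value is a placeholder that a real test-bench tool would supply.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class TestStep:
    description: str
    action: Callable[[], bool]        # returns True if the step's check passes

@dataclass
class TestCase:
    name: str
    steps: List[TestStep]

    def run(self) -> dict:
        """Execute all steps and collect a simple report (a stand-in for a generated test protocol)."""
        results = [(step.description, step.action()) for step in self.steps]
        return {"test_case": self.name,
                "verdict": "PASSED" if all(ok for _, ok in results) else "FAILED",
                "steps": results}

# Example: a toy signal check standing in for "read and evaluate a measured quantity".
measured_speed = 52.0  # placeholder value that a real tool would read from the test bench

case = TestCase(
    name="speed_signal_plausibility",
    steps=[TestStep("speed is within plausible range", lambda: 0.0 <= measured_speed <= 250.0)],
)
print(case.run())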
ECU-TEST supports the following hardware and software tools and is based on the following standards: Supported hardware and software A&D: iTest AKKA: Gigabox ASAM: ACI ASAM: XiL ASAP: STEP ATI: VISION AVL: LYNX AVL: PUMA AVSimulation: SCANeR Beckhoff: TwinCAT CARLA Team: CARLA Digitalwerk: ADTF dSPACE: ControlDesk dSPACE: ModelDesk dSPACE: MotionDesk EA: UTA12 ESI: SimulationX ETAS: BOA ETAS: INCA ETAS: LABCAR ETAS: LABCAR-PINCONTROL FEP FEP3 FEV: Morphée froglogic:Squish Göpel: Video Dragon HORIBA FuelCon: TestWork HMS: ACT - Restbussimulation HMS: Bus interfaces IPG: CarMaker JS Foundation: Appium KS Engineers: Tornado Lauterbach: Trace32 MAGNA: BluePiraT Mathworks: MATLAB® & Simulink Mechanical Simulation Corporation: CarSim MicroNova: NovaSim Modelica Association: FMI National Instruments: LabVIEW National Instruments: VeriStand National Instruments: VISA OPAL-RT: RT-LAB PEAK: PCAN PLS: UDE QUANCOM: QLIB RA Consulting: DiagRA D SAE: PassThru Scienlab: Charging Discovery System Scienlab: Energy Storage Discover Softing: CAN L2 API Softing: Diagnostic Tool Set Softing: EDIABAS Speedgoat: Simulink RT Synopsys: Silver Synopsys: Virtualizer Technica: BTS The GNU Project: GDB TraceTronic: cTestBed TraceTronic: Ethernet TraceTronic: Multimedia TraceTronic: RemoteCommand TraceTronic: Serial interface TraceTronic: SSH TTTech: TTXConnexion Typhoon HIL: Typhoon HIL Control Center Vector: CANalyzer Vector: CANape Vector: CANoe Vector: DYNA4 Vector XL API ViGEM: Car Communication Analyzer Vires: Virtual Test Drive VW: ODIS X2E: Xoraya Test management tools Broadcom Rally Software IBM RQM Micro Focus ALM /HP Quality Center Micro Focus Octane PTC Integrity LifeCycle Manager SIEMENS Polarion ALM Test42 Source code management tools Apache Subversion Git System requirements OS: Windows 10, 64 bit Free hard disk capacity: at least 3 GB RAM: at least 2 GB Screen resolution: at least 1200 x 800 pixel References External links ECU-TEST product page on TraceTronic website. Retrieved 05 February 2020. TraceTronic GmbH. Retrieved 05 February 2020. Automotive electronics Computer-aided engineering software Control engineering Data analysis software Software testing tools
12095590
https://en.wikipedia.org/wiki/Kerrie%20Holley
Kerrie Holley
Kerrie Lamont Holley is an American software architect, author, researcher, consultant, and inventor. He recently joined Industry Solutions, Google Cloud. Previously he was with UnitedHealth Group / Optum, their first Technical Fellow, where he focused on ideating healthcare assets and solutions using IoT, AI, graph databases and more. His main focus centered on advancing AI in healthcare with an emphasis on deep learning and natural language processing. Holley is a retired IBM Fellow. Holley served as vice president and CTO at Cisco responsible for their analytics and automation platform. Holley is known internationally for his innovative work in architecture and software engineering centered on the adoption of scalable services, next era computing, service-oriented architecture and APIs. Biography Early life and education Holley was raised by his maternal grandmother on Chicago's south side. While never having met his father and living in a neighborhood marked by poverty and gang activity, Holley defied social odds by channeling his love for math and science through his academic studies. He became a student at the Sue Duncan Children's Center in 1961 where he was tutored in math and science. As he excelled in the program, he became a tutor at the center, later tutoring former United States Secretary of Education, Arne Duncan and actor Michael Clarke Duncan. After graduating from Kenwood Academy in 1972, Holley went on to receive his Bachelor of Arts in mathematics from DePaul University in Chicago, followed by a Juris Doctor degree in 1982 from DePaul University College of Law. In 2016, Holley was conferred a Doctor of Humane Letters from DePaul University, College of Communication / College of Computing and Digital Media. Career Holley joined IBM in 1986 as an advisory systems engineer. In 1990 he became an analytics consultant with IBM's consulting group, now called IBM Global Business Services. He was appointed chief technology officer of IBM's GBS, AIS and IBM's SOA Center of Excellence, where he worked with clients to create flexible applications that enable companies to respond to rapidly changing markets. SOA (service-oriented architecture) is a software design methodology based on structured collections of discrete software modules, known as services, that collectively provide the complete functionality of a large or complex software application. Kerrie's engineering work addressed one of the biggest worldwide challenges in enterprise software system development: future-proofing. He is one of the early pioneers of thinking, practices and tools for evolving software architectures via service-oriented architecture (SOA) for engineering large complex enterprise systems. For his work Holley was recognized as an IBM Fellow. In 2000 he was appointed an IBM Distinguished Engineer and in that same year elected to IBM's Academy of Technology for his sustained contributions in designing high-performance financial services applications. Holley is a co-patent owner of the industry's first SOA method and SOA maturity model, which helps companies develop SOA-based applications and infrastructures. Holley's experience with cognitive services and analytics at IBM prompted Cisco to ask him to join, to mature their analytics software and automation portfolio focused on machine learning and streaming analytics.
In 2016, the opportunity and challenge to contribute to UnitedHealth Group mission to help people live healthier lives and make health care work better made an easy decision for him to join Optum Technology. At Optum, Holley focused on advancing UnitedHealth Group in several strategic technology imperatives. While at Optum he led the UnitedHealth Group Fellow, Distinguished Engineer, and Principal Engineer Technical Leadership Career Program. Awards and honors 2003 Black Engineer of the Year 2004 The 50 Most Important Blacks in Research Science 2005-2010 Naval Studies Board member 2006 IBM Fellow, 2nd African American to be appointed in 100 years 2007 Most Important Blacks in Technology 2009 Most Important Blacks in Technology 2011 Red Herring 100 Global Award Finalist 2012 IBM Master Inventor 2016 Honorary Doctorate Degree and Commencement Speaker, DePaul University 2016 UHG and Optum Fellow Work IBM Fellow Holley was a Fellow in the Thomas J. Watson Research Center focused on scalable business services and API economy. Previously, he served as a CTO for IBM Global Business Services. In 2006 he was named an IBM Fellow, the company's highest technical leadership position. The Fellows program, founded by Thomas J. Watson in 1962, promotes creativity among IBM's most exceptional technical professionals. The IBM Fellow recognition is the most prestigious recognition in the IBM technical community where the criteria for appointment includes: Distinguished, sustained record of technical achievements (usually a creative contribution to science and technology, landmarks to IBM) and a strong potential for continuing contributions to IBM's growth and stature. Technical abilities considered are: Originality and creativity Inventive activities Insight into the technical field of expertise Consulting effectiveness and leadership Technical publications Professional society contributions The criteria for appointment are stringent and take into account only the most significant technical achievements. Appointment as an IBM Fellow, is made by the chairman, president and chief executive officer, and is a career designation. Since 1963, IBM shows a directory of 325 IBM Fellows appointments of which 102 are active as of April 2021. Publications Holley's most recent book, AI First Healthcare, was published by O'Reilly Media in 2021. In November 2010 Holley's first book "100 SOA Questions: Asked and Answered" was published. The book describes how enterprises can adopt service-oriented architecture. His next book "Is Your Company Ready for Cloud", co-authored with Pam Isom, was released in 2012. Patents Holley owns several patents ranging from how to maintain functionality when faced with component failure, to how to locate lost mobile devices and software engineering patents in service-oriented architecture. Holley is a co-patent owner of the industry's first SOA development method and first SOA maturity model. The maturity model helps enterprises assess where they are on the road to adopting a Service-Oriented Architecture and provides a plan for achieving an SOA-based infrastructure. Selected publications Holley, Kerrie, and Siupo Becker. AI First Healthcare. O'Reilly Media, 2021. Holley, Kerrie, and Ali Arsanjani. 100 SOA Questions: Asked and Answered. Pearson Education, 2010. Articles, a selection: Channabasavaiah, Kishore, Kerrie Holley, and Edward Tuggle. "Migrating to a service-oriented architecture." IBM DeveloperWorks 16 (2003). Crawford, C. H., Bate, G. P., Cherbakov, L., Holley, K., & Tsocanos, C. 
(2005). Toward an on demand service-oriented architecture." IBM Systems Journal, 44(1), 81-107. Arsanjani, A., Ghosh, S., Allam, A., Abdollah, T., Ganapathy, S., & Holley, K. (2008). "SOMA: A method for developing service-oriented solutions." IBM systems Journal, 47(3), 377–396. References External links The Sue Duncan Children's Center Article: The Great SOA IBM Man Book: 100 SOA Questions Asked and Answered Patents Bloomberg: Watson Has Real World Uses IBM GBS CTO Talks Cloud API Economy DePaul 2016 Commencement - Live Stream IBM Fellows African-American engineers 21st-century American engineers African-American inventors 20th-century American inventors 21st-century American inventors African-American scientists Software engineering researchers American computer scientists Living people DePaul University College of Law alumni 1954 births 21st-century African-American people 20th-century African-American people
972846
https://en.wikipedia.org/wiki/CCMP%20%28cryptography%29
CCMP (cryptography)
Counter Mode Cipher Block Chaining Message Authentication Code Protocol (Counter Mode CBC-MAC Protocol) or CCM mode Protocol (CCMP) is an encryption protocol designed for Wireless LAN products that implements the standards of the IEEE 802.11i amendment to the original IEEE 802.11 standard. CCMP is an enhanced data cryptographic encapsulation mechanism designed for data confidentiality and based upon the Counter Mode with CBC-MAC (CCM mode) of the Advanced Encryption Standard (AES) standard. It was created to address the vulnerabilities presented by Wired Equivalent Privacy (WEP), a dated, insecure protocol. Technical details CCMP uses CCM, which combines CTR mode for data confidentiality and cipher block chaining message authentication code (CBC-MAC) for authentication and integrity. CCM protects the integrity of both the MPDU data field and selected portions of the IEEE 802.11 MPDU header. CCMP is based on AES processing and uses a 128-bit key and a 128-bit block size. CCMP uses CCM with the following two parameters: M = 8; indicating that the MIC is 8 octets (eight bytes). L = 2; indicating that the Length field is 2 octets. A CCMP Medium Access Control Protocol Data Unit (MPDU) comprises five sections. The first is the MAC header which contains the destination and source address of the data packet. The second is the CCMP header which is composed of 8 octets and consists of the packet number (PN), the Ext IV, and the key ID. The packet number is a 48-bit number stored across 6 octets. The PN codes are the first two and last four octets of the CCMP header and are incremented for each subsequent packet. Between the PN codes are a reserved octet and a Key ID octet. The Key ID octet contains the Ext IV (bit 5), Key ID (bits 6–7), and a reserved subfield (bits 0–4). CCMP uses these values to encrypt the data unit and the MIC. The third section is the data unit which is the data being sent in the packet. The fourth is the message integrity code (MIC) which protects the integrity and authenticity of the packet. Finally, the fifth is the frame check sequence (FCS) which is used for error detection and correction. Of these sections only the data unit and MIC are encrypted. Security CCMP is the standard encryption protocol for use with the Wi-Fi Protected Access II (WPA2) standard and is much more secure than the Wired Equivalent Privacy (WEP) protocol and Temporal Key Integrity Protocol (TKIP) of Wi-Fi Protected Access (WPA). CCMP provides the following security services: Data confidentiality; ensures only authorized parties can access the information Authentication; provides proof of genuineness of the user Access control in conjunction with layer management Because CCMP is a block cipher mode using a 128-bit key, it is secure against attacks requiring up to 2^64 steps of operation. Generic meet-in-the-middle attacks do exist and can be used to limit the theoretical strength of the key to 2^(n/2) operations (where n is the number of bits in the key). Known attacks References Cryptographic protocols Wireless networking IEEE 802.11 Secure communication Key management
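CCM mode itself (CTR encryption plus a CBC-MAC tag) is available in common cryptography libraries, which makes the M = 8 parameter above easy to demonstrate. The sketch below, assuming the Python "cryptography" package is installed, encrypts a payload with AES-CCM and an 8-octet MIC; it illustrates plain CCM only and does not reproduce the exact 802.11 CCMP nonce and header construction, and the key, nonce and data are placeholder values.

import os
from cryptography.hazmat.primitives.ciphers.aead import AESCCM

key = AESCCM.generate_key(bit_length=128)   # CCMP uses a 128-bit temporal key
aesccm = AESCCM(key, tag_length=8)          # 8-octet MIC, matching M = 8

nonce = os.urandom(13)                      # 13-octet nonce (simplified; not the CCMP PN layout)
header = b"example-mpdu-header-fields"      # stands in for the authenticated header portions
data_unit = b"example data unit"

ciphertext = aesccm.encrypt(nonce, data_unit, header)   # returns ciphertext || 8-byte MIC
recovered = aesccm.decrypt(nonce, ciphertext, header)   # raises InvalidTag if the MIC check fails
assert recovered == data_unit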
27683037
https://en.wikipedia.org/wiki/List%20of%20Israeli%20inventions%20and%20discoveries
List of Israeli inventions and discoveries
This is a list of inventions and discoveries by Israeli scientists and researchers, working locally or overseas. There are over 6,000 startups currently in Israel. There are currently more than 30 technology companies valued over US$1 billion (unicorn startups) in Israel, more than all of Europe combined. Mathematics Johnson–Lindenstrauss lemma, a mathematical result concerning low-distortion embeddings of points from high-dimensional into low-dimensional Euclidean space, contributed by Joram Lindenstrauss. Development of measure rigidity in ergodic theory, and its applications to number theory, by Elon Lindenstrauss. Proof of Szemerédi's theorem by mathematician Hillel Furstenberg. Expansion of axiomatic set theory and the ZF set theory by Abraham Fraenkel. Development of the area of automorphic forms and L-functions by Ilya Piatetski-Shapiro (Tel Aviv University obituary). Development of the Sauer–Shelah lemma and the Shelah cardinal. Development of the first proof of the alternating sign matrix conjecture. Development of the zig-zag product of graphs, a method of combining smaller graphs to produce larger ones used in the construction of expander graphs, by Avi Wigderson. Development of the Bernstein–Sato polynomial and proof of the Kazhdan–Lusztig conjectures by Joseph Bernstein. Generalization of the marriage theorem, by obtaining the right transfinite conditions for infinite bipartite graphs, by Ron Aharoni, who subsequently proved the appropriate versions of the Kőnig theorem and the Menger theorem for infinite graphs. Development of the Amitsur–Levitzki theorem by Shimshon Amitsur. False discovery rate, a statistical method for regulating Type I errors. Science Chemistry Discovery of quasicrystals by Dan Shechtman of the Technion. The discovery led him to receive the Nobel Prize in Chemistry. Discovery of the role of the protein Ubiquitin by Avram Hershko and Aaron Ciechanover of the Technion Institute (together with the American biologist Irwin Rose). The discovery led them to receive the Nobel Prize in Chemistry. Ada Yonath - 2009 Nobel Prize in Chemistry for discovery of the structure and function of ribosomes, the universal cellular factory for the translation of the genetic code to proteins. Development of multiscale models for complex chemical systems by Arieh Warshel and Michael Levitt of the Weizmann Institute of Science (presently at the University of Southern California and Stanford University, respectively), together with the Austrian-born American chemist Martin Karplus. The discovery led them to receive the Nobel Prize in Chemistry. Physics Prediction of quarks by Yuval Ne'eman of Tel Aviv University (together with the American physicist Murray Gell-Mann). Discovery of the Aharonov–Bohm effect by Yakir Aharonov and David Bohm. Formulation of black hole entropy by Jacob Bekenstein of the Hebrew University of Jerusalem. Optics World's smallest video camera – a miniature camera designed to fit in a tiny endoscope, developed by Medigus. Pillcam by Given Imaging, the first capsule endoscopy solution to record images of the digestive tract. The capsule is the size and shape of a pill and contains a tiny camera. Created by Israeli engineer Gavriel Iddan, who sold the company to Irish medical device maker Covidien for $860 million. Iddan has expressed regret for the sale due to the company's fulfillment of an ancient Jewish prophecy: "The Pillcam was based on military technology... It was a good example of how we shall beat our swords into plowshares", as the Hebrew prophets predicted.
Covidien was acquired by Medtronic in 2016, and is now the provider of Pillcam. Line free single power bicentric prismatic spectacle lens for correction of anisometropia. Sydney J. Bush UK patent no. 1539381. Medicine The flexible stent, also known as NIR Stent or EluNIR. Developed by Israeli company Medinol, which is headquartered in Tel Aviv. The pressure bandage - known widely as the Israeli Bandage is a specially designed, first-aid device that is used to stop bleeding from hemorrhagic wounds caused by traumatic injuries in pre-hospital emergency situations. First used for saving lives during a NATO peacekeeping operation in Bosnia and Herzegovina, by inventor, Israeli military medic, Bernard Bar-Natan. The bandage was successfully used during operations Enduring Freedom and Iraqi Freedom and is widely used today, across the world. The bandage was nicknamed "Israeli bandage" by American soldiers, and has been "the bandage of choice for the US Army and special forces". Before the Israeli emergency bandage was invented in 1998, wounded soldiers were told to find a rock and wrap it on top of hemorrhaging wounds in order to hold direct pressure. Bar-Natan sold his company to PerSys Medical Inc in Houston, Texas, the company that first introduced the bandage to the US military. Eshkol-Wachman Movement Notation – a notation system for recording movement on paper that has been used in many fields, including dance, physical therapy, animal behavior and early diagnosis of autism. Development of Azilect, a drug for Parkinson's disease, by Moussa Youdim and John Finberg from the Technion - Israel Institute of Technology, and commercialized by Teva Pharmaceutical Industries. Development of the Copaxone immunomodulator drug for treating multiple sclerosis. It was developed in the Weizmann Institute of Science in Israel by Michael Sela, Ruth Arnon and Deborah Teitelbaum. Development of the Interferon proteins by Michel Revel from the Weizmann Institute of Science in Israel. Development of taliglucerase alfa (Elelyso), a recombinant glucocerebrosidase enzyme produced from transgenic carrot cell cultures. Taliglucerase alfa won approval from the U.S. Food and Drug Administration in May 2012 as an orphan drug for the treatment of Type 1 Gaucher's disease. Development of Chimeric Antigen Receptor Sambucol, an over-the-counter elderberry-based anti-influenza syrup. Discovered by Dr. Madeleine Mumcuoglu at the Hebrew University of Jerusalem. Studies in 1995 showed that Sambucol was effective against human, swine and avian influenza strains, although more research is required to clearly understand its effectiveness. Economics Work of Daniel Kahneman and Amos Tversky of the Hebrew University of Jerusalem explaining irrational human economic choices. The work led Daniel to receive the Nobel Prize in Economics. Developments in Game theory. Israel Aumann of the Hebrew University of Jerusalem received the Nobel Prize in Economics for his work in this field. The Rubinstein bargaining model, one of the most influential findings in game theory, refers to a class of bargaining games that feature alternating offers through an infinite time horizon. The proof is from Ariel Rubinstein 1982. Biotechnology Nanowire – a conductive wire made of a string of tiny particles of silver, a thousand times thinner than a human hair. Developed by Uri Sivan, Erez Braun and Yoav Eichen from the Technion. 
World's smallest DNA computing machine system – "the smallest biological computing device" ever constructed, according to Guinness Book of Records, which is composed of enzymes and DNA molecules capable of performing simple mathematical calculations and which uses its input DNA molecule as its sole source of energy. Developed in 2003 in the Weizmann Institute of Science by professor Ehud Shapiro and his team. Theoretical computer science RSA public key encryption, introduced by Adi Shamir with Ron Rivest, and Leonard Adleman The concept of nondeterministic finite automatons, introduced by Michael O. Rabin Amir Pnueli introduced temporal logic into computing science Lempel–Ziv–Welch algorithm, a universal lossless data compression algorithm created by Abraham Lempel and Jacob Ziv of the Technion institute, together with the American Information theorist, Terry Welch. Differential cryptanalysis, co-invented by Adi Shamir Shamir's Secret Sharing, invented by Adi Shamir Computing Computer hardware USB flash drive – a flash memory data storage device integrated with a USB interface. The Israeli company M-Systems (in partnership with IBM) developed and manufactured the first USB flash drives available in North America. This claim is challenged by multiple companies in the following three countries which also independently developed USB technology: Singapore (Trek Technology), the People's Republic of China (PRC) (Netac Technology) and the Republic of China (Taiwan). See USB Flash drive § Patent controversy. Smartphone dual lens technology, by Israeli company Corephotonics. In 2018, Corephotonics sued Apple Inc for infringement of its dual camera patents; specifically regarding several iPhone models' use of their patented telephoto lens design, optical zoom method, and a method for intelligently fusing images from the wide-angle and telephoto lenses to improve image quality, infringing on four separate patents. Corephotonics was bought by Samsung in 2018 for US$155 million. The Intel 8088 – This microprocessor, designed at Intel's Haifa laboratory, powered the first PC that IBM built, which is credited with kickstarting the PC revolution. The 8088 was designed in Israel at Intel's Haifa laboratory. The widespread use of the IBM's PC, using the 8088 processor, established the use of x86 architecture as a de facto standard for decades. The IEEE wrote that "almost all the world’s PCs are built around CPUs that can claim the 8088 as an ancestor." Intel has credited the 8088 with launching the company into the Fortune 500. Quicktionary Electronic dictionary – a pen-sized scanner able to scan words or phrases and immediately translate them into other languages, or keep them in memory in order to transfer them to the PC. Developed by the company Wizcom Technologies Ltd. Laser Keyboard – a virtual keyboard is projected onto a wall or table top and allows to type handheld computers and cell phones. Developed simultaneously by the Israeli company Lumio and Silicon Valley startup company Canesta.Marriott, Michel (September 19, 2002). "No Keys, Just Soft Light and You". The New York Times.Shiels, Maggie (October 15, 2002). "The keyboard that isn't there". BBC News. The company subsequently licensed the technology to Celluon of Korea. 
TDMoIP (TDM over IP) − in telecommunications, the emulation of time-division multiplexing (TDM) over a packet-switched network (PSN), developed by engineers at RAD Data Communications Voice over Internet Protocol (VoIP) - technology for voice based communications using the internet instead of traditional telephone systems. VoIP was originally conceived by Danny Cohen, an Israeli-American scientist, but was first created, implemented, and commercialized by Netanya-based, Israeli company VocalTec and its founder Alon Cohen Many Intel processors are developed and/or manufactured in Israel including: Ice Lake Server processors, developed in Haifa and produced at Intel's Kiryat Gat plant. Celeron Sunny Cove Rocket Lake Alder Lake Pentium MMX Centrino / Pentium M Sandy Bridge Development and production of processors and chipsets for many companies including Google, Apple, Microsoft, HP, Amazon, IBM, Broadcom, ARM, STMicroelectronics, Samsung, Sony, and Qualcomm, some of whom have had major research and development centers in Israel for decades, often developing key technologies in their Israeli R&D centers. Thunderbolt a widely used interface technology, was developed as a joint venture between Apple Inc and Intel, in Israel Mellanox Technologies designs and/or makes crucial, industry-leading Ethernet and InfiniBand network and multicore processors, network adapters, switches (including the Spectrum family of switches), cables, software and other services for high performance computing, supercomputing, enterprise data centers, cloud, storage, AI, cyber security, telecom and financial solutions. Mellanox hardware and software is used by many large companies, such as Google, Microsoft, Alibaba, Dell, HP, and Western Digital among many others. The Israeli company was bought by Nvidia in 2019 for $6.9 billion. Computer and mobile software The network firewall - technology to block malware attacks, filter malicious traffic, and prevent unauthorized access to a network. Although academic work on this had been performed by others, Gil Schwed and Nir Zuk, at Israeli company Checkpoint, released the first commercial, stateful firewall. The first instant messaging (IM) service, ICQ. Originally developed by the Israeli company Mirabilis in 1996, ICQ was the first widely adopted IM service in the world. The company was bought by AOL for $407 million in 1998. FaceID, the Apple Inc facial recognition software used in iPhone devices. FaceID is based on technology created by Israeli companies PrimeSense, which was bought by Apple Inc. for $360 million on November 24, 2013, and RealFace Windows XP and Windows NT. Much of these operating systems were developed at the Microsoft-Israel campus.https://www.researchgate.net/publication/228194988_Developing_a_Source_of_Competitive_Advantage_Israel's_Version Page 24, Exhibit 12 Microsoft Security Essentials. Development of the Microsoft Security Essentials anti-virus suite began in December 2008 at the Microsoft R&D center in Herzliya, Israel. Microsoft Kinect. A line of motion sensing input devices produced by Microsoft, popularly used in tandem with the Xbox gaming system. The Kinect was originally developed by PrimeSense, an Israeli company. Babylon, a single-click computer translation, dictionary and information source utility program, developed by Amnon Ovadia. Gett, an application that connects between customers and taxi drivers using its proprietary GPS system, enabling users to order a cab either with their smartphone or through the company's website. 
It was founded by Israeli entrepreneurs Shahar Waiser and Roi More. Mobileye, vision-based advanced driver-assistance systems (ADAS) providing warnings for collision prevention and mitigation. Many companies developing autonomous vehicles, such as BMW, rely on Mobileye's technology. OrCam MyEye, is a portable, artificial vision device that allows the visually impaired to understand text and identify objects through audio feedback describing what such people are unable to see. Umoove, a high-tech startup company that invented a software only solution for face and eye tracking is located in Israel. Viber, a proprietary cross-platform instant messaging voice-over-Internet Protocol application for smartphones. Developed by American-Israeli entrepreneur Talmon Marco, Viber reached 200 million users in May 2013. Waze, a GPS-based geographical navigation application program for smartphones with GPS support and display screens, which provides turn-by-turn information and user-submitted travel times and route details, downloading location-dependent information over the mobile telephone network. Waze Ltd., which was founded in 2008 in Israel by Uri Levine, software engineer Ehud Shabtai and Amir Shinar, and is now available in over 100 countries, was acquired by Google for a reported $1.1 billion. WeCU (pronounced 'We See You') Technologies is a technology able to detect, identify, and analyze people in real time. WeCU is being implemented in airports around the world to help identify potential terrorists. Wix.com Robotics ReWalk a bionic walking assistance system that enables paraplegics to stand upright, walk and climb stairs. Development of robotic guidance system for spine surgery by Mazor Robotics. Defense Iron Dome – a mobile air defense system in development by Rafael Advanced Defense Systems and Israel Aircraft Industries designed to intercept short-range rockets and artillery shells. On April 7, 2011, the system successfully intercepted a Grad rocket launched from Gaza, marking the first time in history a short-range rocket was ever intercepted. The Iron Dome was later utilized more fully in the Israeli-Gaza conflict of 2012, where it displayed a very high rate of efficiency (95%–99%) in intercepting enemy projectiles. Further production of the Iron Dome system will be financed and supported by the United States government. Trophy is an industry-leading, vehicle-mounted, active-self-protection, system designed to protect against ATGMs and RPGs. Trophy was developed by Rafael and Elta Systems. Several contracts including a $193 and $79 million contract for Trophy systems were awarded to Leonardo DRS, Rafael's American partner, in order to equip a significant number of Abrams M1A1/A2 MBTs with Trophy. In January 2021, Rafael and Leonardo DRS completed urgent deliveries of enough Trophy systems to the US Army to equip all tanks of four armored brigades, some 400 systems. Israel also supplies the German Army with Trophy systems for use on their Leopard 2 Main Battle Tanks. IMI Tavor TAR-21 is an Israeli bullpup assault rifle. The Uzi submachine gun was developed by Maj. Uziel Gal in the 1950s. Python a short-range air-to-air missile. Desert Eagle a short range pistol. Protector USV is an unmanned surface vehicle, developed by the Rafael Advanced Defense Systems. It is the first of its kind to be used in combat. Arrow 3 is an anti-ballistic missile defense system capable of shooting down ICBMs and other long range missiles. 
David's Sling is an air defense system capable of intercepting enemy planes, drones, tactical ballistic missiles, medium- to long-range rockets and cruise missiles. Along with Arrow 3 and the Iron Dome, it makes up Israel's defense "umbrella." MUSIC (Multi Spectral Infrared Countermeasure) – a system that counters surface-to-air heat-seeking missiles. It is manufactured by Elbit Systems. MagnoShocker – combines a metal detector and a taser to immediately neutralize a dangerous person, developed by the mathematician Amit Weissman and his colleagues Adir Kahn and Zvi Jordan. Wall radar – a unique radar utilizing Ultra Wide Band (UWB) to allow users to see through walls. Developed by the Israeli company Camro. A unique evacuation method developed by the Israeli company Agilite Gear comprises a strap that allows a rescuer to carry a wounded person on their back. Agriculture and breeding The cherry tomato (Tomaccio) was developed by several Israeli laboratories, the dominant ones being those led by Professor Nahum Keidar and Professor Chaim Rabinovitch from the Agriculture Faculty of the Hebrew University of Jerusalem, Rehovot Campus. Watergen – an Israeli company that develops products that generate high-quality drinking water from air, without needing a source of water such as a well, river, stream, or ocean. Watergen products are being used worldwide, including in the Hamas-controlled Gaza Strip, Colombia, and Native American communities. Israel recently signed a deal with the UAE to license Watergen technology. Watergen won the CES Best Innovation Technology Award for its technology. Drip irrigation systems – The huge worldwide industry of modern drip irrigation all began when Israeli engineer Simcha Blass noticed a tree growing bigger than its neighbors in the Israeli desert, and found that it was fed by a leaking water pipe. Netafim, the company founded in 1965 to commercialize his idea, is recognized as the worldwide pioneer in smart drip- and micro-irrigation. It has revolutionized the agricultural industry. Hybrid cucumber seeds – In the 1950s, Prof. Esra Galun of the Weizmann Institute developed hybrid seed production of cucumbers and melons, disease-resistant cucumbers and cucumbers suitable for mechanical harvesting. Galun and his colleagues invented a technique for producing hybrid cucumber seeds without hand pollination. Grain cocoons – invented by international food technology consultant Professor Shlomo Navarro, the GrainPro Cocoons provide a simple and cheap way for African and Asian farmers to keep their grain market-fresh, as huge bags keep both water and air out, making sure the harvest is clean and protected even in extreme heat and humidity. Biological pest control – invented in Kibbutz Sde Eliyahu by a company called Bio-Bee, it breeds beneficial insects and mites for biological pest control and bumblebees for natural pollination in greenhouses and open fields. The company's top seller worldwide and especially in the U.S. is a two-millimeter-long, pear-shaped orange spider that is a highly efficient enemy of the spider mite, a devastating agricultural pest. AKOL – a Kibbutz-based company which gives low-income farmers the ability to get top-level information from professional sources. Mitra – a Tal-Ya Water Technologies agricultural technology invention. 
A unique, patented polypropylene Mitra system that covers the plant's root system, directing water and fertilizer directly to the root while protecting the earth around the root from weeds and extreme temperatures, and reducing the need to water crops by up to 50 percent. It also reduces fertilizer needs by 50% and functions as an alternative to herbicide (weed-killer). "Zero-liquid-discharge" system – an invention of the Israeli GFA company which allows fish to be raised virtually anywhere by eliminating the environmental problems of conventional fish farming, without being dependent on electricity or proximity to a body of water. TraitUP – a new technology that enables the introduction of genetic materials into seeds without modifying their DNA, immediately and efficiently improving plants before they are even sowed. It was developed by Hebrew University agricultural scientists Ilan Sela and Haim D. Rabinowitch. Judean date palm – oldest seed ever to be revived, restoring an extinct cultivar. Golden hamster – first domesticated for pet use by a Hebrew University of Jerusalem zoologist in 1930 Energy Super iron battery – A new class of rechargeable electric battery based on a special kind of iron. More environmentally friendly because the super-iron eventually rusts, it was developed by Stuart Licht of the University of Massachusetts. The world's first gas turbine solar thermal power station, by Israeli company AORA Consumer goods and appliances SodaStream Epilator (originally "Epilady") – a feminine beauty product. It was developed and originally manufactured at Kibbutz HaGoshrim. Wonder Pot – a pot developed for baking on the stovetop rather than in an oven. Micronized coating instant hot water pipes developed by A.C.T. Games Rummikub – a tile-based game for two to four players invented by Ephraim Hertzano. Hidato – a logic puzzle game invented by mathematician Gyora Benedek. Taki – an Israeli card game invented by Haim Shafir. Mastermind – an Israeli board game invented by Mordecai Meirowitz. Guess Who? – a two-player guessing game invented by Theo & Ora Coster (a.k.a. Theora Design). Food and drink Ptitim, also called Israeli couscous worldwide, is a wheat-based baked pasta. It was initially invented during the austerity period in Israel, when rice and semolina were scarce. Safed cheese or Tzfat cheese is a semi-hard, salty cheese produced in Israel from sheep's milk. It was first produced by the Hameiri dairy in Safed in 1840 and is still produced there by descendants of the original cheese makers. Jerusalem mixed grill is a grilled meat dish considered a specialty of Jerusalem. It consists of chicken hearts, spleens and livers mixed with bits of lamb cooked on a flat grill, seasoned with onion, garlic, juniper berries, black pepper, cumin, turmeric and coriander. Sabich is a sandwich consisting of a pita stuffed with fried eggplant and hard-cooked eggs. Local consumption is said to have stemmed from a tradition among Iraqi Jews, who ate it on Shabbat morning. Shkedei marak is an Israeli food product consisting of crisp mini croutons used as a soup accompaniment. Karat Caviar is a Russian Osetra caviar brand farmed in the Golan and has won several international awards. The Russian Osetra fingerlings were imported from the Caspian Sea. Bamba (snack) is a peanut butter-flavored snack food manufactured by the Osem corporation in Holon, Israel. Bissli is an Israeli wheat snack produced by Nestlé-owned Osem. Bissli is Osem's leading snack brand after Bamba. 
Physical exercise Aviva method Feldenkrais Krav Maga Other Paranormal Activity – a famous horror film series, produced by Israeli video game programmer and film producer Oren Peli shortly after he moved to the US. Mighty Morphin Power Rangers – brought to the US by Israeli Haim Saban. Homeland (TV series) – based on the Israeli hit TV series Hatufim. DogTV – the first dedicated television network designed for dogs, created by Israeli Ron Lev and originally launched in Israel. See also Science and technology in Israel Start-up Nation: The Story of Israel's Economic Miracle Venture capital in Israel External links "Made in Israel – The top 64 innovations developed in Israel" – ISRAEL21c "NoCamels – Israeli Innovation News" Israeli Jewish scientists
2221437
https://en.wikipedia.org/wiki/Kurso%20de%20Esperanto
Kurso de Esperanto
Kurso de Esperanto is free and open-source language course software with 12 units for the constructed language Esperanto. The course is aimed especially at beginners, who can learn the basics of Esperanto within about two weeks thanks to its optimized exercises. The software includes audio recordings of all the vocabulary used, to train pronunciation and listening comprehension. The user can record their attempts at speaking Esperanto with a microphone and compare them with the examples. The sound samples include songs to sing along with, by the Brazilian pop band Merlin, among others. At the end of each unit, and for certification of the whole course, the user can take an examination and make use of a special feature of the course: the tests can be sent to course leaders to be corrected free of charge. This correction service is provided by volunteers of the Esperanto movement, including the German Esperanto youth organization. Kurso de Esperanto is based on a 10-hour course developed by the Quebec Esperanto Society, as well as some other courses. The software runs on Android, Mac OS X, Windows and numerous Linux distributions. As of August 2020, version 5.1 is available for download. Kurso de Esperanto is already available in more than 23 languages, and many volunteers are working to increase this number. The translation process is simplified by a program called tradukilo, which is also available for free download. External links Kurso de Esperanto website Quebec Esperanto Society (French) Plena Manlibro de Esperanta Gramatiko - another source for Kurso de Esperanto Esperanto education Free language learning software Software using the GPL license Cross-platform free software Free multilingual software Free software programmed in C++ Educational software that uses Qt
62874807
https://en.wikipedia.org/wiki/Cloud%20Native%20Computing%20Foundation
Cloud Native Computing Foundation
The Cloud Native Computing Foundation (CNCF) is a Linux Foundation project that was founded in 2015 to help advance container technology and align the tech industry around its evolution. It was announced alongside Kubernetes 1.0, an open source container cluster manager, which was contributed to the Linux Foundation by Google as a seed technology. Founding members include Google, CoreOS, Mesosphere, Red Hat, Twitter, Huawei, Intel, Cisco, IBM, Docker, Univa, and VMware. Today, CNCF is supported by over 450 members. In order to establish qualified representatives of the technologies governed by the CNCF, a program was announced at the inaugural CloudNativeDay in Toronto in August 2016. Dan Kohn (who also helped launch the Core Infrastructure Initiative) led CNCF as executive director until May 2020. The foundation announced that Priyanka Sharma, director of Cloud Native Alliances at GitLab, would step into a general manager role in his place. Sharma describes CNCF as "a very impactful organization built by a small group of people but [within] a very large ecosystem" and believes that CNCF is entering a "second wave" due to increased industry awareness and adoption. In August 2018 Google announced that it was handing over operational control of Kubernetes to the community. Since its creation, CNCF has launched a number of hosted sub-projects. In January 2020, the CNCF annual report for the previous year was issued and reflected significant growth for the foundation across membership, event attendance, training, and industry investment. In 2019, CNCF grew by 50% over the previous year, with 173 new members and nearly 90% growth in end-users. The report revealed a 78% increase in usage of Kubernetes in production. CNCF projects CNCF technology projects are cataloged with maturity levels of Sandbox, Incubating, and Graduated, in ascending order. The defined criteria include rate of adoption, longevity and whether the open source project can be relied upon to build a production-grade product. CNCF's process brings projects in as incubating projects and then aims to move them through to graduation, which implies a level of process and technology maturity. A graduated project reflects overall maturity; these projects have reached a tipping point in terms of diversity of contribution, community scale/growth, and adoption. The CNCF Sandbox is a place for early-stage projects, and it was first announced in March 2019. The Sandbox replaces what had originally been called the "inception project level". In July 2020, Priyanka Sharma stated that CNCF is looking to increase the number of open source projects in the cloud native ecosystem. Graduated projects Containerd Containerd is an industry-standard core container runtime. It is currently available as a daemon for Linux and Windows, which can manage the complete container lifecycle of its host system. In 2015, Docker donated the OCI Specification to The Linux Foundation with a reference implementation called runc. Since February 28, 2019, containerd has been an official CNCF project. Its general availability and the intention to donate the project to CNCF were announced by Docker in 2017. CoreDNS CoreDNS is a DNS server that chains plugins. Its graduation was announced in 2019. Envoy Originally built at Lyft to move their architecture away from a monolith, Envoy is a high-performance open source edge and service proxy that makes the network transparent to applications. Lyft contributed Envoy to the Cloud Native Computing Foundation in September 2017. 
etcd etcd is a distributed key-value store, providing a method of storing data across a cluster of machines. It became a CNCF incubating project at KubeCon + CloudNativeCon North America in Seattle in 2018. In November 2020, it became a graduated project. Fluentd Fluentd is an open source data collector, allowing the user to "unify the data collection and consumption for a better use and understanding of data." Fluentd joined CNCF in 2016 and became a graduated project in 2019. Harbor Harbor is an "open source trusted cloud native registry project that stores, signs, and scans content." It became an incubating project in September 2019 and graduated in June 2020. Helm Helm is a package manager that helps developers "easily manage and deploy applications onto the Kubernetes cluster." It joined the incubating level in June 2018 and graduated in April 2020. Jaeger Created by Uber Engineering, Jaeger is an open source distributed tracing system inspired by Google's Dapper paper and the OpenZipkin community. It can be used for tracing microservice-based architectures, including distributed context propagation, distributed transaction monitoring, root cause analysis, service dependency analysis, and performance/latency optimization. The Cloud Native Computing Foundation Technical Oversight Committee voted to accept Jaeger as its 12th hosted project in September 2017, and Jaeger became a graduated project in 2019. In 2020 it became an approved and fully integrated part of the CNCF ecosystem. Kubernetes Kubernetes is an open source framework for automating deployment and managing applications in a containerized and clustered environment. "It aims to provide better ways of managing related, distributed components across varied infrastructure." It was originally designed by Google and donated to The Linux Foundation to form the Cloud Native Computing Foundation with Kubernetes as the seed technology. The "large and diverse" community supporting the project has made its staying power more robust than other, older technologies of the same ilk. In January 2020, the CNCF annual report showed significant growth in interest, training, event attendance and investment related to Kubernetes. Linkerd Linkerd is CNCF's fifth member project, and the project that coined the term "service mesh". Linkerd adds observability, security, and reliability features to applications by adding them at the platform rather than the application layer, and features a Rust-based "micro-proxy" to maximize speed and security of its data plane. Linkerd graduated from CNCF in July 2021. Open Policy Agent Open Policy Agent (OPA) is "an open source general-purpose policy engine and language for cloud infrastructure." It became a CNCF incubating project in April 2019. OPA graduated from CNCF in February 2021. Prometheus A Cloud Native Computing Foundation member project, Prometheus is a cloud monitoring tool sponsored by SoundCloud in early iterations. The tool is currently used by DigitalOcean, Ericsson, CoreOS, Docker, Red Hat and Google. In August 2018, the tool was designated a graduated project by the Cloud Native Computing Foundation. Rook Rook is CNCF's first cloud native storage project. It became an incubation-level project in 2018 and graduated in October 2020. The Update Framework The Update Framework (TUF) helps developers to secure new or existing software update systems, which are often found to be vulnerable to many known attacks. 
TUF addresses this widespread problem by providing a comprehensive, flexible security framework that developers can integrate with any software update system. TUF was CNCF's first security-focused project, and the ninth project overall, to graduate from the foundation's hosting program. TiKV TiKV is written in Rust and "provides a distributed key value database." CNCF's Technical Oversight Committee voted to move the project to the incubation level in May 2019. TiKV graduated from CNCF in September 2020. Vitess Vitess is a database clustering system for horizontal scaling of MySQL, first created for internal use by YouTube. It became a CNCF project in 2018 and graduated in November 2019. Incubating projects Cilium Cilium is open source software for providing, securing and observing network connectivity between container workloads. It relies on the Linux kernel technology eBPF. The project joined CNCF in October 2021. Contour Contour is a management server for Envoy that can direct the management of Kubernetes' traffic. Contour also provides routing features that are more advanced than Kubernetes' out-of-the-box Ingress specification. VMware contributed the project to CNCF in July 2020. Cortex Cortex offers horizontally scalable, multi-tenant, long-term storage for Prometheus and works alongside Amazon DynamoDB, Google Bigtable, Cassandra, S3, GCS, and Microsoft Azure. It was introduced into the ecosystem incubator alongside Thanos in August 2020. CRI-O CRI-O is an Open Container Initiative (OCI)-based "implementation of the Kubernetes Container Runtime Interface". CRI-O allows Kubernetes to be container runtime-agnostic. It became an incubating project in 2019. Crossplane Crossplane is an infrastructure management framework that integrates access to cloud resources via Kubernetes CRDs (Custom Resource Definitions). This is realized via the existing Kubernetes API, allowing access from the standard Kubernetes CLI and tools. Crossplane announced its promotion to the CNCF incubator in September 2021. Falco Falco is an open source and cloud native runtime security initiative. It is the "de facto Kubernetes threat detection engine". It became an incubating project in January 2020. gRPC gRPC is a "modern open source high performance RPC framework that can run in any environment." The project was formed in 2015 when Google decided to open source the next version of its RPC infrastructure ("Stubby"). The project has a number of early large industry adopters such as Square, Inc., Netflix, and Cisco. KubeEdge In September 2020, CNCF's Technical Oversight Committee (TOC) announced that KubeEdge was accepted as an incubating project. The project was created at Futurewei (a Huawei partner). KubeEdge's goal is to "make edge devices an extension of the cloud". Kuma In June 2020, API management platform Kong announced that it would donate its open-source service mesh control plane technology, called Kuma, to CNCF as a sandbox project. Litmus In July 2020, MayaData donated Litmus, an open source chaos engineering tool that runs natively on Kubernetes, to CNCF as a sandbox-level project. NATS NATS consists of a collection of open source messaging technologies that "implements the publish/subscribe, request/reply and distributed queue patterns to help create a performant and secure method of InterProcess Communication (IPC)." It existed independently for a number of years but gained wider reach since becoming a CNCF incubating project. 
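As a rough illustration of the publish/subscribe pattern that NATS implements, the following sketch uses the official asyncio-based nats-py client against a NATS server assumed to be running locally; the subject name and payload are made up for the example and are not taken from NATS documentation.

import asyncio
import nats  # official Python client (pip install nats-py); assumed available

async def main():
    # Connect to a NATS server assumed to be running locally on the default port.
    nc = await nats.connect("nats://127.0.0.1:4222")

    async def handler(msg):
        # Called once for every message published on the subscribed subject.
        print(f"received on {msg.subject}: {msg.data.decode()}")

    await nc.subscribe("updates.demo", cb=handler)    # subscribe first
    await nc.publish("updates.demo", b"hello world")  # then publish a payload
    await nc.flush()                                  # make sure the server received it
    await asyncio.sleep(0.1)                          # give the callback time to run
    await nc.drain()                                  # unsubscribe cleanly and close

asyncio.run(main())

The same client also exposes request/reply calls and queue-group subscriptions on a single connection, which is how the patterns named above are typically combined in practice.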
Notary Notary is an open source project that enables widespread trust over arbitrary data collections. Notary was released by Docker in 2015 and became a CNCF project in 2017. OpenTelemetry OpenTelemetry is an open source observability framework created when CNCF merged the OpenTracing and OpenCensus projects. OpenTracing offers "consistent, expressive, vendor-neutral APIs for popular platforms" while the Google-created OpenCensus project acts as a "collection of language-specific libraries for instrumenting an application, collecting stats (metrics), and exporting data to a supported backend." Under OpenTelemetry, the projects create a "complete telemetry system [that is] suitable for monitoring microservices and other types of modern, distributed systems — and [is] compatible with most major OSS and commercial backends." It is the "second most active" CNCF project. In October 2020, AWS announced the public preview of its distro for OpenTelemetry. Thanos Thanos enables global query views and unlimited retention of metrics. It was designed to be easily addable to Prometheus deployments. CNCF Initiatives CNCF hosts a number of efforts and initiatives to serve the cloud native community, including: Events CNCF hosts the co-located KubeCon and CloudNativeCon conferences, which have become keystone events for technical users and business professionals seeking to increase Kubernetes and cloud-native knowledge. The events seek to enable collaboration with industry peers and thought leaders. The North America event was moved to an entirely remote model for its 2020 season due to the COVID-19 pandemic. Diversity scholarships and stance on equity and inclusion CNCF's Diversity Scholarship program covers the ticket and travel to the KubeCon + CloudNativeCon conference. In 2018, $300,000 in diversity scholarships was raised to enable attendees from diverse and minority backgrounds to make the journey to Seattle for KubeCon and CloudNativeCon. In August 2020, Priyanka Sharma stated that CNCF stands "in solidarity" with the Black Lives Matter movement. Sharma also stated that she was "personally involved in a project to eradicate racially problematic terminology from code" and that the foundation is "actively working to improve the gender and racial balance inside the cloud native ecosystem" while remaining committed to creating spaces and opportunities for LGBTQIA+, women, Black and Brown people, and differently-abled people, specifically in regard to KubeCon. Kubernetes certification and education One path toward becoming a Kubernetes-certified IT professional is the vendor-agnostic Certified Kubernetes Administrator (CKA) accreditation, which is relevant to admins who work across a range of cloud platforms. There are tens of thousands of Certified Kubernetes Administrators (CKA) and Certified Kubernetes Application Developers (CKAD) worldwide. Kubernetes software conformance and training CNCF's Certified Kubernetes Conformance Program enables vendors to prove that their products and services are conformant with a set of core Kubernetes APIs and are interoperable with other Kubernetes implementations. At the end of 2018, there were 76 firms that had validated their offerings with the Certified Kubernetes Conformance Program. In 2017, CNCF also helped the Linux Foundation launch a free Kubernetes course on the EdX platform — which has more than 88,000 enrollments. 
The self-paced course covers the system architecture, the problems Kubernetes solves, and the model it uses to handle containerized deployments and scaling. The course also includes technical instructions on how to deploy a standalone and multi-tier application. Cloud Native Landscape CNCF developed a landscape map that shows the full extent of cloud native solutions, many of which fall under its umbrella. The interactive catalog gives an idea of the problems facing engineers and developers deciding which products to use. This interactive catalog was created in response to the proliferation of third-party technologies and the resulting decision fatigue that engineers and developers often experience when selecting software tools. In addition to mapping out the relevant and existing cloud native solutions, CNCF's landscape map provides details on the solutions themselves, including open source status, contributors, and more. The landscape map has been the subject of various jokes on Twitter due to the CNCF ecosystem's expansiveness and visual complexity. Cloud Native Trail Map CNCF's Cloud Native Trail Map lists the open source cloud native technologies hosted by the Foundation and outlines the recommended path for building a cloud native operation using the projects under its wing. The Cloud Native Trail Map also acts as an interactive and comprehensive guide to cloud technologies. DevStats CNCF's DevStats tool provides analysis of GitHub activity for Kubernetes and the other CNCF projects. Dashboards track a multitude of metrics, including the number of contributions, the level of engagement of contributors, how long it takes to get a response after an issue is opened, and which special interest groups (SIGs) are the most responsive. CNCF Technology Radar In June 2020, CNCF published the inaugural issue of the CNCF Technology Radar, an "opinionated guide to a set of emerging technologies" in the form of a quarterly paper. Notes References External links Home page Linux Foundation projects
1035450
https://en.wikipedia.org/wiki/In%20silico
In silico
In biology and other experimental sciences, an in silico experiment is one performed on a computer or via computer simulation. The phrase is pseudo-Latin for 'in silicon' (in Latin it would be in silicio), referring to silicon in computer chips. It was coined in 1987 as an allusion to the Latin phrases in vivo, in vitro, and in situ, which are commonly used in biology (especially systems biology). The latter phrases refer, respectively, to experiments done in living organisms, outside living organisms, and where they are found in nature. History The earliest known use of the phrase was by Christopher Langton to describe artificial life, in the announcement of a workshop on that subject at the Center for Nonlinear Studies at the Los Alamos National Laboratory in 1987. The expression in silico was first used to characterize biological experiments carried out entirely in a computer in 1989, in the workshop "Cellular Automata: Theory and Applications" in Los Alamos, New Mexico, by Pedro Miramontes, a mathematician from the National Autonomous University of Mexico (UNAM), presenting the report "DNA and RNA Physicochemical Constraints, Cellular Automata and Molecular Evolution". The work was later presented by Miramontes as his dissertation. In silico has been used in white papers written to support the creation of bacterial genome programs by the Commission of the European Community. The first referenced paper where in silico appears was written by a French team in 1991. The first referenced book chapter where in silico appears was written by Hans B. Sieburg in 1990 and presented during a Summer School on Complex Systems at the Santa Fe Institute. The phrase in silico originally applied only to computer simulations that modeled natural or laboratory processes (in all the natural sciences), and did not refer to calculations done by computer generically. Drug discovery with virtual screening In silico study in medicine is thought to have the potential to speed the rate of discovery while reducing the need for expensive lab work and clinical trials. One way to achieve this is by producing and screening drug candidates more effectively. In 2010, for example, using the protein docking algorithm EADock (see Protein-ligand docking), researchers found potential inhibitors of an enzyme associated with cancer activity in silico. Fifty percent of the molecules were later shown to be active inhibitors in vitro. This approach differs from the use of expensive high-throughput screening (HTS) robotic labs to physically test thousands of diverse compounds a day, often with an expected hit rate on the order of 1% or less, and with still fewer expected to be real leads following further testing (see drug discovery). As an example, the technique was utilized for a drug repurposing study in order to search for potential cures for COVID-19 (SARS-CoV-2). Cell models Efforts have been made to establish computer models of cellular behavior. For example, in 2007 researchers developed an in silico model of tuberculosis to aid in drug discovery; its prime benefit was that its simulated growth rates are faster than real time, allowing phenomena of interest to be observed in minutes rather than months. Other work focuses on modeling a particular cellular process, such as the growth cycle of Caulobacter crescentus. These efforts fall far short of an exact, fully predictive, computer model of a cell's entire behavior. 
Limitations in the understanding of molecular dynamics and cell biology, as well as limits on available computer processing power, force large simplifying assumptions that constrain the usefulness of present in silico cell models, which are very important for in silico cancer research. Genetics Digital genetic sequences obtained from DNA sequencing may be stored in sequence databases, be analyzed (see Sequence analysis), be digitally altered or be used as templates for creating new actual DNA using artificial gene synthesis. Other examples In silico computer-based modeling technologies have also been applied in: Whole cell analysis of prokaryotic and eukaryotic hosts, e.g. E. coli, B. subtilis, yeast, CHO or human cell lines Discovery of potential cures for COVID-19. Bioprocess development and optimization, e.g. optimization of product yields Simulation of oncological clinical trials exploiting grid computing infrastructures, such as the European Grid Infrastructure, for improving the performance and effectiveness of the simulations. Analysis, interpretation and visualization of heterologous data sets from various sources, e.g. genome, transcriptome or proteome data Validation of taxonomic assignment steps in herbivore metagenomics studies. Protein design. One example is RosettaDesign, a software package under development and free for academic use. See also Virtual screening Computational biology Computational biomodeling Computer experiment Folding@home Cellular model Nonclinical studies Organ-on-a-chip In silico molecular design programs In silico medicine Dry lab References External links World Wide Words: In silico CADASTER Seventh Framework Programme project aimed to develop in silico computational methods to minimize experimental tests for REACH Registration, Evaluation, Authorisation and Restriction of Chemicals In Silico Biology. Journal of Biological Systems Modeling and Simulation In Silico Pharmacology Pharmaceutical industry Latin biological phrases Alternatives to animal testing Animal test conditions
6244450
https://en.wikipedia.org/wiki/Epi%20Info
Epi Info
Epi Info is statistical software for epidemiology developed by Centers for Disease Control and Prevention (CDC) in Atlanta, Georgia (US). Epi Info has been in existence for over 20 years and is currently available for Microsoft Windows, Android and iOS, along with a web and cloud version. The program allows for electronic survey creation, data entry, and analysis. Within the analysis module, analytic routines include t-tests, ANOVA, nonparametric statistics, cross tabulations and stratification with estimates of odds ratios, risk ratios, and risk differences, logistic regression (conditional and unconditional), survival analysis (Kaplan Meier and Cox proportional hazard), and analysis of complex survey data. The software is an open-source project with limited support. An analysis conducted in 2003 documented over 1,000,000 downloads of Epi Info from 180 countries. History Epi Info has been in development for over 20 years. The first version, Epi Info 1, was originally implemented by Jeff Dean as an unpaid intern in high school. It was an MS-DOS batch file on 5.25" floppy disks and released in 1985. MS-DOS continued to be the only supported operating system until the release of Epi Info 2000, which was written in Microsoft's Visual Basic and became the first Windows-compatible version. The last MS-DOS version was Epi Info 6.04d released in January 2001. Epi Info 2000 changed the way data was stored by adopting the Microsoft Access database format, rather than continuing to use the plain-text file format from the MS-DOS versions. Following the release of Epi Info 2000 was Epi Info 2002, then Epi Info version 3.0, and finally the open-source Epi Info 7. Epi Info 7 was made open source on November 13, 2008 when its source code was uploaded to Codeplex for the first time. The 7 series is the presently maintained Epi Info product line. Note that Epi Info 3 for Windows is different from Epi Info 3 for MS-DOS even though they share the same version number. After Microsoft shut down Codeplex in December 2017, the repository of Epi Info migrated to GitHub. Features From a user's perspective, the most important functions of Epi Info are the ability to rapidly develop a questionnaire, customize the data entry process, quickly enter data into that questionnaire, and then analyze the data. For epidemiological uses, such as outbreak investigations, being able to rapidly create an electronic data entry screen and then do immediate analysis on the collected data can save considerable amounts of time versus using paper surveys. Epi Info uses three distinct modules to accomplish these tasks: Form Designer, Enter, and Analysis. Other modules include the Dashboard module, a mapping module, and various utilities such as StatCalc. Electronic questionnaires can also be created in the Form Designer module. Individual questions can be placed anywhere on a page and each form may contain multiple pages. The user is given a high degree of control over the form's appearance and function. The user defines both the question's prompt and the format of the data that is to be collected. Data types include numbers, text strings, dates, times, and Boolean. Users can also create drop-down lists, code tables, and comment legal fields. One of the more powerful features of Form Designer is the ability to program intelligence into a form through a feature called "check code". Check code allows for certain events to occur depending on what action a data entry person has taken. 
For example, if the data entry person types "Male" into a question on gender, any questions relating to pregnancy might then be hidden or disabled. Skip patterns, message boxes, and math operations are also available. Relational database modeling is supported, as users may link their form to any number of other forms in their database. The "Classic Analysis" module is where users analyze their data. Import and export functions exist that allow for data to be converted between plain-text, CSV, Microsoft Excel, Microsoft Access, MySQL, Microsoft SQL Server, and other formats. Many advanced statistical routines are provided, such as t-tests, ANOVA, nonparametric statistics, cross tabulations and stratification with estimates of odds ratios, risk ratios, and risk differences, logistic regression (conditional and unconditional), survival analysis (Kaplan Meier and Cox proportional hazard), and analysis of complex survey data. The "Visual Dashboard" module is a lighter-weight Analysis component that is designed to be easy to use, but does not contain the full set of data management features that the "Classic Analysis" module does. Using the Map module, data can be displayed either by geographic reference or by GPS coordinates. Older versions of Epi Info contained a Report module and a Menu module. The Report module allowed the user to edit and format the raw output from other Epi Info modules into presentable documents. The Menu module allowed for the editing and re-arranging of the basic Epi Info menu structure. This module was powerful enough that several applications were built on it (in versions of Epi Info prior to version 7), including the National Electronic Telecommunications System for Surveillance (NETSS) for Epi Info 6. Unlike the other modules, the Menu module does not have a design-mode user interface, but instead resides in a .mnu file whose scripts must be edited manually. In Epi Info 7, the Visual Dashboard assumes some of the basic functions of the Report module. Epi Info 7 includes a number of nutritional anthropometric functions that can assist in recording and evaluating measurements of length, stature, weight, head circumference, and arm circumference for children and adolescents. They can be used to calculate percentiles and the number of standard deviations from the mean (Z-scores) using the CDC/WHO 1978 growth reference, the CDC 2000 growth reference, the WHO Child Growth Reference, or the WHO Reference 2007. It replaces the NutStat and EpiNut modules found in prior versions of Epi Info. Open Epi OpenEpi is an online version of the software and has inbuilt statistical calculators. For more information, see the article OpenEpi. Future developments Version 7 is in continuing development as an open source project. Web-based data entry, web-based analysis, and mobile data collection tools are currently available and will see continued improvement in 2014 and beyond. Release history See also CSPro OpenEpi X-12-ARIMA Epi Map Free statistical software References External links Official website All repositories on Github Epi Info Community Edition Epi Info™ Companion for iOS Epi Info™ Companion for Android Open Epi Epi Info - YouTube page. Official instructional videos Centers for Disease Control and Prevention Free health care software Free statistical software Public-domain software
105036
https://en.wikipedia.org/wiki/Karlsruhe
Karlsruhe
Karlsruhe, formerly spelled Carlsruhe in English, is the third-largest city of the German federal state of Baden-Württemberg, after its capital, Stuttgart, and Mannheim. Its 308,436 inhabitants make it the 21st-largest city of Germany. On the right bank of the Rhine, the city lies near the French-German border, between the Mannheim/Ludwigshafen conurbation to the north and the Strasbourg/Kehl conurbation to the south. It is the largest city of Baden, a historic region named after Hohenbaden Castle in the city of Baden-Baden. Karlsruhe is also the largest city in the South Franconian dialect area (transitional dialects between Central and Upper German), the only other larger city in that area being Heilbronn. The city is the seat of the Federal Constitutional Court (Bundesverfassungsgericht), as well as of the Federal Court of Justice (Bundesgerichtshof) and the Public Prosecutor General of the Federal Court of Justice (Generalbundesanwalt beim Bundesgerichtshof). Karlsruhe was the capital of the Margraviate of Baden-Durlach (Durlach: 1565–1718; Karlsruhe: 1718–1771), the Margraviate of Baden (1771–1803), the Electorate of Baden (1803–1806), the Grand Duchy of Baden (1806–1918), and the Republic of Baden (1918–1945). Its most remarkable building is Karlsruhe Palace, which was built in 1715. There are nine institutions of higher education in the city, most notably the Karlsruhe Institute of Technology (Karlsruher Institut für Technologie). Karlsruhe/Baden-Baden Airport (Flughafen Karlsruhe/Baden-Baden) is the second-busiest airport of Baden-Württemberg after Stuttgart Airport, and the 17th-busiest airport of Germany. In 2019, UNESCO announced that Karlsruhe would join its network of "Creative Cities" as a "City of Media Arts". Geography Karlsruhe lies completely to the east of the Rhine, and almost completely on the Upper Rhine Plain. It contains the Turmberg in the east, and also lies on the borders of the Kraichgau leading to the Northern Black Forest. The Rhine, one of the world's most important shipping routes, forms the western limits of the city, beyond which lie the towns of Maximiliansau and Wörth am Rhein in the German state of Rhineland-Palatinate. The city centre, as measured from the Marktplatz (Market Square), lies to the east of the river. Two tributaries of the Rhine, the Alb and the Pfinz, flow through the city from the Kraichgau to eventually join the Rhine. The city lies at an altitude between 100 and 322 m (near the communications tower in the suburb of Grünwettersbach). The 49th parallel runs through the city centre, which puts it at the same latitude as much of the Canada–United States border and the cities of Vancouver (Canada), Paris (France), Regensburg (Germany), and Hulunbuir (China). Its course is marked by a stone and a painted line in the Stadtgarten (municipal park). Measured by land area, it is the 30th-largest city in Germany. Karlsruhe is part of the urban area of Karlsruhe/Pforzheim, to which certain other towns in the district of Karlsruhe, such as Bruchsal, Ettlingen, Stutensee, and Rheinstetten, as well as the city of Pforzheim, belong. The city was planned with the palace tower (Schloss) at the center and 32 streets radiating out from it like the spokes of a wheel, or the ribs of a folding fan, so that one nickname for Karlsruhe in German is the "fan city" (Fächerstadt). Almost all of these streets survive to this day. 
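This radial street plan is also the source of the "Karlsruhe metric" described in the following paragraph. As a purely illustrative sketch (the function name and the polar-coordinate convention are assumptions made for the example, not anything drawn from city-planning or mathematical sources cited here), the distance between two points in that metric can be computed as follows:

import math

def karlsruhe_distance(p, q):
    # p and q are (radius, angle-in-radians) pairs measured from the centre.
    # Travel is assumed possible only along radial streets and along circular
    # arcs around the centre, which is what defines the Karlsruhe metric.
    r1, phi1 = p
    r2, phi2 = q
    delta = abs(phi1 - phi2) % (2 * math.pi)
    delta = min(delta, 2 * math.pi - delta)  # smaller angular separation
    if delta <= 2:
        # Walk along the arc at the smaller radius, then move radially.
        return min(r1, r2) * delta + abs(r1 - r2)
    # Otherwise it is shorter to travel in to the centre and back out.
    return r1 + r2

# Example: two points at radius 1.0, a quarter turn apart.
print(karlsruhe_distance((1.0, 0.0), (1.0, math.pi / 2)))  # about 1.57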
Because of this city layout, in metric geometry, the Karlsruhe metric refers to a measure of distance that assumes travel is only possible along radial streets and along circular avenues around the centre. The city centre is the oldest part of town and lies south of the palace in the quadrant defined by nine of the radial streets. The central part of the palace runs east–west, with two wings, each at a 45° angle, directed southeast and southwest (i.e., parallel with the streets marking the boundaries of the quadrant defining the city center). The market square lies on the street running south from the palace to Ettlingen. The market square has the town hall (Rathaus) to the west, the main Lutheran church (Evangelische Stadtkirche) to the east, and the tomb of Margrave Charles III William in a pyramid at the centre of the square. Karlsruhe is one of only three large cities in Germany where buildings are laid out in the neoclassical style. The area north of the palace is a park and forest. Originally the area to the east of the palace consisted of gardens and forests, some of which remain, but the Karlsruhe Institute of Technology (founded in 1825), the Wildparkstadion football stadium, and residential areas have been built there. The area west of the palace is now mostly residential. Climate Karlsruhe experiences an oceanic climate (Köppen Cfb) and its winter climate is milder than that of most other German cities, except for the Rhine-Ruhr area. Summers are also hotter than elsewhere in the country and it is one of the sunniest cities in Germany, like the Rhine-Palatinate area. Precipitation is almost evenly spread throughout the year. In 2008, the weather station in Karlsruhe, which had been in operation since 1876, was closed; it was replaced by a weather station in Rheinstetten, south of Karlsruhe. Districts Karlsruhe is divided into 27 districts. History According to legend, the name Karlsruhe, which translates as "Charles' repose" or "Charles' peace", was given to the new city after a hunting trip during which Margrave Charles III William of Baden-Durlach fell asleep and dreamt of founding his new city. A variation of this story claims that he built the new palace to find peace from his wife. Charles William founded the city on June 17, 1715, after a dispute with the citizens of his previous capital, Durlach. The founding of the city is closely linked to the construction of the palace. Karlsruhe became the capital of Baden-Durlach, and, in 1771, of the united Baden until 1945. Built in 1822, the Ständehaus was the first parliament building in a German state. In the aftermath of the democratic revolution of 1848, a republican government was elected there. Karlsruhe was visited by Thomas Jefferson during his time as the American envoy to France; when Pierre Charles L'Enfant was planning the layout of Washington, D.C., Jefferson passed to him maps of 12 European towns to consult, one of which was a sketch he had made of Karlsruhe during his visit. In 1860, the first-ever international professional convention of chemists, the Karlsruhe Congress, was held in the city. In 1907 the town was the site of the Hau Riot, when large crowds caused a disturbance during the trial of the murderer Carl Hau. During the Holocaust, on Kristallnacht in 1938, the Adass Jeshurun synagogue was burned to the ground, and the city's Jews were later sent to Dachau concentration camp, Gurs concentration camp, Theresienstadt, and Auschwitz, with 1,421 of Karlsruhe's Jews being killed. 
Much of the central area, including the palace, was reduced to rubble by Allied bombing during World War II, but was rebuilt after the war. Located in the American zone of the postwar Allied occupation, Karlsruhe was home to an American military base, established in 1945. In 1995, the base closed, and its facilities were turned over to the city of Karlsruhe. Population The following list shows the most significant groups of foreigners residing in the city of Karlsruhe by country. Main sights The Stadtgarten is a recreational area near the main railway station (Hauptbahnhof) and was rebuilt for the 1967 Federal Garden Show (Bundesgartenschau). It is also the site of the Karlsruhe Zoo. The Durlacher Turmberg has a look-out tower (hence its name). It is a former keep dating back to the 13th century. The city has two botanical gardens: the municipal Botanischer Garten Karlsruhe, which forms part of the Palace complex, and the Botanischer Garten der Universität Karlsruhe, which is maintained by the university. The Marktplatz has a stone pyramid marking the grave of the city's founder. Built in 1825, it is the emblem of Karlsruhe. The city is nicknamed the "fan city" (die Fächerstadt) because of its design layout, with straight streets radiating fan-like from the Palace. The Karlsruhe Palace (Schloss) is an interesting piece of architecture; the adjacent Schlossgarten includes the Botanical Garden with a palm, cactus and orchid house, and walking paths through the woods to the north. The so-called Kleine Kirche (Little Church), built between 1773 and 1776, is the oldest church of Karlsruhe's city centre. The architect Friedrich Weinbrenner designed many of the city's most important sights. Another sight is the Rondellplatz with its 'Constitution Building Columns' (1826). It is dedicated to Baden's first constitution in 1818, which was one of the most liberal of its time. The Münze (mint), erected in 1826/27, was also built by Weinbrenner. The St. Stephan parish church is one of the masterpieces of neoclassical church architecture. Weinbrenner, who built this church between 1808 and 1814, modelled it on the Pantheon in Rome. The neo-Gothic Grand Ducal Burial Chapel, built between 1889 and 1896, is a mausoleum rather than a church, and is located in the middle of the forest. The main cemetery of Karlsruhe is the oldest park-like cemetery in Germany. The crematorium was the first to be built in the style of a church. Karlsruhe is also home to a natural history museum (the State Museum of Natural History Karlsruhe) and an opera house (the Baden State Theatre), as well as a number of independent theatres and art galleries. The State Art Gallery, built in 1846 by Heinrich Hübsch, displays paintings and sculptures from six centuries, particularly from France, Germany and Holland. Karlsruhe's newly renovated art museum is one of the most important art museums in Baden-Württemberg. Further cultural attractions are scattered throughout Karlsruhe's various incorporated suburbs. Established in 1924, the Scheffel Association is the largest literary society in Germany. Today the Prinz-Max-Palais, built between 1881 and 1884 in neoclassical style, houses the organisation and includes its museum. Due to population growth in the late 19th century, Karlsruhe developed several suburban areas (Vorstadt) in the Gründerzeit and especially art nouveau styles of architecture, with many preserved examples. Karlsruhe is also home to the Majolika-Manufaktur, the only art-ceramics pottery studio in Germany. 
Founded in 1901, it is located in the Schlossgarten. A 'blue streak' (Blauer Strahl) consisting of 1,645 ceramic tiles connects the studio with the Palace. It is the world's largest ceramic artwork. Another tourist attraction is the Centre for Art and Media (Zentrum für Kunst und Medientechnologie, or ZKM), which is located in a converted ammunition factory. Government Justice Karlsruhe is the seat of the German Federal Constitutional Court (Bundesverfassungsgericht) and the highest Court of Appeals in civil and criminal cases, the Bundesgerichtshof. The courts came to Karlsruhe after World War II, when the provinces of Baden and Württemberg were merged. Stuttgart, capital of Württemberg, became the capital of the new province (Württemberg-Baden in 1945 and Baden-Württemberg in 1952). As compensation for the state authorities relocated to Stuttgart, Karlsruhe applied to become the seat of the high court. Public health There are four hospitals: the municipal Klinikum Karlsruhe provides the maximum level of medical services, the St. Vincentius-Kliniken and the Diakonissenkrankenhaus, connected to the Catholic and Protestant churches respectively, offer central services, and the private Paracelsus-Klinik provides basic medical care, according to state hospital demand planning. Economy Germany's largest oil refinery is located in Karlsruhe, at the western edge of the city, directly on the river Rhine. The Technologieregion Karlsruhe is a loose confederation of the region's cities in order to promote high-tech industries; today, about 20% of the region's jobs are in research and development. EnBW, one of Germany's biggest electric utility companies, with revenue of €19.2 billion in 2012, is headquartered in the city. Internet activities Due to the Karlsruhe Institute of Technology providing services until the late 1990s, Karlsruhe became known as the internet capital of Germany. The DENIC, Germany's network information centre, has since moved to Frankfurt, however, where DE-CIX is located. Two major internet service providers, WEB.DE and schlund+partner/1&1, now both owned by United Internet AG, are located in Karlsruhe. The library of the Karlsruhe Institute of Technology developed the Karlsruher Virtueller Katalog, the first internet site that allowed researchers to search multiple library catalogues worldwide free of charge. In the year 2000 the regional online "newspaper" ka-news.de was created. As a daily newspaper, it not only provides the news, but also informs readers about upcoming events in Karlsruhe and surrounding areas. In addition to established companies, Karlsruhe has a vibrant and growing startup community with well-known startups like STAPPZ. In total, the local high-tech industry is responsible for over 22,000 jobs. Politics Mayors After the palace was founded in 1715, a settlement grew up around it, and from 1718 this settlement had an appointed mayor. From 1812 the mayors received the title of Lord Mayor. In addition to the Lord Mayor, there are five other mayors. 
Each of these mayors is responsible for one portfolio: Human Resources, Elections and Statistics, Citizen Service and Security, and Culture; Youth and social affairs, schools, sports and pools; Finance, economy and work, city marketing, congresses, exhibitions and events, tourism, supply and ports, real estate and market affairs; Environment and climate protection, health, cemetery office, waste management, forestry, fire and disaster control; and Planning, building, real estate management, public housing (Volkswohnung) and the zoo. List of Mayors Transport Railway The Verkehrsbetriebe Karlsruhe (VBK) operates the city's urban public transport network, comprising seven tram routes and a network of bus routes. All city areas can be reached round the clock by tram and a night bus system. The Turmbergbahn funicular railway, to the east of the city centre, is also operated by the VBK. Similar to a premetro, tram lines operating in the city centre use two tramway tunnels that were completed on 11 December 2021. The VBK is also a partner, with the Albtal-Verkehrs-Gesellschaft and Deutsche Bahn, in the operation of the Karlsruhe Stadtbahn, the rail system that serves a larger area around the city. This system makes it possible to reach other towns in the region, like Ettlingen, Wörth am Rhein, Pforzheim, Bad Wildbad, Bretten, Bruchsal, Heilbronn, Baden-Baden, and even Freudenstadt in the Black Forest, right from the city centre. The Stadtbahn is known for pioneering the concept of operating trams on train tracks, to achieve a more effective and attractive public transport system. Karlsruhe is connected via road and rail, with Autobahn and Intercity Express connections going to Frankfurt, Stuttgart/Munich and Freiburg/Basel from Karlsruhe Hauptbahnhof. Since June 2007 it has been connected to the TGV network, reducing travel time to Paris to three hours (previously it had taken five hours). The Rhine Valley Railway is also an important freight line. Freight trains can bypass Karlsruhe Hauptbahnhof via the Karlsruhe freight bypass railway. Shipping Two ports on the Rhine provide transport capacity on cargo ships, especially for petroleum products. Airport The nearest airport is part of the Baden Airpark (officially Flughafen Karlsruhe/Baden-Baden), southwest of Karlsruhe, with regular connections to airports in Germany and Europe in general. Frankfurt International Airport can be reached in about an hour and a half by car (one hour by Intercity Express); Stuttgart Airport can be reached in about one hour (about an hour and a half by train and S-Bahn). Streets Karlsruhe lies on the Bundesautobahn 5 and the Bundesstraße 10. The city has good bicycle lane infrastructure. Two notable figures in transportation history, Karl Drais, the inventor of the bicycle, and Karl Benz, the inventor of the automobile, were both born in Karlsruhe. Benz was born in Mühlburg, which later became a borough of Karlsruhe (in 1886). Benz also studied at the Karlsruhe University. Benz's wife Bertha took the world's first long-distance drive with an automobile, from Mannheim to Karlsruhe-Grötzingen and Pforzheim (see Bertha Benz Memorial Route). Their professional lives led both men to the neighboring city of Mannheim, where they first applied their most famous inventions. 
A year later the city council addressed to the margrave a report in which a question was raised as to the proportion of municipal charges to be borne by the newly arrived Jews, who in that year formed an organized congregation, with Rabbi Nathan Uri Kohen of Metz at its head. A document dated 1726 gives the names of twenty-four Jews who had taken part in an election of municipal officers. As the city grew, permission to settle there became less easily obtained by Jews, and the community developed more slowly. A 1752 Jewry ordinance stated Jews were forbidden to leave the city on Sundays and Christian holidays, or to go out of their houses during church services, but they were exempted from service by court summonses on Sabbaths. They could sell wine only in inns owned by Jews and graze their cattle, not on the commons, but on the wayside only. Nethanael Weill was a rabbi in Karlsruhe from 1750 until his death. In 1783, by a decree issued by Margrave Charles Frederick of Baden, the Jews ceased to be serfs, and consequently could settle wherever they pleased. The same decree freed them from the Todfall tax, paid to the clergy for each Jewish burial. In commemoration of these changes special prayers were prepared by the acting rabbi Jedidiah Tiah Weill, who, succeeding his father in 1770, held the office until 1805. In 1808 the new constitution of what at that time, during the Napoleonic era, had become the Grand Duchy of Baden granted Jews citizenship status; a subsequent edict, in 1809, constitutionally acknowledged Jews as a religious group. The latter edict provided for a hierarchical organization of the Jewish communities of Baden, under the umbrella of a central council of Baden Jewry (Oberrat der Israeliten Badens), with its seat in Karlsruhe, and the appointment of a chief rabbi of Karlsruhe as the spiritual head of the Jews in all of Baden. The first chief rabbi of Karlsruhe and Baden was Rabbi Asher Loew, who served from 1809 until his death in 1837. Complete emancipation was granted in 1862; Jews were elected to the city council and the Baden parliament, and from 1890 were appointed as judges. Jews were persecuted in the 'Hep-Hep' riots that occurred in 1819, and anti-Jewish demonstrations were held in 1843, 1848, and the 1880s. The well-known German-Israeli artist Leo Kahn studied in Karlsruhe before leaving for France and Israel in the 1920s and 1930s. Today the Jewish community has about 900 members, many of them recent immigrants from Russia, and is led by an Orthodox rabbi. Karlsruhe has memorialized its Jewish community and notable pre-war synagogues with a memorial park. Karlsruhe and the Shoah On 28 October 1938, all Jewish men of Polish extraction were expelled to the Polish border, their families joining them later and most ultimately perishing in the ghettoes and concentration camps. On Kristallnacht (9–10 November 1938), the Adass Jeshurun synagogue was burned to the ground, the main synagogue was damaged, and Jewish men were taken to the Dachau concentration camp after being beaten and tormented. Deportations commenced on 22 October 1940, when 893 Jews were loaded onto trains for the three-day journey to the Gurs concentration camp in France. Another 387 were deported between 1942 and 1945 to Izbica in the Lublin district (Poland), Theresienstadt, and Auschwitz. Of the 1,280 Jews deported directly from Karlsruhe, 1,175 perished. Another 138 perished after deportation from other German cities or occupied Europe. In all, 1,421 of Karlsruhe's Jews died during the Shoah. 
A new community was formed after the war by surviving former residents, with a new synagogue erected in 1971. It numbered 359 in 1980. Historical population Notable people George Bayer, pioneer in the US state of Missouri Karl Benz (1844–1929), mechanical engineer and inventor of the first automobile as well as the founder of Benz & Co., Daimler-Benz, and Mercedes-Benz (now part of Daimler AG). He was born in the Karlsruhe borough of Mühlburg and educated at Karlsruhe Grammar School, the Lyceum, and the Polytechnical University Hermann Billing, Art Nouveau architect, was born and lived in Karlsruhe, where he built his first famous works Siegfried Buback (1920–1977), then-Attorney General of Germany, who fell victim to terrorists of the Rote Armee Fraktion in April 1977 in Karlsruhe Berthold von Deimling (1853–1944), Prussian general Karl Drais (1785–1851), inventor of the two-wheeler principle (dandy horse) basic to the bicycle and motorcycle, the key typewriter, and the earliest stenograph, was born and died in Karlsruhe Theodor von Dusch (1824–1890), physician remembered for experiments involving cotton-wool filters for bacteria Ludwig Eichrodt, writer Erik H. Erikson (1902–1994), children's psychoanalyst and theoretical pioneer in the field of study of identity building, spent his childhood and school years (at the Bismarck-Gymnasium) in Karlsruhe Harry L. Ettlinger, US Army private who assisted the MFAA in the recovery of art looted by the Nazis; he was the last Jewish boy to celebrate his bar mitzvah in Karlsruhe's Kronenstrasse Synagogue, on September 24, 1938 Clara Mathilda Faisst (1872–1948), pianist and composer Hans Frank (1900–1946), Obergruppenführer SA, Gauleiter and governor-general of Nazi-occupied Poland; hanged at Nuremberg for his war crimes during World War II Reinhold Frank (1896–1945), lawyer who worked for the resistance in Nazi Germany and ran a law practice in Karlsruhe; in his honour, the street in Karlsruhe where his law chambers stood bears his name Gottfried Fuchs (1889–1972), German-Canadian Olympic soccer player Karoline von Günderrode (1780–1806), poet, was born in Karlsruhe Johann Peter Hebel, writer and poet, lived in Karlsruhe for most of his life Heinrich Rudolf Hertz discovered electromagnetic waves at the University of Karlsruhe in the late 1880s; a lecture room named after Hertz lies close to the very spot where the discovery was made Julius Hirsch (1892–1945), Olympian soccer player and first Jewish member of the national team, two-time German champion, awarded the Iron Cross during World War I, murdered in Auschwitz concentration camp Friedrich Hund, physicist of the pioneering generation of quantum mechanics (see Hund's rules), was born here Hedwig Kettler (1851–1937), founded the first German Mädchengymnasium (girls' high school), located in Karlsruhe Gustav Landauer (1870–1919), theorist of anarchism in Germany, was born in Karlsruhe Kolja Lessing (born 1961), German violinist, pianist, composer and academic teacher Markus Lüpertz lives and works in Karlsruhe; he created the Narrenbrunnen (Fool's Fountain) in the city center Composer Wolfgang Rihm is a resident of Karlsruhe. Joseph Viktor von Scheffel (1826–1886), poet and novelist, was born and died in Karlsruhe. Peter Sloterdijk (born 1947), German philosopher.
Rahel Straus (1880–1963), German-Jewish medical doctor and feminist Johann Gottfried Tulla (1770–?), instrumental in stabilizing and straightening the course of the southern Rhine; a co-founder of the Karlsruhe University (1825) Victoria of Baden (1862–1930), born in Karlsruhe, queen consort of Sweden by her marriage to King Gustaf V of Sweden Friedrich Weinbrenner (1766–1826), architect of neoclassicism; his tomb is situated in the main Protestant church in Karlsruhe. Thomas Ernst Josef Wiedemann (1950–2001), German-British historian, born in Karlsruhe Richard Willstätter, recipient of the 1915 Nobel Prize for Chemistry Sina Deinert, a member of Now United Dennis Aogo (born 1987), football defender Christa Bauch, female bodybuilder Walther Bensemann, one of the founders of the first southern German soccer club Karlsruher FV and later one of the founders of the DFB and the founder of Kicker, Germany's leading soccer magazine Oliver Bierhoff (born 1968), retired football striker and former captain of the German national team, who played for the Italian Serie A clubs Udinese, A.C. Milan and Chievo; currently working as the German national team manager Andi Deris (born 1964), musician and songwriter, lead singer of the power metal band Helloween Karl Elzer, stage and film actor Gottfried Fuchs (1889–1972) was born in Karlsruhe and holds the record of ten goals in a single international soccer match for the German national team Regina Halmich (born 1976), retired female boxing flyweight world champion Vincenzo Italiano (born 1977), Italian footballer who plays for Calcio Padova Nora Krug (born 1977), German-American writer Sead Kolašinac (born 1993), Bosnian footballer who plays as a left back for Arsenal FC Oliver Kahn (born 1969), retired goalkeeper of Karlsruher SC, Bayern Munich and Germany Sebastian Koch (born 1962), actor Renate Lingor (born 1975), former national football player Pietro Lombardi (born 1992), singer Mehmet Scholl (born 1970), retired footballer for Karlsruher SC, later Bayern Munich and the German national team Susanne Stichler (born 1969), journalist and television presenter Muhammed Suiçmez (born 1975), Turkish guitarist and composer for the German technical death metal band Necrophagist Eugene Weingand (1934–1986), actor and television host who claimed to be Peter Lorre Jr. Moon Ga-young (born 1996), South Korean actress Education Karlsruhe is a renowned research and study centre, with one of Germany's finest institutions of higher education. Technology, engineering, and business The Karlsruhe University (Universität Karlsruhe-TH), the oldest technical university in Germany, is home to the Forschungszentrum Karlsruhe (Karlsruhe Research Center), where engineering and scientific research is performed in the areas of health, earth, and environmental sciences. The Karlsruhe University of Applied Sciences (Hochschule Karlsruhe-HS) is the largest university of technology in the state of Baden-Württemberg, offering both professional and academic education in engineering sciences and business. In 2009, the University of Karlsruhe joined the Forschungszentrum Karlsruhe to form the Karlsruhe Institute of Technology (KIT). The arts The Academy of Fine Arts, Karlsruhe is one of the smallest universities in Germany, with around 300 students. The Karlsruhe University of Arts and Design (HfG) was founded at the same time as its sister institution, the Center for Art and Media Karlsruhe (Zentrum für Kunst und Medientechnologie). The HfG's teaching and research focus on new media and media art.
The Hochschule für Musik Karlsruhe is a music conservatory that offers degrees in composition, music performance, education, and radio journalism. Since 1989 it has been located in the Gottesaue Palace. International education The Karlshochschule International University (formerly known as Merkur Internationale Fachhochschule) was founded in 2004. As a foundation-owned, state-approved management school, Karlshochschule offers undergraduate education in both German and English, focusing on international and intercultural management, as well as service- and culture-related industries. Furthermore, an international consecutive Master of Arts in leadership studies is offered in English. European Institute of Innovation and Technology (EIT) Karlsruhe hosts one of the European Institute of Innovation and Technology's Knowledge and Innovation Communities (KICs) focusing on sustainable energy. Other co-centres are based in Grenoble, France (CC Alps Valleys); Eindhoven, the Netherlands, and Leuven, Belgium (CC Benelux); Barcelona, Spain (CC Iberia); Kraków, Poland (CC PolandPlus); and Stockholm, Sweden (CC Sweden). University of Education The Karlsruhe University of Education was founded in 1962. It specializes in educational processes. The university has about 3,700 students and 180 full-time researchers and lecturers. It offers a wide range of educational programmes, such as teacher training for primary and secondary schools (both optionally with a European Teaching Certificate profile), Bachelor programs specializing in Early Childhood Education and in Health and Leisure Education, and Master programs in Educational Science, Intercultural Education, Migration and Multilingualism. Furthermore, the University of Education Karlsruhe offers a Master program in Biodiversity and Environmental Education. Culture In 1999 the ZKM (Zentrum für Kunst und Medientechnologie, Centre for Art and Media) was opened. Linking new media theory and practice, the ZKM is located in a former weapons factory. Among the institutes related to the ZKM are the Staatliche Hochschule für Gestaltung (State University of Design), whose president is the philosopher Peter Sloterdijk, and the Museum for Contemporary Art. Twin towns – sister cities Karlsruhe is twinned with: Nancy, France (1955) Nottingham, England, United Kingdom (1969) Halle, Germany (1987) Krasnodar, Russia (1997) Timișoara, Romania (1997) Partnerships Karlsruhe also cooperates with: Oulu, Finland Legacy The Ukrainian village of Stepove, near the city of Mykolaiv in southern Ukraine, was established by German colonists as Karlsruhe. Events Every year in July there is a large open-air festival lasting three days called simply Das Fest ("The Festival"). The Baden State Theatre has sponsored the Händel Festival since 1978. The city hosted the 23rd and 31st European Juggling Conventions (EJC) in 2000 and 2008. In July the African Summer Festival is held in the city's Nordstadt. Markets, drumming workshops, exhibitions, a varied children's programme, and musical performances take place during the three-day festival. In the past Karlsruhe hosted LinuxTag (the biggest Linux event in Europe) and, until 2006, the annual Linux Audio Conference. Visitors and locals watched the total solar eclipse at noon on August 11, 1999. The city was not only located within the eclipse path but was one of the few within Germany not plagued by bad weather. Sport Football Karlsruher SC (KSC), DFB (2. Liga)
Basketball PS Karlsruhe Lions, ProA (second division) Karlsruhe co-hosted the FIBA EuroBasket 1985. Tennis TC Rueppurr (TCR), Tennis-Bundesliga (women's first division) Baseball, softball Karlsruhe Cougars, Regional League South-East (men's baseball), 1st Bundesliga South (women's softball I) and State League South (women's softball II) American football Badener Greifs, currently competing in the Regional League Central but formerly a member of the German Football League's 1st Bundesliga; they lost to the Berlin Adler in the 1987 German Bowl (see also: German Football League) Notes References External links Map of Karlsruhe City wiki of Karlsruhe Karlsruhe Nuclide Chart 1715 establishments in the Holy Roman Empire Capitals of former nations Cities in Baden-Württemberg Baden Holocaust locations in Germany Karlsruhe (region) Planned capitals Populated places established in 1715 Populated places on the Rhine
Firefly Online
Firefly Online is a vaporware strategic role-playing video game based on the Firefly franchise. It was being developed by Spark Plug Games and Quantum Mechanix for Microsoft Windows, macOS, iOS, and Android. Although never officially cancelled, there have been no updates about the game's release since March 2016. Gameplay Players would have assumed the roles of spaceship captains, assembling crews, completing missions, and trading with others. The game would have contained a central story alongside various branching stories, and players might have been able to create jobs for each other to complete. Players would have been able to customize their ships while playing, and view "in-universe guides" around planets which provide information on them. Furthermore, both space and planetary environments were planned to exist. The game would have contained over 200 worlds to visit. Players would have been able to assemble a crew, needing to choose crew members who possess the skills and abilities they need, such as engineering or weaponry skills. Development Prior to the announcement of an official title, a fan-made game titled Firefly Universe Online was being developed by DarkCryo. Fox gave their blessing to the game, but the studio ceased development following the announcement of an official release based on the Firefly franchise. Firefly Online was announced at the 2013 San Diego Comic-Con for iOS and Android. It was later announced for Microsoft Windows and macOS. QMx Interactive joined Spark Plug Games to produce the game. Joss Whedon, creator of the series, was not involved with the development of the game but was aware of it. The development team planned to add future downloadable content, which might have included the ability to switch to the Alliance faction and the inclusion of "Reavers", and aimed to incorporate cross-platform functionality. The PC versions would have been distributed via Steam. Gameplay of Firefly Online was shown at the 2014 San Diego Comic-Con. There it was announced that the original TV series cast, along with a number of other cast members from the series, would voice their in-game characters, with Star Trek: The Next Generation's Wil Wheaton providing the male voice for the player's avatar. Originally planned for launch in Spring 2015, the game had to be reworked in large part after the original show's cast were brought on to provide voice acting. John O'Neill, CEO of Spark Plug Games, said that they were having to "change everything" and that they were deliberately not providing development updates to avoid "saying something that's wrong again." The last post on Firefly Online's Facebook page, dated March 2016, said, "We're still here. We're still flyin'. Game is still in development. Stay tuned." References External links (archived) Android (operating system) games Firefly (franchise) games IOS games MacOS games Cancelled Android (operating system) games Cancelled iOS games Cancelled macOS games Cancelled Windows games Video games based on television series Video games based on works by Joss Whedon Windows games
Optimal control
Optimal control theory is a branch of mathematical optimization that deals with finding a control for a dynamical system over a period of time such that an objective function is optimized. It has numerous applications in science, engineering and operations research. For example, the dynamical system might be a spacecraft with controls corresponding to rocket thrusters, and the objective might be to reach the moon with minimum fuel expenditure. Or the dynamical system could be a nation's economy, with the objective to minimize unemployment; the controls in this case could be fiscal and monetary policy. A dynamical system may also be introduced to embed operations research problems within the framework of optimal control theory. Optimal control is an extension of the calculus of variations, and is a mathematical optimization method for deriving control policies. The method is largely due to the work of Lev Pontryagin and Richard Bellman in the 1950s, after contributions to the calculus of variations by Edward J. McShane. Optimal control can be seen as a control strategy in control theory. General method Optimal control deals with the problem of finding a control law for a given system such that a certain optimality criterion is achieved. A control problem includes a cost functional that is a function of state and control variables. An optimal control is a set of differential equations describing the paths of the control variables that minimize the cost functional. The optimal control can be derived using Pontryagin's maximum principle (a necessary condition also known as Pontryagin's minimum principle or simply Pontryagin's principle), or by solving the Hamilton–Jacobi–Bellman equation (a sufficient condition). We begin with a simple example. Consider a car traveling in a straight line on a hilly road. The question is, how should the driver press the accelerator pedal in order to minimize the total traveling time? In this example, the term control law refers specifically to the way in which the driver presses the accelerator and shifts the gears. The system consists of both the car and the road, and the optimality criterion is the minimization of the total traveling time. Control problems usually include ancillary constraints. For example, the amount of available fuel might be limited, the accelerator pedal cannot be pushed through the floor of the car, speed limits, etc. A proper cost function will be a mathematical expression giving the traveling time as a function of the speed, geometrical considerations, and initial conditions of the system. Constraints are often interchangeable with the cost function. Another related optimal control problem may be to find the way to drive the car so as to minimize its fuel consumption, given that it must complete a given course in a time not exceeding some amount. Yet another related control problem may be to minimize the total monetary cost of completing the trip, given assumed monetary prices for time and fuel. A more abstract framework goes as follows. Minimize the continuous-time cost functional

J = E[x(t_0), t_0, x(t_f), t_f] + \int_{t_0}^{t_f} F[x(t), u(t), t] \, dt

subject to the first-order dynamic constraints (the state equation)

\dot{x}(t) = f[x(t), u(t), t],

the algebraic path constraints

h[x(t), u(t), t] \le 0,

and the endpoint conditions

e[x(t_0), t_0, x(t_f), t_f] = 0,

where x(t) is the state, u(t) is the control, t is the independent variable (generally speaking, time), t_0 is the initial time, and t_f is the terminal time. The terms E and F are called the endpoint cost and the running cost respectively. In the calculus of variations, E and F are referred to as the Mayer term and the Lagrangian, respectively.
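As an added illustration (a sketch, not part of the original text), the minimum-time driving example above could be cast in this abstract form under a simple point-mass model of the car; the position s(t), speed v(t), pedal input u(t), road length L, bound u_max, and drag-and-gradient term d(s, v) are all notation introduced here for illustration:

J = \int_{t_0}^{t_f} 1 \, dt = t_f - t_0 \quad (E = 0, \; F = 1),

\dot{s}(t) = v(t), \qquad \dot{v}(t) = u(t) - d\big(s(t), v(t)\big),

|u(t)| \le u_{\max}, \qquad 0 \le v(t) \le v_{\max}\big(s(t)\big),

s(t_0) = 0, \quad v(t_0) = 0, \quad s(t_f) = L, \quad t_f \text{ free}.

Minimizing J is then exactly minimizing the total traveling time, with the pedal limits and speed limits appearing as path constraints and the start and end of the road as endpoint conditions.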
Furthermore, it is noted that the path constraints are in general inequality constraints and thus may not be active (i.e., equal to zero) at the optimal solution. It is also noted that the optimal control problem as stated above may have multiple solutions (i.e., the solution may not be unique). Thus, it is most often the case that any solution to the optimal control problem is locally minimizing. Linear quadratic control A special case of the general nonlinear optimal control problem given in the previous section is the linear quadratic (LQ) optimal control problem. The LQ problem is stated as follows. Minimize the quadratic continuous-time cost functional

J = \tfrac{1}{2} x^T(t_f) S_f x(t_f) + \tfrac{1}{2} \int_{t_0}^{t_f} \big[ x^T(t) Q(t) x(t) + u^T(t) R(t) u(t) \big] \, dt

subject to the linear first-order dynamic constraints

\dot{x}(t) = A(t) x(t) + B(t) u(t)

and the initial condition

x(t_0) = x_0.

A particular form of the LQ problem that arises in many control system problems is that of the linear quadratic regulator (LQR), where all of the matrices (i.e., A, B, Q, and R) are constant, the initial time is arbitrarily set to zero, and the terminal time is taken in the limit t_f \to \infty (this last assumption is what is known as infinite horizon). The LQR problem is stated as follows. Minimize the infinite horizon quadratic continuous-time cost functional

J = \tfrac{1}{2} \int_{0}^{\infty} \big[ x^T(t) Q x(t) + u^T(t) R u(t) \big] \, dt

subject to the linear time-invariant first-order dynamic constraints

\dot{x}(t) = A x(t) + B u(t)

and the initial condition

x(0) = x_0.

In the finite-horizon case the matrices are restricted in that Q(t) and R(t) are positive semi-definite and positive definite, respectively. In the infinite-horizon case, however, the matrices Q and R are not only positive-semidefinite and positive-definite, respectively, but are also constant. These additional restrictions on Q and R in the infinite-horizon case are enforced to ensure that the cost functional remains positive. Furthermore, in order to ensure that the cost function is bounded, the additional restriction is imposed that the pair (A, B) is controllable. Note that the LQ or LQR cost functional can be thought of physically as attempting to minimize the control energy (measured as a quadratic form). The infinite horizon problem (i.e., LQR) may seem overly restrictive and essentially useless because it assumes that the operator is driving the system to zero-state and hence driving the output of the system to zero. This is indeed correct. However, the problem of driving the output to a desired nonzero level can be solved after the zero output one is. In fact, it can be proved that this secondary LQR problem can be solved in a very straightforward manner. It has been shown in classical optimal control theory that the LQ (or LQR) optimal control has the feedback form

u(t) = -K(t) x(t),

where K(t) is a properly dimensioned matrix, given as

K(t) = R^{-1}(t) B^T(t) S(t),

and S(t) is the solution of the differential Riccati equation. The differential Riccati equation is given as

-\dot{S}(t) = A^T(t) S(t) + S(t) A(t) - S(t) B(t) R^{-1}(t) B^T(t) S(t) + Q(t).

For the finite horizon LQ problem, the Riccati equation is integrated backward in time using the terminal boundary condition

S(t_f) = S_f.

For the infinite horizon LQR problem, the differential Riccati equation is replaced with the algebraic Riccati equation (ARE) given as

A^T S + S A - S B R^{-1} B^T S + Q = 0.

Understanding that the ARE arises from the infinite horizon problem, the matrices A, B, Q, and R are all constant. It is noted that there are in general multiple solutions to the algebraic Riccati equation and the positive definite (or positive semi-definite) solution is the one that is used to compute the feedback gain. The LQ (LQR) problem was elegantly solved by Rudolf E. Kálmán. Numerical methods for optimal control Optimal control problems are generally nonlinear and therefore, generally do not have analytic solutions (e.g., like the linear-quadratic optimal control problem).
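Before turning to numerical methods in general, here is a brief computational sketch (added here, not from the original text) of the infinite-horizon LQR solution just described. The double-integrator system matrices and the weights Q and R are illustrative choices; the algebraic Riccati equation is solved with SciPy's solve_continuous_are, and the feedback gain follows as K = R^{-1} B^T S.

import numpy as np
from scipy.linalg import solve_continuous_are

# Assumed example plant: double integrator, x = [position, velocity], u = acceleration.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.diag([1.0, 0.1])   # state weighting (positive semi-definite)
R = np.array([[0.01]])    # control weighting (positive definite)

# Solve the ARE  A^T S + S A - S B R^{-1} B^T S + Q = 0  for the stabilizing solution S.
S = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ S)   # feedback gain: u = -K x

print("Riccati solution S:\n", S)
print("LQR gain K:", K)
# The closed-loop matrix A - B K should be Hurwitz (all eigenvalues in the left half-plane).
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))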
As a result, it is necessary to employ numerical methods to solve optimal control problems. In the early years of optimal control (roughly the 1950s to the 1980s) the favored approach for solving optimal control problems was that of indirect methods. In an indirect method, the calculus of variations is employed to obtain the first-order optimality conditions. These conditions result in a two-point (or, in the case of a complex problem, a multi-point) boundary-value problem. This boundary-value problem actually has a special structure because it arises from taking the derivative of a Hamiltonian. Thus, the resulting dynamical system is a Hamiltonian system of the form

\dot{x} = \frac{\partial H}{\partial \lambda}, \qquad \dot{\lambda} = -\frac{\partial H}{\partial x},

where H = F + \lambda^T f is the augmented Hamiltonian, and in an indirect method the boundary-value problem is solved (using the appropriate boundary or transversality conditions). The beauty of using an indirect method is that the state and adjoint (i.e., x and \lambda) are solved for and the resulting solution is readily verified to be an extremal trajectory. The disadvantage of indirect methods is that the boundary-value problem is often extremely difficult to solve (particularly for problems that span large time intervals or problems with interior point constraints). A well-known software program that implements indirect methods is BNDSCO. The approach that has risen to prominence in numerical optimal control since the 1980s is that of so-called direct methods. In a direct method, the state or the control, or both, are approximated using an appropriate function approximation (e.g., polynomial approximation or piecewise constant parameterization). Simultaneously, the cost functional is approximated as a cost function. Then, the coefficients of the function approximations are treated as optimization variables and the problem is "transcribed" to a nonlinear optimization problem of the form: minimize

J(z)

subject to the algebraic constraints

g(z) = 0, \qquad h(z) \le 0,

where z is the vector of optimization variables. Depending upon the type of direct method employed, the size of the nonlinear optimization problem can be quite small (e.g., as in a direct shooting or quasilinearization method), moderate (e.g., pseudospectral optimal control) or may be quite large (e.g., a direct collocation method). In the latter case (i.e., a collocation method), the nonlinear optimization problem may have literally thousands to tens of thousands of variables and constraints. Given the size of many NLPs arising from a direct method, it may appear somewhat counter-intuitive that solving the nonlinear optimization problem is easier than solving the boundary-value problem. It is, however, the case that the NLP is easier to solve than the boundary-value problem. The reason for the relative ease of computation, particularly of a direct collocation method, is that the NLP is sparse and many well-known software programs exist (e.g., SNOPT) to solve large sparse NLPs. As a result, the range of problems that can be solved via direct methods (particularly direct collocation methods, which are very popular these days) is significantly larger than the range of problems that can be solved via indirect methods. In fact, direct methods have become so popular these days that many people have written elaborate software programs that employ these methods. In particular, many such programs include DIRCOL, SOCS, OTIS, GESOP/ASTOS, DITAN, and PyGMO/PyKEP. In recent years, due to the advent of the MATLAB programming language, optimal control software in MATLAB has become more common.
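To make the direct-method idea concrete on a very small scale, the following sketch (added here, not from the original text) transcribes a toy minimum-energy problem for an assumed double-integrator system: the control is parameterized as piecewise constant on N intervals, the dynamics are integrated with explicit Euler, and the resulting small NLP is solved with SciPy's SLSQP. The plant, horizon, and endpoint targets are all illustrative choices.

import numpy as np
from scipy.optimize import minimize

N, T = 20, 1.0      # number of control intervals, fixed final time
dt = T / N

def simulate(u):
    # Explicit-Euler integration of the assumed dynamics s' = v, v' = u.
    s, v = 0.0, 0.0
    for uk in u:
        s += dt * v
        v += dt * uk
    return s, v

def cost(u):
    # Discretized running cost: integral of u(t)^2 dt (rectangle rule).
    return dt * np.sum(np.asarray(u) ** 2)

def endpoint(u):
    # Endpoint conditions: reach s(T) = 1 with v(T) = 0.
    s, v = simulate(u)
    return np.array([s - 1.0, v])

res = minimize(cost, x0=np.zeros(N), method="SLSQP",
               constraints=[{"type": "eq", "fun": endpoint}])
print("optimal cost:", res.fun)
print("piecewise-constant controls:", np.round(res.x, 2))

A genuine direct collocation code would also treat the state values at the grid points as optimization variables and impose the dynamics as (sparse) equality constraints, which is what makes large transcribed NLPs tractable for solvers such as SNOPT.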
Examples of academically developed MATLAB software tools implementing direct methods include RIOTS, DIDO, DIRECT, FALCON.m, and GPOPS, while an example of an industry-developed MATLAB tool is PROPT (Rutquist, P. and Edvall, M. M., "PROPT – MATLAB Optimal Control Software", Tomlab Optimization, Inc.). These software tools have significantly increased the opportunity for people to explore complex optimal control problems both for academic research and industrial problems. Finally, it is noted that general-purpose MATLAB optimization environments such as TOMLAB have made coding complex optimal control problems significantly easier than was previously possible in languages such as C and FORTRAN. Discrete-time optimal control The examples thus far have shown continuous time systems and control solutions. In fact, as optimal control solutions are now often implemented digitally, contemporary control theory is now primarily concerned with discrete time systems and solutions. The Theory of Consistent Approximations provides conditions under which solutions to a series of increasingly accurate discretized optimal control problems converge to the solution of the original, continuous-time problem. Not all discretization methods have this property, even seemingly obvious ones. For instance, using a variable step-size routine to integrate the problem's dynamic equations may generate a gradient which does not converge to zero (or point in the right direction) as the solution is approached. The direct method RIOTS is based on the Theory of Consistent Approximations. Examples A common solution strategy in many optimal control problems is to solve for the costate (sometimes called the shadow price) \lambda(t). The costate summarizes in one number the marginal value of expanding or contracting the state variable next turn. The marginal value is not only the gains accruing to it next turn but also those associated with the remaining duration of the program. It is convenient when \lambda(t) can be solved analytically, but usually the most one can do is describe it sufficiently well that the intuition can grasp the character of the solution and an equation solver can solve numerically for the values. Having obtained \lambda(t), the optimal value of the control at time t can usually be solved as a differential equation conditional on knowledge of \lambda(t). Again it is infrequent, especially in continuous-time problems, that one obtains the value of the control or the state explicitly. Usually, the strategy is to solve for thresholds and regions that characterize the optimal control and use a numerical solver to isolate the actual choice values in time. Finite time Consider the problem of a mine owner who must decide at what rate to extract ore from their mine. They own rights to the ore from date 0 to date T. At date 0 there is an amount x_0 of ore in the ground, and the time-dependent amount of ore left in the ground x(t) declines at the rate u(t) at which the mine owner extracts it. The mine owner extracts ore at cost u(t)^2 / x(t) (the cost of extraction increasing with the square of the extraction speed and the inverse of the amount of ore left) and sells ore at a constant price p. Any ore left in the ground at time T cannot be sold and has no value (there is no "scrap value"). The owner chooses the rate of extraction u(t), varying with time, to maximize profits over the period of ownership with no time discounting.
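The text breaks off at this point; as an added sketch (not part of the original article), the necessary conditions from Pontryagin's principle for this mine problem can be set up as follows, with costate \lambda(t) and the notation used above:

\max_{u(\cdot)} \int_0^T \Big[ p\, u(t) - \frac{u(t)^2}{x(t)} \Big] dt \quad \text{subject to} \quad \dot{x}(t) = -u(t), \; x(0) = x_0,

H = p\, u - \frac{u^2}{x} - \lambda u,

\frac{\partial H}{\partial u} = p - \frac{2u}{x} - \lambda = 0 \;\Rightarrow\; u(t) = \frac{\big(p - \lambda(t)\big)\, x(t)}{2},

\dot{\lambda}(t) = -\frac{\partial H}{\partial x} = -\frac{u(t)^2}{x(t)^2}, \qquad \lambda(T) = 0 \text{ (no scrap value)}.

Here \lambda(t) plays exactly the shadow-price role discussed above: it measures the marginal value of a unit of ore left in the ground, and it vanishes at T because unsold ore is worthless.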
See also Active inference Bellman equation Bellman pseudospectral method Brachistochrone DIDO DNSS point Dynamic programming Gauss pseudospectral method Generalized filtering GPOPS-II JModelica.org (Modelica-based open source platform for dynamic optimization) Kalman filter Linear-quadratic regulator Model Predictive Control Overtaking criterion PID controller PROPT (Optimal Control Software for MATLAB) Pseudospectral optimal control Pursuit-evasion games Sliding mode control SNOPT Stochastic control Trajectory optimization References Further reading Ross, I. M. (2015). A Primer on Pontryagin's Principle in Optimal Control. Collegiate Publishers. . External links Optimal Control Course Online Computational Optimal Control Dr. Benoît CHACHUAT: Automatic Control Laboratory – Nonlinear Programming, Calculus of Variations and Optimal Control. DIDO - MATLAB tool for optimal control GEKKO - Python package for optimal control GESOP – Graphical Environment for Simulation and OPtimization GPOPS-II – General-Purpose MATLAB Optimal Control Software PROPT – MATLAB Optimal Control Software OpenOCL – Open Optimal Control Library Elmer G. Wiens: Optimal Control – Applications of Optimal Control Theory Using the Pontryagin Maximum Principle with interactive models. Pontryagin's Principle Illustrated with Examples On Optimal Control by Yu-Chi Ho Pseudospectral optimal control: Part 1 Pseudospectral optimal control: Part 2 Mathematical optimization
2017 Little Rock Trojans baseball team
The 2017 Little Rock Trojans baseball team represented the University of Arkansas at Little Rock during the 2017 NCAA Division I baseball season. The Trojans played their home games at Gary Hogan Field and were coached by third-year head coach Chris Curry. They were members of the Sun Belt Conference. Roster Coaching staff Schedule and results Little Rock announced its 2017 baseball schedule on October 24, 2016. The 2017 schedule consisted of 28 home and 27 away games in the regular season. The Trojans hosted Sun Belt foes Appalachian State, Georgia State, Louisiana, Texas State, and Troy and traveled to Arkansas State, Georgia Southern, Louisiana–Monroe, South Alabama, and Texas–Arlington. The 2017 Sun Belt Conference Championship was contested May 24–28 in Statesboro, Georgia, and was hosted by Georgia Southern. Rankings are based on the team's current ranking in the Collegiate Baseball poll. References Little Rock Little Rock Trojans baseball seasons
International rankings of Malaysia
The following are international rankings of Malaysia. Cities 2thinknow: Innovation Cities™ Index 2011, Kuala Lumpur ranked 67th in the world 2thinknow: Innovation Cities™ Index 2012, Kuala Lumpur ranked 66th in the world 2thinknow: Innovation Cities™ Index 2014, Kuala Lumpur and Petaling Jaya ranked 98th and 328th in the world respectively 2thinknow: Innovation Cities™ Index 2015, Kuala Lumpur and Petaling Jaya ranked 88th and 306th in the world respectively 2thinknow: Innovation Cities™ Index 2016, Kuala Lumpur and Petaling Jaya ranked 92nd and 331st in the world respectively 2thinknow: Innovation Cities™ Index 2018, Kuala Lumpur and Petaling Jaya ranked 99th and 362nd in the world respectively 2thinknow: Innovation Cities™ Index 2019, Kuala Lumpur and Petaling Jaya ranked 81st and 409th in the world respectively ADB: Asian Development Outlook 2019 Update Fostering Growth and Inclusion In Asia's Cities, Kuala Lumpur ranked 2nd out of 278 Asian cities for most congested cities Agoda: Top Summer Destinations By Middle East 2019, Kuala Lumpur ranked top 10 in the world Arcadis: International Construction Costs 2016, Kuala Lumpur ranked 41st out of 44 global cities Arcadis: International Construction Costs 2017, Kuala Lumpur ranked 43rd out of 44 global cities Arcadis: International Construction Costs 2018, Kuala Lumpur ranked 47th out of 50 global cities Arcadis: International Construction Costs 2019, Kuala Lumpur ranked 97th out of 100 global cities Arcadis: Sustainable Cities Index 2016, Kuala Lumpur ranked 55th out of 100 global cities Arcadis: Sustainable Cities Index 2017, Kuala Lumpur ranked 95th out of 100 global cities Arcadis: Sustainable Cities Index 2018, Kuala Lumpur ranked 67th out of 100 global cities ASEAN: Clean Tourist City Standard Award 2017, George Town and Muar both granted this award ASEAN: Clean Tourist City Standard Award 2020, Penang, Putrajaya and Kota Kinabalu all granted this award A.T. Kearney: Global Cities Index 2012, Kuala Lumpur ranked 49th out of 66 global cities A.T. Kearney: Global Cities Index 2014, Kuala Lumpur ranked 53rd out of 84 global cities A.T. Kearney: Global Cities Index 2015, Kuala Lumpur ranked 47th out of 125 global cities A.T. Kearney: Global Cities Index 2016, Kuala Lumpur ranked 49th out of 125 global cities A.T. Kearney: Global Cities Index 2017, Kuala Lumpur ranked 49th out of 128 global cities A.T. Kearney: Global Cities Index 2018, Kuala Lumpur ranked 49th out of 135 global cities A.T. Kearney: Global Cities Index 2019, Kuala Lumpur ranked 49th out of 130 global cities A.T. Kearney: Global Cities Outlook 2016, Kuala Lumpur ranked 54th out of 124 global cities A.T. Kearney: Global Cities Outlook 2017, Kuala Lumpur ranked 53rd out of 128 global cities A.T. Kearney: Global Cities Outlook 2018, Kuala Lumpur ranked 61st out of 135 global cities A.T. 
Kearney: Global Cities Outlook 2019, Kuala Lumpur ranked 76th out of 130 global cities Big 7 Travel: The 50 Friendliest Cities In The World 2019, Kuala Lumpur ranked 2nd in the world Boston Consulting Group: Cities of Choice Global City Ranking 2021, Kuala Lumpur ranked 39th out of 45 global cities Caterwings: Best Food Destinations 2017, George Town ranked 51st out of 100 global cities CBRE: Global Living Report 2019, Kuala Lumpur ranked 32nd out of 35 global cities Crescent Rating: Muslim Travel Shopping Index (MTSI) 2015, Kuala Lumpur and Penang ranked 2nd and 11th out of 40 global cities respectively Daily Mirror: World's Best City For Street Food 2019, Penang and Kuala Lumpur ranked 17th and 20th out of 30 global cities respectively Dell: Women Entrepreneur Cities Index 2017, Kuala Lumpur ranked 41st out of 50 global cities Dell: Women Entrepreneur Cities Index 2019, Kuala Lumpur ranked 44th out of 50 global cities Cushman & Wakefield: Prepped Cities Index 2018, Kuala Lumpur ranked 8th out of 17 cities in Asia Pacific region EasyPark Group: Smart Cities Index 2017, Kuala Lumpur ranked 84th out of 500 global cities EasyPark Group: Smart Cities Index 2019, Kuala Lumpur ranked 94th out of 500 global cities EasyPark Group: Cities of the Future Index 2022, Kuala Lumpur ranked 50th out of 3,200 global cities for Metropolitan areas with populations over 3 million people ECA International: Cost of Living Survey 2016, Kuala Lumpur ranked 197th out of 262 global cities ECA International: Cost of Living Survey 2017, Kuala Lumpur, George Town and Johor Bahru ranked 212th, 245th and 250th out of 262 global cities respectively ECA International: Cost of Living Survey 2018, Kuala Lumpur ranked 182nd out of 475 global cities ECA International: Global Liveability Index 2015, George Town, Kuala Lumpur and Johor Bahru ranked 118th, 118th and 126th out of 269 global cities respectively ECA International: Global Liveability Index 2016, George Town, Kuala Lumpur and Johor Bahru ranked 117th, 120th and 127th out of 269 global cities respectively ECA International: Global Liveability Index 2017, George Town, Kuala Lumpur and Johor Bahru ranked 115th, 118th and 128th out of 269 global cities respectively ECA International: Global Liveability Index 2018, George Town and Kuala Lumpur ranked 120th and 126th out of 480 global cities respectively ECA International: Global Liveability Index 2019, George Town and Kuala Lumpur ranked 97th and 98th out of 480 global cities respectively Economist Intelligence Unit: Global Liveability Ranking 2015, Kuala Lumpur ranked 73rd out of 140 global cities Economist Intelligence Unit: Global Liveability Ranking 2017, Kuala Lumpur ranked 70th out of 140 global cities Economist Intelligence Unit: Safe Cities Index 2017, Kuala Lumpur ranked 31st out of 60 global cities Economist Intelligence Unit: Safe Cities Index 2019, Kuala Lumpur ranked 35th out of 60 global cities Economist Intelligence Unit: Safe Cities Index 2021, Kuala Lumpur ranked 32nd out of 60 global cities Economist Intelligence Unit: Worldwide Cost of Living Report 2015, Kuala Lumpur ranked 81st out of 133 global cities Economist Intelligence Unit: Worldwide Cost of Living Report 2016, Kuala Lumpur ranked 100th out of 133 global cities Economist Intelligence Unit: Worldwide Cost of Living Report 2017, Kuala Lumpur ranked 96th out of 133 global cities Economist Intelligence Unit: Worldwide Cost of Living Report 2018, Kuala Lumpur ranked 98th out of 133 global cities Economist Intelligence Unit: Worldwide Cost of Living 
Report 2019, Kuala Lumpur ranked 88th out of 133 global cities EF English Proficiency Index 2019, Kuala Lumpur ranked 19th out of 94 global cities Euromonitor International: Top 100 City Destinations Ranking WTM London 2017 Edition, Kuala Lumpur, Johor Bahru and Penang ranked 10th, 42nd and 63rd in the world Expatistan Cost of Living Index 2018, Kuala Lumpur and George Town ranked 258th and 304th out of 343 global cities respectively Findexable: Asia Pacific Fintech Rankings 2022, Kuala Lumpur ranked 15th out of 45 Asia Pacific Fintech hubs Foreign Policy: The Global Cities Index 2010, Kuala Lumpur ranked 48th out of 65 global cities Foreign Policy: The Global Cities Index 2012, Kuala Lumpur ranked 49th out of 66 global cities Foreign Policy: The Global Cities Index 2014, Kuala Lumpur ranked 53rd out of 84 global cities Globalization and World Cities Research Network: The World According to GaWC 2016, Kuala Lumpur and Johor Bahru ranked as Alpha rank and High Sufficiency rank respectively GoCompare: Best Cities For Millennials To Start Businesses, Kuala Lumpur ranked 30th out of 45 global cities Hiyacar: World’s Most Stressful Cities To Drive In 2021, Kuala Lumpur ranked 6th out of 36 global cities Hoopa Top 50 Instagram Destination 2015, Kuala Lumpur ranked 29th out of 50 global cities Hoopa Most Liked Instagram Destinations In The World 2015, Kuala Lumpur ranked 18th out of 25 global cities Hoopa Top 50 Instagram Destination 2019, Kuala Lumpur ranked 39th out of 50 global cities IESE: Cities in Motion Index (CIMI) 2014, Kuala Lumpur ranked 56th out of 135 global cities IESE: Cities in Motion Index (CIMI) 2015, Kuala Lumpur ranked 88th out of 148 global cities IESE: Cities in Motion Index (CIMI) 2016, Kuala Lumpur ranked 88th out of 181 global cities IESE: Cities in Motion Index (CIMI) 2017, Kuala Lumpur ranked 92nd out of 180 global cities IESE: Cities in Motion Index (CIMI) 2018, Kuala Lumpur ranked 87th out of 165 global cities IESE: Cities in Motion Index (CIMI) 2019, Kuala Lumpur ranked 100th out of 174 global cities IMD: Smart City Index 2019, ranked 70th out of 102 global cities IMD: Smart City Index 2020, ranked 54th out of 109 global cities IMD: Smart City Index 2021, ranked 74th out of 118 global cities International Living 15 Best Islands in the World to Retire On 2021, Penang ranked 3rd in the world InterNations: Expat City Ranking 2017, Kuala Lumpur ranked 4th out of 51 global cities InterNations: Expat City Ranking 2018, Kuala Lumpur ranked 6th out of 62 global cities InterNations: Expat City Ranking 2019, Kuala Lumpur ranked 2nd out of 82 global cities InterNations: Expat City Ranking 2020, Kuala Lumpur ranked 8th out of 56 global cities InterNations: Expat City Ranking 2021, Kuala Lumpur ranked 1st out of 57 global cities Julius Baer: Wealth Report Asia 2018, ranked 11th out of 11 Asia cities Knight Frank: Asia Pacific Prime Office Rental Index Q1 2017, Kuala Lumpur ranked 20th out of 20 Asian cities Knight Frank: Global Residential Cities Index Q4 2016, Kuala Lumpur ranked 79th out of 150 global cities Knight Frank: Prime International Residential Index (PIRI) 2017, Kuala Lumpur ranked 80th out of 100 global cities Knight Frank: The Knight Frank City Wealth Index 2017, Kuala Lumpur ranked 31st out of 40 global cities KPMG: Leading Technology Innovation Hub 2021, Kuala Lumpur ranked 9th in ASPAC region Lloyd's List: Top 100 Ports 2016, Port Klang, Port Tanjung Pelepas and Penang ranked 12th, 18th and 98th in the world respectively Lloyd's List: Top 100 Ports 2017, Port 
Klang, Port Tanjung Pelepas and Penang ranked 11th, 19th and 99th in the world respectively Lloyd's List: Top 100 Ports 2018, Port Klang and Port Tanjung Pelepas ranked 12th and 19th in the world respectively Lloyd's List: Top 100 Ports 2019, Port Klang and Port Tanjung Pelepas ranked 12th and 18th in the world respectively Lloyd's List: Top 100 Ports 2020, Port Klang and Port Tanjung Pelepas ranked 12th and 18th in the world respectively Lonely Planet: Top 10 Cities list for Best in Travel 2016, George Town ranked 4th in the world Love2Laundry: Top 10 Least Stressful Cities 2021, Kuala Lumpur ranked 10th out of 50 global cities Lyon Cab Transfer Shuttle: Taxi Service Transportation 2016, Kuala Lumpur ranked 2nd cheapest for taxi and limousine fares in the Asia region MasterCard: Global Destination Cities Index 2015, Kuala Lumpur ranked 7th in the world MasterCard: Global Destination Cities Index 2016, Kuala Lumpur ranked 7th in the world MasterCard: Global Destination Cities Index 2017, Kuala Lumpur ranked 8th in the world MasterCard: Global Destination Cities Index 2018, Kuala Lumpur ranked 7th in the world MasterCard: Global Destination Cities Index 2019, Kuala Lumpur ranked 6th in the world Mercer: Cost of Living Rankings 2016, Kuala Lumpur ranked 151st out of 209 global cities Mercer: Cost of Living Rankings 2018, Kuala Lumpur ranked 145th out of 209 global cities Mercer: Cost of Living Rankings 2019, Kuala Lumpur ranked 141st out of 209 global cities Mercer: Cost of Living Rankings 2020, Kuala Lumpur ranked 144th out of 209 global cities Mercer: Cost of Living Rankings 2021, Kuala Lumpur ranked 152nd out of 209 global cities Mercer: Quality of Living Rankings 2016, Kuala Lumpur ranked 86th out of 230 global cities Mercer: Quality of Living Rankings 2017, Kuala Lumpur and Johor Bahru ranked 86th and 103rd out of 231 global cities respectively Mercer: Quality of Living Rankings 2018, Kuala Lumpur and Johor Bahru ranked 85th and 101st out of 231 global cities respectively Mercer: Quality of Living Rankings 2019, Kuala Lumpur and Johor Bahru ranked 85th and 101st out of 231 global cities respectively Nestpick: Millennial Cities Ranking 2017, Kuala Lumpur ranked 62nd out of 100 global cities Nestpick: Millennial Cities Ranking 2018, Kuala Lumpur ranked 53rd out of 110 global cities Nestpick: Generation Z City Index 2019, Kuala Lumpur ranked 95th out of 110 global cities New Seven Wonders Foundation: New7Wonders Cities 2014, Kuala Lumpur selected as one of the New7Wonders Cities in the world OAG: Megahubs International Index 2018, Kuala Lumpur ranked 1st in the world for most internationally connected low-cost carrier megahub OAG: Megahubs International Index 2019, Kuala Lumpur ranked 12th out of 50 global cities and ranked 1st in the world for most internationally connected low-cost carrier megahub Open For Business City Rankings 2018, Kuala Lumpur rated C (city is partially open for business) Open For Business City Rankings 2020, Kuala Lumpur rated CC (city is partially inclusive and competitive) PricewaterhouseCoopers: Cities of Opportunity 2014, Kuala Lumpur ranked 17th out of 30 global cities PricewaterhouseCoopers: Cities of Opportunity 2016, Kuala Lumpur ranked 20th out of 30 global cities Startup Genome: Top 100 Emerging Ecosystem Ranking - The Global Startup Ecosystem Report 2020 (GSER 2020), Kuala Lumpur ranked 11th out of 270 ecosystems from over 100 countries. 
Startup Genome: Top 100 Emerging Ecosystem Ranking - The Global Startup Ecosystem Report 2021 (GSER 2021), Kuala Lumpur ranked ranked 21st out of 300 ecosystems from over 100 countries. Sustainable Destinations Top 100 2018, Taiping is ranked top 100 sustainable cities in the world Sustainable Destinations Top 100 2019, Taiping is ranked 3rd sustainable cities in the world Taxi2Airport: Cost of Public Transportation In 53 countries 2019, Kuala Lumpur ranked cheapest public transport fares in South East Asia region The CEO Magazine: Cities for the Best Work–Life Balance 2019, Kuala Lumpur ranked 40th out of 40 global cities Time: The Selfiest Cities in the World 2014, Petaling Jaya and George Town are ranked 5th and 10th out of 100 cities respectively Time Out: The 48 best cities in the world in 2019, Kuala Lumpur is ranked 46th out of 48 cities ValueChampion: Top Millennial-Friendly Cities in Asia-Pacific, Kuala Lumpur ranked 14th out of 20 global cities Quacquarelli Symonds: Best Student Cities 2014, Kuala Lumpur ranked 43rd out of 50 global cities Quacquarelli Symonds: Best Student Cities 2016, Kuala Lumpur ranked 53rd out of 74 global cities Quacquarelli Symonds: Best Student Cities 2017, Kuala Lumpur ranked 41st out of 100 global cities Quacquarelli Symonds: Best Student Cities 2018, Kuala Lumpur ranked 37th out of 100 global cities Quacquarelli Symonds: Best Student Cities 2019, Kuala Lumpur ranked 29th out of 100 global cities Quacquarelli Symonds: Best Student Cities 2022, Kuala Lumpur ranked 31st out of 115 global cities Quacquarelli Symonds: Most Affordable Cities For Students 2017, Kuala Lumpur ranked 1st in the world Quacquarelli Symonds: Most Affordable Cities For Students 2018, Kuala Lumpur ranked 2nd in the world Quacquarelli Symonds: Most Affordable Cities For Students 2019, Kuala Lumpur ranked 2nd in the world World Shipping Council: Top 50 World Container Ports 2019, Port Klang and Port Tanjung Pelepas ranked 12th and 18th in the world respectively YCP Solidiance: Top E-commerce Cities in Asia 2019, Kuala Lumpur ranked top 12 in 40 Asian countries States Condé Nast Traveler The 10 Best Places in the World to Retire 2016, Penang ranked 2nd in the world CNN: 17 Best Places To Visit In 2017, Penang ranked 2nd in the world CNN: 19 Best Spring Travel Destinations 2019, Penang ranked top 19 destinations in the world CNN: 17 Best Places To Visit For The Ultimate Asia Experience 2019, Penang ranked top 17 destinations in Asia CNN: 21 Best Destinations To Go 2022, Penang ranked top 21 destinations in the world Economic Asian Corporate Governance Association (ACGA): Corporate Governance (CG) Watch Report 2018, ranked 4th out of 12 Asian countries in terms of market accountability and transparency Basel Institute on Governance: Basel AML Index 2020, ranked 65th out of 141 countries Boao Forum for Asia: Asian Competitiveness Annual Report 2014, ranked 11th out of 37 countries Boao Forum for Asia: Asian Competitiveness Annual Report 2015, ranked 11th out of 37 countries Boao Forum for Asia: Asian Competitiveness Annual Report 2016, ranked 13th out of 37 countries Boao Forum for Asia: Asian Competitiveness Annual Report 2017, ranked 13th out of 37 countries Boao Forum for Asia: Asian Competitiveness Annual Report 2018, ranked 11th out of 37 countries Centre for Economics and Business Research (CEBR): World Economic League Table 2003, ranked 39th out of 190 countries Centre for Economics and Business Research (CEBR): World Economic League Table 2008, ranked 40th out of 192 countries 
Centre for Economics and Business Research (CEBR): World Economic League Table 2013, ranked 35th out of 193 countries Centre for Economics and Business Research (CEBR): World Economic League Table 2018, ranked 37th out of 193 countries Centre for Economics and Business Research (CEBR): World Economic League Table 2019, ranked 35th out of 193 countries Centre for Economics and Business Research (CEBR): World Economic League Table 2020, ranked 40th out of 193 countries Centre for Economics and Business Research (CEBR): World Economic League Table 2021, ranked 34th out of 193 countries Centre for Economics and Business Research (CEBR): World Economic League Table 2022, ranked 37th out of 191 countries CEOWORLD Magazine: World’s Best Countries To Invest In Or Do Business For 2019, ranked 1st out of 67 countries CIA World Factbook: GDP - official exchange rate (2014), ranked 36th out of 218 countries CIA World Factbook: GDP - per capita (PPP) (2013), ranked 70th out of 181 countries CIA World Factbook: Budget Expenditures (2014), ranked 43rd out of 224 countries CIA World Factbook: Budget revenues (2014), ranked 46th out of 226 countries CIA World Factbook: Budget surplus (+) or deficit (-) % of GDP (2014), ranked 155th out of 213 countries CIA World Factbook: Current account balance (2014), ranked 19th out of 193 countries CIA World Factbook: Debt - external (2014), ranked 49th out of 201 countries CIA World Factbook: Exports (2014), ranked 25th out of 223 countries CIA World Factbook: Household income or consumption by percentage share - highest 10% (2014), ranked 46th out of 147 countries CIA World Factbook: Household income or consumption by percentage share - lowest 10% (2014), ranked 115th out of 147 countries CIA World Factbook: Inflation rate (consumer prices) (%) (2014), ranked 190th out of 227 countries CIA World Factbook: Labor force (2014), ranked 42nd out of 232 countries CIA World Factbook: Labor force - by occupation - agriculture (%) (2014), ranked 117th out of 197 countries CIA World Factbook: Labor force - by occupation - industry (%) (2014), ranked 36th out of 167 countries CIA World Factbook: Labor force - by occupation - services (%) (2014), ranked 106th out of 194 countries DHL: Global Connectedness Index 2012, ranked 16th out of 140 countries DHL: Global Connectedness Index 2014, ranked 21st out of 140 countries DHL: Global Connectedness Index 2016, ranked 19th out of 140 countries DHL: Global Connectedness Index 2018, ranked 12th out of 169 countries Findexable: Global Fintech Rankings 2021, ranked 46th out of 83 countries Foreign Policy: Baseline Profitability Index 2015, ranked 6th out of 110 countries Forbes: Best Countries for Business 2018, ranked 35th out of 153 countries Forbes: Best Countries for Business 2019, ranked 35th out of 161 countries Fraser Institute: Economic Freedom of the World 2010, ranked 70th out of 153 countries Fraser Institute: Economic Freedom of the World 2011, ranked 72nd out of 153 countries Fraser Institute: Economic Freedom of the World 2012, ranked 72nd out of 153 countries Fraser Institute: Economic Freedom of the World 2013, ranked 56th out of 157 countries Fraser Institute: Economic Freedom of the World 2014, ranked 62nd out of 159 countries Fraser Institute: Economic Freedom of the World 2015, ranked 65th out of 159 countries Global Entrepreneurship and Development Institute: Global Entrepreneurship Index 2019, ranked 58th out of 137 countries Heritage Foundation: Index of Economic Freedom 2006, ranked 68th out of 157 countries 
Heritage Foundation: Index of Economic Freedom 2008, ranked 51st out of 157 countries Heritage Foundation: Index of Economic Freedom 2009, ranked 58th out of 179 countries Heritage Foundation: Index of Economic Freedom 2010, ranked 59th out of 179 countries Heritage Foundation: Index of Economic Freedom 2011, ranked 53rd out of 179 countries Heritage Foundation: Index of Economic Freedom 2012, ranked 53rd out of 179 countries Heritage Foundation: Index of Economic Freedom 2013, ranked 56th out of 177 countries Heritage Foundation: Index of Economic Freedom 2014, ranked 37th out of 178 countries Heritage Foundation: Index of Economic Freedom 2015, ranked 31st out of 178 countries Heritage Foundation: Index of Economic Freedom 2016, ranked 29th out of 178 countries Heritage Foundation: Index of Economic Freedom 2017, ranked 27th out of 180 countries Heritage Foundation: Index of Economic Freedom 2018, ranked 22nd out of 180 countries IMD: World Competitiveness Ranking 2010, ranked 10th out of 58 countries IMD: World Competitiveness Ranking 2016, ranked 19th out of 58 countries IMD: World Competitiveness Ranking 2017, ranked 24th out of 63 countries IMD: World Competitiveness Ranking 2018, ranked 22nd out of 63 countries IMD: World Competitiveness Ranking 2019, ranked 22nd out of 63 countries IMD: World Competitiveness Ranking 2020, ranked 27th out of 64 countries IMD: World Competitiveness Ranking 2021, ranked 25th out of 64 countries IMD: World Digital Competitiveness Ranking 2016, ranked 24th out of 61 countries IMD: World Digital Competitiveness Ranking 2017, ranked 24th out of 63 countries IMD: World Digital Competitiveness Ranking 2018, ranked 27th out of 63 countries IMD: World Digital Competitiveness Ranking 2019, ranked 26th out of 63 countries IMD: World Digital Competitiveness Ranking 2020, ranked 26th out of 63 countries International Monetary Fund: GDP (nominal) per capita (2006), ranked 64th out of 182 countries International Monetary Fund: GDP (nominal) per capita (2009), ranked 67th out of 180 countries International Monetary Fund: GDP (nominal) (2006), ranked 39th out of 181 countries International Monetary Fund: GDP (nominal) (2006), ranked 41st out of 181 countries Milken Institute: Global Opportunity Index 2022, ranked 1st investment destination for foreign investors in emerging ASEAN countries Refinitiv: Islamic Finance Development Indicator (IFDI) 2021, ranked 1st out of 135 countries Standard Chartered: Wealth Expectancy Report 2019, ranked 2nd out of 10 countries in terms of smallest wealth expectancy gaps, with around two-thirds of wealth (67 per cents) creators set to achieve more than half of their wealth aspiration. 
Steve Hanke: Misery Index 2013, ranked 103rd out of 109 countries Steve Hanke: Misery Index 2014, ranked 101st out of 108 countries Steve Hanke: Misery Index 2015, ranked 52nd out of 60 countries Steve Hanke: Misery Index 2017, ranked 107th out of 126 countries Steve Hanke: Misery Index 2018, ranked 86th out of 95 countries The Conference Board: Consumer Confidence Index (CCI) 2018, ranked 7th in the world TMF Group: Compliance Complexity Index 2018, ranked 5th in the world TMF Group: Global Business Complexity Index 2019, ranked 27th in the world World Bank: Doing Business Report 2011, ranked 21st out of 183 countries World Bank: Doing Business Report 2016, ranked 18th out of 190 countries World Bank: Doing Business Report 2017, ranked 23rd out of 190 countries World Bank: Doing Business Report 2018, ranked 24th out of 190 countries World Bank: Doing Business Report 2019, ranked 15th out of 190 countries World Bank: Doing Business Report 2020, ranked 12nd out of 190 countries World Bank: LPI Global Rankings 2016, ranked 32nd out of 160 countries World Economic Forum: Global Competitiveness Index 2010, ranked 26th out of 139 countries World Economic Forum: Global Competitiveness Index 2015, ranked 18th out of 140 countries World Economic Forum: Global Competitiveness Index 2016, ranked 25th out of 138 countries World Economic Forum: Global Competitiveness Index 2017, ranked 23rd out of 137 countries World Economic Forum: Global Competitiveness Index 2018, ranked 25th out of 140 countries World Economic Forum: Global Competitiveness Index 2019, ranked 27th out of 141 countries World Economic Forum: Global Enabling Trade Report 2008, ranked 29th out of 118 countries World Economic Forum: Global Enabling Trade Report 2009, ranked 28th out of 121 countries World Economic Forum: Global Enabling Trade Report 2010, ranked 30th out of 125 countries World Economic Forum: Global Enabling Trade Report 2014, ranked 25th out of 138 countries World Economic Forum: Global Enabling Trade Report 2016, ranked 37th out of 136 countries Educational Coursera Global Skills Index (GSI) 2019, ranked 46th, 47th and 42nd out of 60 countries in Business, Technology and Data Science fields respectively EF English Proficiency Index 2011, ranked 9th out of 44 countries EF English Proficiency Index 2012, ranked 13th out of 52 countries EF English Proficiency Index 2013, ranked 11th out of 60 countries EF English Proficiency Index 2014, ranked 12th out of 63 countries EF English Proficiency Index 2015, ranked 14th out of 70 countries EF English Proficiency Index 2016, ranked 12th out of 72 countries EF English Proficiency Index 2017, ranked 13th out of 80 countries EF English Proficiency Index 2018, ranked 22nd out of 88 countries EF English Proficiency Index 2019, ranked 26th out of 100 countries EF English Proficiency Index 2020, ranked 30th out of 100 countries EF English Proficiency Index 2021, ranked 28th out of 112 countries HSBC Expat Explorer Survey 2015, ranked 20th out of 38 countries HSBC Expat Explorer Survey 2016, ranked 28th out of 45 countries HSBC Expat Explorer Survey 2017, ranked 25th out of 46 countries HSBC Expat Explorer Survey 2018, ranked 14th out of 29 countries HSBC Expat Explorer Survey 2019, ranked 16th out of 33 countries HSBC Expat Explorer Survey 2020, ranked 16th out of 40 countries IMD World Talent Report 2014, ranked 5th out of 60 countries IMD World Talent Ranking 2015, ranked 15th out of 61 countries IMD World Talent Ranking 2016, ranked 19th out of 61 countries IMD World Talent Ranking 
2017, ranked 28th out of 63 countries IMD World Talent Ranking 2018, ranked 22nd out of 63 countries IMD World Talent Ranking 2019, ranked 22nd out of 63 countries IMD World Talent Ranking 2020, ranked 25th out of 63 countries IMD World Talent Ranking 2021, ranked 28th out of 64 countries InterNations: Expat Insider 2021, ranked 4th out of 59 countries INSEAD Global Talent Competitiveness Index 2013, ranked 37th out of 103 countries INSEAD Global Talent Competitiveness Index 2014, ranked 35th out of 93 countries INSEAD Global Talent Competitiveness Index 2015, ranked 30th out of 109 countries INSEAD Global Talent Competitiveness Index 2017, ranked 28th out of 118 countries INSEAD Global Talent Competitiveness Index 2018, ranked 27th out of 119 countries INSEAD Global Talent Competitiveness Index 2019, ranked 27th out of 125 countries INSEAD Global Talent Competitiveness Index 2020, ranked 26th out of 132 countries INSEAD Global Talent Competitiveness Index 2021, ranked 34th out of 134 countries MasterCard: Financial Literacy Index 2014, ranked 5th out of 16 Asia Pacific countries. MasterCard: Financial Literacy Index 2015, ranked 6th out of 17 Asia Pacific countries. OECD: Programme for International Student Assessment 2009, ranked 57th, 53rd and 55th out of 74 countries in mathematics, science and reading respectively. OECD: Programme for International Student Assessment 2012, ranked 52nd, 53rd and 59th out of 65 countries in mathematics, science and reading respectively. OECD: Programme for International Student Assessment 2015, ranked 44th, 46th and 49th out of 72 countries in mathematics, science and reading respectively. Universitas 21: U21 Ranking of National Higher Education Systems 2012, ranked 36th out of 50 countries Universitas 21: U21 Ranking of National Higher Education Systems 2014, ranked 28th out of 50 countries Universitas 21: U21 Ranking of National Higher Education Systems 2016, ranked 27th out of 50 countries Universitas 21: U21 Ranking of National Higher Education Systems 2017, ranked 25th out of 50 countries Universitas 21: U21 Ranking of National Higher Education Systems 2018, ranked 26th out of 50 countries Universitas 21: U21 Ranking of National Higher Education Systems 2019, ranked 28th out of 50 countries Universitas 21: U21 Ranking of National Higher Education Systems 2020, ranked 27th out of 50 countries Quacquarelli Symonds: Higher Education System Strength Rankings 2016, ranked 27th out of 50 countries Quacquarelli Symonds: Higher Education System Strength Rankings 2018, ranked 25th out of 50 countries Save the Children: State of the World's Mothers report 2006, ranked 52nd out of 110 countries UNESCO: Top 20 Countries For International Students 2014, ranked 12th in the world Environmental Value Champion: Asia Pacific Greenest Countries 2019, ranked 8th out of 13 Asia Pacific countries Yale University: Environmental Sustainability Index 2005, ranked 38th out of 146 countries Yale University: Environmental Performance Index 2006, ranked 9th out of 133 countries Yale University: Environmental Performance Index 2010, ranked 54th out of 163 countries Yale University: Environmental Performance Index 2012, ranked 25th out of 132 countries Yale University: Environmental Performance Index 2018, ranked 75th out of 180 countries General Agility Emerging Markets Logistics Index 2016, ranked 4th out of 45 countries Arcadis: Global Infrastructure Investment Index (GIII) 2012, ranked 7th out of 41 countries Arcadis: Global Infrastructure Investment Index (GIII) 2014, 
ranked 7th out of 41 countries Arcadis: Global Infrastructure Investment Index (GIII) 2016, ranked 5th out of 41 countries Arcadis: International Construction Costs 2015, Malaysia ranked 38th out of 42 countries A.T. Kearney / Foreign Policy Magazine: Globalization Index 2006, ranked 19th out of 62 countries A.T. Kearney Global Services Location Index (GSLI) 2011, ranked 3rd out of 50 countries A.T. Kearney Global Services Location Index (GSLI) 2014, ranked 3rd out of 50 countries A.T. Kearney Global Services Location Index (GSLI) 2016, ranked 3rd out of 50 countries A.T. Kearney Global Services Location Index (GSLI) 2017, ranked 3rd out of 55 countries A.T. Kearney Global Services Location Index (GSLI) 2019, ranked 3rd out of 50 countries A.T. Kearney Global Services Location Index (GSLI) 2021, ranked 3rd out of 60 countries Bloomberg Innovation Index 2015, ranked 27th out of 50 countries Bloomberg Innovation Index 2019, ranked 26th out of 60 countries Bloomberg Innovation Index 2020, ranked 27th out of 60 countries Bloomberg Innovation Index 2021, ranked 29th out of 60 countries Credit Suisse Research Institute (CSRI): The CS Family 1000 Report 2017, ranked 7th in the world for family-owned companies EY Capital Confidence Barometer 2015, ranked 5th in the world for investment destinations EY Capital Confidence Barometer 2016, ranked 5th in the world for South East Asia region ETH Zurich: KOF Index of Globalisation 2012, ranked 29th out of 208 countries ETH Zurich: KOF Index of Globalisation 2013, ranked 27th out of 207 countries ETH Zurich: KOF Index of Globalisation 2014, ranked 24th out of 207 countries ETH Zurich: KOF Index of Globalisation 2015, ranked 26th out of 207 countries ETH Zurich: KOF Index of Globalisation 2016, ranked 25th out of 207 countries ETH Zurich: KOF Index of Globalisation 2017, ranked 31st out of 207 countries ETH Zurich: KOF Index of Globalisation 2018, ranked 28th out of 209 countries ETH Zurich: KOF Index of Globalisation 2019, ranked 26th out of 203 countries GoBankingRates: 50 Cheapest Countries To Retire To 2021, ranked 1st out of 50 countries Henley & Partners Passport Index 2013, ranked 14th out of 103 passport ranks Henley & Partners Passport Index 2014, ranked 8th out of 94 passport ranks Henley & Partners Passport Index 2015, ranked 9th out of 106 passport ranks Henley & Partners Passport Index 2016, ranked 12th out of 104 passport ranks Henley & Partners Passport Index 2017, ranked 13th out of 104 passport ranks Henley & Partners Passport Index 2018, ranked 10th out of 106 passport ranks Henley & Partners Passport Index 2019, ranked 13th out of 108 passport ranks Henley & Partners Passport Index 2020, ranked 13th out of 107 passport ranks Henley & Partners Passport Index 2021, ranked 13th out of 116 passport ranks Henley & Partners Passport Index 2022, ranked 12th out of 111 passport ranks INSEAD: Global Innovation Index 2007, ranked 26th out of 107 countries INSEAD & CII: Global Innovation Index 2008, ranked 25th out of 130 countries INSEAD & CII: Global Innovation Index 2009, ranked 28th out of 132 countries INSEAD: Global Innovation Index 2011, ranked 31st out of 125 countries INSEAD & WIPO: Global Innovation Index 2012, ranked 32nd out of 141 countries INSEAD, Cornell University & WIPO: Global Innovation Index 2013, ranked 32nd out of 142 countries INSEAD, Cornell University & WIPO: Global Innovation Index 2014, ranked 33rd out of 143 countries INSEAD, Cornell University & WIPO: Global Innovation Index 2015, ranked 32nd out of 141 countries 
INSEAD, Cornell University & WIPO: Global Innovation Index 2016, ranked 35th out of 128 countries INSEAD, Cornell University & WIPO: Global Innovation Index 2017, ranked 37th out of 127 countries INSEAD, Cornell University & WIPO: Global Innovation Index 2018, ranked 35th out of 126 countries INSEAD, Cornell University & WIPO: Global Innovation Index 2019, ranked 35th out of 129 countries INSEAD, Cornell University & WIPO: Global Innovation Index 2020, ranked 33rd out of 131 countries INSEAD, Cornell University & WIPO: Global Innovation Index 2021, ranked 36th out of 132 countries WIPO: Global Innovation Index 2021, ranked 36th out of 132 countries International Living The World's Best Places To Retire In 2017 / Annual Global Retirement Index 2017, ranked 6th out of 24 countries International Living The World's Best Places To Retire In 2018 / Annual Global Retirement Index 2018, ranked 5th out of 24 countries International Living The World's Best Places To Retire In 2019 / Annual Global Retirement Index 2019, ranked 5th out of 15 countries International Living The World's Best Places To Retire In 2022 / Annual Global Retirement Index 2022, ranked 15th out of 25 countries Knight Frank: Global House Price Index Q4 2016 ranked 27th out of 55 countries Legatum Prosperity Index 2010, ranked 43rd out of 110 countries Legatum Prosperity Index 2011, ranked 43rd out of 110 countries Legatum Prosperity Index 2012, ranked 45th out of 142 countries Legatum Prosperity Index 2013, ranked 44th out of 142 countries Legatum Prosperity Index 2014, ranked 45th out of 142 countries Legatum Prosperity Index 2015, ranked 44th out of 142 countries Legatum Prosperity Index 2016, ranked 38th out of 149 countries Legatum Prosperity Index 2017, ranked 42nd out of 149 countries Legatum Prosperity Index 2018, ranked 44th out of 149 countries Legatum Prosperity Index 2019, ranked 41st out of 167 countries LinkedIn Opportunity Index 2018, ranked 5th out of 9 countries LinkedIn Opportunity Index 2020, ranked 9th out of 22 countries MasterCard-Crescent Rating: Global Muslim Travel Index (GMTI) 2015, ranked 1st out of 100 countries MasterCard-Crescent Rating: Global Muslim Travel Index (GMTI) 2016, ranked 1st out of 130 countries MasterCard-Crescent Rating: Global Muslim Travel Index (GMTI) 2017, ranked 1st out of 130 countries MasterCard-Crescent Rating: Global Muslim Travel Index (GMTI) 2018, ranked 1st out of 130 countries MasterCard-Crescent Rating: Global Muslim Travel Index (GMTI) 2019, ranked 1st out of 129 countries MasterCard-Crescent Rating: Global Muslim Travel Index (GMTI) 2021, ranked 1st out of 140 countries MasterCard Index of Women Entrepreneurs (MIWE) 2017, ranked 28th out of 56 countries MasterCard Index of Women Entrepreneurs (MIWE) 2018, ranked 24th out of 58 countries MasterCard Index of Women Entrepreneurs (MIWE) 2019, ranked 21st out of 58 countries Newsweek World's Best Countries 2010, ranked 37th out of 100 countries Passport Index 2016, ranked 6th out of 98 passport power ranks Passport Index 2017, ranked 4th out of 93 passport power ranks Passport Index 2018, ranked 6th out of 96 passport power ranks Passport Index 2020, ranked 8th out of 90 passport power ranks United Nations: Human Development Index 2006, ranked 61st out of 177 countries United Nations: Human Development Index 2007/2008, ranked 63rd out of 177 countries United Nations: Human Development Index 2009, ranked 66th out of 182 countries United Nations: Human Development Index 2015, ranked 62nd out of 188 countries United Nations: 
Human Development Index 2016, ranked 59th out of 188 countries United Nations: Human Development Index 2017, ranked 57th out of 189 countries United Nations: Human Development Index 2018, ranked 57th out of 189 countries U.S. Chamber International IP Index 2017, ranked 19th out of 45 countries U.S. Chamber International IP Index 2019, ranked 24th out of 50 countries UNWTO: Tourism Highlights 2017 Edition, ranked 11th in the world U.S. News & World Report Best Countries 2016, ranked 28th out of 60 countries U.S. News & World Report Best Countries 2017, ranked 35th out of 80 countries U.S. News & World Report Best Countries 2018, ranked 34th out of 80 countries U.S. News & World Report Best Countries 2019, ranked 38th out of 80 countries U.S. News & World Report Best Countries 2020, ranked 32nd out of 73 countries U.S. News & World Report Best Countries To Invest In 2018, ranked 4th out of 80 countries U.S. News & World Report Best Countries To Invest In 2019, ranked 13th out of 29 countries U.S. News & World Report Best Countries To Invest In 2020, ranked 12th out of 25 countries World Economic Forum Human Capital Report 2016, ranked 42nd out of 130 countries World Economic Forum Human Capital Report 2017, ranked 33rd out of 130 countries World Economic Forum Travel & Tourism Competitiveness Index 2007, ranked 31st out of 124 countries World Economic Forum Travel & Tourism Competitiveness Index 2008, ranked 32nd out of 130 countries World Economic Forum Travel & Tourism Competitiveness Index 2009, ranked 32nd out of 133 countries World Economic Forum Travel & Tourism Competitiveness Index 2011, ranked 35th out of 139 countries World Economic Forum Travel & Tourism Competitiveness Index 2013, ranked 34th out of 140 countries World Economic Forum Travel & Tourism Competitiveness Index 2015, ranked 25th out of 141 countries World Economic Forum Travel & Tourism Competitiveness Index 2017, ranked 26th out of 136 countries World Economic Forum Travel & Tourism Competitiveness Index 2019, ranked 29th out of 140 countries Healthcare Bloomberg: Covid Resilience Ranking 2021, ranked 51st out of 53 countries Bloomberg: Health Care Efficiency Index 2017, ranked 22nd out of 55 countries Bloomberg: Health Care Efficiency Index 2018, ranked 29th out of 56 countries Economist Intelligence Unit & Johns Hopkins Center for Health Security & Nuclear Threat Initiative: Global Health Security Index 2019, ranked 18th out of 195 countries Economist Intelligence Unit & Johns Hopkins Center for Health Security & Nuclear Threat Initiative: Global Health Security Index 2021, ranked 27th out of 195 countries Indigo Wellness Index 2019, ranked 1st out of 24 countries International Living Global Retirement Index 2017, ranked 21st out of 151 countries International Living Global Retirement Index 2019, ranked 1st out of 25 countries LetterOne: Global Wellness Index 2019, ranked 22nd out of 150 countries Lowy Institute: Covid Performance Index 2021, ranked 17th out of 102 countries Nikkei: COVID-19 Recovery Index 2022, ranked 13th out of 122 countries PEMANDU Associates: Global COVID-19 Index (GCI) 2022, ranked 13th out of 180 countries Military Economist Intelligence Unit: Global Peace Index 2007, ranked 37th out of 121 countries Global Firepower: Military Strength Ranking 2017, ranked 33rd out of 133 countries Global Firepower: Military Strength Ranking 2018, ranked 44th out of 136 countries Global Firepower: Military Strength Ranking 2020, ranked 44th out of 138 countries Institute for Economics and Peace: Global Peace 
Index 2010, ranked 22nd out of 149 countries Institute for Economics and Peace: Global Peace Index 2011, ranked 19th out of 153 countries Institute for Economics and Peace: Global Peace Index 2012, ranked 20th out of 158 countries Institute for Economics and Peace: Global Peace Index 2013, ranked 29th out of 162 countries Institute for Economics and Peace: Global Peace Index 2014, ranked 33rd out of 162 countries Institute for Economics and Peace: Global Peace Index 2015, ranked 28th out of 162 countries Institute for Economics and Peace: Global Peace Index 2016, ranked 30th out of 163 countries Institute for Economics and Peace: Global Peace Index 2017 , ranked 29th out of 163 countries Institute for Economics and Peace: Global Peace Index 2018 , ranked 25th out of 163 countries Institute for Economics and Peace: Global Peace Index 2019 , ranked 16th out of 163 countries Institute for Economics and Peace: Global Peace Index 2020, ranked 20th out of 163 countries Lowy Institute: Asia Power Index 2018, ranked 9th out of 25 Asian countries Lowy Institute: Asia Power Index 2019, ranked 9th out of 25 Asian countries Lowy Institute: Asia Power Index 2020, ranked 10th out of 26 Asian countries Political Economist Intelligence Unit: Democracy Index 2011, ranked 71st out of 167 countries Economist Intelligence Unit: Democracy Index 2012, ranked 64th out of 167 countries Economist Intelligence Unit: Democracy Index 2013, ranked 64th out of 167 countries Economist Intelligence Unit: Democracy Index 2014, ranked 65th out of 167 countries Economist Intelligence Unit: Democracy Index 2015, ranked 68th out of 167 countries Economist Intelligence Unit: Democracy Index 2016, ranked 65th out of 167 countries Economist Intelligence Unit: Democracy Index 2017, ranked 59th out of 167 countries Economist Intelligence Unit: Democracy Index 2018, ranked 52nd out of 167 countries Economist Intelligence Unit: Democracy Index 2019, ranked 43rd out of 167 countries Economist Intelligence Unit: Democracy Index 2020, ranked 39th out of 167 countries Economist Intelligence Unit: Democracy Index 2021, ranked 39th out of 167 countries Freedom House: Freedom in the World 2010, scored 4 "Partly Free" out of 7 Freedom House: Freedom in the World 2011, scored 4 "Partly Free" out of 7 Freedom House: Freedom in the World 2012, scored 4 "Partly Free" out of 7 Freedom House: Freedom in the World 2013, scored 4 "Partly Free" out of 7 Freedom House: Freedom in the World 2014, scored 4 "Partly Free" out of 7 Freedom House: Freedom in the World 2015, scored 4 "Partly Free" out of 7 Freedom House: Freedom in the World 2016, scored 4 "Partly Free" out of 7 Freedom House: Freedom in the World 2017, scored 4 "Partly Free" out of 7 Freedom House: Freedom in the World 2018, scored 4 "Partly Free" out of 7 Freedom House: Freedom in the World 2019, scored 52 "Partly Free" out of 100 Freedom House: Freedom in the World 2020, scored 52 "Partly Free" out of 100 Freedom House: Freedom in the World 2021, scored 51 "Partly Free" out of 100 Gallup: Global Law and Order 2019, ranked 18th out of 42 score ranking Gallup: Global Law and Order 2020, ranked 13th out of 41 score ranking Reporters Without Borders: World Press Freedom Index 2010, ranked 141st out of 178 countries Reporters Without Borders: World Press Freedom Index 2014, ranked 147th out of 180 countries Reporters Without Borders: World Press Freedom Index 2015, ranked 147th out of 180 countries Reporters Without Borders: World Press Freedom Index 2016, ranked 146th out of 180 countries 
Reporters Without Borders: World Press Freedom Index 2017, ranked 144th out of 180 countries Reporters Without Borders: World Press Freedom Index 2018, ranked 145th out of 180 countries Reporters Without Borders: World Press Freedom Index 2019, ranked 123rd out of 180 countries Reporters Without Borders: World Press Freedom Index 2020, ranked 101st out of 180 countries Reporters Without Borders: World Press Freedom Index 2021, ranked 119th out of 180 countries TRACE Bribery Risk Matrix 2014 ranked 62nd out of 197 countries TRACE Bribery Risk Matrix 2016 ranked 47th out of 199 countries TRACE Bribery Risk Matrix 2017 ranked 79th out of 200 countries TRACE Bribery Risk Matrix 2018 ranked 63rd out of 200 countries TRACE Bribery Risk Matrix 2019 ranked 58th out of 200 countries TRACE Bribery Risk Matrix 2020 ranked 51st out of 194 countries TRACE Bribery Risk Matrix 2021 ranked 65th out of 194 countries Transparency International: Corruption Perceptions Index 2006 ranked 43rd out of 163 countries Transparency International: Corruption Perceptions Index 2007 ranked 43rd out of 179 countries Transparency International: Corruption Perceptions Index 2008 ranked 47th out of 180 countries Transparency International: Corruption Perceptions Index 2009 ranked 56th out of 180 countries Transparency International: Corruption Perceptions Index 2010 ranked 56th out of 178 countries Transparency International: Corruption Perceptions Index 2011 ranked 60th out of 182 countries Transparency International: Corruption Perceptions Index 2012 ranked 54th out of 174 countries Transparency International: Corruption Perceptions Index 2013 ranked 53rd out of 175 countries Transparency International: Corruption Perceptions Index 2014 ranked 50th out of 174 countries Transparency International: Corruption Perceptions Index 2015 ranked 54th out of 167 countries Transparency International: Corruption Perceptions Index 2016 ranked 55th out of 176 countries Transparency International: Corruption Perceptions Index 2017 ranked 62nd out of 180 countries Transparency International: Corruption Perceptions Index 2018 ranked 61st out of 180 countries Transparency International: Corruption Perceptions Index 2019 ranked 51st out of 180 countries Transparency International: Corruption Perceptions Index 2020 ranked 57th out of 180 countries Transparency International: Corruption Perceptions Index 2021 ranked 62nd out of 180 countries World Justice Project: Rule of Law Index 2016, ranked 56th out of 113 countries World Justice Project: Rule of Law Index 2017, ranked 53rd out of 113 countries World Justice Project: Rule of Law Index 2019, ranked 51st out of 126 countries World Justice Project: Rule of Law Index 2020, ranked 47th out of 128 countries World Justice Project: Rule of Law Index 2021, ranked 54th out of 139 countries Social Big Travel: Top 50 Sexiest Accents In The World 2019, ranked 39th in the world Big Travel: The 50 Sexiest Nationalities In The World 2019, ranked 25th in the world Economist Intelligence Unit: Quality-of-life index 2005, ranked 36th out of 108 countries Economist Intelligence Unit: Where-to-be-born Index 1988, ranked 37th out of 80 countries Economist Intelligence Unit: Where-to-be-born Index 2013, ranked 36th out of 80 countries Expedia: 17th Annual Survey on Vacation Deprivation 2017, ranked 3rd out of 30 countries Gallup: Potential Net Migration Index 2014, ranked 4th out of 28 Asian countries Ipsos: Index of Ignorance 2016, ranked 36th out of 40 countries Ipsos: Misperceptions Index 2017, ranked 15th 
out of 38 countries Ipsos: Misperceptions Index 2018, ranked 4th out of 37 countries Mercer: Melbourne Mercer Global Pension Index 2018, ranked C rating with an overall score of 58.5 Mercer: Melbourne Mercer Global Pension Index 2019, ranked C+ rating with an overall score of 60.6 Mercer: Melbourne Mercer Global Pension Index 2020, ranked C+ rating with an overall score of 60.1 Mercer: Melbourne Mercer Global Pension Index 2021, ranked 23rd in the world with an overall score of 59.6 Nature: Where People Walk The Most 2017, ranked 44th out of 46 countries Social Progress Imperative: Social Progress Index 2014, ranked 45th out of 132 countries Social Progress Imperative: Social Progress Index 2015, ranked 46th out of 133 countries Social Progress Imperative: Social Progress Index 2016, ranked 50th out of 133 countries Social Progress Imperative: Social Progress Index 2017, ranked 50th out of 128 countries The Economist: Global Normalcy Index 2021, ranked 50th out of 50 countries Sustainable Development Solutions Network's World Happiness Report 2013, ranked 56th out of 156 countries United Nations Sustainable Development Solutions Network's World Happiness Report 2015, ranked 61st out of 158 countries United Nations Sustainable Development Solutions Network's World Happiness Report 2016, ranked 47th out of 157 countries United Nations Sustainable Development Solutions Network's World Happiness Report 2017, ranked 42nd out of 155 countries United Nations Sustainable Development Solutions Network's World Happiness Report 2018, ranked 35th out of 156 countries United Nations Sustainable Development Solutions Network's World Happiness Report 2019, ranked 80th out of 156 countries United Nations Sustainable Development Solutions Network's World Happiness Report 2020, ranked 82nd out of 153 countries U.S. News & World Report: Best Heritage Country 2020, ranked 32nd out of 73 countries U.S. 
News & World Report: Best Heritage Country 2021, ranked 34th out of 78 countries World Economic Forum: Global Social Mobility Index 2020, ranked 43rd out of 82 countries Demographics Birth rate 2011: ranked 82nd out of 221 countries Birth rate 2016: ranked 85th out of 226 countries Birth rate 2017: ranked 85th out of 226 countries Death rate 2011: ranked 186th out of 223 countries Death rate 2016: ranked 186th out of 226 countries Death rate 2017: ranked 192nd out of 226 countries Fertility rate 2011: ranked 76th out of 222 countries Fertility rate 2016: ranked 77th out of 224 countries Fertility rate 2017: ranked 80th out of 224 countries Life expectancy 2011: ranked 111th out of 221 countries Life expectancy 2016: ranked 110th out of 224 countries Life expectancy 2017: ranked 109th out of 224 countries Technological Economist Intelligence Unit: E-readiness 2007, ranked 43rd out of 69 countries Economist Intelligence Unit: E-readiness 2008, ranked 34th out of 70 countries Economist Intelligence Unit: E-readiness 2009, ranked 38th out of 70 countries Economist Intelligence Unit: E-readiness 2010, ranked 36th out of 70 countries Economist Intelligence Unit: Government E-Payments Adoption 2011, ranked 29th out of 62 countries Economist Intelligence Unit: Government E-Payments Adoption 2018, ranked 19th out of 73 countries Huawei: Global Connectivity Index (GCI) 2015, ranked 29th out of 50 countries Huawei: Global Connectivity Index (GCI) 2016, ranked 25th out of 50 countries Huawei: Global Connectivity Index (GCI) 2017, ranked 24th out of 50 countries Huawei: Global Connectivity Index (GCI) 2019, ranked 30th out of 79 countries Huawei: Global Connectivity Index (GCI) 2020, ranked 34th out of 79 countries International Telecommunication Union: Global Cybersecurity Index (GCI) 2014, ranked 3rd out of 29 overall index scores International Telecommunication Union: Global Cybersecurity Index (GCI) 2015, ranked 3rd out of 29 overall index scores International Telecommunication Union: Global Cybersecurity Index (GCI) 2017, ranked 3rd out of 165 countries International Telecommunication Union: Global Cybersecurity Index (GCI) 2018, ranked 8th out of 175 countries International Telecommunication Union: Global Cybersecurity Index (GCI) 2020, ranked 5th out of 194 countries International Telecommunication Union: ICT Development Index 2010, ranked 61st out of 166 countries International Telecommunication Union: ICT Development Index 2015, ranked 64th out of 167 countries International Telecommunication Union: ICT Development Index 2016, ranked 62nd out of 175 countries International Telecommunication Union: ICT Development Index 2017, ranked 63rd out of 176 countries MasterCard & Tufts University: Digital Intelligence Index 2020, ranked as a Stand Out nation: digitally advanced, exhibiting high momentum, and a leader in driving innovation and building on existing advantages NationMaster: Technological Achievement 2001, ranked 28th out of 68 countries Open Knowledge Foundation: Global Open Data Index (GODI) 2018, ranked 87th out of 94 countries Opensignal: The State of Mobile Games Experience in the 5G Era 2020, ranked 50th out of 100 countries Oxford Insights: Government Artificial Intelligence Readiness Index 2019, ranked 22nd out of 194 countries ScienceDirect: Global Smartphone Addiction Research, ranked 3rd out of 24 countries Speedtest Global Index 2018: Fixed Broadband, ranked 29th out of 130 countries Speedtest Global Index 2020: Fixed Broadband, ranked 36th out of 176 countries Speedtest Global 
Index 2018: Mobile, ranked 75th out of 124 countries Speedtest Global Index 2020: Mobile, ranked 86th out of 141 countries Surfshark: Digital Quality of Life (DQL) Index 2021, ranked 31st out of 110 countries Tortoise Media: Global AI Index 2019, ranked 40th out of 54 countries Tortoise Media: Global AI Index 2021, ranked 43rd out of 62 countries Truecaller Global Spam Report 2021, ranked 32nd in the world UN: E-Participation Index 2003, ranked 67th out of 151 countries UN: E-Participation Index 2004, ranked 62nd out of 151 countries UN: E-Participation Index 2005, ranked 52nd out of 151 countries UN: E-Participation Index 2008, ranked 41st out of 170 countries UN: E-Participation Index 2010, ranked 12th out of 180 countries UN: E-Participation Index 2012, ranked 31st out of 161 countries UN: E-Participation Index 2014, ranked 59th out of 192 countries UN: E-Participation Index 2016, ranked 47th out of 191 countries UN: E-Participation Index 2018, ranked 32nd out of 193 countries UN: E-Participation Index 2020, ranked 29th out of 193 countries UN: E-Government Development Index (EDGI) 2003, ranked 43rd out of 174 countries UN: E-Government Development Index (EDGI) 2004, ranked 42nd out of 179 countries UN: E-Government Development Index (EDGI) 2005, ranked 43rd out of 180 countries UN: E-Government Development Index (EDGI) 2008, ranked 34th out of 183 countries UN: E-Government Development Index (EDGI) 2010, ranked 32nd out of 184 countries UN: E-Government Development Index (EDGI) 2012, ranked 40th out of 191 countries UN: E-Government Development Index (EDGI) 2014, ranked 52nd out of 193 countries UN: E-Government Development Index (EDGI) 2016, ranked 60th out of 193 countries UN: E-Government Development Index (EDGI) 2018, ranked 48th out of 193 countries UN: E-Government Development Index (EDGI) 2020, ranked 47th out of 193 countries UNCTAD: B2C E-Commerce Index 2016, ranked 44th out of 137 countries UNCTAD: B2C E-Commerce Index 2017, ranked 38th out of 144 countries UNCTAD: B2C E-Commerce Index 2018, ranked 34th out of 151 countries UNCTAD: B2C E-Commerce Index 2019, ranked 34th out of 152 countries UNCTAD: Technology and Innovation Report Readiness For Frontier Technologies Index 2021, ranked 31st out of 158 countries World Economic Forum: Global Information Technology Report The Networked Readiness Index 2016, ranked 31st out of 139 countries See also Lists of countries Lists by country List of international rankings International rankings of Penang References Malaysia
18545292
https://en.wikipedia.org/wiki/GitHub
GitHub
GitHub, Inc. is a provider of Internet hosting for software development and version control using Git. It offers the distributed version control and source code management (SCM) functionality of Git, plus its own features. It provides access control and several collaboration features such as bug tracking, feature requests, task management, continuous integration and wikis for every project. Headquartered in California, it has been a subsidiary of Microsoft since 2018. It is commonly used to host open-source projects. As of November 2021, GitHub reports having over 73 million developers and more than 200 million repositories (including at least 28 million public repositories). It is the largest source code host . History GitHub.com Development of the GitHub.com platform began on October 19, 2007. The site was launched in April 2008 by Tom Preston-Werner, Chris Wanstrath, P. J. Hyett and Scott Chacon after it had been made available for a few months prior as a beta release. GitHub has an annual keynote called GitHub Universe. Organizational structure GitHub, Inc. was originally a flat organization with no middle managers; in other words, "everyone is a manager" (self-management). Employees could choose to work on projects that interested them (open allocation), but salaries were set by the chief executive. In 2014, GitHub, Inc. introduced a layer of middle management amid harassment claims made against senior management. Tom Preston-Werner resigned as CEO amid the scandal. Finance GitHub.com was a bootstrapped start-up business, which in its first years provided enough revenue to be funded solely by its three founders and start taking on employees. In July 2012, four years after the company was founded, Andreessen Horowitz invested $100 million in venture capital. In July 2015 GitHub raised another $250 million of venture capital in a series B round. Investors were Sequoia Capital, Andreessen Horowitz, Thrive Capital and other venture capital funds. As of 2018, GitHub was estimated to be generating $200–300 million in Annual Recurring Revenue. The GitHub service was developed by Chris Wanstrath, P. J. Hyett, Tom Preston-Werner and Scott Chacon using Ruby on Rails, and started in February 2008. The company, GitHub, Inc., has existed since 2007 and is located in San Francisco. On February 24, 2009, GitHub announced that within the first year of being online, GitHub had accumulated over 46,000 public repositories, 17,000 of which were formed in the previous month. At that time, about 6,200 repositories had been forked at least once and 4,600 had been merged. That same year, the site was used by over 100,000 users, according to GitHub, and had grown to host 90,000 unique public repositories, 12,000 having been forked at least once, for a total of 135,000 repositories. In 2010, GitHub was hosting 1 million repositories. A year later, this number doubled. ReadWriteWeb reported that GitHub had surpassed SourceForge and Google Code in total number of commits for the period of January to May 2011. On January 16, 2013, GitHub passed the 3 million users mark and was then hosting more than 5 million repositories. By the end of the year, the number of repositories was twice as great, reaching 10 million repositories. In 2012, GitHub raised $100 million in funding from Andreessen Horowitz with $750 million valuation. On July 29, 2015, GitHub stated it had raised $250 million in funding in a round led by Sequoia Capital. 
Other investors of that round included Andreessen Horowitz, Thrive Capital, and IVP (Institutional Venture Partners). The round valued the company at approximately $2 billion. In 2015, GitHub opened an office in Japan, its first outside of the U.S. In 2016, GitHub was ranked No. 14 on the Forbes Cloud 100 list. It has not been featured on the 2018, 2019 and 2020 lists. On February 28, 2018, GitHub fell victim to the third largest distributed denial-of-service (DDoS) attack in history, with incoming traffic reaching a peak of about 1.35 terabits per second. On June 19, 2018, GitHub expanded its GitHub Education by offering free education bundles to all schools. Acquisition by Microsoft From 2012, Microsoft became a significant user of GitHub, using it to host open-source projects and development tools such as .NET Core, Chakra Core, MSBuild, PowerShell, PowerToys, Visual Studio Code, Windows Calculator, Windows Terminal and the bulk of its product documentation (now to be found on Microsoft Docs). On June 4, 2018, Microsoft announced its intent to acquire GitHub for US$7.5 billion. The deal closed on October 26, 2018. GitHub continued to operate independently as a community, platform and business. Under Microsoft, the service was led by Xamarin's Nat Friedman, reporting to Scott Guthrie, executive vice president of Microsoft Cloud and AI. GitHub's CEO, Chris Wanstrath, was retained as a "technical fellow," also reporting to Guthrie. There have been concerns from developers Kyle Simpson, JavaScript trainer and author, and Rafael Laguna, CEO at Open-Xchange over Microsoft's purchase, citing uneasiness over Microsoft's handling of previous acquisitions, such as Nokia's mobile business or Skype. This acquisition was in line with Microsoft's business strategy under CEO Satya Nadella, which has seen a larger focus on the cloud computing services, alongside development of and contributions to open-source software. Harvard Business Review argued that Microsoft was intending to acquire GitHub to get access to its user base, so it can be used as a loss leader to encourage use of its other development products and services. Concerns over the sale bolstered interest in competitors: Bitbucket (owned by Atlassian), GitLab (a commercial open source product that also runs a hosted service version) and SourceForge (owned by BIZX, LLC) reported that they had seen spikes in new users intending to migrate projects from GitHub to their respective services. In September 2019, GitHub acquired Semmle, a code analysis tool. In February 2020, GitHub launched in India under the name GitHub India Private Limited. In March 2020, GitHub announced that they were acquiring npm, a JavaScript packaging vendor, for an undisclosed sum of money. The deal was closed on 15 April 2020. In early July 2020, the GitHub Archive Program was established, to archive its open source code in perpetuity. Mascot GitHub's mascot is an anthropomorphized "octocat" with five octopus-like arms. The character was created by graphic designer Simon Oxley as clip art to sell on iStock, a website that enables designers to market royalty-free digital images. GitHub became interested in Oxley's work after Twitter selected a bird that he designed for their own logo. The illustration GitHub chose was a character that Oxley had named Octopuss. Since GitHub wanted Octopuss for their logo (a use that the iStock license disallows), they negotiated with Oxley to buy exclusive rights to the image. 
GitHub renamed Octopuss to Octocat, and trademarked the character along with the new name. Later, GitHub hired illustrator Cameron McEfee to adapt Octocat for different purposes on the website and promotional materials; McEfee and various GitHub users have since created hundreds of variations of the character, which are available on The Octodex. Services Projects on GitHub.com can be accessed and managed using the standard Git command-line interface; all standard Git commands work with it. GitHub.com also allows users to browse public repositories on the site. Multiple desktop clients and Git plugins are also available. The site provides social networking-like functions such as feeds, followers, wikis (using wiki software called Gollum) and a social network graph to display how developers work on their versions ("forks") of a repository and what fork (and branch within that fork) is newest. Anyone can browse and download public repositories but only registered users can contribute content to repositories. With a registered user account, users are able to have discussions, manage repositories, submit contributions to others' repositories, and review changes to code. GitHub.com began offering limited private repositories at no cost in January 2019 (limited to three contributors per project). Previously, only public repositories were free. On April 14, 2020, GitHub made "all of the core GitHub features" free for everyone, including "private repositories with unlimited collaborators." The fundamental software that underpins GitHub is Git itself, written by Linus Torvalds, creator of Linux. The additional software that provides the GitHub user interface was written using Ruby on Rails and Erlang by GitHub, Inc. developers Wanstrath, Hyett, and Preston-Werner. Scope The main purpose of GitHub.com is to facilitate the version control and issue tracking aspects of software development. Labels, milestones, responsibility assignment, and a search engine are available for issue tracking. For version control, Git (and by extension GitHub.com) allows pull requests to propose changes to the source code. Users with the ability to review the proposed changes can see a diff of the requested changes and approve them. In Git terminology, this action is called "committing" and one instance of it is a "commit." A history of all commits is kept and can be viewed at a later time. In addition, GitHub supports the following formats and features: Documentation, including automatically rendered README files in a variety of Markdown-like file formats (see ) Wikis GitHub Actions, which allows building continuous integration and continuous deployment pipelines for testing, releasing and deploying software without the use of third-party websites/platforms Graphs: pulse, contributors, commits, code frequency, punch card, network, members Integrations Directory Email notifications Discussions Option to subscribe someone to notifications by @ mentioning them. Emojis Nested task-lists within files Visualization of geospatial data 3D render files that can be previewed using a new integrated STL file viewer that displays the files on a "3D canvas." The viewer is powered by WebGL and Three.js. Photoshop's native PSD format can be previewed and compared to previous versions of the same file. PDF document viewer Security Alerts of known Common Vulnerabilities and Exposures in different packages GitHub's Terms of Service do not require public software projects hosted on GitHub to meet the Open Source Definition. 
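To make the command-line access described above concrete, the following sketch drives the standard Git workflow behind a typical contribution (clone, create a branch, commit, push, then open a pull request on GitHub.com) from Python. This is only an illustration of the workflow under stated assumptions, not GitHub-specific code: the repository URL, branch name, and commit message are hypothetical placeholders, the file edit is left as a comment, and Git must be installed with push access to the repository.

```python
# A minimal sketch of the standard Git command-line workflow that GitHub
# builds on: clone a repository, create a topic branch, commit a change, and
# push the branch so a pull request can be opened on GitHub.com.
# The repository URL and branch name are hypothetical placeholders.
import subprocess

REPO_URL = "https://github.com/example-user/example-repo.git"  # hypothetical
BRANCH = "fix-readme-typo"                                     # hypothetical

def git(*args, cwd=None):
    """Run one git command, raising an error if it fails."""
    subprocess.run(["git", *args], cwd=cwd, check=True)

git("clone", REPO_URL, "example-repo")             # copy the repository locally
git("checkout", "-b", BRANCH, cwd="example-repo")  # start a topic branch
# ... edit files inside example-repo/ here ...
git("add", "-A", cwd="example-repo")               # stage the changes
git("commit", "-m", "Fix typo in README", cwd="example-repo")
git("push", "origin", BRANCH, cwd="example-repo")  # publish the branch; a pull
# request can then be opened against the original repository on GitHub.com
```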
The terms of service state, "By setting your repositories to be viewed publicly, you agree to allow others to view and fork your repositories." GitHub Enterprise GitHub Enterprise is a self-managed version of GitHub.com with similar functionality. It can be run on an organization's own hardware or on a cloud provider, and it has been available since November 2011. In November 2020, source code for GitHub Enterprise Server was leaked online in apparent protest against DMCA takedown of youtube-dl. According to GitHub, the source code came from GitHub accidentally sharing the code with Enterprise customers themselves, not from an attack on GitHub servers. GitHub Pages GitHub Pages is a static web hosting service offered by GitHub since 2008 to GitHub users for hosting user blogs, project documentation, or even whole books created as a page. All GitHub Pages content is stored in a Git repository, either as files served to visitors verbatim or in Markdown format. GitHub is seamlessly integrated with Jekyll static web site and blog generator and GitHub continuous integration pipelines. Each time the content source is updated, Jekyll regenerates the website and automatically serves it via GitHub Pages infrastructure. As with the rest of GitHub, it includes both free and paid tiers of service, instead of being supported by web advertising. Web sites generated through this service are hosted either as subdomains of the github.io domain, or as custom domains bought through a third-party domain name registrar. When custom domain is set on a GitHub Pages repo a Let's Encrypt certificate for it is generated automatically. Once the certificate has been generated Enforce HTTPS can be set for the repository's website to transparently redirect all HTTP requests to HTTPS. Gist GitHub also operates a pastebin-style site called Gist, which is for code snippets, as opposed to GitHub proper, which is for larger projects. Tom Preston-Werner débuted the feature at a Ruby conference in 2008. Gist builds on the traditional simple concept of a pastebin by adding version control for code snippets, easy forking, and TLS encryption for private pastes. Because each "gist" is its own Git repository, multiple code snippets can be contained in a single page and they can be pushed and pulled using Git. Unregistered users were able to upload Gists until February 18, 2018, when uploading gists became available only to logged-in users, reportedly to mitigate spamming. Gists' URLs use hexadecimal IDs, and edits to gists are recorded in a revision history, which can show the text difference of thirty revisions per page with an option between a "split" and "unified" view. Like repositories, Gists can be forked, "starred", i.e. publicly bookmarked, and commented on. The count of revisions, stars, and forks is indicated on the gist page. Education program GitHub launched a new program called the GitHub Student Developer Pack to give students free access to popular development tools and services. GitHub partnered with Bitnami, Crowdflower, DigitalOcean, DNSimple, HackHands, Namecheap, Orchestrate, Screenhero, SendGrid, Stripe, Travis CI and Unreal Engine to launch the program. In 2016 GitHub announced the launch of the GitHub Campus Experts program to train and encourage students to grow technology communities at their universities. The Campus Experts program is open to university students of 18 years and older across the world. 
GitHub Campus Experts are one of the primary ways that GitHub funds student-oriented events and communities, Campus Experts are given access to training, funding, and additional resources to run events and grow their communities. To become a Campus Expert applicants must complete an online training course consisting of multiple modules designed to grow community leadership skills. GitHub Marketplace service GitHub also provides some software as a service ("SaaS") integrations for adding extra features to projects. Those services include: Waffle.io: Project management for software teams. Automatically see pull requests, automated builds, reviews, and deployments across all of your repositories in GitHub. Rollbar: Integrate with GitHub to provide real time debugging tools and full-stack exception reporting. It is compatible with all popular code languages, such as JavaScript, Python, .NET, Ruby, PHP, Node.js, Android, iOS, Go, Java, and C#. Codebeat: For automated code analysis specialized in web and mobile developers. The supported languages for this software are: Elixir, Go, Java, Swift, JavaScript, Python, Ruby, Kotlin, Objective-C, and TypeScript. Travis CI: To provide confidence for your apps while doing test and ship. Also gives full control over the build environment, to adapt it to the code. Supported languages: Go, Java, JavaScript, Objective-C, Python, PHP, Ruby, and Swift. GitLocalize: Developed for teams that are translating their content from one point to another. GitLocalize automatically syncs with your repository so you can keep your workflow on GitHub. It also keeps you updated on what needs to be translated. GitHub Sponsors GitHub Sponsors allows users to make monthly money donations to projects hosted on GitHub. The public beta was announced on May 23, 2019, and the project accepts wait list registrations. The Verge said that GitHub Sponsors "works exactly like Patreon" because "developers can offer various funding tiers that come with different perks, and they'll receive recurring payments from supporters who want to access them and encourage their work" except with "zero fees to use the program." Furthermore, GitHub offer incentives for early adopters during the first year: it pledges to cover payment processing costs, and match sponsorship payments up to $5,000 per developer. Furthermore, users still can use other similar services like Patreon and Open Collective and link to their own websites. GitHub Archive Program In July 2020, GitHub stored a February archive of the site in an abandoned mountain mine in Svalbard, Norway, part of the Arctic World Archive and not far from the Svalbard Global Seed Vault. The archive contained the code of all active public repositories, as well as that of dormant, but significant public repositories. The 21TB of data was stored on piqlFilm archival film reels as matrix (2D) barcode (Boxing barcode), and is expected to last 500–1,000 years. The GitHub Archive Program is also working with partners on Project Silica, in an attempt to store all public repositories for 10,000 years. It aims to write archives into the molecular structure of quartz glass platters, using a high-precision laser that pulses a quadrillion (1,000,000,000,000,000) times per second. Controversies Harassment allegations In March 2014, GitHub programmer Julie Ann Horvath alleged that founder and CEO Tom Preston-Werner and his wife, Theresa, engaged in a pattern of harassment against her that led to her leaving the company. 
In April 2014, GitHub released a statement denying Horvath's allegations. However, following an internal investigation, GitHub confirmed the claims. GitHub's CEO Chris Wanstrath wrote on the company blog, "The investigation found Tom Preston-Werner in his capacity as GitHub's CEO acted inappropriately, including confrontational conduct, disregard of workplace complaints, insensitivity to the impact of his spouse's presence in the workplace, and failure to enforce an agreement that his spouse should not work in the office." Preston-Werner subsequently resigned from the company. The firm then announced it would implement new initiatives and trainings "to make sure employee concerns and conflicts are taken seriously and dealt with appropriately." Sanctions On July 25, 2019, a developer based in Iran wrote on Medium that GitHub had blocked his private repositories and prohibited access to GitHub pages. Soon after, GitHub confirmed that it was now blocking developers in Iran, Crimea, Cuba, North Korea, and Syria from accessing private repositories. However, GitHub reopened access to GitHub Pages days later, for public repositories regardless of location. It was also revealed that using GitHub while visiting sanctioned countries could result in similar action occurring on a user's account. GitHub responded to complaints and the media through a spokesperson, saying: GitHub is subject to US trade control laws, and is committed to full compliance with applicable law. At the same time, GitHub's vision is to be the global platform for developer collaboration, no matter where developers reside. As a result, we take seriously our responsibility to examine government mandates thoroughly to be certain that users and customers are not impacted beyond what is required by law. This includes keeping public repositories services, including those for open source projects, available and accessible to support personal communications involving developers in sanctioned regions. Developers who feel that they should not have restrictions can appeal for the removal of said restrictions, including those who only travel to, and do not reside in, those countries. GitHub has forbidden the use of VPNs and IP proxies to access the site from sanctioned countries, as purchase history and IP addresses are how they flag users, among other sources. Censorship On December 3, 2014, Russia blacklisted GitHub.com because GitHub initially refused to take down user-posted suicide manuals. After a day, Russia withdrew its block, and GitHub began blocking specific content and pages in Russia. On December 31, 2014, India blocked GitHub.com along with 31 other websites over pro-ISIS content posted by users; the block was lifted three days later. On October 8, 2016, Turkey blocked GitHub to prevent email leakage of a hacked account belonging to the country's energy minister. On March 26, 2015, a large-scale DDoS attack was launched against GitHub.com that lasted for just under five days. The attack, which appeared to originate from China, primarily targeted GitHub-hosted user content describing methods of circumventing Internet censorship. On April 19, 2020, Chinese police detained Chen Mei and Cai Wei (volunteers for Terminus 2049, a project hosted on GitHub), and accused them of "picking quarrels and provoking trouble." Cai and Chen archived news articles, interviews, and other materials published on Chinese media outlets and social media platforms that have been removed by censors in China. 
ICE contract GitHub has a $200,000 contract with U.S. Immigration and Customs Enforcement (ICE) for the use of their on-site product GitHub Enterprise Server. This contract was renewed in 2019, despite internal opposition from many GitHub employees. In an email sent to employees, later posted to the GitHub blog on October 9, 2019, CEO Nat Friedman stated "The revenue from the purchase is less than $200,000 and not financially material for our company." He announced that GitHub had pledged to donate $500,000 to "nonprofit groups supporting immigrant communities targeted by the current administration." In response at least 150 GitHub employees signed an open letter re-stating their opposition to the contract, and denouncing alleged human rights abuses by ICE. As of November 13, 2019, five workers had resigned over the contract. The ICE contract dispute came into focus again in June 2020 due to the company's decision to abandon "master/slave" branch terminology, spurred by the George Floyd protests and Black Lives Matter movement. Detractors of GitHub describe the branch renaming to be a form of performative activism and have urged GitHub to cancel their ICE contract instead. An open letter from members of the open source community was shared on GitHub in December 2019, demanding that the company drop their contract with ICE and provide more transparency into how they conduct business and partnerships. The letter has been signed by more than 700 people. Capitol riot comments and employee firing In January 2021, GitHub fired one of its employees after he expressed concern for colleagues as a violent mob stormed the U.S. Capitol, calling some of the rioters "Nazis." After an investigation, GitHub's COO said there were "significant errors of judgment and procedure" with the company's decision to fire the employee. As a result of the investigation, GitHub reached out to the employee, and the company's head of human resources resigned. Criticism Linus Torvalds, the original developer of the Git software, has criticized the merging ability of the GitHub interface. Developed projects Atom, a free and open-source text and source code editor Electron, an open-source framework to use JavaScript-based websites as desktop applications. See also Collaborative innovation network Collaborative intelligence Commons-based peer production Comparison of source code hosting facilities DevOps Gitea Timeline of GitHub References External links 2018 mergers and acquisitions Bug and issue tracking software Cloud computing providers Collaborative projects Computing websites Cross-platform software Git (software) Internet properties established in 2008 Microsoft acquisitions Microsoft subsidiaries Microsoft websites Open-source software hosting facilities Project hosting websites Project management software Remote companies South of Market, San Francisco Version control
28184
https://en.wikipedia.org/wiki/Sound%20card
Sound card
A sound card (also known as an audio card) is an internal expansion card that provides input and output of audio signals to and from a computer under control of computer programs. The term sound card is also applied to external audio interfaces used for professional audio applications. Sound functionality can also be integrated onto the motherboard, using components similar to those found on plug-in cards. The integrated sound system is often still referred to as a sound card. Sound processing hardware is also present on modern video cards with HDMI to output sound along with the video using that connector; previously they used a S/PDIF connection to the motherboard or sound card. Typical uses of sound cards or sound card functionality include providing the audio component for multimedia applications such as music composition, editing video or audio, presentation, education and entertainment (games) and video projection. Sound cards are also used for computer-based communication such as voice over IP and teleconferencing. General characteristics Sound cards use a digital-to-analog converter (DAC), which converts recorded or generated digital signal data into an analog format. The output signal is connected to an amplifier, headphones, or external device using standard interconnects, such as a TRS phone connector. A common external connector is the microphone connector. Input through a microphone connector can be used, for example, by speech recognition or voice over IP applications. Most sound cards have a line in connector for an analog input from a sound source that has higher voltage levels than a microphone. In either case, the sound card uses an analog-to-digital converter to digitize this signal. Some cards include a sound chip to support the production of synthesized sounds, usually for real-time generation of music and sound effects using minimal data and CPU time. The card may use direct memory access to transfer the samples to and from main memory, from where a recording and playback software may read and write it to the hard disk for storage, editing, or further processing. Sound channels and polyphony An important sound card characteristic is polyphony, which refers to its ability to process and output multiple independent voices or sounds simultaneously. These distinct channels are seen as the number of audio outputs, which may correspond to a speaker configuration such as 2.0 (stereo), 2.1 (stereo and sub woofer), 5.1 (surround), or other configurations. Sometimes, the terms voice and channel are used interchangeably to indicate the degree of polyphony, not the output speaker configuration. For example, much older sound chips could accommodate three voices, but only one output audio channel (i.e., a single mono output), requiring all voices to be mixed together. Later cards, such as the AdLib sound card, had a 9-voice polyphony combined in 1 mono output channel. Early PC sound cards had multiple FM synthesis voices (typically 9 or 16) which were used for MIDI music. The full capabilities of advanced cards are often not fully used; only one (mono) or two (stereo) voice(s) and channel(s) are usually dedicated to playback of digital sound samples, and playing back more than one digital sound sample usually requires a software downmix at a fixed sampling rate. Modern low-cost integrated sound cards (i.e., those built into motherboards) such as audio codecs like those meeting the AC'97 standard and even some lower-cost expansion sound cards still work this way. 
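As a rough illustration of the digitization described above, the sketch below samples a test tone at a fixed rate and quantizes each sample to a signed 16-bit integer, the kind of PCM data an analog-to-digital converter hands to the rest of the system. The 1 kHz tone, 44.1 kHz rate, and 16-bit depth are illustrative choices, not properties of any particular card.

```python
# A small sketch of what the analog-to-digital conversion step amounts to:
# sampling a continuous signal at a fixed rate and quantizing each sample to
# a signed 16-bit integer. The sine test tone stands in for a real input
# such as a microphone or line-in signal.
import math

SAMPLE_RATE = 44_100                    # samples per second
BIT_DEPTH = 16                          # bits per sample
FULL_SCALE = 2 ** (BIT_DEPTH - 1) - 1   # 32767 for 16-bit signed PCM

def digitize(duration_s=0.01, freq_hz=1_000.0):
    """Return a list of 16-bit PCM samples for a sine test tone."""
    samples = []
    for n in range(int(duration_s * SAMPLE_RATE)):
        analog = math.sin(2 * math.pi * freq_hz * n / SAMPLE_RATE)  # -1.0..1.0
        samples.append(int(round(analog * FULL_SCALE)))             # quantize
    return samples

pcm = digitize()
print(len(pcm), "samples; first five:", pcm[:5])
```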
These devices may provide more than two sound output channels (typically 5.1 or 7.1 surround sound), but they usually have no actual hardware polyphony for either sound effects or MIDI reproduction; these tasks are performed entirely in software. This is similar to the way inexpensive softmodems perform modem tasks in software rather than in hardware. In the early days of wavetable synthesis, some sound card manufacturers advertised polyphony based solely on MIDI capabilities. In this case, typically, the card is only capable of two channels of digital sound and the polyphony specification solely applies to the number of MIDI instruments the sound card is capable of producing at once. Modern sound cards may provide more flexible audio accelerator capabilities which can be used in support of higher levels of polyphony or other purposes such as hardware acceleration of 3D sound, positional audio and real-time DSP effects. List of sound card standards Color codes Connectors on the sound cards are color-coded as per the PC System Design Guide. They may also have symbols of arrows, holes and soundwaves that are associated with each jack position. History of sound cards for the IBM PC architecture Sound cards for IBM PC compatible computers were very uncommon until 1988. For the majority of IBM PC users, the internal PC speaker was the only way for early PC software to produce sound and music. The speaker hardware was typically limited to square waves. The resulting sound was generally described as "beeps and boops", which resulted in the common nickname "beeper". Several companies, most notably Access Software, developed techniques for digital sound reproduction over the PC speaker, like RealSound. The resulting audio, while functional, suffered from heavily distorted output and low volume, and usually required all other processing to be stopped while sounds were played. Other home computers of the 1980s, like the Commodore 64, included hardware support for digital sound playback or music synthesis, leaving the IBM PC at a disadvantage when it came to multimedia applications. Early sound cards for the IBM PC platform were not designed for gaming or multimedia applications, but rather for specific audio applications, such as music composition with the AdLib Personal Music System, IBM Music Feature Card, and Creative Music System, or for speech synthesis, as with the Digispeech DS201, Covox Speech Thing, and Street Electronics Echo. In 1988, a panel of computer-game CEOs stated at the Consumer Electronics Show that the PC's limited sound capability prevented it from becoming the leading home computer, that it needed a $49–79 sound card with better capability than current products, and that once such hardware was widely installed, their companies would support it. Sierra On-Line, which had pioneered supporting EGA and VGA video, and 3-1/2" disks, promised that year to support the AdLib, IBM Music Feature, and Roland MT-32 sound cards in its games. A 1989 Computer Gaming World survey found that 18 of 25 game companies planned to support AdLib, six Roland and Covox, and seven Creative Music System/Game Blaster. Hardware manufacturers One of the first manufacturers of sound cards for the IBM PC was AdLib, which produced a card based on the Yamaha YM3812 sound chip, also known as the OPL2. The AdLib had two modes: a 9-voice mode where each voice could be fully programmed, and a less frequently used "percussion" mode with 3 regular voices producing 5 independent percussion-only voices for a total of 11. 
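The OPL2 generated its voices with FM synthesis, in which one oscillator (the modulator) perturbs the phase of another (the carrier). The sketch below shows a two-operator voice of this general kind; the frequencies and modulation index are illustrative choices, and the real YM3812's envelopes, feedback, and log-sin lookup tables are deliberately omitted.

```python
# A minimal two-operator FM synthesis sketch in the general spirit of the OPL2
# described above: a modulator sine wave varies the phase of a carrier sine
# wave, producing a harmonically rich tone from only two oscillators. The
# frequencies and modulation index are illustrative; the real YM3812 adds
# envelopes, operator feedback, and table-based waveform generation.
import math

SAMPLE_RATE = 44_100

def fm_voice(carrier_hz, modulator_hz, mod_index, duration_s=0.5):
    """Generate one FM-synthesized voice as a list of float samples (-1..1)."""
    out = []
    for n in range(int(duration_s * SAMPLE_RATE)):
        t = n / SAMPLE_RATE
        modulator = math.sin(2 * math.pi * modulator_hz * t)
        sample = math.sin(2 * math.pi * carrier_hz * t + mod_index * modulator)
        out.append(sample)
    return out

# A 440 Hz carrier modulated at 220 Hz gives an overtone-rich, brassy timbre.
voice = fm_voice(carrier_hz=440.0, modulator_hz=220.0, mod_index=2.0)
print(f"{len(voice)} samples, peak amplitude {max(abs(s) for s in voice):.3f}")
```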
At about the same time, Creative Labs also marketed a sound card called the Creative Music System (C/MS). Although the C/MS had twelve voices to AdLib's nine, and was a stereo card while the AdLib was mono, it was based on the Philips SAA1099 chip, which was essentially a square-wave generator. It sounded much like twelve simultaneous PC speakers would have, except that each channel had amplitude control, and it failed to sell well, even after Creative renamed it the Game Blaster a year later and marketed it through RadioShack in the US. The Game Blaster retailed for under $100 and was compatible with many popular games, such as Silpheed. A large change in the IBM PC compatible sound card market happened when Creative Labs introduced the Sound Blaster card. Recommended by Microsoft to developers creating software based on the Multimedia PC standard, the Sound Blaster cloned the AdLib and added a sound coprocessor for recording and playback of digital audio. The card also included a game port for adding a joystick, and the capability to interface to MIDI equipment using the game port and a special cable. With AdLib compatibility and more features at nearly the same price, most buyers chose the Sound Blaster. It eventually outsold the AdLib and dominated the market. Roland also made sound cards in the late 1980s, such as the MT-32 and LAPC-I. Roland cards sold for hundreds of dollars. Many games had music written for their cards, such as Silpheed and Police Quest II. The cards were often poor at sound effects such as laughs, but for music they were by far the best sound cards available until the mid-nineties. Some Roland cards, such as the SCC, and later versions of the MT-32 were made to be less expensive. By 1992, one sound card vendor advertised that its product was "Sound Blaster, AdLib, Disney Sound Source and Covox Speech Thing Compatible!" Responding to readers complaining about an article on sound cards that unfavorably mentioned the Gravis Ultrasound, Computer Gaming World stated in January 1994 that "The de facto standard in the gaming world is Sound Blaster compatibility ... It would have been unfair to have recommended anything else." The magazine that year stated that Wing Commander II was "Probably the game responsible" for making it the standard card. The Sound Blaster line of cards, together with the first inexpensive CD-ROM drives and evolving video technology, ushered in a new era of multimedia computer applications that could play back CD audio, add recorded dialogue to video games, or even reproduce full motion video (albeit at much lower resolutions and quality in early days). The widespread decision to support the Sound Blaster design in multimedia and entertainment titles meant that future sound cards such as Media Vision's Pro Audio Spectrum and the Gravis Ultrasound had to be Sound Blaster compatible if they were to sell well. Until the early 2000s, when the AC'97 audio standard became more widespread and eventually supplanted the Sound Blaster as a standard due to its low cost and integration into many motherboards, Sound Blaster compatibility was a standard that many other sound cards supported in order to remain compatible with the many games and applications already released. Industry adoption When game company Sierra On-Line opted to support add-on music hardware in addition to built-in hardware such as the PC speaker and the built-in sound capabilities of the IBM PCjr and Tandy 1000, what could be done with sound and music on the IBM PC changed dramatically.
Two of the companies Sierra partnered with were Roland and AdLib; Sierra opted to produce in-game music for King's Quest 4 that supported the MT-32 and the AdLib Music Synthesizer. The MT-32 had superior output quality, due in part to its method of sound synthesis as well as built-in reverb. Since it was the most sophisticated synthesizer they supported, Sierra chose to use most of the MT-32's custom features and unconventional instrument patches, producing background sound effects (e.g., chirping birds, clopping horse hooves, etc.) before the Sound Blaster brought digital audio playback to the PC. Many game companies also supported the MT-32, but supported the AdLib card as an alternative because of the latter's larger installed base. The adoption of the MT-32 led the way for the creation of the MPU-401, Roland Sound Canvas and General MIDI standards as the most common means of playing in-game music until the mid-1990s. Feature evolution Early ISA bus sound cards were half-duplex, meaning they couldn't record and play digitized sound simultaneously. Later ISA cards, like the Sound Blaster AWE series and plug-and-play Sound Blaster clones, supported simultaneous recording and playback, but at the expense of using two IRQ and DMA channels instead of one. Conventional PCI bus cards generally do not have these limitations and are mostly full-duplex. Sound cards have also evolved in sample resolution and sampling rate, from early 8-bit cards to the 32-bit support of the latest solutions. Along the way, some cards started offering wavetable synthesis, which provides superior MIDI synthesis quality relative to the earlier Yamaha OPL-based solutions, which use FM synthesis. Some higher-end cards introduced their own RAM and processor for user-definable sound samples and MIDI instruments, as well as to offload audio processing from the CPU. For years, with some exceptions, sound cards, most notably the Sound Blaster series and their compatibles, had only one or two channels of digital sound. Early games and MOD players needing more channels than a card could support had to resort to mixing multiple channels in software. Even today, the tendency is still to mix multiple sound streams in software, except in products specifically intended for gamers or professional musicians. Crippling of features Most new sound cards no longer have the audio loopback device commonly called "Stereo Mix"/"Wave out mix"/"Mono Mix"/"What U Hear" that allows users to digitally record speaker output to the microphone input. Lenovo and some other manufacturers do not implement the feature in hardware, while others disable the driver from supporting it. In some cases, loopback can be reinstated with driver updates. Alternatively, software such as Total Recorder or Virtual Audio Cable can be purchased to enable the functionality. According to Microsoft, the functionality was hidden by default in Windows Vista to reduce user confusion, but it is still available, as long as the underlying sound card drivers and hardware support it. Ultimately, the user can resort to the analog loophole and connect the line out directly to the line in on the sound card. However, in laptops, manufacturers have gradually moved from providing three separate jacks with TRS connectors (usually for line in, line out/headphone out, and microphone) to a single combo jack with a TRRS connector that combines inputs and outputs. Outputs The number of physical sound channels has also increased. The first sound card solutions were mono.
Stereo sound was introduced in the early 1980s, and quadraphonic sound came in 1989. This was shortly followed by 5.1-channel audio. The latest sound cards support up to 8 audio channels for the 7.1 speaker setup. A few early sound cards had sufficient power to drive unpowered speakers directly, for example at two watts per channel. With the popularity of amplified speakers, sound cards no longer have a power stage, though in many cases they can adequately drive headphones. Professional sound cards Professional sound cards are sound cards optimized for high-fidelity, low-latency multichannel sound recording and playback. Their drivers usually follow the Audio Stream Input/Output protocol for use with professional sound engineering and music software. Professional sound cards are usually described as audio interfaces, and sometimes have the form of external rack-mountable units using USB, FireWire, or an optical interface to offer sufficient data rates. The emphasis in these products is, in general, on multiple input and output connectors, direct hardware support for multiple input and output sound channels, as well as higher sampling rates and fidelity compared to the usual consumer sound card. On the other hand, certain features of consumer sound cards such as support for environmental audio extensions (EAX), optimization for hardware acceleration in video games, or real-time ambience effects are secondary, nonexistent or even undesirable in professional sound cards, and as such audio interfaces are not recommended for the typical home user. The typical "consumer-grade" sound card is intended for generic home, office, and entertainment purposes with an emphasis on playback and casual use, rather than catering to the needs of audio professionals. In response to this, Steinberg (the creators of the audio recording and sequencing software Cubase and Nuendo) developed a protocol that specified the handling of multiple audio inputs and outputs. In general, consumer-grade sound cards impose several restrictions and inconveniences that would be unacceptable to an audio professional. One of a modern sound card's purposes is to provide an analog-to-digital converter (ADC, AD converter) and a digital-to-analog converter (DAC, DA converter). However, in professional applications, there is usually a need for enhanced recording (analog-to-digital) conversion capabilities. One of the limitations of consumer sound cards is their comparatively large sampling latency; this is the time it takes for the AD converter to complete conversion of a sound sample and transfer it to the computer's main memory. Consumer sound cards are also limited in the effective sampling rates and bit depths they can actually manage (compare analog versus digital sound) and have fewer, less flexible input channels: professional studio recording typically requires more than the two channels that consumer sound cards provide, as well as more accessible connectors, unlike the variable mixture of internal (and sometimes virtual) and external connectors found in consumer-grade sound cards. Sound devices other than expansion cards Integrated sound hardware on PC motherboards In 1984, the IBM PCjr shipped with a rudimentary 3-voice sound synthesis chip (the SN76489) which was capable of generating three square-wave tones with variable amplitude, and a pseudo-white-noise channel that could generate primitive percussion sounds.
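The behaviour of such programmable sound generators is easy to approximate in software: each tone channel is a square wave at a programmed frequency and volume, and the noise channel on the real chip is driven by a linear-feedback shift register. A rough Python sketch, assuming one second of output and using plain pseudo-random noise as a stand-in for the LFSR (all frequencies and volumes here are illustrative):

    import numpy as np

    rate = 44100
    t = np.arange(rate) / rate                     # one second of samples

    def square(freq, volume):
        """Square-wave tone, roughly what a PSG tone channel produces."""
        return volume * np.sign(np.sin(2 * np.pi * freq * t))

    # Three tone voices plus a noise channel, mixed to a single mono output
    voices = [square(262, 0.5), square(330, 0.4), square(392, 0.3)]
    noise = 0.2 * np.random.uniform(-1.0, 1.0, len(t))
    output = np.clip(sum(voices) + noise, -1.0, 1.0)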
The Tandy 1000, initially a clone of the PCjr, duplicated the PCjr's sound functionality, with the Tandy TL/SL/RL models adding digital sound recording and playback capabilities. Many games during the 1980s that supported the PCjr's video standard (described as "Tandy-compatible", "Tandy graphics", or "TGA") also supported PCjr/Tandy 1000 audio. In the late 1990s, many computer manufacturers began to replace plug-in sound cards with a "codec" chip (actually a combined audio AD/DA converter) integrated into the motherboard. Many of these used Intel's AC'97 specification. Others used inexpensive ACR slot accessory cards. From around 2001, many motherboards incorporated integrated "real" (non-codec) sound cards, usually in the form of a custom chipset providing something akin to full Sound Blaster compatibility and relatively high-quality sound. However, these features were dropped when AC'97 was superseded by Intel's HD Audio standard, which was released in 2004, again specified the use of a codec chip, and slowly gained acceptance. As of 2011, most motherboards have returned to using a codec chip, albeit an HD Audio-compatible one, with the requirement for Sound Blaster compatibility relegated to history. Integrated sound on other platforms Various non-IBM PC compatible computers, such as early home computers like the Commodore 64 (1982) and Amiga (1985), NEC's PC-88 and PC-98, Fujitsu's FM-7 and FM Towns, the MSX, Apple's Macintosh, and workstations from manufacturers like Sun, have had their own motherboard-integrated sound devices. In some cases, most notably those of the Macintosh, Amiga, C64, PC-98, MSX, FM-7, and FM Towns, they provided very advanced capabilities for their time; in others they provided only minimal capabilities. Some of these platforms have also had sound cards designed for their bus architectures that cannot be used in a standard PC. Several Japanese computer platforms, including the PC-88, PC-98, MSX, and FM-7, featured built-in FM synthesis sound from Yamaha by the mid-1980s. By 1989, the FM Towns computer platform featured built-in PCM sample-based sound and supported the CD-ROM format. The Amiga's custom sound chip, named Paula, had four digital sound channels (two for the left speaker and two for the right) with 8-bit resolution per channel (although with software tricks, 14/15-bit resolution was achievable at the cost of high CPU usage) and a 6-bit volume control per channel. Sound playback on the Amiga was done by reading directly from the chip RAM without using the main CPU. Most arcade games have integrated sound chips, the most popular being the Yamaha OPL chip for background music coupled with a variety of DACs for sampled audio and sound effects. Sound cards on other platforms The earliest known sound card used by computers was the Gooch Synthetic Woodwind, a music device for PLATO terminals invented in 1972; it is widely regarded as a precursor to sound cards and MIDI. Certain early arcade machines made use of sound cards to achieve playback of complex audio waveforms and digital music, despite already being equipped with onboard audio. An example of a sound card used in arcade machines is the Digital Compression System (DCS) card, used in games from Midway such as Mortal Kombat II on the Midway T-Unit hardware. The T-Unit hardware already has an onboard YM2151 FM chip coupled with an OKI 6295 DAC, but that game uses an add-on DCS card instead.
The card is also used in the arcade version of Midway and Aerosmith's Revolution X for complex looping background music and speech playback (Revolution X used fully sampled songs from the band's album that looped transparently, an impressive feature at the time the game was released). MSX computers, while equipped with built-in sound capabilities, could also use sound cards to produce better-quality audio. One such card, known as Moonsound, uses a Yamaha OPL4 sound chip. Prior to the Moonsound, there were also sound cards for the system called MSX Music and MSX Audio, which use OPL2 and OPL3 chipsets. The Apple II series of computers, which did not have sound capabilities beyond a beep until the IIGS, could use plug-in sound cards from a variety of manufacturers. The first, in 1978, was ALF's Apple Music Synthesizer, with 3 voices; two or three cards could be used to create 6 or 9 voices in stereo. Later, ALF created the Apple Music II, a 9-voice model. The most widely supported card, however, was the Mockingboard. Sweet Micro Systems sold the Mockingboard in various models: early Mockingboard models provided 3 voices in mono, while some later designs had 6 voices in stereo. Some software supported the use of two Mockingboard cards, which allowed 12-voice music and sound. A 12-voice, single-card clone of the Mockingboard called the Phasor was made by Applied Engineering. In late 2005, a company called ReactiveMicro.com produced a 6-voice clone called the Mockingboard v1 and also had plans to clone the Phasor and to produce a hybrid card, user-selectable between Mockingboard and Phasor modes, that would also support either the SC-01 or SC-02 speech synthesizer. The Sinclair ZX Spectrum, which initially had only a beeper, had several sound cards made for it, such as the TurboSound, the Fuller Box, Melodik for the Didaktik Gamma, and AY-Magic. The Zon X-81, made for the ZX81, could also be used on the ZX Spectrum with an adapter. External sound devices Devices such as the Covox Speech Thing could be attached to the parallel port of an IBM PC and fed 6- or 8-bit PCM sample data to produce audio. Also, many types of professional sound cards (audio interfaces) have the form of an external FireWire or USB unit, usually for convenience and improved fidelity. Sound cards using the PCMCIA CardBus interface were available before laptop and notebook computers routinely had onboard sound. CardBus audio may still be used if onboard sound quality is poor. When CardBus was superseded by ExpressCard from about 2005, manufacturers followed suit. Most of these units are designed for mobile DJs, providing separate outputs to allow both playback and monitoring from one system; however, some also target mobile gamers, providing high-end sound to gaming laptops, which are usually well equipped when it comes to graphics and processing power but tend to have audio codecs no better than those found in regular laptops. USB sound cards USB sound "cards" are external devices that plug into the computer via USB. They are often used in studios and on stage by electronic musicians, including live PA performers and DJs. DJs who use DJ software typically use sound cards integrated into DJ controllers or specialized DJ sound cards. DJ sound cards sometimes have inputs with phono preamplifiers to allow turntables to be connected to the computer, so that the software's playback of music files can be controlled with timecode vinyl.
The USB specification defines a standard interface, the USB audio device class, allowing a single driver to work with the various USB sound devices and interfaces on the market. Mac OS X, Windows, and Linux support this standard. However, many USB sound cards do not conform to the standard and require proprietary drivers from the manufacturer. Even cards meeting the older, slower USB 1.1 specification are capable of high-quality sound with a limited number of channels, or with limited sampling frequency or bit depth, but USB 2.0 or later is more capable. The term USB audio interface may also describe a device that allows a computer which has a sound card but lacks a standard audio socket to be connected, via USB, to an external device that requires such a socket. Uses The main function of a sound card is to play audio, usually music, with varying formats (monophonic, stereophonic, various multiple-speaker setups) and degrees of control. The source may be a CD or DVD, a file, streamed audio, or any external source connected to a sound card input. Audio may be recorded. Sometimes sound card hardware and drivers do not support recording a source that is being played. A card can also be used, in conjunction with software, to generate arbitrary waveforms, acting as an audio-frequency function generator. Free and commercial software is available for this purpose; there are also online services that generate audio files for any desired waveforms, playable through a sound card. A card can be used, again in conjunction with free or commercial software, to analyse input waveforms. For example, a very-low-distortion sine-wave oscillator can be used as input to equipment under test; the output is sent to a sound card's line input and run through Fourier transform software to find the amplitude of each harmonic of the added distortion. Alternatively, a less pure signal source may be used, with circuitry to subtract the input from the output, attenuated and phase-corrected; the result is distortion and noise only, which can be analysed. There are programs which allow a sound card to be used as an audio-frequency oscilloscope. For all measurement purposes, a sound card with good audio properties must be chosen: it must itself contribute as little distortion and noise as possible, and attention must be paid to bandwidth and sampling. A typical integrated sound card, the Realtek ALC887, according to its data sheet, has distortion of about 80 dB below the fundamental; cards are available with distortion better than −100 dB. Sound cards with a sampling rate of 192 kHz can be used to synchronize the computer's clock with a time signal transmitter operating on frequencies below 96 kHz, such as DCF77, using special software and a coil connected to the sound card's input to act as an antenna. Driver architecture To use a sound card, the operating system (OS) typically requires a specific device driver, a low-level program that handles the data connections between the physical hardware and the operating system. Some operating systems include the drivers for many cards; for cards not so supported, drivers are supplied with the card or available for download. DOS programs for the IBM PC often had to use universal middleware driver libraries (such as the HMI Sound Operating System, the Miles Audio Interface Libraries (AIL), and the Miles Sound System) which had drivers for most common sound cards, since DOS itself had no real concept of a sound card.
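Returning to the measurement use described above under Uses, the distortion analysis amounts to capturing the equipment's output with the card's line input and inspecting the spectrum around the harmonics of the test tone. A minimal Python sketch of the analysis step, with the capture replaced by a simulated signal and with illustrative names throughout (a real measurement would read samples from the sound card instead):

    import numpy as np

    def harmonic_levels(signal, rate, fundamental, n_harmonics=5):
        """Level of each harmonic of `fundamental`, in dB relative to the
        fundamental, estimated from a windowed FFT of the captured signal."""
        spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
        freqs = np.fft.rfftfreq(len(signal), 1.0 / rate)

        def level(f):
            return spectrum[np.argmin(np.abs(freqs - f))]

        ref = level(fundamental)
        return [20 * np.log10(level(fundamental * k) / ref)
                for k in range(2, n_harmonics + 2)]

    # Simulated capture: a 1 kHz tone with a small second-harmonic component
    rate = 48000
    t = np.arange(rate) / rate
    captured = np.sin(2 * np.pi * 1000 * t) + 0.001 * np.sin(2 * np.pi * 2000 * t)
    print(harmonic_levels(captured, rate, fundamental=1000))   # ~ -60 dB for the 2nd harmonic

The card's own distortion and noise floor set the limit of what such a measurement can resolve, which is why the figures quoted above for integrated codecs matter.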
In the DOS era, some card manufacturers also provided (sometimes inefficient) middleware TSR-based drivers for their products. Often the driver was a Sound Blaster and AdLib emulator, designed to allow games that could only use Sound Blaster or AdLib sound to work with the manufacturer's card. Finally, some programs simply had driver/middleware source code for the supported sound cards incorporated into the program itself. Microsoft Windows uses drivers generally written by the sound card manufacturers. Many device manufacturers supply the drivers on their own discs or provide them to Microsoft for inclusion on Windows installation discs. Sometimes drivers are also supplied by the individual vendors for download and installation. Bug fixes and other improvements are likely to be available faster via downloading, since CDs cannot be updated as frequently as a web or FTP site. USB audio device class support is present from Windows 98 SE onwards. Since Microsoft's Universal Audio Architecture (UAA) initiative, which supports the HD Audio, FireWire and USB audio device class standards, a universal class driver from Microsoft can be used. The driver is included with Windows Vista. For Windows XP, Windows 2000 or Windows Server 2003, the driver can be obtained by contacting Microsoft support. Almost all manufacturer-supplied drivers for such devices also include this class driver. A number of versions of UNIX make use of the portable Open Sound System (OSS). Drivers are seldom produced by the card manufacturer. Most present-day Linux distributions make use of the Advanced Linux Sound Architecture (ALSA). Up until Linux kernel 2.4, OSS was the standard sound architecture for Linux, although ALSA could be downloaded, compiled and installed separately for kernels 2.2 or higher. From kernel 2.5 onwards, ALSA was integrated into the kernel and the OSS native drivers were deprecated. Backwards compatibility with OSS-based software is maintained, however, by the use of the ALSA-OSS compatibility API and the OSS-emulation kernel modules. Mockingboard support on the Apple II is usually incorporated into the programs themselves, as many programs for the Apple II boot directly from disk. However, a TSR shipped on a disk adds instructions to Apple BASIC so users can create programs that use the card, provided that the TSR is loaded first. List of sound card manufacturers: AIM, Asus, Advanced Gravis Computer Technology (defunct), AdLib (defunct), Aureal Semiconductor (defunct), Auzentech (defunct), Aztech Labs, Behavior Tech Computer, Behringer, C-Media, Creative Technology, Conexant, E-mu (bought out by Creative), Ensoniq (bought out by Creative), ESI, HT Omega, Lynx Studio Technology, MARIAN, M-Audio, Onkyo, Prism Sound, Realtek Semiconductor, RME, Roland, Trident Microsystems (defunct), Turtle Beach Systems, VIA Technologies, Yamaha, and Zoltrix (AdLib clone manufacturer). See also: Analog Devices, Sound chip, EAX, ASIO, Audio signal processing, Audio Libraries (Categories), Codec, Virtual Studio Technology (VST), Cross-platform Audio Creation Tool (XACT), DirectSound, DirectMusic, OpenAL, Programmable sound generator, Dolby Digital, Dolby Digital EX, S Logic, SNR, Audio compression (data), PC System Design Guide. External links: Jumper settings for many sound cards; History of PC sound hardware.
Information literacy
The Association of College & Research Libraries defines information literacy as a "set of integrated abilities encompassing the reflective discovery of information, the understanding of how information is produced and valued and the use of information in creating new knowledge and participating ethically in communities of learning". The 1989 American Library Association (ALA) Presidential Committee on Information Literacy formally defined information literacy (IL) as attributes of an individual, stating that "to be information literate, a person must be able to recognize when information is needed and have the ability to locate, evaluate and use effectively the needed information". In 1990, academic Lori Arp published a paper asking, "Are information literacy instruction and bibliographic instruction the same?" Arp argued that neither term was particularly well defined by theoreticians or practitioners in the field, and that further studies were needed to lessen the confusion and continue to articulate the parameters of the question. The Alexandria Proclamation of 2005 defined the term as a human rights issue: "Information literacy empowers people in all walks of life to seek, evaluate, use and create information effectively to achieve their personal, social, occupational and educational goals. It is a basic human right in a digital world and promotes social inclusion in all nations." The United States National Forum on Information Literacy defined information literacy as "the ability to know when there is a need for information, to be able to identify, locate, evaluate, and effectively use that information for the issue or problem at hand". A number of other efforts have been made to better define the concept and its relationship to other skills and forms of literacy. Other pedagogical outcomes related to information literacy include traditional literacy, computer literacy, research skills and critical thinking skills. Information literacy as a sub-discipline is an emerging topic of interest, and a countermeasure, among educators and librarians confronting the rise of misinformation, fake news, and disinformation. Scholars have argued that in order to maximize people's contributions to a democratic and pluralistic society, educators should be challenging governments and the business sector to support and fund educational initiatives in information literacy. History of the concept In a 1976 article in Library Journal, scholars were already beginning to discuss the difficult task and subtleties of defining the term. In that article, which has been widely cited since its publication, M. R. Owens stated that "information literacy differs from context to context. All [people] are created equal, but voters with information resources are in a position to make more intelligent decisions than citizens who are information illiterates. The application of information resources to the process of decision-making to fulfill civic responsibilities is a vital necessity." In a literature review published in an academic journal in 2020, Oral Roberts University professor Angela Sample cited several conceptual waves of IL definitions since circa 1970. Some of those broad conceptual approaches defined information literacy as a way of thinking, as a set of skills, or as a social practice.
These concept waves in the academic world led to the adoption of metaliteracy as a mechanism of IL concepts and the creation of threshold concepts and knowledge dispositions, eventually leading to the creation of the ALA's Information Literacy Framework. The phrase "information literacy" first appeared in print in a 1974 report written on behalf of the National Commission on Libraries and Information Science by Paul G. Zurkowski, who was at the time president of the Software and Information Industry Association. Zurkowski used the phrase to describe the "techniques and skills" learned by the information literate "for utilizing the wide range of information tools as well as primary sources in molding information solutions to their problems" and drew a relatively firm line between the "literates" and "information illiterates". The American Library Association's Presidential Committee on Information Literacy released a report on January 10, 1989, outlining the importance of information literacy, opportunities to develop information literacy, and an Information Age School. The report's final title is the Presidential Committee on Information Literacy: Final Report. The recommendations of the Committee led to the creation later that year of the National Forum on Information Literacy, a coalition of more than 90 national and international organizations. In 1998, the American Association of School Librarians and the Association for Educational Communications and Technology published Information Power: Building Partnerships for Learning, which further established specific goals for information literacy education, defining nine standards in the categories of "information literacy", "independent learning", and "social responsibility". Also in 1998, the Presidential Committee on Information Literacy updated its final report. The updated report outlined six recommendations from the original report and examined areas of challenge and progress. In 1999, the Society of College, National and University Libraries (SCONUL) in the UK published "The Seven Pillars of Information Literacy" model to "facilitate further development of ideas amongst practitioners in the field ... stimulate debate about the ideas and about how those ideas might be used by library and other staff in higher education concerned with the development of students' skills". A number of other countries have developed information literacy standards since then. In 2003, the National Forum on Information Literacy, together with UNESCO and the National Commission on Libraries and Information Science, sponsored an international conference in Prague with representatives from twenty-three countries to discuss the importance of information literacy within a global context. The resulting Prague Declaration described information literacy as a "key to social, cultural, and economic development of nations and communities, institutions and individuals in the 21st century" and declared its acquisition as "part of the basic human right of lifelong learning". In the United States, IL was made a priority during President Barack Obama's first term; Obama designated October as National Information Literacy Awareness Month.
Presidential Committee on Information Literacy The American Library Association's Presidential Committee on Information Literacy defined information literacy as the ability "to recognize when information is needed and have the ability to locate, evaluate, and use effectively the needed information" and highlighted information literacy as a skill essential for lifelong learning and the production of an informed and prosperous citizenry. The committee outlined six principal recommendations, including to "reconsider the ways we have organized information institutionally, structured information access, and defined information's role in our lives at home, in the community, and in the work place"; to promote "public awareness of the problems created by information illiteracy"; to develop a national research agenda related to information and its use; to ensure the existence of "a climate conducive to students' becoming information literate"; and to include information literacy concerns in teacher education. In the updated report, the committee ended with an invitation, asking the National Forum and regular citizens to recognize that "the result of these combined efforts will be a citizenry which is made up of effective lifelong learners who can always find the information needed for the issue or decision at hand. This new generation of information literate citizens will truly be America's most valuable resource", and to continue working toward an information literate world. The Presidential Committee on Information Literacy resulted in the creation of the National Forum on Information Literacy. National Forum on Information Literacy In 1983, the United States published "A Nation at Risk: The Imperative for Educational Reform", a report declaring that a "rising tide of mediocrity" was eroding the foundation of the American educational system. The report has been regarded as the genesis of the current educational reform movement within the United States. This report, in conjunction with the rapid emergence of the information society, led the American Library Association (ALA) to convene a panel of educators and librarians in 1987. The Forum, UNESCO and the International Federation of Library Associations and Institutions (IFLA) collaborated to organize several "experts meetings" that resulted in the Prague Declaration (2003) and the Alexandria Proclamation (2005). Both statements underscore the importance of information literacy as a basic, fundamental human right and consider IL a lifelong learning skill. Global The International Federation of Library Associations and Institutions (IFLA) IFLA has established an Information Literacy Section. The Section has, in turn, developed and mounted an Information Literacy Resources Directory, called InfoLit Global. Librarians, educators and information professionals may self-register and upload information-literacy-related materials (IFLA, Information Literacy Section, n.d.). According to the IFLA website, "The primary purpose of the Information Literacy Section is to foster international cooperation in the development of information literacy education in all types of libraries and information institutions." The International Alliance for Information Literacy (IAIL) This alliance was created on the recommendation of the Prague Conference of Information Literacy Experts in 2003. One of its goals is to allow for the sharing of information literacy research and knowledge between nations.
The IAIL also sees "lifelong learning" as a basic human right, and its ultimate goal is to use information literacy to allow everyone to participate in the "Information Society" as a way of fulfilling this right. The following organizations are founding members of IAIL: the Australian and New Zealand Institute for Information Literacy (ANZIIL), based in Australia and New Zealand; the European Network on Information Literacy (EnIL), based in the European Union; the National Forum on Information Literacy (NFIL), based in the United States; NORDINFOlit, based in Scandinavia; and the SCONUL (Society of College, National and University Libraries) Advisory Committee on Information Literacy, based in the United Kingdom. UNESCO Media and Information Literacy According to the UNESCO website, this is its "action to provide people with the skills and abilities for critical reception, assessment and use of information and media in their professional and personal lives". Its goal is to create information literate societies by creating and maintaining educational policies for information literacy. UNESCO works with teachers around the world, training them in the importance of information literacy and providing resources for them to use in their classrooms. UNESCO publishes studies on information literacy in many countries, looking at how information literacy is currently taught, how it differs in different demographics, and how to raise awareness. It also publishes pedagogical tools and curricula for school boards and teachers to refer to and use. Specific aspects In "Information Literacy as a Liberal Art", Jeremy J. Shapiro and Shelley K. Hughes (1996) advocated a more holistic approach to information literacy education, one that encouraged not merely the addition of information technology courses as an adjunct to existing curricula, but rather a radically new conceptualization of "our entire educational curriculum in terms of information". Drawing upon Enlightenment ideals like those articulated by the philosopher Condorcet, Shapiro and Hughes argued that information literacy education is "essential to the future of democracy, if citizens are to be intelligent shapers of the information society rather than its pawns, and to humanistic culture, if information is to be part of a meaningful existence rather than a routine of production and consumption". To this end, Shapiro and Hughes outlined a "prototype curriculum" that encompassed the concepts of computer literacy, library skills, and "a broader, critical conception of a more humanistic sort", suggesting seven important components of a holistic approach to information literacy: Tool literacy, or the ability to understand and use the practical and conceptual tools of current information technology relevant to education and the areas of work and professional life that the individual expects to inhabit. Resource literacy, or the ability to understand the form, format, location and access methods of information resources, especially daily expanding networked information resources. Social-structural literacy, or understanding how information is socially situated and produced. Research literacy, or the ability to understand and use the IT-based tools relevant to the work of today's researcher and scholar. Publishing literacy, or the ability to format and publish research and ideas electronically, in textual and multimedia forms ... to introduce them into the electronic public realm and the electronic community of scholars.
Emerging technology literacy, or the ability to continuously adapt to, understand, evaluate and make use of the continually emerging innovations in information technology so as not to be a prisoner of prior tools and resources, and to make intelligent decisions about the adoption of new ones. Critical literacy, or the ability to evaluate critically the intellectual, human and social strengths and weaknesses, potentials and limits, benefits and costs of information technologies. Ira Shor further defines critical literacy as "[habits] of thought, reading, writing, and speaking which go beneath surface meaning, first impressions, dominant myths, official pronouncements, traditional clichés, received wisdom, and mere opinions, to understand the deep meaning, root causes, social context, ideology, and personal consequences of any action, event, object, process, organization, experience, text, subject matter, policy, mass media, or discourse". Information literacy skills Big6 skills The Big6 skills have been used in a variety of settings to help those with a variety of needs. For example, the library of Dubai Women's College, in Dubai, United Arab Emirates, which is an English-as-a-second-language institution, uses the Big6 model for its information literacy workshops. According to Story-Huffman (2009), using Big6 at the college "has transcended cultural and physical boundaries to provide a knowledge base to help students become information literate" (para. 8). In primary grades, Big6 has been found to work well with the variety of cognitive and language levels found in the classroom. Issues to consider in the Big6 approach have been highlighted by Philip Doty. Eisenberg (2004) has recognized that there are a number of challenges to effectively applying the Big6 skills, not the least of which is information overload, which can overwhelm students. Part of Eisenberg's solution is for schools to help students become discriminating users of information. Another conception This conception, used primarily in the library and information studies field, and rooted in the concepts of library instruction and bibliographic instruction, is the ability "to recognize when information is needed and have the ability to locate, evaluate and use effectively the needed information". In this view, information literacy is the basis for lifelong learning. It is also the basis for evaluating contemporary sources of information. In the publication Information Power: Building Partnerships for Learning (AASL and AECT, 1998), three categories, nine standards, and twenty-nine indicators are used to describe the information literate student. The categories and their standards are as follows. Category 1, information literacy: the student who is information literate accesses information efficiently and effectively, evaluates information critically and competently, and uses information accurately and creatively. Category 2, independent learning: the student who is an independent learner is information literate and pursues information related to personal interests, appreciates literature and other creative expressions of information, and strives for excellence in information seeking and knowledge generation. Category 3, social responsibility: the student who contributes positively to the learning community and to society is information literate and recognizes the importance of information to a democratic society, practices ethical behavior in regard to information and information technology, and
participates effectively in groups to pursue and generate information. Since information may be presented in a number of formats, the term "information" applies to more than just the printed word. Other literacies such as visual, media, computer, network, and basic literacies are implicit in information literacy. Many of those most in need of information literacy are often among those least able to access the information they require. As the Presidential Committee report points out, members of these disadvantaged groups are often unaware that libraries can provide them with the access, training and information they need. According to Osborne (2004), many libraries around the country are finding numerous ways to reach these disadvantaged groups by discovering their needs in their own environments (including prisons) and offering them specific services in the libraries themselves. Effects on education The rapidly evolving information landscape has demonstrated a need for education methods and practices to evolve and adapt accordingly. Information literacy is a key focus of educational institutions at all levels, and in order to uphold this standard, institutions promote a commitment to lifelong learning and an ability to seek out and identify the innovations needed to keep pace with or outpace change. Educational methods and practices, within our increasingly information-centric society, must facilitate and enhance a student's ability to harness the power of information. Key to harnessing the power of information is the ability to evaluate information, to ascertain, among other things, its relevance, authenticity and currency. The information evaluation process is a crucial life skill and a basis for lifelong learning. According to Lankshear and Knobel, what is needed in our education system is a new understanding of literacy, information literacy and literacy teaching. Educators need to learn to account for the context of our culturally and linguistically diverse and increasingly globalized societies, and to take account of the burgeoning variety of text forms associated with information and multimedia technologies. Evaluation consists of several component processes including metacognition, goals, personal disposition, cognitive development, deliberation, and decision-making. This is both a difficult and complex challenge and underscores the importance of being able to think critically. Critical thinking is an important educational outcome for students. Education institutions have experimented with several strategies to help foster critical thinking, as a means to enhance information evaluation and information literacy among students. When evaluating evidence, students should be encouraged to practice formal argumentation. Debates and formal presentations must also be encouraged to analyze and critically evaluate information. Education professionals must underscore the importance of high information quality. Students must be trained to distinguish between fact and opinion. They must be encouraged to use cue words such as "I think" and "I feel" to help distinguish between factual information and opinions. Information-related skills that are complex or difficult to comprehend must be broken down into smaller parts. Another approach would be to train students in familiar contexts. Education professionals should encourage students to examine "causes" of behaviors, actions and events.
Research shows that people evaluate more effectively if causes are revealed, where available. Information in any format is produced to convey a message and is shared via a selected delivery method. The iterative processes of researching, creating, revising, and disseminating information vary, and the resulting product reflects these differences (Association of College, p. 5). Some call for increased critical analysis in Information Literacy instruction. Smith (2013) identifies this as beneficial "to individuals, particularly young people during their period of formal education. It could equip them with the skills they need to understand the political system and their place within it, and, where necessary, to challenge this" (p. 16). Education in the US Standards National content standards, state standards and information literacy skills terminology may vary, but all have common components relating to information literacy. Information literacy skills are critical to several of the National Education Goals outlined in the Goals 2000: Educate America Act, particularly in the act's aims to increase "school readiness", "student achievement and citizenship", and "adult literacy and lifelong learning". Of specific relevance are the "focus on lifelong learning, the ability to think critically, and on the use of new and existing information for problem solving", all of which are important components of information literacy. In 1998, the American Association of School Librarians and the Association for Educational Communications and Technology published "Information Literacy Standards for Student Learning", which identified nine standards that librarians and teachers in K–12 schools could use to describe information literate students and define the relationship of information literacy to independent learning and social responsibility: Standard One: The student who is information literate accesses information efficiently and effectively. Standard Two: The student who is information literate evaluates information critically and competently. Standard Three: The student who is information literate uses information accurately and creatively. Standard Four: The student who is an independent learner is information literate and pursues information related to personal interests. Standard Five: The student who is an independent learner is information literate and appreciates literature and other creative expressions of information. Standard Six: The student who is an independent learner is information literate and strives for excellence in information seeking and knowledge generation. Standard Seven: The student who contributes positively to the learning community and to society is information literate and recognizes the importance of information to a democratic society. Standard Eight: The student who contributes positively to the learning community and to society is information literate and practices ethical behavior in regard to information and information technology. Standard Nine: The student who contributes positively to the learning community and to society is information literate and participates effectively in groups to pursue and generate information. In 2007 AASL expanded and restructured the standards that school librarians should strive for in their teaching. These were published as "Standards for the 21st Century Learner" and address several literacies: information, technology, visual, textual, and digital. 
These aspects of literacy were organized within four key goals: that learners use skills, resources, and tools to "inquire, think critically, and gain knowledge"; to "draw conclusions, make informed decisions, apply knowledge to new situations, and create new knowledge"; to "share knowledge and participate ethically and productively as members of our democratic society"; and to "pursue personal and aesthetic growth". In 2000, the Association of College and Research Libraries (ACRL), a division of the American Library Association (ALA), released "Information Literacy Competency Standards for Higher Education", describing five standards and numerous performance indicators considered best practices for the implementation and assessment of postsecondary information literacy programs. The five standards are: Standard One: The information literate student determines the nature and extent of the information needed. Standard Two: The information literate student accesses needed information effectively and efficiently. Standard Three: The information literate student evaluates information and its sources critically and incorporates selected information into his or her knowledge base and value system. Standard Four: The information literate student, individually or as a member of a group, uses information effectively to accomplish a specific purpose. Standard Five: The information literate student understands many of the economic, legal, and social issues surrounding the use of information and accesses and uses information ethically and legally. These standards were meant to span from the simple to the more complicated, or, in terms of Bloom's Taxonomy of Educational Objectives, from the "lower order" to the "higher order". Lower-order skills would involve, for instance, being able to use an online catalog to find a book relevant to an information need in an academic library. Higher-order skills would involve critically evaluating and synthesizing information from multiple sources into a coherent interpretation or argument. In 2016, the Association of College and Research Libraries (ACRL) rescinded the Standards and replaced them with the Framework for Information Literacy for Higher Education, which offers the following set of core ideas: Authority Is Constructed and Contextual; Information Creation as a Process; Information Has Value; Research as Inquiry; Scholarship as Conversation; and Searching as Strategic Exploration. The Framework is based on a cluster of interconnected core concepts, with flexible options for implementation, rather than on a set of standards or learning outcomes, or any prescriptive enumeration of skills. At the heart of this Framework are conceptual understandings that organize many other concepts and ideas about information, research, and scholarship into a coherent whole. K–12 education restructuring Today, instruction methods have changed drastically from the mostly one-directional teacher-student model to a more collaborative approach in which the students themselves feel empowered. Much of this challenge is now being informed by the American Association of School Librarians, which published new standards for student learning in 2007. Within the K–12 environment, effective curriculum development is vital to imparting information literacy skills to students. Given the already heavy load on students, efforts must be made to avoid curriculum overload.
Eisenberg strongly recommends adopting a collaborative approach to curriculum development among classroom teachers, librarians, technology teachers, and other educators. Staff must be encouraged to work together to analyze student curriculum needs, develop a broad instruction plan, set information literacy goals, and design specific unit and lesson plans that integrate the information skills and classroom content. These educators can also collaborate on teaching and assessment duties. Educators are selecting various forms of resource-based learning (authentic learning, problem-based learning and work-based learning) to help students focus on the process and to help students learn from the content. Information literacy skills are necessary components of each. Within a school setting, it is very important that a student's specific needs as well as the situational context be kept in mind when selecting topics for integrated information literacy skills instruction. The primary goal should be to provide frequent opportunities for students to learn and practice information problem solving. To this end, it is also vital to facilitate repetition of information-seeking actions and behavior. The importance of repetition in information literacy lesson plans cannot be overstated, since we tend to learn through repetition. A student's proficiency will improve over time if they are afforded regular opportunities to learn and to apply the skills they have learnt. The process approach to education requires new forms of student assessment. Students demonstrate their skills, assess their own learning, and evaluate the processes by which this learning has been achieved by preparing portfolios, learning and research logs, and using rubrics. Efforts in K–12 education Information literacy efforts are underway on individual, local, and regional bases. Many states have either fully adopted AASL information literacy standards or have adapted them to suit their needs. States such as Oregon (OSLIS, 2009) increasingly rely on these guidelines for curriculum development and setting information literacy goals. Virginia, on the other hand, chose to undertake a comprehensive review, involving all relevant stakeholders, and to formulate its own guidelines and standards for information literacy. At an international level, UNESCO and the IFLA (International Federation of Library Associations and Institutions) jointly produced framework documents that laid the foundations for defining the educational role to be played by school libraries, among them the School Library Manifesto (1999). Another immensely popular approach to imparting information literacy is the Big6 set of skills. Eisenberg claims that the Big6 is the most widely used model in K–12 education. This set of skills seeks to articulate the entire information-seeking life cycle. The Big6 is made up of six major stages, with two sub-stages under each major stage. It defines the six steps as task definition, information-seeking strategies, location and access, use of information, synthesis, and evaluation. Such approaches seek to cover the full range of information problem-solving actions that a person would normally undertake when faced with an information problem or with making a decision based on available resources.
Efforts in higher education Information literacy instruction in higher education can take a variety of forms: stand-alone courses or classes, online tutorials, workbooks, course-related instruction, or course-integrated instruction. One attempt in the area of physics was published in 2009. The six regional accreditation boards have added information literacy to their standards. Librarians are often required to teach the concepts of information literacy during "one-shot" classroom lectures. There are also credit courses offered by academic librarians to prepare college students to become information literate. In 2016, the Association of College & Research Libraries (ACRL, part of the American Library Association) adopted a new "Framework for Information Literacy for Higher Education", replacing the ACRL's "Information Literacy Standards for Higher Education" that had been approved in 2000. The standards were largely criticized by proponents of critical information literacy, a concept deriving from critical pedagogy, for being too prescriptive. It is termed a "framework" because it consists of interconnected core concepts designed to be interpreted and implemented locally depending on the context and needs of the audience. The framework draws on recent research around threshold concepts, or the ideas that are gateways to broader understanding or skills in a given discipline. It also draws on newer research around metaliteracy and assumes a more holistic view of information literacy that includes creation and collaboration in addition to consumption, and so is appropriate for current practices around social media and Web 2.0. The six concepts, or frames, are: Authority Is Constructed and Contextual; Information Creation as a Process; Information Has Value; Research as Inquiry; Scholarship as Conversation; and Searching as Strategic Exploration. This draws from the concept of metaliteracy, which offers a renewed vision of information literacy as an overarching set of abilities in which students are consumers and creators of information who can participate successfully in collaborative spaces (Association of College, p. 2). There is a growing body of scholarly research describing faculty-librarian collaboration to bring information literacy skills practice into the higher education curriculum, moving beyond "one-shot" lectures to an integrated model in which librarians help design assignments, create guides to useful course resources, and provide direct support to students throughout courses. A recent literature review indicates that there is still a lack of evidence concerning the unique information literacy practices of doctoral students, especially within disciplines such as the health sciences. Distance education Now that information literacy has become a part of the core curriculum at many post-secondary institutions, the library community is charged with providing information literacy instruction in a variety of formats, including online learning and distance education. The Association of College and Research Libraries (ACRL) addresses this need in its Guidelines for Distance Education Services (2000). Within the e-learning and distance education worlds, providing effective information literacy programs brings together the challenges of both distance librarianship and instruction. With the prevalence of course management systems such as WebCT and Blackboard, library staff are embedding information literacy training within academic programs and within individual classes themselves.
Education in Singapore Public education In October 2013, The National Library of Singapore (NLB) created the S.U.R.E, (Source, Understand, Research, Evaluate) campaign. The objectives and strategies of the S.U.R.E. campaign were first presented at the 2014 IFLA WLIC. It is summarised by NLB as simplifying information literacy into four basic building blocks, to "promote and educate the importance of Information Literacy and discernment in informations searching". Public events furthering the S.U.R.E. campaign were organised 2015. This was called the "Super S.U.R.E. Show" involving speakers to engage the public with their anecdotes and other learning points, for example the ability to separate fact from opinion. Higher Education Information literacy is taught by librarians at institutes of higher education. Some components of information literacy are embedded in the undergraduate curriculum at the National University of Singapore. Assessment Many academic libraries are participating in a culture of assessment, and attempt to show the value of their information literacy interventions to their students. Librarians use a variety of techniques for this assessment, some of which aim to empower students and librarians and resist adherence to unquestioned norms. Oakleaf describes the benefits and dangers of various assessment approaches: fixed-choice tests, performance assessments, and rubrics. See also References Other sources Prague Declaration: "Towards an Information Literate Society" Alexandria Proclamation: A High Level International Colloquium on Information Literacy and Lifelong Learning 2006 Information Literacy Summit: American Competitiveness in the Internet Age 1989 Presidential Committee on Information Literacy: Final Report 1983 A Nation at Risk: The Imperative for Educational Reform Gibson, C. (2004). Information literacy develops globally: The role of the national forum on information literacy. Knowledge Quest. Breivik P.S. and Gee, E.G. (2006). Higher Education in the Internet Age: Libraries Creating a Strategic Edge. Westport, CT: Greenwood Publishing. Further reading Association of College and Research Libraries (2016). Framework for Information Literacy for Higher Education. Association of College Research Libraries (2007). The First-Year Experience and Academic Libraries: A Select, Annotated Bibliography. Retrieved April 20, 2008. Barner, R. (1996, March/April). Seven changes that will challenge managers-and workers. The Futurist, 30 (2), 14–18. Breivik. P. S., & Senn, J. A. (1998). Information literacy: Educating children for the 21st century. (2nd ed.). Washington, DC: National Education Association. Bruch, C & Wilkinson, C.W. (2012). Surveying Terrain, Clearing Pathways, in Carpenter, J. P. (1989). Using the new technologies to create links between schools throughout the world: Colloquy on computerized school links. (Exeter, Devon, United Kingdom, October 17–20, 1988). Doty, P. (2003). Bibliographic instruction: The digital divide and resistance of users to technologies. Retrieved July 12, 2009, from http://www.ischool.utexas.edu/~l38613dw/website_spring_03/readings/BiblioInstruction.html Doyle, C.S. (1992). Outcome Measures for Information Literacy Within the National Education Goals of 1990. Final Report to National Forum on Information Literacy. Summary of Findings. Eisenberg, M. (2004). Information literacy: The whole enchilada [PowerPoint Presentation]. 
Retrieved July 14, 2009, from https://web.archive.org/web/20111001054537/http://www.big6.com/presentations/sreb/ Eisenberg, M., Lowe, C., & Spitzer, K. (2004). Information Literacy: Essential Skills for the Information Age. 2nd. edition. Libraries Unlimited. Ercegovac, Zorana. (2008). Information literacy: Search strategies, tools & resources for high school students and college freshmen. (2nd ed.). Columbus, OH: Linworth Books. <https://web.archive.org/web/20130914191842/http://infoen.net/page1/page1.html>. Ercegovac, Zorana. (2012). "Letting students use Web 2.0 tools to hook one another on reading". Knowledge Quest, 40 (3), 36-39. <>. Fister, Barbara. (2019, February 14). Information Literacy's Third Wave. Inside Higher Ed. Retrieved November 10, 2019, from https://www.insidehighered.com/blogs/library-babel-fish/information-literacy%E2%80%99s-third-wave National Commission of Excellence in Education. (1983). A Nation at risk: The imperative for educational reform. Washington, DC: U.S. Government Printing Office. (ED 226 006) National Hispanic Council on Aging. (nd). Mission statement. Retrieved July 13, 2009, from National Forum on Information Literacy Web site. Obama, B. (2009). Presidential Proclamation: National Information Literacy Awareness Month, 2009. Washington, DC: U.S. Government Printing Office. Retrieved October 27, 2009 from https://web.archive.org/web/20121021192357/http://www.whitehouse.gov/assets/documents/2009literacy_prc_rel.pdf Osborne, R. (Ed.). (2004). From outreach to equity: Innovative models of library policy and practice. Chicago: American Library Association. Ryan, J., & Capra, S. (2001). Information literacy toolkit. Chicago: American Library Association. Schwarzenegger, S. (2009). Executive order S-06-09. Sacramento, CA. Retrieved October 27, 2009 from https://web.archive.org/web/20110602181538/http://gov.ca.gov/news.php?id=12393 SCONUL. (2007). The Seven Pillars of Information Literacy model. Retrieved November 3, 2010 from https://web.archive.org/web/20071028011653/http://www.sconul.ac.uk/groups/information_literacy/sp/model.html Secretary's Commission on Achieving Necessary Skills Tuominen et al. (2005). Information Literacy as a Sociotechnical Practice. The University of Chicago Press. External links ELD information literacy wiki, U. Cal. Davis Information science Digital divide Literacy
23591181
https://en.wikipedia.org/wiki/Herbar%20Digital
Herbar Digital
Herbar Digital is a research project at the Fachhochschule Hannover (FHH), running from 2006 to 2011, for rationalising the virtualization of botanical document material and its usage through process optimization and automation. Research project Conservatively estimated, 500 million dried plants — so-called herbar specimens — are stored in herbariums at botanic gardens across the world under scientifically controlled conditions. The aim of the third-party-funded research project is to automate the virtualization of herbar specimens and their management in order to make them digitally accessible to botanists and biologists. An exemplary solution for the herbarium of the Botanic Garden in Berlin-Dahlem (ca. 3.5 million plant specimens) is intended to allow the applied structure, software and technique to be generalized to a level from which general reference solutions for efficient high-resolution scanning of any object of museum quality can be derived. Workflow Herbar specimens have been scanned for some time at various sites. However, the degree of automation of these solutions is very low, so that only a small number of herbarium specimens can be scanned each day. The University of Applied Sciences and Arts in Hannover has analysed industrial production workflows and derived approaches for scanning herbarium specimens. Automation is divided into three development focus points: scanner workplace with control mode, technology management for scanning, and system solutions taking staff into account. The herbar specimens are scanned to RAW data with a scanner camera attached to a standard personal computer and stored on a server using SilverFast Ai software. Image optimization (color, contrast, and brightness) is done on a second computer with SilverFast HDR software. The post-processed RAW files are converted into a suitable image format, checked on a profiled graphics monitor and stored back on the server. A base unit is suited for paper sizes from DIN A3 to DIN A2 and is equipped with panel lights, a standard camera column and a scanner camera. The use of a controlled rotary indexing table provides new possibilities to improve performance. The rotary indexing table allows a new herbarium specimen to be loaded while another one is scanned. Access is also much easier, as the herbarium specimens are not loaded directly under the panel lights. Apart from the time needed for scanning and for turning the table 180°, the user determines the timing of the machine (3–12 rpm). The rotary indexing table can be controlled with a standard PC connected through a USB port. The scanner software is also installed on this computer, and the rotary indexing table can be accessed from it using the Herbar Digital Control software (HDC). Project partners Herbar specimen: Botanischer Garten und Botanisches Museum Berlin-Dahlem Development of the automation: Fachhochschule Hannover Technology management and scanner software: LaserSoft Imaging Hardware for digitization: Kaiser Fototechnik, Pentacon and Quato Control and systems engineering: isel Simulation: DELMIA See also Botanic Garden and Botanical Museum Berlin-Dahlem, Freie Universität Berlin Herbarium Virtual herbarium References 2006 establishments in Germany Herbaria Research projects
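The gain from the rotary indexing table comes from overlapping specimen loading with scanning. The following sketch only illustrates that effect in Python; the timings are assumed example values, not measurements from the project:

# Illustrative comparison of sequential vs. overlapped scanning.
# All timings are assumed example values, not project measurements.

LOAD_S = 20.0    # seconds to place/remove a specimen (assumption)
SCAN_S = 40.0    # seconds for one high-resolution scan (assumption)
TURN_S = 5.0     # seconds to rotate the indexing table 180 degrees (assumption)

def sequential_time(n):
    """Single station: loading and scanning happen one after another."""
    return n * (LOAD_S + SCAN_S)

def rotary_time(n):
    """Two-station rotary table: a specimen is loaded while another is scanned,
    so each cycle is limited by the slower of the two activities plus the turn."""
    cycle = max(LOAD_S, SCAN_S) + TURN_S
    return LOAD_S + n * cycle   # the first specimen still has to be loaded once

if __name__ == "__main__":
    for n in (10, 100, 1000):
        print(n, "specimens:",
              round(sequential_time(n) / 3600, 2), "h sequential vs",
              round(rotary_time(n) / 3600, 2), "h with rotary table")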
30858132
https://en.wikipedia.org/wiki/Cisco%20IOS%20XR
Cisco IOS XR
IOS XR is a train of Cisco Systems' widely deployed Internetworking Operating System (IOS), used on their high-end Network Convergence System (NCS), carrier-grade routers such as the CRS series, 12000 series, and ASR9000 series. Architecture According to Cisco's product literature, IOS XR shares very little infrastructure with the other IOS trains, and is instead built upon a "preemptive, memory protected, multitasking, microkernel-based operating system". The microkernel was formerly provided by QNX; versions 6.0 onwards use the Wind River Linux distribution. IOS XR aims to provide the following advantages over the earlier IOS trains: Improved high availability (largely through support for hardware redundancy and fault containment methods such as protected memory spaces for individual processes and process restartability) Better scalability for large hardware configurations (through a distributed software infrastructure and a two-stage forwarding architecture) A package based software distribution model (allowing optional features such as multicast routing and MPLS to be installed and removed while the router is in service) The ability to install package upgrades and patches (potentially while the router remains in service) A web-based GUI for system management (making use of a generic, XML management interface) History IOS XR was announced along with the CRS-1 in May 2004. The first generally available version was 2.0. The most recent release is version 6.5.3 which was released in March 2019. Other significant releases include the following. 3.2 – first generally available version for the 12000 router series 3.9 – first generally available version for the ASR9000 series 5.0 (September 20, 2013) – first generally available version for the NCS6000 series, which is based upon a Linux kernel 6.1.1 - Introduces support for the 64-bit Linux-based IOS XR operating system on ASR9000 series Differences between IOS and IOS XR Example BGP configuration for both IOS and IOS XR. More examples can be found in the Cisco document Converting Cisco IOS Configurations to Cisco IOS XR Configurations.
! Cisco IOS
!
router bgp 109
 no synchronization
 bgp log-neighbor-changes
 neighbor 203.0.113.1 remote-as 109
 neighbor 203.0.113.1 update-source Loopback0
 no auto-summary
!
! Cisco IOS XR
!
router bgp 109
 neighbor 203.0.113.1
  remote-as 109
  update-source Loopback0
 !
!
See also Cisco IOS Cisco IOS XE Cisco NX-OS References External links Cisco multimedia documentation covering IOS XR and its supported systems Cisco Security Advisories - complete history Cisco IOS XR Software General Information Cisco CRS Support Page Cisco XR 12000 Series Router Support Cisco ASR 9000 Series Support HEAnet's New Network and Working with IOS-XR Cisco products
20932769
https://en.wikipedia.org/wiki/InMage
InMage
InMage was a computer software company based in the US and India. It marketed a product line called Scout that used continuous data protection (CDP) for backup and replication. Scout consisted of two product lines: the host-offload line, which uses a software agent on the protected servers, and the fabric line, which uses an agent on the Fibre Channel switch fabric. The software protects at the volume or block level, tracking all write changes. It allows for local or remote protection policies. The first version of the product was released in 2002. Product details Scout features a capacity-optimized CDP repository. The continuous approach allows for near-zero backup windows, any-point-in-time restores, and second-level RPOs, both within the datacenter and across datacenters. The target volume is kept updated either synchronously or asynchronously based on the product line. The retention logs allow recovery to any point in time within the user-specified retention window. Scout also features application failover support during disaster recovery. The 6.2 release of Scout is supported on Microsoft Windows, Linux, Solaris, AIX and HP-UX. It also supports VMware, XenServer, Hyper-V, Solaris Zones and a few other server virtualization platforms. Both server and application failover are supported on Microsoft Windows. Application failover supports Microsoft Exchange, BlackBerry Enterprise Server, Microsoft SQL Server, file servers, Microsoft SharePoint, Oracle and MySQL, among others. The 6.2 release series supports both clustered and non-clustered operation. Scout integrates with traditional tape backup software to enable longer-term retention on tape. Scout integrates with Microsoft's VSS APIs for SQL, Exchange and Hyper-V consistent snapshots. It also integrates with Oracle, DB2, MySQL, and PostgreSQL consistency APIs. Company InMage was founded in 2001 by Dr. Murali Rajagopal and Kumar Malvalli as SV Systems. Microsoft Corporation acquired InMage in 2014. References Microsoft acquisitions Backup software Storage software Former Microsoft subsidiaries
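Scout's internal on-disk formats are not documented here; purely as a conceptual sketch of block-level continuous data protection (all names and structures below are invented for illustration), a time-ordered journal of writes lets any point in time within the retention window be reconstructed:

import bisect
from collections import defaultdict

class CdpJournal:
    """Toy continuous-data-protection journal: every block write is recorded with a
    timestamp so the volume can be reconstructed as of any point in the retention
    window. This is a conceptual sketch, not InMage's actual format."""

    def __init__(self):
        self._times = defaultdict(list)   # block -> sorted list of write timestamps
        self._data = defaultdict(list)    # block -> data written at the matching index

    def record_write(self, ts, block, data):
        self._times[block].append(ts)     # assumes writes arrive in time order
        self._data[block].append(data)

    def read_as_of(self, block, ts):
        """Return the block contents as they were at time ts (None if not yet written)."""
        i = bisect.bisect_right(self._times[block], ts) - 1
        return self._data[block][i] if i >= 0 else None

    def restore_volume(self, ts, blocks):
        """Materialise a point-in-time image of the listed blocks."""
        return {b: self.read_as_of(b, ts) for b in blocks}

if __name__ == "__main__":
    j = CdpJournal()
    j.record_write(100, block=0, data=b"v1")
    j.record_write(200, block=0, data=b"v2")
    print(j.read_as_of(0, 150))   # b'v1'  -> any-point-in-time restore
    print(j.read_as_of(0, 250))   # b'v2'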
36312094
https://en.wikipedia.org/wiki/1948%20USC%20Trojans%20baseball%20team
1948 USC Trojans baseball team
The 1948 USC Trojans baseball team represented the University of Southern California in the 1948 NCAA baseball season. The team was coached by co-head coaches Sam Barry and Rod Dedeaux. The Trojans won the College World Series, defeating future U.S. President George H. W. Bush and the Yale Bulldogs in the championship series. Roster Schedule Awards and honors Jim Brideweser All-PCC First Team Wally Hood All-American First Team All-PCC First Team Bill Lillie All-PCC Honorable Mention Art Mazmanian All-PCC First Team All-American First Team Bruce McKelvey All-PCC Second Team Hank Workman All-PCC First Team All-American First Team Bob Zuber All-PCC Honorable Mention References USC USC Trojans baseball seasons College World Series seasons NCAA Division I Baseball Championship seasons Pac-12 Conference baseball champion seasons
9317603
https://en.wikipedia.org/wiki/Deckadance
Deckadance
Deckadance (often referred to as DD) is a DJ console and mixing tool developed by Image-Line software and acquired in 2015 by Gibson. Initially released in May 2007, it operates on Windows and Mac OS X, and comes in a House Edition and Club Edition. The latter has support for timecoded vinyl. Deckadance can be used as a standalone application or as a VST plugin inside VST-supporting software hosts like Ableton Live. It can host any VST-compliant effect or software synthesizer, and can be controlled by most Midi controllers. History Deckadance was created by Image-Line as a mixing application for DJs. Image-Line worked closely with DJ and programmer Arguru to develop the first version, which was released for Windows in May 2007. After Arguru died in a car accident in June 2007, future versions were worked on by the Image-Line developers Arguru had been cooperating with, many of whom are also DJs. Deckadance was made compatible with Mac OS X after the release of version 1.20.0 in January 2008. The most recent release is version 2.43 from April 28, 2015. Software overview System requirements As of version 1.9, the minimum system requirements for Deckadance on a PC are Windows 7, Vista, or XP (SP2). Hardware requirements consist of 512 MB RAM, 200 MB free hard drive space, and a DirectSound or ASIO compatible soundcard. Also required is either an Intel Pentium III 1 GHz or AMD Athlon XP 1.4 GHz processor. A Mac requires Mac OS X v10.4 (Universal binary), 512 Mb RAM, 200 Mb free hard drive space, and a sound card with CoreAudio drivers. Processor must be either G4 1.5 GHz or Intel Core Duo family. Versions Deckadance is available in two different editions. The House Edition can host VST compliant effects and can be controlled via a MIDI controller. The Club Edition contains all of the features of the House Edition, in addition to support for timecoded vinyls. Features Among Deckadance features are iTunes integration, an audio synchronization engine that can work in tandem with other VST hosts such as Ableton Live, a detachable Song Manager (SM) that can integrate with iTunes, zPlane Elastique technology, a colored waveform with red to distinguish bass, the ability to time-code your own CD (Club Edition), beat detection, a 2-channel mixer with 3-band EQ, and headphone cueing. As of version 1.9 Deckadance has seven internal performance effects, including LP, HP, BP, Notch, Phaser, Echo, and Low fidelity. User Interface - Deckadance uses a GUI that slightly resembles that of Image-Line's digital audio workstation FL Studio, which consists of one main window that can expand to fill the entire screen. As of 1.3x there are 6 changeable user skins. As of version 1.9, the program no longer covers the start bar and the icons resemble those of Apple's Aqua graphics. VST options - Deckadance is designed to work either as a standalone program or as a VSTi 2.4 plugin inside VST-supporting software hosts. For example, Deckadance can be used as a plugin in digital audio workstations such as FL Studio, Ableton Live, Cakewalk Sonar, and Cubase. Deckadance can host VST-compliant effects or software synthesizers, and the VSTs can be controlled with MIDI files, making Deckadance into an 8 track music sequencer. Samplers - Deckadance has eight integrated sampler banks that can save 1, 2, 4, or 8 beat pattern loops from the decks. The sampling process works in conjunction with a beat detection feature, meaning the samples can be automatically synced to tempo. 
There is a volume control for sampler slot output, and effects can be layered onto the sample banks. It also allows for the recording and looping of live audio. ReLooper - The ReLooper slices and re-arranges samples in the playback buffer for either Deck A or Deck B, with the looped region defined by beat markers. Master ReLooper effects include a wha-wha filter, panoramic LFO, ring-modulator, and track-coder that combines a vocoder and low-fi distortion effect. Controllers Deckadance can be controlled using a mouse, keyboard, CD system, MIDI controller, or in the case of the Club Edition, timecoded vinyl. The program uses a MIDI auto detection system. Deckadance works with several timecoded vinyl and CDs. Through an "autolearning system," Image-Line claims the program can use essentially all CD and vinyl controllers on the market. When using vinyl, the program distinguishes between "absolute mode", which allows for needle dropping and jump track position from the vinyl, and "relative mode", which doesn't. Both modes allow for scratching and the manual control of playback speed and direction. Supported Midi Controllers Version history See also FL Studio Image-Line software Arguru VST plug-in References External links MacOS multimedia software Audio mixing software Windows multimedia software 2007 software DJing DJ software DJ equipment
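Deckadance's sampler banks capture 1-, 2-, 4- or 8-beat loops synchronised to the detected tempo. As an illustration of the underlying arithmetic only (this is not Deckadance's code, and the slice resolution is an assumed parameter), the length of such a loop and possible re-looper cut points follow directly from the BPM and the audio sample rate:

def loop_length_samples(bpm, beats, sample_rate=44100):
    """Number of audio samples covered by a loop of `beats` beats at `bpm`."""
    seconds_per_beat = 60.0 / bpm
    return round(beats * seconds_per_beat * sample_rate)

def slice_points(bpm, beats, slices_per_beat=4, sample_rate=44100):
    """Sample offsets at which a beat-synchronised re-looper could cut the buffer."""
    total = loop_length_samples(bpm, beats, sample_rate)
    n = beats * slices_per_beat
    return [round(i * total / n) for i in range(n)]

if __name__ == "__main__":
    print(loop_length_samples(128, 4))   # samples in a 4-beat loop at 128 BPM
    print(slice_points(128, 1)[:4])      # first few cut points of a 1-beat loop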
55259838
https://en.wikipedia.org/wiki/NOVA%20%28filesystem%29
NOVA (filesystem)
The NOVA (non-volatile memory accelerated) file system is an open-source, log-structured file system for byte-addressable persistent memory (for example non-volatile dual in-line memory module (NVDIMM) and 3D XPoint DIMMs) for Linux. NOVA is designed specifically for byte-addressable persistent memories and aims to provide high-performance, atomic file and metadata operations, and fault tolerance. To meet these goals NOVA combines several techniques found in other file systems. NOVA uses log structure, copy-on-write (COW), journaling, and log-structured metadata updates to provide strong atomicity guarantees, and it uses a combination of replication, metadata checksums, and RAID 4 parity to protect data and metadata from media errors and software bugs. It also supports checkpoints to facilitate backups. Filesystem NOVA was developed at the University of California, San Diego, in the Non-Volatile Systems Laboratory of the Computer Science and Engineering Department. Patches were initially made available for version 4.12 of the Linux kernel. It is limited to x86-64 Linux and is not ready for merging with the upstream kernel. Log structure NOVA is primarily a log-structured file system, but it differs from other log-structured file systems in several respects. First, rather than using a single log for the entire file system, each inode has its own, dedicated log that records the updates to the inode. This allows for increased concurrency in file operations, since different threads can operate on inodes in parallel. Second, the logs do not contain file data, but only metadata updates, resulting in smaller logs. Third, the logs are not stored in physically contiguous memory. Instead, NOVA stores the logs in a linked list of 4 KB memory pages. NOVA uses the logs to provide atomicity for operations that affect a single file (e.g., writing to a file or modifying its metadata). To do this, NOVA writes a log entry to empty space past the end of the log and then atomically updates the inode's pointer to the log tail. Copy-on-write NOVA uses copy-on-write (COW) to update file data. When a program writes data to a file, NOVA allocates some unused memory pages to hold the data and writes the data into them. Then, it appends a log entry to the inode's log that points to the new pages and describes their logical location in the file. Since appending the log entry is atomic, the write is also atomic. Journaling Some file operations (e.g., moving a file from one directory to another) require modifying multiple inodes. To make these operations atomic, NOVA uses a simple journaling mechanism. First, it writes the new log entries to the ends of the logs of the inodes that the operation will affect, then it uses the journal to record the necessary updates to the inodes' log tail pointers. Next, it marks the journal as committed and applies the updates to the tail pointers. Metadata protection NOVA uses replication and checksums to provide protection against metadata corruption due to media errors and software bugs. Every metadata structure (e.g., inodes, superblocks, and log entries) contains a CRC32 checksum that allows NOVA to detect whether a structure's contents have changed without its knowledge. NOVA also stores two copies of each data structure – the "primary" and the "replica" – and stores them far from one another in memory. Whenever NOVA accesses a metadata structure, it first recomputes the checksum on both the primary and the replica. If either check results in a mismatch, NOVA repairs the damage using the other copy. 
If neither checksum matches, then the structure is lost and NOVA returns an error. Data protection NOVA uses RAID 4 to protect file data. It divides each 4 KB page into 512-byte strips and stores a parity strip in a dedicated region of persistent memory. It also computes (and stores a replica of) a CRC32 checksum for the eight data strips and the parity strip. When NOVA reads a page, it confirms the checksum on each strip. If one of the strips is corrupt, it tries to recover the strip using the parity bits. If no other strips have experienced data corruption, recovery will succeed. Otherwise, recovery fails, the contents of the page are lost, and NOVA returns an error. References External links NOVA: A Log-structured File System for Hybrid Volatile/Non-volatile Main Memories Hardening the NOVA File System UCSD-CSE Techreport CS2017-1018 NOVA: The Fastest File System for NVDIMMs Free special-purpose file systems Free software programmed in C Linux kernel features Unix file system-related software
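The per-inode log and copy-on-write update described above can be sketched as follows. This is a conceptual model in Python with invented field names; the real NOVA structures live in persistent memory and are committed with atomic in-place stores:

import zlib

class InodeLog:
    """Toy model of NOVA's per-inode metadata log: entries are appended past the
    tail, checksummed, and only then does the tail pointer move (the commit step)."""

    def __init__(self):
        self.entries = []      # log entries written into free space
        self.tail = 0          # index one past the last committed entry

    def append(self, entry: bytes):
        checksum = zlib.crc32(entry)
        self.entries.append((entry, checksum))   # write past the current tail
        self.tail += 1                            # single pointer update commits it

    def replay(self):
        """Walk the committed log, verifying each entry's CRC32 as NOVA would."""
        for entry, checksum in self.entries[:self.tail]:
            if zlib.crc32(entry) != checksum:
                raise IOError("metadata corruption detected")
            yield entry

def cow_write(page_allocator, log, file_offset, data):
    """Copy-on-write file write: put data in fresh pages, then commit one log entry."""
    new_pages = page_allocator(len(data))         # unused pages receive the new data
    log.append(f"write off={file_offset} pages={new_pages}".encode())
    return new_pages

Similarly, the RAID 4 style protection of a 4 KB page split into 512-byte strips reduces to XOR parity plus per-strip checksums; again a conceptual sketch, not NOVA's code:

import zlib

STRIP = 512

def make_parity(strips):
    """XOR the eight 512-byte data strips into one parity strip."""
    parity = bytearray(STRIP)
    for s in strips:
        for i in range(STRIP):
            parity[i] ^= s[i]
    return bytes(parity)

def recover_strip(strips, parity, bad_index):
    """Rebuild a single corrupt strip from the parity and the remaining good strips."""
    rebuilt = bytearray(parity)
    for k, s in enumerate(strips):
        if k == bad_index:
            continue
        for i in range(STRIP):
            rebuilt[i] ^= s[i]
    return bytes(rebuilt)

def strip_ok(strip, stored_crc):
    """Per-strip integrity check corresponding to the stored CRC32."""
    return zlib.crc32(strip) == stored_crc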
27138025
https://en.wikipedia.org/wiki/Ruckus%20Networks
Ruckus Networks
Ruckus Networks (formerly known as Ruckus Wireless) is a brand of wired and wireless networking equipment and software owned by CommScope. Ruckus offers Switches, Wi-Fi access points, CBRS access points, Controllers, Management systems, Cloud management, AAA/BYOD software, AI and ML analytics software, location software and IoT controller software products to mobile carriers, broadband service providers, and corporate enterprises. As a company, Ruckus invented and has patented wireless voice, video, and data technology, such as adaptive antenna arrays that extend signal range, increase data rates, and avoid interference, providing distribution of delay-sensitive content over standard 802.11 Wi-Fi. Ruckus began trading on the New York Stock Exchange in 2012, and was delisted in 2016, after it was acquired by Brocade Communications Systems for approximately $1.5 billion on May 27, 2016. Ruckus Wireless and Brocade ICX line of Switching products were acquired by Arris International for $800 million in a deal finalized on December 1, 2017. The company was renamed as Ruckus Networks, an ARRIS company from Ruckus Wireless. On April 4, 2019, CommScope completed its acquisition of Arris, which included the recently acquired Ruckus. History Origin, Incubation and Funding Ruckus Networks started in 2002 as an incubator project with name SCEOS (Sequoia Capital Entertainment Operating System) funded by a small seed round from Sequoia Capital, in Menlo Park, California. After incubation Ruckus was incorporated in June 2004 as Video54 Technologies Inc., by William Kish and Victor Shtrom. Sequoia Capital, WK Technology Fund, and Sutter Hill Ventures initially funded the company. Selina Lo was the first CEO of the company and continued till company's acquisition by Brocade Communications in 2016. At its initial days, Ruckus focuses on In-Home IPTV content distribution over wireless. In 2007, Ruckus introduced a miniaturised wireless multimedia adapter, the MediaFlex USB Dongle, designed to provide wireless connectivity to in-home multimedia devices such as set-tops and media center systems. At 2007 CES event, Ruckus demonstrated the dongle with Motorola set top box VIP1720. On September 19, 2005, Ruckus announced that it has secured $9 million in its second round of financing, which increases total investment in the company to over $14 million since its formation in June 2004. Sutter Hill Ventures and Investor Growth Capital, the venture capital arm of Investor AB of Sweden, led the new financing. Existing investors Sequoia Capital and WK Technology Fund completed the oversubscribed round. WK Technology Fund led the $3.5 million first round of financing, while Sequoia Capital and private investors provided the initial $1.5 million of seed money. Ruckus Wireless has also appointed Wen Ko, founding managing director of WK Technology Fund, to its board of directors. In addition to Mr. Ko, the Ruckus board of directors includes Dominic Orr, chairman and former CEO of Alteon WebSystems and later Aruba Networks, Selina Lo, president and CEO, Gaurav Garg, former founder of Redback Networks and partner at Sequoia Capital, and Bill Kish, CTO and co-founder of Ruckus Wireless. Company Growth In addition Ruckus announced that PCCW, Hong Kong's primary telecommunications provider, will become the first company in the world to offer consumers a complete wireless multimedia solution, as part of its popular Netvigator Broadband Services, using new products and technology from Ruckus. 
In January 2005, Ruckus announced that the company had signed a worldwide agreement to license its BeamFlex technology to NETGEAR for the RangeMax product line. In April 2008, Ruckus introduced the ZoneDirector 3000, an enterprise WLAN controller; FlexMaster, a remote centralized Wi-Fi management system; and SmartMesh on its 802.11n access points. In 2008, Ruckus shifted its focus to the enterprise, carrier/service provider, education and hospitality verticals with the ZoneDirector controllers, which are built on technology acquired from AirSpider. In 2009, Ruckus entered the metro Wi-Fi market with an outdoor mesh system. Dispute In 2008, Ruckus sued Netgear, along with its new MIMO antenna supplier Rayspan, for patent infringement over the antenna designs and associated software that had replaced the BeamFlex technology which Netgear used to buy from Ruckus. Netgear had licensed the BeamFlex adaptive antenna for use in its RangeMax 824 v1 and v2 wireless routers; for the RangeMax 824 v3 routers, Netgear used Rayspan-made antennas, which Ruckus alleged infringed its patents. In 2010, three months after purchasing the ’454 and ’143 patents from Adaptix Corporation, Netgear sued Ruckus for patent infringement. Netgear alleged that Ruckus’ Wi-Fi access points equipped with Ruckus’ BeamFlex and ChannelFly technology infringed the ’454 and ’143 patents. Netgear also asserted claims for induced infringement and contributory infringement, alleging that Ruckus knowingly and intentionally caused Ruckus’ customers to infringe the asserted patents. On 4 November 2013, the Delaware jury returned a verdict of non-infringement for Ruckus, finding that Netgear had failed to establish that any of the accused Ruckus products infringed the asserted claims of the ’454 and ’143 patents. Further technology and products development On May 24, 2010, Ruckus announced that it had been granted a Wi-Fi security patent for dynamic authentication and encryption by the United States Patent and Trademark Office (USPTO). The patent is based on a technology developed by Ruckus called Dynamic Pre-Shared Key (DPSK), which issues a unique security key to each WLAN user, whereas a conventional pre-shared key requires all users of a WLAN to use the same key. The technology also aims to generate and provision security keys dynamically. On May 20, 2013, Ruckus introduced a follow-up technology to DPSK called Zero-IT Activation, which provisions unique security keys to client devices through a small agent-like piece of software. Ruckus unveiled its switching portfolio, the ZoneSwitch series, on June 21, 2010. The ZoneSwitch series had two models, the 4124 and 4224: 24-port Ethernet RJ-45 PoE switches with fiber uplinks and power budgets of 180 W and 375 W respectively. On the same day Ruckus introduced a new wall-switch access point, the ZoneFlex 7025, which fits compactly into a wall-jack junction box and provides additional switch ports. Ruckus aimed to serve multi-service in-room hospitality requirements with this access point and claimed it was the industry's first wall-jack access point. On January 27, 2012, Ruckus opened a new R&D center in the Diamond District Office Building in Bangalore, the company's first development centre in India, joining its four other R&D centres in Taiwan (Taipei), China (Shenzhen), Israel (Tel Aviv) and the US (Sunnyvale, CA). With the acquisition of YFind in 2013, an R&D center in Singapore was also added. IPO and Awards Ruckus went public on the New York Stock Exchange on November 16, 2012, under the symbol RKUS. 
On 12 June 2012, in an awards gala in London, Global Telecoms Business recognized Ruckus and Tikona Digital Networks for building the world's largest outdoor wireless mesh network in India, naming the companies the winner of its 2012 Broadband Wireless Access Innovation award. The Tikona network consisted of over 40,000 Ruckus ZoneFlex™ access points across 30 cities. The "IDC MarketScape: Worldwide Enterprise WLAN 2013-2014 Vendor Analysis" (doc #243354, September 2013) named Ruckus as a leader in the WLAN market. The report named three vendors as leaders: Cisco, Ruckus and Aruba. Acquisitions and Mergers On April 4, 2016, it was announced that Brocade Communications Systems had agreed to acquire Ruckus for $1.5 billion. The deal was completed on May 27, 2016. In November 2016, Broadcom announced its intention to acquire Brocade for $5.9 billion. The deal was finalized on November 21, 2017. Broadcom said it would retain Brocade's Fibre Channel SAN switching business and divest Brocade's IP networking business – including the recently acquired Ruckus Wireless. In February 2017, Arris International announced that it would acquire Broadcom's Ruckus Wireless and ICX Switch businesses for $800 million. The deal was completed on December 1, 2017. In November 2018 it was announced that CommScope would purchase the Arris group, including Ruckus Networks, for $7.4 billion. The acquisition was completed in April 2019. 2018-Present At MWC 2018 in Barcelona, Ruckus Networks announced its new CBRS LTE portfolio. U.S. Citizens Broadband Radio Service (CBRS) 3.5 GHz LTE access points and related cloud subscription services were introduced by Ruckus to enable enterprises to deploy private LTE networks, and Ruckus conducted a live demonstration at MWC '18. In December 2019, it was reported that a security researcher had found several vulnerabilities in the web user interface software that runs on a number of Ruckus Wireless routers, which the company had since patched. Technology Ruckus designed an adaptive directional antenna technology called BeamFlex, and sold the technology to other manufacturers to enable them to include it in their products. Ruckus also came out with a customer-premises equipment (CPE) device that was sold to service providers. Ruckus offers Wi-Fi products such as indoor and outdoor access points, including BeamFlex directional antenna technology, which automatically adjusts to changes in the radio frequency (RF) environment to provide stronger signals. Beamforming was later integrated into the 802.11n and 802.11ac versions of the 802.11 standard. Ruckus introduced BeamFlex+ with Polarization Diversity Maximal Ratio Combining (PD-MRC), an upgraded form of BeamFlex, to handle complex wireless environments. Ruckus ICX switches offer Campus Fabric technology in the FastIron operating system across the whole switch portfolio; it is based on the IEEE 802.1BR standard and aims at network automation and management simplicity. Other products that Ruckus offers include controllers, software, and smart wireless services. Acquisitions 2006 - AirSpider Networks 2011 October - India development centre of Intellinet Technologies Inc 2011/2012 - ComAbility 2013 - YFind Technologies Private Limited 2015 - Cloudpath Networks Alliances Ruckus and Juniper Networks announced a technology alliance to provide unified wired and wireless solutions to enterprise, government and education customer segments. 
On February 18, 2018, Ruckus Networks signed a strategic global OEM agreement with Dell EMC to deliver Ruckus' wireless portfolio, including access points (APs), controllers, virtualized and data analytics assets, and Cloudpath secure network access software—along with Ruckus IoT and OpenG LTE products—as Dell-branded solutions. Products and Services Ruckus Networks designs, sells and services IT networking products, such as switches, WLAN controllers, access points, IoT gateways and software. Ruckus started as a wireless-only company selling to Internet service providers (ISPs), hotel chains and large public venues, and later extended into education. Ruckus Networks products are generally categorized as follows. Switches Ruckus ICX (formerly Brocade ICX) switches feature the ability to manage multiple switches from a single IP address. The transition from Layer 2 to Layer 3 operation requires no hardware upgrades. Moreover, ICX products allow stacking with other switches of the same model for improved network device interoperability. Ruckus first entered the switching business in 2010 by introducing ZoneSwitch; however, ZoneSwitch has since been retired in favour of the Brocade ICX switching line. The Brocade ICX switching line focuses on campus switching for enterprises, small data centers and branch offices, in both access and core deployments. Wireless Access points Ruckus offers wireless access points that can support up to 500 simultaneous connections on a single node. In 2007, Ruckus launched ZoneFlex, a centrally managed WLAN platform for enterprises and carriers. At MWC 2018, Ruckus introduced IoT capabilities for its access points. Network Controllers SmartZone is the model name Ruckus uses for management systems that can manage ICX switches and its wireless access points under a single GUI. Wireless Controllers Ruckus traditionally offered two different controllers: ZoneDirector, geared for smaller, single-site deployments, and SmartZone, directed at carrier and large, multi-site enterprise deployments. SmartZone can be delivered as both a virtual and a physical appliance. Ruckus IoT Modules Radio or radio-and-sensor devices that connect to a Ruckus IoT-ready AP to enable endpoint connectivity based on standards such as Bluetooth Low Energy (BLE), Zigbee and LoRa protocols. Software and SaaS Ruckus Wireless provides software products to accompany its hardware, offering FlexMaster, SmartCell Insight, Ruckus ZonePlanner, Cloudpath, Ruckus Cloud Wi-Fi and Ruckus mobile applications. Cloud In mid-2016 Ruckus added a cloud Wi-Fi service named RUCKUS Cloud, a cloud-based WLAN controller that enables administrators to remotely manage distributed sites using a web UI or mobile app. Ruckus Cloud Wi-Fi is a subscription service, and the Ruckus Cloud service was extended to support ICX switches in 2019. Unleashed Ruckus Unleashed is a controller-less platform for small and medium businesses that includes controller functionality in each access point, negating the need for a separate network appliance to manage the Wi-Fi access points. Ruckus Unleashed can be managed by a mobile app. Cloudpath Cloudpath Networks was founded in 2006 in Ontario, Canada. On October 22, 2015, Ruckus acquired Cloudpath Networks for an undisclosed amount. Cloudpath invented the Wi-Fi onboarding paradigm in 2006 and onboards personal devices each year utilizing the Automated Device Enablement (ADE) approach built into its XpressConnect product line. 
Cloudpath promoted the adoption of WPA2-Enterprise in environments large and small through its self-service onboarding portal, utilizing standards-based security such as WPA2-Enterprise, 802.1X and X.509. At its core, Cloudpath provides AAA services. With EAP-TLS certificate-based authentication as its primary focus, and an onboarding portal with a dissolvable agent that can provision certificates directly to a TPM, Cloudpath aims to reduce the administrative overhead of EAP-TLS deployment. It also focuses on BYOD and guest authentication. Apart from using external certificate authorities, it also has a built-in certificate authority for managing its own public key infrastructure (PKI). Ruckus provides the Cloudpath software both as a cloud-hosted software-as-a-service (SaaS) version and as an on-premises version installable on VMware. Location services With the acquisition of YFind Technologies in 2013, Ruckus entered the location-based services and analytics business with a product named Ruckus SPoT (Smart Positioning Technology). SPoT is designed to provide footfall analytics through a built-in web GUI and an API for integration with external systems; through the API, SPoT has been integrated with Aislelabs, Skyfi, Ragapa, Gimbal and a few other software packages and services. Ruckus announced that it had partnered with IBM to provide location services on top of its Wi-Fi at the Vivid Sydney festival. Vivid Sydney provided visitors with over 50 light installations around the city, including the Vivid Aquatique Water Theatre at Darling Harbour, the Sydney Opera House Sails, the Museum of Contemporary Art Australia and the Martin Place ‘Urban Tree Project’. SPoT is offered either as an on-premises installation or as a cloud-based SaaS model. Ruckus IoT Suite At Mobile World Congress 2018 in Barcelona, Ruckus announced the Ruckus IoT Suite, which enables organizations to readily construct a secure IoT access network that consolidates multiple physical-layer IoT networks into a single network. The aim of the product is to reuse WLAN products for IoT infrastructure, creating a common infrastructure for both the WLAN and the IoT network in order to save costs and achieve WLAN-grade security. Ruckus initially partnered with various IoT ecosystem vendors such as Assa Abloy, Actility, IBM Watson, Tile, Kontakt.io and TrackR to provide IoT solutions on this common infrastructure. Unleashed Multi-Site Manager (formerly FlexMaster) The Ruckus Unleashed Multi-Site Manager (UMM) is a Linux-based managed service platform for configuration, fault detection, audit, performance management and optimisation of hundreds of thousands of remote Smart Wi-Fi APs or Smart Wireless LANs (WLANs) from a single point. SmartCell Insight Ruckus SmartCell Insight collects and correlates Wi-Fi performance metrics into 19 available reports, offers do-it-yourself analytics using custom reports and long-term retention, and is a software-only solution residing on a Linux server running CentOS/RHEL. ZonePlanner The Ruckus ZonePlanner is a Smart Wi-Fi RF simulation tool that integrates Ruckus’ adaptive antenna patterns. ZonePlanner is a rebranded AirMagnet planner locked to Ruckus access points only; Ruckus later replaced ZonePlanner with the Intangi Iris wireless planner and YagnaIQ. Ruckus mobile applications Ruckus Wireless has mobile applications called Ruckus SpeedFlex (zapd), Ruckus ZD Remote Control, Ruckus Cloud, Ruckus Unleashed, the Ruckus SPoT mobile calibration tool and Ruckus SWIPE. Ruckus’ Xclaim Wireless division provided Harmony, a mobile app to manage Xclaim Wireless access points when no cloud connectivity is available. 
In 2010, Ruckus internal developed Zap tool has been made open source. Zap is a test tool developed by Ruckus to assist in wireless network test and characterisation. Ruckus SpeedFlex mobile app uses Zap protocol to test packets and throughput statistics. Industries Served Enterprises Ruckus relies heavily on its enterprise business, as it is two-thirds of their revenue with almost 100% of enterprise sales coming from the channel. Many of these enterprises include hospitality, retail, healthcare, transportation hubs, and education. Carriers Ruckus Wireless started out by selling to carrier companies and they are still a good portion of their business. Their customers include C3ntro Telecom, Verizon, AT&T, Time Warner Cable, Deutsche Telekom, and China Telecom. One-third of Ruckus’ business is selling to carrier businesses. Hospitality Ruckus primarily focussed at Hospitality industry with few vertical-focussed products for Guest Wi-Fi & Switching, IPTV connectivity. Ruckus introduced hospitality focussed products like Wall switch In-room APs (ZoneFlex 7025, ZoneFlex 7055, H500, H510, H320, H550) and switching product lines with PoE and fiber ports and also introduced several integrations with Hospitality software and services. Further in 2018, Ruckus introduced IoT suite focussed for Hospitality business. Small businesses Xclaim Wireless was launched by Ruckus in May 2015 as a new division aimed at small businesses and Home office setups. Xclaim Wireless provides Cloud based product portfolio designed for enterprise-class features at consumer-level prices. Xclaim Wireless introduced two 11n indoor, one 11ac indoor and one 11ac outdoor access point models namely Xi-1, Xi-2, Xi-3 and Xo-1 respectively. To manage APs company provided CloudManager, a free to use Cloud based access point configuration manager. CloudManager offered configuration, reporting, multi-location management, ACL, Guest portal. CloudManager utilizes Amazon Web Services Cloud platform for SaaS service. Ruckus announced ConnectedData and Smartnet as their customers. XclaimWireless also introduced 'Harmony' mobile app based configuration for intranet management without CloudManager. Ruckus ended sale of Xclaimwireless portfolio favouring its Unleashed Controller-less and Ruckus Cloud platforms on July 25, 2018. References External links Website : http://commscope.com/ruckus Companies formerly listed on the New York Stock Exchange Networking hardware companies Companies based in Sunnyvale, California American companies established in 2004 2016 mergers and acquisitions 2017 mergers and acquisitions American corporate subsidiaries
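Ruckus's patented Dynamic PSK scheme is described above only at a high level, and its actual key-derivation method is not given here. Purely to illustrate the general idea of issuing a different pre-shared key to every user instead of one shared passphrase, a controller could derive per-user keys from a secret it holds; the HMAC-based derivation below is an invented example, not the patented method:

import hmac, hashlib, secrets

CONTROLLER_SECRET = secrets.token_bytes(32)   # held by the WLAN controller in this sketch

def dynamic_psk(username: str, ssid: str, length: int = 16) -> str:
    """Derive a unique, reproducible pre-shared key for one user on one SSID."""
    digest = hmac.new(CONTROLLER_SECRET, f"{ssid}:{username}".encode(),
                      hashlib.sha256).hexdigest()
    return digest[:length * 2]   # hex string; a real system would also handle expiry/revocation

if __name__ == "__main__":
    # Each user receives a different key; compromising one does not expose the others.
    print(dynamic_psk("alice", "corp-wifi"))
    print(dynamic_psk("bob", "corp-wifi"))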
13364214
https://en.wikipedia.org/wiki/SciTech%20SNAP
SciTech SNAP
SciTech SNAP (System Neutral Access Protocol) is an operating system portable, dynamically loadable, native-size 32-bit/64-bit device driver architecture. SciTech SNAP defines the architecture for loading an operating system neutral binary device driver for any type of hardware device, be it a graphics controller, audio controller, SCSI controller or network controller. SciTech SNAP drivers are source code portable between different microprocessor platforms, and the binary drivers are operating system portable within a particular microprocessor family. SNAP drivers were originally developed for Intel 386+ CPU with any 32-bit operating system or environment supported on that CPU. With the introduction of SNAP 3.0, native binary SNAP drivers are available for 32-bit PowerPC CPUs and 64-bit x86-64 CPUs. On 27 August 2002, SciTech Software, Inc. announced the intention to release the Scitech SNAP driver development kit. On 16 November 2006, SciTech Software, Inc. announced that it has ceased further development of its SNAP device driver technology in favor of a new line of web and business logic technologies. SciTech also announced that it would begin looking for a buyer for SciTech SNAP. In December 2008 Alt Richmond Inc. closed the acquisition of SciTech Software's SNAP technology. The plans of SciTech Software in 2008 to create OpenSNAP, an open source version of the driver technology, are therefore no longer an option unless Alt Richmond decides to pick this up. In May 2015, Arca Noae, LLC announced that they have reached an agreement with Alt Richmond, Inc. to license the source code for SNAP Graphics for OS/2. Relationship with Scitech Display Doctor SciTech Display Doctor 6.5 included a replacement video driver for Windows 95 or higher, which works with any hardware supported by SDD. In SDD 7, the driver was renamed to Scitech Nucleus Graphics driver. The Nucleus Graphics driver was later incorporated into SciTech SNAP Graphics. In SNAP 3, Nucleus was renamed to SNAP. SciTech SNAP Graphics version 2 also included VBETest/Lite - VESA BIOS Extensions (VBE) Compliance Test version 8.00. It was later removed in SciTech SNAP Graphics 3. In SciTech SNAP 3 for DOS, most of the OpenGL tests from SciTech Display Doctor 7 beta can be found in GACtrl Driver Control Center. The Windows version of Scitech SNAP Graphics maintained the user interface found in SDD 7 beta. SciTech SNAP Graphics It is the first product for the SciTech SNAP line, which provides accelerated graphics. SciTech SNAP Graphics has been ported to DOS, OS/2, Microsoft Windows (CE, NT, 2000, XP), QNX, SMX (the SunOS/Solaris port of MINIX), Linux, On Time RTOS-32, Unununium OS operating systems. Supported hardware included video processors from 3dfx, 3Dlabs, Alliance Semiconductor, AMD (Geode GX2), ARK Logic, ATI, Chips & Technologies, Cirrus Logic, Cyrix, IBM, InteGraphics, Intel, Matrox, NeoMagic, Number Nine, NVIDIA, Oak, Philips, Rendition, S3, Sigma Designs, Silicon Motion, SiS, Tseng Labs, Trident, VIA, Weitek, as well as any video card supporting VBE 1.2 or higher. Although SciTech SNAP Graphics does not offer standalone VBE driver, SNAP driver accelerates applications using VBE calls via SciTech SNAP Graphics driver. SNAP Graphics for Windows can also accelerate VBE 3 calls, if DOS programs is run in Windows DOS box. 
Spin-off products SciTech SNAP Graphics ENT SciTech SNAP Graphics ENT/BC with DPVL support (SciTech SNAP Graphics VESA DPVL) SciTech SNAP Graphics IES Personal Edition SciTech also offer SciTech SNAP Graphics "PE" (Personal Edition) under the My SciTech site, which allows registered users to download a SNAP driver of hardware and operating system specified by users. Each user account can download two drivers per week. The driver generated by the service can be run for six months. In Scitech SNAP Graphics PE, tools GACtrl, GAMode, GAOption, GAPerf DOS tools are included. The GLDirect tests are not included in Windows driver. SciTech SNAP Audio Similar to Scitech SNAP Graphics, it provides OS-independent audio drivers. It has been ported to Windows NT 4.0. Supported hardware include AC'97 and Intel HDA, but HDA does not support modem function. SciTech SNAP DDC It is designed to provide easy access to an attached display in order to program it directly via I²C or simply to read the monitor's EDID record. See also Allegro (software library) References External links SciTech Software Inc. Announces that the Complete Source Code to its Leading Edge Graphics Driver Technology SciTech SNAP Graphics is for Sale Alt Richmond Inc., the new owner of the SciTech SNAP technology Device drivers
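SNAP DDC's reading of a monitor's EDID record over I²C is hardware-specific, but the EDID block itself is a published VESA format. As a small illustration independent of SNAP's own API (which is not documented here), the vendor ID, product code and checksum of a 128-byte EDID base block can be decoded as follows:

def parse_edid(edid: bytes):
    """Decode a few fields of a 128-byte EDID base block (VESA E-EDID)."""
    if len(edid) < 128:
        raise ValueError("EDID base block must be at least 128 bytes")
    if edid[0:8] != bytes([0x00, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0x00]):
        raise ValueError("not an EDID base block")
    # Bytes 8-9: manufacturer ID, three 5-bit letters packed big-endian ('A' == 1).
    word = (edid[8] << 8) | edid[9]
    vendor = "".join(chr(((word >> shift) & 0x1F) + ord("A") - 1)
                     for shift in (10, 5, 0))
    product = edid[10] | (edid[11] << 8)          # little-endian product code
    checksum_ok = sum(edid[:128]) % 256 == 0      # all 128 bytes must sum to 0 mod 256
    return {"vendor": vendor, "product": product, "checksum_ok": checksum_ok}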
25748
https://en.wikipedia.org/wiki/Router%20%28computing%29
Router (computing)
A router is a networking device that forwards data packets between computer networks. Routers perform the traffic directing functions on the Internet. Data sent through the internet, such as a web page or email, is in the form of data packets. A packet is typically forwarded from one router to another router through the networks that constitute an internetwork (e.g. the Internet) until it reaches its destination node. A router is connected to two or more data lines from different IP networks. When a data packet comes in on one of the lines, the router reads the network address information in the packet header to determine the ultimate destination. Then, using information in its routing table or routing policy, it directs the packet to the next network on its journey. The most familiar type of IP routers are home and small office routers that simply forward IP packets between the home computers and the Internet. More sophisticated routers, such as enterprise routers, connect large business or ISP networks up to the powerful core routers that forward data at high speed along the optical fiber lines of the Internet backbone. Operation When multiple routers are used in interconnected networks, the routers can exchange information about destination addresses using a routing protocol. Each router builds up a routing table, a list of routes, between two computer systems on the interconnected networks. The software that runs the router is composed of two functional processing units that operate simultaneously, called planes: Control plane: A router maintains a routing table that lists which route should be used to forward a data packet, and through which physical interface connection. It does this using internal pre-configured directives, called static routes, or by learning routes dynamically using a routing protocol. Static and dynamic routes are stored in the routing table. The control-plane logic then strips non-essential directives from the table and builds a forwarding information base (FIB) to be used by the forwarding plane. Forwarding plane: This unit forwards the data packets between incoming and outgoing interface connections. It reads the header of each packet as it comes in, matches the destination to entries in the FIB supplied by the control plane, and directs the packet to the outgoing network specified in the FIB. Applications A router may have interfaces for different types of physical layer connections, such as copper cables, fiber optic, or wireless transmission. It can also support different network layer transmission standards. Each network interface is used to enable data packets to be forwarded from one transmission system to another. Routers may also be used to connect two or more logical groups of computer devices known as subnets, each with a different network prefix. Routers may provide connectivity within enterprises, between enterprises and the Internet, or between internet service providers' (ISPs') networks. The largest routers (such as the Cisco CRS-1 or Juniper PTX) interconnect the various ISPs, or may be used in large enterprise networks. Smaller routers usually provide connectivity for typical home and office networks. All sizes of routers may be found inside enterprises. The most powerful routers are usually found in ISPs, academic and research facilities. Large businesses may also need more powerful routers to cope with ever-increasing demands of intranet data traffic. 
A hierarchical internetworking model for interconnecting routers in large networks is in common use. Access, core and distribution Access routers, including small office/home office (SOHO) models, are located at home and customer sites such as branch offices that do not need hierarchical routing of their own. Typically, they are optimized for low cost. Some SOHO routers are capable of running alternative free Linux-based firmware like Tomato, OpenWrt, or DD-WRT. Distribution routers aggregate traffic from multiple access routers. Distribution routers are often responsible for enforcing quality of service across a wide area network (WAN), so they may have considerable memory installed, multiple WAN interface connections, and substantial onboard data processing routines. They may also provide connectivity to groups of file servers or other external networks. In enterprises, a core router may provide a collapsed backbone interconnecting the distribution tier routers from multiple buildings of a campus, or large enterprise locations. They tend to be optimized for high bandwidth, but lack some of the features of edge routers. Security External networks must be carefully considered as part of the overall security strategy of the local network. A router may include a firewall, VPN handling, and other security functions, or they may be handled by separate devices. Routers also commonly perform network address translation which restricts connections initiated from external connections but is not recognized as a security feature by all experts. Some experts argue that open source routers are more secure and reliable than closed source routers because open-source routers allow mistakes to be quickly found and corrected. Routing different networks Routers are also often distinguished on the basis of the network in which they operate. A router in a local area network (LAN) of a single organisation is called an interior router. A router that is operated in the Internet backbone is described as exterior router. While a router that connects a LAN with the Internet or a wide area network (WAN) is called a border router, or gateway router. Internet connectivity and internal use Routers intended for ISP and major enterprise connectivity usually exchange routing information using the Border Gateway Protocol (BGP). defines the types of BGP routers according to their functions: Edge router (also called a provider edge router): Placed at the edge of an ISP network. The router uses Exterior Border Gateway Protocol (EBGP) to routers at other ISPs or large enterprise autonomous systems. Subscriber edge router (also called a customer edge router): Located at the edge of the subscriber's network, it also uses EBGP to its provider's autonomous system. It is typically used in an (enterprise) organization. Inter-provider border router: A BGP router for interconnecting ISPs that maintains BGP sessions with other BGP routers in ISP Autonomous Systems. Core router: Resides within an Autonomous System as a backbone to carry traffic between edge routers. Within an ISP: In the ISP's autonomous system, a router uses internal BGP to communicate with other ISP edge routers, other intranet core routers, or the ISP's intranet provider border routers. Internet backbone: The Internet no longer has a clearly identifiable backbone, unlike its predecessor networks. See default-free zone (DFZ). The major ISPs' system routers make up what could be considered to be the current Internet backbone core. 
ISPs operate all four types of the BGP routers described here. An ISP core router is used to interconnect its edge and border routers. Core routers may also have specialized functions in virtual private networks based on a combination of BGP and Multi-Protocol Label Switching protocols. Port forwarding: Routers are also used for port forwarding between private Internet-connected servers. Voice, data, fax, and video processing routers: Commonly referred to as access servers or gateways, these devices are used to route and process voice, data, video and fax traffic on the Internet. Since 2005, most long-distance phone calls have been processed as IP traffic (VOIP) through a voice gateway. Use of access server-type routers expanded with the advent of the Internet, first with dial-up access and another resurgence with voice phone service. Larger networks commonly use multilayer switches, with layer-3 devices being used to simply interconnect multiple subnets within the same security zone, and higher-layer switches when filtering, translation, load balancing, or other higher-level functions are required, especially between zones. History The concept of an Interface computer was first proposed by Donald Davies for the NPL network in 1966. The same idea was conceived by Wesley Clark the following year for use in the ARPANET. Named Interface Message Processors (IMPs), these computers had fundamentally the same functionality as a router does today. The idea for a router (called gateway at the time) initially came about through an international group of computer networking researchers called the International Networking Working Group (INWG). Set up in 1972 as an informal group to consider the technical issues involved in connecting different networks, it became a subcommittee of the International Federation for Information Processing later that year. These gateway devices were different from most previous packet switching schemes in two ways. First, they connected dissimilar kinds of networks, such as serial lines and local area networks. Second, they were connectionless devices, which had no role in assuring that traffic was delivered reliably, leaving that function entirely to the hosts. This particular idea, the end-to-end principle, had been previously pioneered in the CYCLADES network. The idea was explored in more detail, with the intention to produce a prototype system as part of two contemporaneous programs. One was the initial DARPA-initiated program, which created the TCP/IP architecture in use today. The other was a program at Xerox PARC to explore new networking technologies, which produced the PARC Universal Packet system; due to corporate intellectual property concerns it received little attention outside Xerox for years. Some time after early 1974, the first Xerox routers became operational. The first true IP router was developed by Ginny Strazisar at BBN, as part of that DARPA-initiated effort, during 1975–1976. By the end of 1976, three PDP-11-based routers were in service in the experimental prototype Internet. The first multiprotocol routers were independently created by staff researchers at MIT and Stanford in 1981 and both were also based on PDP-11s. Stanford's router program was by William Yeager and MIT's by Noel Chiappa. Virtually all networking now uses TCP/IP, but multiprotocol routers are still manufactured. They were important in the early stages of the growth of computer networking when protocols other than TCP/IP were in use. 
Modern routers that handle both IPv4 and IPv6 are multiprotocol but are simpler devices than ones processing AppleTalk, DECnet, IP, and Xerox protocols. From the mid-1970s and in the 1980s, general-purpose minicomputers served as routers. Modern high-speed routers are network processors or highly specialized computers with extra hardware acceleration added to speed both common routing functions, such as packet forwarding, and specialized functions such as IPsec encryption. There is substantial use of Linux and Unix software-based machines, running open source routing code, for research and other applications. The Cisco IOS operating system was independently designed. Major router operating systems, such as Junos and NX-OS, are extensively modified versions of Unix software. Forwarding The main purpose of a router is to connect multiple networks and forward packets destined either for directly attached networks or more remote networks. A router is considered a layer-3 device because its primary forwarding decision is based on the information in the layer-3 IP packet, specifically the destination IP address. When a router receives a packet, it searches its routing table to find the best match between the destination IP address of the packet and one of the addresses in the routing table. Once a match is found, the packet is encapsulated in the layer-2 data link frame for the outgoing interface indicated in the table entry. A router typically does not look into the packet payload, but only at the layer-3 addresses to make a forwarding decision, plus optionally other information in the header for hints on, for example, quality of service (QoS). For pure IP forwarding, a router is designed to minimize the state information associated with individual packets. Once a packet is forwarded, the router does not retain any historical information about the packet. The routing table itself can contain information derived from a variety of sources, such as a default or static routes that are configured manually, or dynamic entries from routing protocols where the router learns routes from other routers. A default route is one that is used to route all traffic whose destination does not otherwise appear in the routing table; it is common – even necessary – in small networks, such as a home or small business where the default route simply sends all non-local traffic to the Internet service provider. The default route can be manually configured (as a static route); learned by dynamic routing protocols; or be obtained by DHCP. A router can run more than one routing protocol at a time, particularly if it serves as an autonomous system border router between parts of a network that run different routing protocols; if it does so, then redistribution may be used (usually selectively) to share information between the different protocols running on the same router. Besides deciding to which interface a packet is forwarded, which is handled primarily via the routing table, a router also has to manage congestion when packets arrive at a rate higher than the router can process. Three policies commonly used are tail drop, random early detection (RED), and weighted random early detection (WRED). Tail drop is the simplest and most easily implemented: the router simply drops new incoming packets once buffer space in the router is exhausted. 
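As a rough illustration of the tail-drop policy just described (a minimal sketch, not router firmware; the queue capacity and packet representation are arbitrary examples), a bounded output queue simply refuses new arrivals once its buffer is full:

from collections import deque

class TailDropQueue:
    """Bounded FIFO output queue: new arrivals are dropped when the buffer is full."""

    def __init__(self, capacity=64):
        self.capacity = capacity   # maximum number of packets the buffer can hold
        self.buffer = deque()
        self.dropped = 0           # counter of packets lost to tail drop

    def enqueue(self, packet):
        if len(self.buffer) >= self.capacity:
            self.dropped += 1      # tail drop: the newest packet is simply discarded
            return False
        self.buffer.append(packet)
        return True

    def dequeue(self):
        # Hand the oldest buffered packet to the outgoing interface, if any.
        return self.buffer.popleft() if self.buffer else None

The schemes described next replace this hard full-or-not test with a drop probability that rises as the queue fills.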
RED probabilistically drops datagrams early when the queue exceeds a pre-configured portion of the buffer, until reaching a pre-determined maximum, when it drops all incoming packets, thus reverting to tail drop. WRED can be configured to drop packets more readily dependent on the type of traffic. Another function a router performs is traffic classification and deciding which packet should be processed first. This is managed through QoS, which is critical when Voice over IP is deployed, so as not to introduce excessive latency. Yet another function a router performs is called policy-based routing where special rules are constructed to override the rules derived from the routing table when a packet forwarding decision is made. Some of the functions may be performed through an application-specific integrated circuit (ASIC) to avoid overhead of scheduling CPU time to process the packets. Others may have to be performed through the CPU as these packets need special attention that cannot be handled by an ASIC. See also Mobile broadband modem Modem Residential gateway Switch virtual interface Wireless router Notes References External links Internet architecture Hardware routers Networking hardware Server appliance Computer networking
13137945
https://en.wikipedia.org/wiki/Windows%20Resource%20Protection
Windows Resource Protection
Windows Resource Protection is a feature in Windows Vista and newer Windows operating systems that replaces Windows File Protection. It protects registry keys and folders in addition to critical system files. The way it protects resources differs entirely from the method used by Windows File Protection. Overview Windows File Protection worked by registering for notification of file changes in Winlogon; if any changes were detected to a protected system file, the modified file was restored from a cached copy located in a compressed folder at . Windows Resource Protection, by contrast, works by setting discretionary access control lists (DACLs) and access control lists (ACLs) defined for protected resources. Permission for full access to modify WRP-protected resources is restricted to the processes using the Windows Modules Installer service (TrustedInstaller.exe). Administrators no longer have full rights to system files; they have to use the SetupAPI or take ownership of the resource and add the appropriate Access Control Entries (ACEs) to modify or replace it. The "Trusted Installer" account is used to secure core operating system files and registry keys. Protected resources Windows Resource Protection protects a large number of file types: *.acm *.ade *.adp *.app *.asa *.asp *.aspx *.ax *.bas *.bat *.bin *.cer *.chm *.clb *.cmd *.cnt *.cnv *.com *.cpl *.cpx *.crt *.csh *.dll *.drv *.dtd *.exe *.fxp *.grp *.h1s *.hlp *.hta *.ime *.inf *.ins *.isp *.its *.js *.jse *.ksh *.lnk *.mad *.maf *.mag *.mam *.man *.maq *.mar *.mas *.mat *.mau *.mav *.maw *.mda *.mdb *.mde *.mdt *.mdw *.mdz *.msc *.msi *.msp *.mst *.mui *.nls *.ocx *.ops *.pal *.pcd *.pif *.prf *.prg *.pst *.reg *.scf *.scr *.sct *.shb *.shs *.sys *.tlb *.tsp *.url *.vb *.vbe *.vbs *.vsmacros *.vss *.vst *.vsw *.ws *.wsc *.wsf *.wsh *.xsd *.xsl WRP also protects several critical folders. A folder containing only WRP-protected files may be locked so that only the trusted installer is able to create files or subfolders in the folder. A folder may be partially locked to enable administrators to create files and subfolders in the folder. Essential registry keys installed by Windows Vista are also protected. If a key is protected by WRP, all its sub-keys and values can be protected. Also, WRP copies only those files that are needed to restart Windows to the cache directory located at . Critical files that are not needed to restart Windows are not copied to the cache directory, unlike Windows File Protection, which cached the entire set of protected file types in the Dllcache folder. The size of the cache directory and the list of files copied to cache cannot be modified. Windows Resource Protection applies stricter measures to protect files. As a result, Windows File Protection is not available under Windows Vista. In order to replace any single protected file, Windows File Protection had to be disabled completely; Windows Resource Protection works on a per-item basis by setting ACLs. Therefore, by taking ownership of any single item, that particular item can be replaced, while other items remain protected. System File Checker is also integrated with WRP. Under Windows Vista, Sfc.exe can be used to check specific folder paths, including the Windows folder and the boot folder. See also Windows File Protection System File Checker Access Control List Security Identifier External links Windows Resource Protection in Windows Vista More information on WRP and application compatibility Windows administration Windows components Windows Vista
474714
https://en.wikipedia.org/wiki/Quartz%202D
Quartz 2D
Quartz 2D is the native two-dimensional graphics rendering API for macOS and iOS platforms, part of the Core Graphics framework. Overview Quartz 2D is available to all macOS and iOS application environments and provides resolution-independent and device-independent rendering of bitmap graphics, text, and vectors both on-screen and in preparation for printing. Its responsibilities within the graphics layer include: Rendering text Displaying, manipulating, and rendering PDF documents Converting PostScript data to PDF data, and vice versa Displaying, manipulating, and rendering bitmap images Providing color management via ColorSync Displaying the elements of the Aqua user interface Because Quartz 2D is one of several Quartz Technologies, the term "Quartz" by itself must be taken in context. Drawing in Quartz 2D Quartz 2D expands the drawing functions associated with QuickDraw. The most notable difference is that Quartz 2D eliminates output device and resolution specificity. The drawing model utilized by Quartz 2D is based on PDF specification 1.4. Drawing takes place using a Cartesian coordinate system, where text, vectors, or bitmap images are placed on a grid. However, drawing output is not sent directly to the output device. Quartz 2D uses graphics contexts, environments in which drawing takes place. Each graphics context defines how the drawing should be presented: in a window, sent to a printer, an OpenGL layer, or off-screen. Each context rasterizes the drawing at the desired resolution without altering the data that defines the drawing. Thus, contexts are the mechanism by which Quartz 2D employs resolution- and device-independence. For example, a window context may rasterize an object to the appropriate screen resolution to create actual graphics on the display. The same object can be sent to a printing context at a much higher resolution. This permits the same graphics commands to yield output on any device using the most appropriate resolution. History Quartz 2D is similar to NeXT's Display PostScript in its use of contexts. It first appeared as the 2D graphics rendering library called Core Graphics Rendering; along with Core Graphics Services (Compositing), it was wrapped into the initial incarnation of Quartz. Quartz (and its renderer) were first demonstrated at WWDC in May 1999. Presently, the name Quartz 2D more precisely defines the 2D rendering capabilities of Core Graphics (Quartz). With the release of Mac OS X 10.2, marketing attention focused on Quartz Extreme, the composition layer, leaving the term "Quartz" to refer to the Core Graphics framework or just its 2D renderer. Presently, Quartz technologies can describe all of the rendering and compositing technologies introduced by macOS (including Core Image for example). Prior to Mac OS X Tiger, QuickDraw rendering outperformed that of Quartz 2D. Mac OS X 10.4 rectified this, substantially increasing the standard rendering performance of Quartz 2D. Tiger also introduced Quartz 2D Extreme: optional graphics processor (GPU) acceleration for Quartz 2D, although it is not an officially supported feature. Quartz 2D Extreme is disabled by default in Mac OS X 10.4 because it may lead to video redraw issues or kernel panics. In Mac OS X Leopard, Quartz 2D Extreme was renamed QuartzGL. 
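The context-based drawing model described above can be exercised from a short script. The following sketch is illustrative only: it assumes macOS with the PyObjC Quartz bindings (the pyobjc-framework-Quartz package) installed, and the sizes and colors are arbitrary examples. It draws into an off-screen bitmap graphics context rather than a window or printing context:

# Minimal sketch of drawing through a Quartz 2D graphics context via PyObjC.
# Assumes macOS with the pyobjc-framework-Quartz package installed.
import Quartz

width, height = 200, 100
color_space = Quartz.CGColorSpaceCreateDeviceRGB()

# Off-screen bitmap context; passing None and 0 lets Quartz allocate the pixel
# buffer and compute the bytes-per-row value itself.
ctx = Quartz.CGBitmapContextCreate(
    None, width, height, 8, 0, color_space,
    Quartz.kCGImageAlphaPremultipliedLast,
)

# All drawing goes through the context; the same calls could equally target a
# window, PDF, or printing context at a different resolution.
Quartz.CGContextSetRGBFillColor(ctx, 0.2, 0.4, 0.8, 1.0)           # opaque blue
Quartz.CGContextFillRect(ctx, Quartz.CGRectMake(10, 10, 180, 80))

image = Quartz.CGBitmapContextCreateImage(ctx)                      # snapshot as a CGImage
print(Quartz.CGImageGetWidth(image), Quartz.CGImageGetHeight(image))

Because the drawing commands address the context rather than a device, pointing the same calls at a different context is what gives Quartz 2D its resolution and device independence.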
See also Quartz (graphics layer) Quartz Compositor QuickDraw Display PostScript Core Image Direct2D References External links Mac OS X – Features – Quartz Extreme – from Apple Introduction to Quartz 2D Programming Guide – developer documentation from Apple Introduction to Quartz 2D for QuickDraw Programmers – developer documentation from O'Reilly's MacDevCenter Graphics libraries macOS APIs
29215118
https://en.wikipedia.org/wiki/Comparison%20of%20desktop%20publishing%20software
Comparison of desktop publishing software
The following is a comparison of major desktop publishing software. Overview This table provides general software information including the developer, latest stable version, the year in which the software was first released, and the license under which it is available. Operating system This table gives a comparison of what operating systems are compatible with each software in their latest version. Input format This table gives a comparison of the file formats each software can import or open. Output format This table gives a comparison of the file formats each software can export or save. See also List of desktop publishing software References Desktop publishing software Desktop publishing software
1558384
https://en.wikipedia.org/wiki/Web%20conferencing
Web conferencing
Web conferencing is used as an umbrella term for various types of online conferencing and collaborative services including webinars (web seminars), webcasts, and web meetings. Sometimes it is also used in the narrower sense of the peer-level web meeting context, in an attempt to disambiguate it from the other types known as collaborative sessions. The terminology for these technologies is broadly agreed upon and anchored in web conferencing standards, although individual organizations' usage practices also serve as references for how the terms are applied. In general, web conferencing is made possible by Internet technologies, particularly on TCP/IP connections. Services may allow real-time point-to-point communications as well as multicast communications from one sender to many receivers. Data streams of text-based messages, voice and video chat can be shared simultaneously across geographically dispersed locations. Applications for web conferencing include meetings, training events, lectures, or presentations from a web-connected computer to other web-connected computers. Installation and operation Web conferencing software is invoked by all participants in a web meeting. Some technologies include software and functionality that differs for presenters and attendees. Software may run as a web browser application (often relying on Adobe Flash, Java, or WebRTC to provide the operational platform). Other web conferencing technologies require download and installation of software on each participant's computer, which is invoked as a local application. Many web conferencing vendors provide the central connectivity and provisioning of meeting "ports" or "seats" as a hosted web service, while others allow the web conference host to install and run the software on its own local servers. Another installation option from certain vendors allows for use of a proprietary computer appliance that is installed at the hosting company's physical location. Depending on the technology being used, participants may speak and listen to audio over standard telephone lines or via computer microphones and speakers. Some products allow for use of a webcam to display participants, while others may require their own proprietary encoding or externally provided encoding of a video feed (for example, from a professional video camera connected via an IEEE 1394 interface) that is displayed in the session. Vendor-hosted web conferencing is usually licensed as a service based on one of three pricing models: a fixed cost per user per minute, a monthly or annual flat fee allowing unlimited use with a fixed maximum capacity per session, or a sliding rate fee based on the number of allowed meeting hosts and per-session participants (number of "seats"). Presentation of visual materials most often is accomplished through one of two primary methodologies. The web conferencing software may show participants an image of the presenter's computer screen (or desktop). Again, depending upon the product, the software may show the entire visible desktop area or may allow selection of a physical area or application running on the presenter's computer. The second method relies on an upload and conversion process (most commonly consisting of Microsoft PowerPoint files, other Microsoft Office electronic documents, or Adobe PDF documents). Etymology The term "webinar" is a portmanteau of web and seminar, meaning a presentation, lecture, or workshop that is transmitted over the web.
The coined term has been attacked for improper construction, since "inar" is not a valid root. Webinar was included on the Lake Superior State University 2008 List of Banished Words, but was included in the Merriam-Webster dictionary that same year. The term "webcast" derives from its original similarity to a radio or television broadcast. Early usage referred purely to transmission and consumption of streaming audio and video via the World Wide Web. Over time, webcast software vendors have added many of the same functional capabilities found in webinar software, blurring the distinction between the two terms. Webcasts are now likely to allow audience response to polls, text communication with presenters or other audience members, and other two-way communications that complement the consumption of the streamed audio/video content. Features Other typical features of a web conference include: Slideshow presentations - where images are presented to the audience and markup tools and a remote mouse pointer are used to engage the audience while the presenter discusses slide content. Live or streaming video - where full-motion webcam, digital video camera or multi-media files are pushed to the audience. VoIP - Real-time audio communication through the computer via use of headphones and speakers. Web tours - where URLs, data from forms, cookies, scripts and session data can be pushed to other participants enabling them to be pushed through web-based logons, clicks, etc. This type of feature works well when demonstrating websites where users themselves can also participate. Meeting recording - where presentation activity is recorded on the client side or server side for later viewing and/or distribution. Whiteboard with annotation (allowing the presenter and/or attendees to highlight or mark items on the slide presentation, or simply make notes on a blank whiteboard). Text chat - For live question and answer sessions, limited to the people connected to the meeting. Text chat may be public (echoed to all participants) or private (between two participants). Polls and surveys (allows the presenter to pose questions with multiple-choice answers to the audience) Screen sharing/desktop sharing/application sharing (where participants can view anything the presenter currently has shown on their screen. Some screen sharing applications allow for remote desktop control, allowing participants to manipulate the presenter's screen, although this is not widely used.) Standards Web conferencing technologies are not standardized, which has reduced interoperability and transparency and increased platform dependence, security issues, cost and market segmentation. In 2003, the IETF established a working group to establish a standard for web conferencing, called "Centralized Conferencing (xcon)". The planned deliverables of xcon include: A binary floor control protocol. Binary Floor Control Protocol (BFCP) published as RFC 4582 A mechanism for membership and authorization control A mechanism to manipulate and describe media "mixing" or "topology" for multiple media types (audio, video, text) A mechanism for notification of conference related events/changes (for example a floor change) Deployment models Web conferencing is available with three models: hosting service, software and appliance. An appliance, unlike the online hosted solution, is offered as hardware. It is also known as "in-house" or "on-premises" web conferencing.
It is used to conduct live meetings, remote training, or presentations via the Internet. History Real-time text chat facilities such as IRC appeared in the late 1980s. Web-based chat and instant messaging software appeared in the mid-1990s. The PLATO computer learning system allowed students to collaborate on networked computers to accomplish learning tasks as early as the 1960s, but the early networking was not accomplished via the World Wide Web and PLATO's collaborative goals were not consistent with the presenter-audience dynamic typical of web conferencing systems. PLATO II, in 1961, featured two users at once. In 1992, InSoft Inc. launched Communique, a software-based Unix teleconferencing product for workstations that enabled video/audio/data conferencing. Communique supported as many as 10 users, and included revolutionary features such as application sharing, audio controls, text, graphics, and whiteboarding which allowed networked users to share and manipulate graphic objects and files using simple paint tools. Several point-to-point and private-network video conferencing products were introduced in the 1990s, such as CU-SeeMe, which was used to link selected schools around the United States of America in real-time collaborative communications as part of the Global Schoolhouse project from Global SchoolNet. In May 1995, PictureTel announced LiveShare Plus as a general-use data collaboration product for Windows-based personal computers. The software allowed application sharing, user-granted control of a remote PC, shared whiteboard markup, file transfer, and text messaging. List price was given as $249 per computer. PictureTel referenced an agreement with Microsoft in its announcement press release, and a May 26, 1995 memo from Bill Gates to Microsoft executive staff and direct reports said "Our PictureTel screen sharing client allowing Window sharing should work easily across the Internet." In May 1996, Microsoft announced NetMeeting as an included component in Internet Explorer 3.0. At the time, Microsoft called NetMeeting "the Internet's first real-time communications client that includes support for international conferencing standards and provides true multiuser application-sharing and data-conferencing capabilities." In 1996, PlaceWare was founded as a spinoff from Xerox PARC. In November of that year, PlaceWare Auditorium was described in a public talk at Stanford University as allowing "one or more people to give an interactive, online, multimedia presentation via the Web to hundreds or thousands of simultaneous attendees; the presentation can include slides (made in PowerPoint or any GIF-image editor), live annotation on the slide images, real-time polls of the audience, live audio from the presenter and those asking questions, private text and audio conversations in the auditorium's "rows", and other features." PlaceWare Auditorium was formally announced in March 1997 at a price of $150 per simultaneous user. Unveiled in 1996 by InSoft Inc., CoolTalk was a multimedia software tool that let PC users view data displayed on a shared whiteboard, exchange real-time messages via a chat tool or speak with each other via a TCP/IP voice connection. The product worked with Microsoft Sound System-compatible audio boards and was available in a 14.4-kbit/s version or 28.8-kbit/s version. CoolTalk was later packaged with popular Web browsers of the time. CoolTalk 14.4 and 28.8 sold for $49.95 and $69.95, respectively, in 1996. In February 1998, Starlight Networks released StarLive! 
(the exclamation point being part of the product name). The press release said "customers can access familiar Web browser interfaces to view live and pre-recorded corporate presentations, along with synchronized slides. End users can communicate directly with the presenter using real-time chat technology and other Web-based collaboration tools." In June 1998, PlaceWare 2.0 Conference Center was released, allowing up to 1000 live attendees in a meeting session. In February 1999, ActiveTouch announced WebEx Meeting Center and the webex.com website. In July 1999 WebEx Meeting Center was formally released with a 1000-person meeting capacity demonstrated. In September of the same year, ActiveTouch changed its company name to WebEx. In April 1999, Vstream introduced the Netcall product for web conferencing as "a fee-based Internet software utility that lets you send business presentations and other graphic information via e-mail to a Vstream server. Vstream converts the content, again using streaming technology, and makes the presentation available for viewing by up to 1,200 people at a time." Vstream changed the company name to Evoke Communications in 2000, with a further change to Raindance Communications in 2002. In February 2006, Raindance was acquired by the InterCall division of West Corporation. In December 2003, Citrix Systems acquired Expertcity, giving it the GoToMyPC and GoToAssist products. The acquired company was renamed as the Citrix Online division of Citrix Systems. In July 2004, Citrix Online released GoToMeeting as its first generic web conferencing product. In June 2006, GoToWebinar was added, allowing additional registration and reporting functionality along with larger capacity in sessions. In January 2003, Macromedia acquired Presedia, including the Breeze Presentation product. Breeze Live was added with the 4.0 release of Macromedia Breeze to support web conferencing. In April 2005, Adobe Systems announced acquisition of Macromedia (completed in December 2005) and changed the Breeze product name to Adobe Connect. A trademark for the term WEBinar (first three letters capitalized) was registered in 1998 by Eric R. Korb (Serial Number 75478683, USPTO) and was reassigned to InterCall. The trademark registration was cancelled in 2007. Learn.com filed a claim for the term "webinar" without regard to font or style in 2006 (Serial Number 78952304, USPTO). That trademark claim was abandoned in 2007 and no subsequent filing has been made. During the COVID-19 pandemic, webinars became the norm of teaching and instruction in numerous schools, universities and workplaces around the world. This new form of transferring knowledge challenged institutions and instructors, and it fostered new practices of teaching. At the same time this new form of teaching also demonstrated the advantages of moving these events online, as virtual conferences were found to be more inclusive, more affordable, less time-consuming and more accessible worldwide, especially for early-career researchers. Providing a great opportunity to identify best practices for designing intentionally inclusive online events, so the positive advantages of these can continue when in-person conferences resume. See also Comparison of web conferencing software Collaborative software Electronic meeting system Hybrid event Videoconferencing Web television Webcast References Teleconferencing Educational technology
50565707
https://en.wikipedia.org/wiki/Continuous%20analytics
Continuous analytics
Continuous analytics is a data science process that abandons ETLs and complex batch data pipelines in favor of cloud-native and microservices paradigms. Continuous data processing enables real-time interactions and immediate insights with fewer resources. Defined Analytics is the application of mathematics and statistics to big data. Data scientists write analytics programs to look for solutions to business problems, like forecasting demand or setting an optimal price. The continuous approach runs multiple stateless engines which concurrently enrich, aggregate, infer and act on the data. Data scientists, dashboards and client apps all access the same raw or real-time data derivatives with proper identity-based security, data masking and versioning in real time. Traditionally, data scientists have not been part of IT development teams in the way regular Java programmers are; their skills in math, statistics, and data science typically place them in their own department, outside IT. As a result, their approach to writing software code does not usually enjoy the same efficiencies as that of a traditional programming team. In particular, traditional programming teams have adopted the continuous delivery approach to writing code and the agile methodology, which releases software in a continuous cycle of iterations. Continuous analytics, then, is the extension of the continuous delivery software development model to the big data analytics development team. The goal of the continuous analytics practitioner is therefore to find ways to incorporate writing analytics code and installing big data software into the agile development model of automatically running unit and functional tests and building the environment with automated tools. Making this work means having data scientists write their code in the same code repository that regular programmers use, so that the build system can pull it from there and run it through the build process. It also means saving the configuration of the big data cluster (sets of virtual machines) in some kind of repository as well. This makes it possible to deploy analytics code, big data software, and related objects in the same automated way as the continuous integration process. External links Continuous analytics Development model References Data analysis Big data
7097882
https://en.wikipedia.org/wiki/Olive%20Tree%20Bible%20Software
Olive Tree Bible Software
Olive Tree Bible Software creates Biblical software and mobile apps, and is an electronic publisher of Bible versions, study tools, Bible study tools, and Christian eBooks for mobile, tablet, and desktop devices. The firm is headquartered in Spokane, Washington and is a member of the Evangelical Christian Publishers Association (ECPA). Olive Tree currently supports Android, iPad, iPhone, Macintosh, Windows, and personal computer devices. History In 1984, Drew Haninger began development via a student project on a monochrome personal computer. During the 1980s and 1990s, engineering was focused on a multilingual word processor and programs for searching the Bible. In August 1998, the first BibleReader(TM) was released for Palm OS. In 1999, the BibleReader for Pocket PC, running the Windows Mobile operating system, was released. The company continued to grow and, in 2000 assumed the name Olive Tree Bible Software. As the mobile device market continued to expand, the BibleReader was released for Android, BlackBerry, iOS (iPhone, iPod touch, iPad) Smartphones, and Symbian operating systems. , Olive Tree had over 20 employees. In November 2011, Olive Tree announced the release of BibleReader for Mac. In December 2011, the Windows PC version was released. On 5 May 2014, HarperCollins announced it had acquired Olive Tree, with Drew Haninger moving to an advisory role. On 11 September 2020, Gospel Technologies, owned by Olive Tree's Vice President of Operations Steven Cummings, acquired Olive Tree from HarperCollins. Bible+ Olive Tree is best known for its Bible+ application (formerly called BibleReader), a software tool designed for reading and searching electronic books. Recent enhancements include bookmarks, personal notes, highlighting, and auto-scrolling. Font sizes, colors, and the application display can be customized. Bible+ uses a markup language called Olive Tree Markup Language (OTML). Bible resources Olive Tree Bible Software provides a broad catalogue of over 1000 Bible resources, including audio Bibles, Bibles, commentaries, dictionaries, devotionals, ebooks, multimedia, and Strong's numbering system. Some of the most notable resources include the Amplified Bible (AMP), Authorized King James Version (KJV), Darby Bible (DBY), English Standard Version (ESV), The ESV Study Bible (ESVSB), Holman Christian Standard Bible (HCSB), New American Standard Bible (NASB), New International Version (NIV), New King James Version (NKJV), New Living Translation (NLT), NLT Study Bible, and New Revised Standard Version (NRSV). Languages Bibles are also available in various languages, including English, French, German, and Spanish. Ancient language resources include: the Novum Testamentum Graece (NA27) Greek New Testament with morphological information and UBS dictionary; the Biblia Hebraica Stuttgartensia, the Hebrew and Aramaic Old Testament/Hebrew Bible with the Groves-Wheeler Westminster Hebrew Morphology and an abridged BDB dictionary; the Qumran (non-biblical) texts with morphological information, lexical glosses, and an abridged BDB dictionary; and the Septuagint with morphological information and the LEH dictionary. Publishers Olive Tree has continued to work with several Christian publishers, such as AMG International, Baker Publishing Group, Eerdmans, Good News Publishers, Moody Publishers, Thomas Nelson (publisher), and Zondervan. Authors Olive Tree offers resources by several recent authors, including Gary Chapman (author), Mark Dever, Wayne Grudem, John F. 
MacArthur, Beth Moore, John Piper (theologian), R. C. Sproul, Chuck Swindoll, and Rick Warren, and the classic authors John Calvin and Matthew Henry. References External links Reviews Introducing BibleReader 5 Hebrew screen shots from BibleReader Overview of Olive Tree’s BibleReader 5.0 App for iPad, iPhone Electronic Bibles Digital library software Christian publishing companies Publishing companies established in 2000 2000 establishments in Washington (state) Companies based in Spokane, Washington
33172210
https://en.wikipedia.org/wiki/IdeaCentre
IdeaCentre
The IdeaCentre is a line of consumer-oriented desktop computers designed, developed and marketed by Lenovo. The first IdeaCentre desktop, the IdeaCentre K210, was announced by Lenovo on June 30, 2008. While the IdeaCentre line consists entirely of desktops, they share a common design language with the IdeaPad line of laptops and hybrids. One such feature is Veriface facial recognition technology. Product series A Series Lenovo's IdeaCentre A Series is a line of all-in-one desktops designed primarily for home use. B Series The IdeaCentre B Series all-in-one desktops from Lenovo were first launched in 2010. Like other desktops in the IdeaCentre product line, the B Series desktops were designed for home users. The first model in the series was the B500. K Series The IdeaCentre K Series desktops from Lenovo are described by the manufacturer as being gaming-oriented desktops. Typical features on the desktops include mid-range to high-end processors, discrete graphics cards, multiple hard disk drives, multiple RAM DIMMs, multiple USB ports, and multiple optical disk drives. The K Series desktops also come with a physical switch on the CPU that allows users to shift between different levels of processing power. For example, the K330 offered red for high performance, blue for moderate performance, and green for less processing- and resource-intensive tasks. The IdeaCentre K Series desktops were originally part of the Lenovo 3000 line of products. This series consisted of budget-friendly computers – both laptops and desktops. In 2008, the Lenovo 3000 series was moved by Lenovo into its ‘Idea’ line of products. The Lenovo 3000 K100 desktop was replaced by the IdeaCentre K210. The IdeaCentre line was described as having improved in terms of design, while retaining the low price that was characteristic of the Lenovo 3000 line. Q Series The IdeaCentre Q Series PCs from Lenovo are a series of nettops meant primarily for home and personal use. The Q Series nettops are described by the manufacturer as being multimedia-oriented nettops. Comparing the size to a typical paperback book, Lenovo describes the Q Series nettops as the smallest desktops in production. The general features of the Q Series desktops are the small size, low energy requirements, ability to play HD video, and low noise levels. These nettops are designed to be extremely compact processing units. A nettop is a desktop computer that uses the same (or similar) components found in netbook PCs. The first nettop in the IdeaCentre Q series was the Q100, launched in 2009. Y Series Y900 Razer Edition Lenovo's Y900 Razer Edition gaming PC is the result of a partnership announced with Razer in November 2015. Lenovo equipped its existing IdeaCentre Y900 model with Razer's Chroma Full Spectrum lighting. The two companies say it is the first of many planned joint projects. This version of the Y900 is also bundled with Razer's Blackwidow Chroma mechanical keyboard and Mamba Chroma mouse. Lenovo says future products will include Razer software such as Comms, Synapse, and Cortex. Gaming Series IdeaCentre Gaming 5i was announced in April 2020. Horizon The IdeaCentre Horizon is a table PC released at the 2013 International CES. The Horizon features a 27-inch screen and is designed for multiple simultaneous users. It was designed specifically with gaming in mind but can also serve as a desktop computer. The Horizon is Lenovo's initial entry into the nascent table computer market.
Peter Hortensius, a senior Lenovo executive said, "We've seen technology shifts across the four screens, from the desktop to the laptop, tablet and smartphone, and yet … there is still room for technologies like Horizon that bring people together." The Horizon was announced at the International CES in Las Vegas. Lenovo will start selling the Horizon early in the summer of 2013 at a starting price of US$1,699. Stick Lenovo started shipping the ideacentre Stick 300 in July 2015. The Stick 300 plugs into any computer display or television with HDMI. It is based on the Intel Atom Z3735 processor, has 32 gigabytes of storage, 2 gigabytes of RAM, a MicroSD card slot, a full-sized USB 2.0 port, 802.11 b/g/n Wi-Fi, and Bluetooth 4.0. It was released with Windows 8.1 with a free upgrade to Windows 10. 2016 IdeaCentre 610S The IdeaCentre 610S is a small-chassis desktop computer that uses both a monitor and an included detachable projector. The 610S has a pyramid-shaped case. The projector is designed to fit on top but can also be placed in other positions. The projector has 720p resolution and a brightness rating of 220 lumens. The 610S comes standard with an Intel Core i7 processor, supports up to 16 gigabytes of RAM, and has an Nvidia GeForce 750Ti graphics card. A choice of a 2-terabyte hard drive or a 128-gigabyte SSD is standard. IdeaCentre 700 The IdeaCentre 700 can be purchased as an All-in-one computer or a desktop. The All-in-one computer has a 23.8-inch touchscreen with 1080p resolution. It comes standard with an Intel "Skylake" Core i5-6400 central processor with 2.8-gigahertz base clock speed, 8 gigabytes of DDR4 RAM, and a 2-terabyte hard drive, an Nvidia GeForce GT 930A graphics processor with 2 gigabytes of VRAM, and an optical drive. An Intel RealSense camera is included for logging in via facial recognition and video chat. The desktop has Intel Core i7 6th Gen 6700 (3.4 GHz) [Gigahertz], 12 GigaBytes of DDR4, 1 Terabyte Hard Drive, 120 GB Solid State Drive, an NVidia Geforce GTX 960 dedicated graphics card, and generally is installed with Windows 10 Home. However the desktop does include a touchscreen monitor. 2011 At CES 2011, Lenovo announced the launch of four IdeaCentre desktops: the A320, B520, B320, and C205. All desktops were designed as All-in-ones, combining processor and monitor into a single unit. The desktops were described by HotHardware as being "uniquely designed," with users needing to "gaze on each one to see which design would look best in your place." 2010 Lenovo announced three IdeaCentre desktops at CES 2010: the A300, C310, and K320. The A300 was the industry's thinnest desktop at the time – only 1.85 cm thick. The desktop was designed to be asymmetrical, with the processor in the base as opposed to AIO conventions, in which the processor was located behind the screen. The desktop had a 21.5” full HD LED screen, up to Intel Core 2 Duo processors, an integrated web camera, HDMI in/out, integrated 802.11n Wi-Fi, and a wireless Bluetooth mouse and keyboard. Software on the desktop included Lenovo Rescue System for data recovery and CamSuite. The IdeaCentre C310 was Lenovo's first multitouch all-in-one desktop. The 20” HD 16:9 widescreen included the Lenovo NaturalTouch Panel for touch screen technology. A collection of applications optimized for touch use was also included called Lenovo's IdeaTouch, with an interactive user login through VeriTouch software. 
The desktop included Intel Atom 330 Dual Core processors, up to 4GB RAM, and the ATI Mobility Radeon HD 4530 512MB discrete graphics card. The IdeaCentre K320 was described as a “performance gaming desktop” by Daily Connect. The desktop was equipped with up to Intel Core i7 processors, up to ATI Radeon HD 5970 2GB discrete graphics, up to 8GB DDR3 memory, and up to 1TB hard disk drive. The desktop also included the front-mounted Lenovo Power Control Switch found on the K300 desktop. This allowed users to choose between energy efficiency and greater CPU power. Bright Vision Technology was available, automatically adjusting brightness according to the user's distance from the screen and the intensity of surrounding light. 2009 In August 2009, two new series of IdeaCentre desktops were announced: the Q Series and the D Series. The first desktops in the Q Series were the Q700, Q100, and Q110. The Q700 was Lenovo's first home theatre PC, with high definition 1080p playback, digital surround sound and compatibility with an HDTV. The Q100 and Q110 were extremely thin desktops, dubbed ‘nettops’ by Lenovo, with dimensions of 6”x6.3”x0.7”. These desktops were slim enough to be mounted on the back of a monitor. The Q100 was also energy efficient, using only 14 watts of power when idle and 40 watts when in full use. The first desktop in the D Series was the D400. The D400 desktop was designed as a home server, offering up to 8TB of storage space, support for multiple external storage devices with five USB ports. An eSATA port allowed high speed data transfer. Additional features of the desktop included the ability to duplicate data on multiple hard disks and remote access to the server. In October 2009, three IdeaCentre desktops were announced: the B500, K300, and H230. The B500 desktop was equipped with an Intel Core 2 Quad processor, up to 8GB of DDR3 RAM, up to 1TB hard disk drive, a 23” full HD screen, and JBL integrated speakers. The desktop also included a 4-in-1 remote control that could be used as a motion controller for games, a VoIP handset, an air mouse, and a media remote. A feature that was described as unique by Lenovo was the CamSuite software, designed to keep users in the center of the web camera's focus area. The IdeaCentre K300 desktop was described by Lenovo as a “performance desktop”. The desktop included an Intel Core 2 Quad processor and hard disk drives configured for RAID. Another feature on the desktop was the Lenovo Power Control Switch, allowing users to adjust power utilization between energy efficiency and superior performance. The IdeaCentre H230 desktop was described by Lenovo as “the perfect mix of performance and value”. The desktop offered the Intel Core 2 Duo E7500 processor, up to 8GB RAM, and a 500GB SATA hard disk drive. The desktop was also equipped with Lenovo Rescue System for data recovery. References External links IdeaCentre Desktops on Lenovo.com Computer-related introductions in 2008 Lenovo personal computers
1865828
https://en.wikipedia.org/wiki/BitTorrent%20tracker
BitTorrent tracker
A BitTorrent tracker is a special type of server that assists in the communication between peers using the BitTorrent protocol. In peer-to-peer file sharing, a software client on an end-user PC requests a file, and portions of the requested file residing on peer machines are sent to the client, and then reassembled into a full copy of the requested file. The "tracker" server keeps track of where file copies reside on peer machines, which ones are available at time of the client request, and helps coordinate efficient transmission and reassembly of the copied file. Clients that have already begun downloading a file communicate with the tracker periodically to negotiate faster file transfer with new peers, and provide network performance statistics; however, after the initial peer-to-peer file download is started, peer-to-peer communication can continue without the connection to a tracker. Since the creation of the distributed hash table (DHT) method for "trackerless" torrents, BitTorrent trackers have largely become redundant; however, they are still often included with torrents to improve the speed of peer discovery. Public vs private trackers Public trackers Public or open trackers can be used by anyone by adding the tracker address to an existing torrent, or they can be used by any newly created torrent, like OpenBitTorrent. The Pirate Bay operated one of the most popular public trackers until disabling it in 2009 due to legal trouble, and thereafter offered only magnet links. Private trackers A private tracker is a BitTorrent tracker that restricts use by requiring users to register with the site. The method for controlling registration used among many private trackers is an invitation system, in which active and contributing members are given the ability to grant a new user permission to register at the site, or a new user goes through an interview process. Legal issues Legal uses There are several circumstances under which it is legal to distribute copyrighted material or parts thereof. Free distribution. Copyright holders may choose to allow free distribution of their works. Dedicated copyright licenses—usable by anyone who wants to upload their own material—are available for that purpose. Such licenses are often used in situations with large numbers of copyright holders, like in online communities. For example, the Creative Commons license family for free cultural works in text, audio, video or image format; or software licenses for Free Software / Open-source software like the BSD License and others. Wikipedia itself can be distributed via BitTorrent for the same reason. Public domain. Works that are in the public domain and therefore not (or no longer) subject to copyright law can also be legally distributed. For instance, Project Gutenberg regularly collects and publishes classical cultural works after their copyright has expired (which depends on the country in which the work was previously published). Fair use. Some countries also have fair use provisions in copyright law, which allow people the right to access and use certain classes of copyrighted material without breach of the law. There are also experiments that legally sell content that is distributed over BitTorrent using a "secure" tracker system. Improving torrent reliability Trackers are the primary reason for a damaged BitTorrent "swarm". (Other reasons are mostly related to damaged or hacked clients uploading corrupt data.) 
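For HTTP trackers, the periodic client-to-tracker communication described above amounts to an HTTP GET request carrying a handful of query parameters. The sketch below is a simplified illustration: the tracker URL and torrent-specific values are placeholders, and a real client would send the SHA-1 info-hash of an actual torrent and parse the bencoded reply it receives.

# Minimal sketch of building an HTTP tracker announce URL (illustrative only;
# the tracker URL and torrent-specific values are placeholders).
import urllib.parse

tracker_url = "http://tracker.example.org:6969/announce"   # hypothetical tracker
params = {
    "info_hash": bytes.fromhex("aa" * 20),  # 20-byte SHA-1 of the torrent's info dictionary
    "peer_id": b"-XX0001-123456789012",     # 20-byte identifier chosen by the client
    "port": 6881,                           # port on which this client accepts peer connections
    "uploaded": 0,
    "downloaded": 0,
    "left": 1048576,                        # number of bytes the client still needs
    "compact": 1,                           # request the compact peer list
    "event": "started",
}
announce = tracker_url + "?" + urllib.parse.urlencode(params)
print(announce)
# A client would issue an HTTP GET on this URL and parse the bencoded response,
# which lists peers and the interval to wait before the next periodic announce.

The "compact" flag requested here is the BEP 23 option discussed under IPv6 support below.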
The reliability of trackers has been improved through two main innovations in the BitTorrent protocol. Multi-tracker torrents Multi-tracker torrents contain multiple trackers in a single torrent file. This provides redundancy: if one tracker fails, the other trackers can continue to maintain the swarm for the torrent. One disadvantage to this is that it becomes possible to have multiple unconnected swarms for a single torrent, where some users can connect to one specific tracker while being unable to connect to another. This can create a disjoint set, which can impede the ability of a torrent to efficiently transfer the files it describes. Additional extensions such as Peer exchange and DHT mitigate this effect by rapidly merging otherwise disjoint graphs of peers. Trackerless torrents Vuze (formerly Azureus) was the first BitTorrent client to implement such a system through the distributed hash table (DHT) method. An alternative and incompatible DHT system, known as Mainline DHT, was developed simultaneously and later adopted by the BitTorrent (Mainline), μTorrent, Transmission, rTorrent, KTorrent, BitComet, and Deluge clients. Current versions of the official BitTorrent client, μTorrent, BitComet, Transmission and BitSpirit all share compatibility with Mainline DHT. Both DHT implementations are based on Kademlia. As of version 3.0.5.0, Vuze also supports Mainline DHT in addition to its own distributed database through use of an optional application plugin MainlineDHT Plugin. This potentially allows the Vuze client to reach a bigger swarm. Most BitTorrent clients also use Peer exchange (PeX) to gather peers in addition to trackers and DHT. Peer exchange checks with known peers to see if they know of any other peers. With the 3.0.5.0 release of Vuze, all major BitTorrent clients now have compatible peer exchange. IPv6 support One of the options for this HTTP-based tracker protocol is the "compact" flag. This flag, as defined in BEP 23, specifies that the tracker can compact the response by encoding IPv4 addresses as a set of 4 bytes (32 bits). IPv6 addresses, however, are 128 bits long, and as such the "compact" encoding would break IPv6 support. To handle that situation, clients and trackers must either avoid using compact announces over IPv6 or implement BEP 07. Software opentracker from Dirk Engling powered one of the biggest BitTorrent trackers, The Pirate Bay tracker. qBittorrent is an open source BitTorrent client with built-in tracker support. Atrack is a high performance open source tracker designed to run on Google App Engine. BitStorm is a small tracker written in PHP which does not require a database server and runs on any PHP compatible web server. BitStorm-sql is the same tracker but with MySQL support. BitTorious is an open source, commercially supported tracker with integrated web-based management portal. Hefur is a standalone BitTorrent tracker written in C++, under the MIT license. Ocelot is a BitTorrent tracker written in C++ for the Gazelle project. See also ArenaBG BitTorrent (protocol) BitTorrent client Comparison of BitTorrent tracker software Comparison of BitTorrent sites Bram Cohen Distributed hash table UDP tracker XBT Tracker – C++ BitTorrent tracker designed for performance (does not serve .torrent files or other web pages); requires MySQL References Application layer protocols BitTorrent Servers (computing)
18361622
https://en.wikipedia.org/wiki/DriveSentry
DriveSentry
DriveSentry was an antivirus program, developed by DriveSentry Inc, to protect Microsoft Windows users from malware. It was available free for personal (non-commercial) use, though with restricted functionality. Detection methods DriveSentry provides a real-time and on-demand virus scanner, and uses the following methods to determine if an application contains a virus before allowing it to run: Whitelisting: Programs are first checked against a list of known trusted and validated applications and files. These "whitelisted" files are allowed to run without restriction. Blacklisting: Only if programs are not present on the whitelist are they then checked against an updated database list of virus signatures; those files whose MD5 signature is on the list are automatically moved to a quarantine area if they attempt to gain access to the system or data. This is the technique used by practically all antivirus products as the first line of defense. Heuristics: If the program is not on either list, its behavior is compared to that of previously encountered malware. Community Statistics: DriveSentry also collects and stores user statistics based on access decisions made by the user, which are shared amongst all other users. DriveSentry partners with Offensive Computing and Frame4 Security Services to collect and analyze malware samples for the database list; partnering in this way ensures that the database is fed by multiple sources and therefore offers redundancy. Although DriveSentry's basic features are available for free, its more advanced features, such as automatically updating its white and blacklists, have to be paid for via a one-off payment. White/Blacklisting Articles in computing publications discussing new malware protection technologies – such as whitelisting – claim that traditional antivirus technologies are having an increasingly hard time keeping up with the latest viruses, trojans and other malicious threats. The popularity of the Internet and the ease with which data can now spread allow threats to propagate faster, requiring traditional antivirus products to play "catch-up" with new zero-day threats. The techniques of using white/blacklisting and community feedback may offer greater security. However, this functionality does come at a cost – specifically, whitelisting only allows pre-vetted software to be executed, and prevents all other software from running, even if it is harmless. DriveSentry avoids this issue by allowing the user to be prompted if programs don't appear in the blacklist or whitelist. This then places responsibility on the end user to determine what is good or bad. DriveSentry attempts to help the user by monitoring the action of the program and calculating and displaying a threat rating. Furthermore, malicious software which has been included on the whitelist can still be executed. References External links Antivirus software Windows security software Freeware
6842540
https://en.wikipedia.org/wiki/Lee%20Hays
Lee Hays
Lee Hays (March 14, 1914 – August 26, 1981) was an American folksinger and songwriter, best known for singing bass with the Weavers. Throughout his life, he was concerned with overcoming racism, inequality, and violence in society. He wrote or cowrote "Wasn't That a Time?", "If I Had a Hammer", and "Kisses Sweeter than Wine", which became Weavers' staples. He also familiarized audiences with songs of the 1930s labor movement, such as "We Shall Not Be Moved". Childhood Hays came naturally by his interest in folk music since his uncle was the eminent Missouri and Arkansas folklorist Vance Randolph, author of, among other works, the bestselling Pissing in the Snow and Other Ozark Folktales and Who Blewed Up the Church House?. Hays' social conscience was ignited when at age five he witnessed public lynchings of African-Americans. He was born in Little Rock, Arkansas, the youngest of the four children of William Benjamin Hays, a Methodist minister, and Ellen Reinhardt Hays, who before her marriage had been a court stenographer. William Hays's vocation of ministering to rural areas took him from parish to parish, so, as a child, Lee lived in several towns in Arkansas and Georgia. He learned to sing sacred harp music in his father's church. Both his parents valued learning and books. Mrs. Hays taught her four children to type before they began learning penmanship in school, and all were excellent students. There was a gap in age of ten years between Lee and next oldest sibling, his brother Bill. In 1927, when Lee was thirteen, his childhood came to an abrupt end as tragedy struck the family. The Reverend Hays was killed in an automobile accident on a remote road and soon afterward Lee's mother had to be hospitalized for a mental breakdown from which she never recovered. Lee's sister, who had begun teaching at Hendrix-Henderson College, also broke down temporarily and had to quit her job to move in with their oldest brother in Boston, Massachusetts. Teenage years The period immediately following his father's death was so painful that Lee Hays could not bring himself to talk much about it, even to Doris Willens, the writer he selected to be his biographer. His brothers, both recently married, sent him to Emory Junior College in Georgia from which he graduated in 1930 at sixteen (but already over six feet tall and looking much older than his years). He traveled alone to enroll at Hendrix-Henderson College (now Henderson State University) in Arkansas, the Methodist school that his father and siblings had attended, but the expense of their mother's institutionalization and the effects of the Wall Street Crash of 1929 meant that college tuition money was not available for Lee. Instead he moved to Cleveland, Ohio, where his oldest brother, Reuben, who worked in banking, was now located. Reuben found Lee a job as a page in a public library. There the rebellious Hays embarked on an extensive program of self-education, becoming radicalized in the process: Every book that was considered unfit for children to read was marked with a black rubber stamp. So I'd go through the stacks and look for these black stamps. Always the very best books. They weren't locked-up books, just books that would not normally issued to children—D. H. Lawrence, a number of European novels. Reading those books was like doors opening. Don't forget that the fundamentalist South was a closed, fixed society. The world was made in six days; everything was foreordained and fixed in the universe. ... 
This was the time of the Great Depression ... the whole country was in the grip of a terrible sickness, which troubled me as it did everyone else. And I didn't understand it until I started reading Upton Sinclair and the little mag[azines]. ... Somewhere along in there I became some kind of Socialist, just what kind, I have never figured out. In 1932, Hays moved out of his brother's house into a room at the Cleveland YMCA, where he stayed for two years. Hearing about the activities of the radical white Presbyterian minister Claude C. Williams, a Christian Marxist who had become converted to the cause of racial equality and was trying to organize a coal miners' union in Paris, Arkansas, Hays decided to return to Arkansas and join Williams in his work. He enrolled at the College of the Ozarks, a Presbyterian school that allows students to work in lieu of tuition, intending to study for the ministry and devote his life to the poor and dispossessed. There he met a fellow student, Zilphia Johnson (later Zilphia Horton), another acolyte of Williams, who was to become almost as important in Hays' life as Williams himself. An accomplished musician and singer, Zilphia had broken with her father, who was the owner of the Arkansas coal mine that Williams was trying to organize, and had become a union organizer herself. Hays moved in with Williams and his family: "I got to be his [Williams'] chief helper for quite a while", he later wrote. From 1934 to 1940, writes Doris Willens, "Williams was the dominant figure in Hays' life—a surrogate father—a man of the cloth but with a radical difference". The following year, Williams was dismissed by the elders of his Paris, Arkansas, church for being too radical and was subsequently jailed, beaten, and almost killed when he tried to organize an interracial hunger march of tenant farmers in Fort Smith, Arkansas, near the Oklahoma border. His life was saved only because his activities attracted newspaper publicity and the attention of northerners. One of these was Willard Uphaus, a professor of divinity at Yale University, who had recently been appointed executive secretary of the National Religion and Labor Foundation, and who became Williams' admirer and supporter. After his release from jail, Williams moved his family away from Fort Smith to Little Rock to get them out of harm's way. Hays dropped out of school in order to follow them, living on odd jobs for a time. He then went to visit Zilphia, who had married Myles Horton, a founder and the director of the Highlander Folk School, an adult education and labor organizing school in Monteagle, Tennessee. At Highlander, Zilphia Horton directed music, theater, and dance workshops. During a miners' union meeting in Tennessee, she recruited Hays as a song leader: "When Zilphia got up and said, 'Brother Lee Hays will now lead us in singing', I damn near dropped through the floor. There was no backing out; I had to take the plunge and I've been doing it ever since." Later, he wrote that "Claude [Williams] and Zilphia [Horton] did more to change and shape my life than any people I can recall." In her drama classes at Highlander Zilphia borrowed the techniques of the New Theater League in New York, which encouraged participants to create plays out of their own experience, which would then be staged at labor conferences. It was a revelation for Hays to see how the arts could serve to empower people for social action. He decided to go to New York and study playwrighting himself. 
Armed with a letter of introduction from Claude Williams and Willard Uphaus, Hays became a resident at a student program at New York City's progressive Judson Memorial Church. There, he and a friend, Alan Hacker, a photojournalist, raised funds to make a documentary film about the plight of Southern sharecroppers and about efforts at Highlander and elsewhere to organize the Southern Tenant Farmers Union (STFU), one of the first racially integrated labor unions in the United States. In preparation, Hays and Hacker took classes with photographer Paul Strand, among others. They shot the film in Mississippi at an experimental Quaker-run cooperative inter-racial cotton farm. Even so, they were harassed by local planters and their scripts and notebooks were stolen and had to be recreated from memory. The film, America's Disinherited, which due to limited funds was quite brief, premiered at the Judson Church in May 1937 and was shown in schools and other venues (a copy is now in the film archives of the Museum of Modern Art). It demonstrates the use of singing in building a movement: "The turning point in the film is when an image of clenched black and white hands is followed by one of biracial strikers marching and singing 'Black and white together / We shall not be moved'". Shortly after it was completed, Alan Hacker died of an illness he had contracted during the filming. During this period Hays also wrote a play about the STFU, Gumbo (a word used by the sharecroppers for their soil), which was produced at Highlander. Commonwealth College In 1937, when Claude Williams was appointed director of Commonwealth College in Mena Arkansas, a labor organizing school, he hired Lee Hays to direct a theater program. The school newspaper, the Commonwealth Fortnightly, announced that: Lee Hays, a native of Little Rock, will join Commonwealth's faculty at the beginning of the fall quarter ... to teach Workers' Dramatics and to supervise Commonwealth's drama groups. The announcement noted that as former assistant to the drama director at Highlander Folk School and a member of the Sharecropper Film Committee which produced America's Disinherited: "Lee [Hays] brings with him to Commonwealth valuable experience and ability." While at Commonwealth, Hays and his drama group wrote and produced numerous plays, of which one by Hays, One Bread, One Body, toured with considerable success. He also compiled a 20-page songbook of union organizing songs based on hymns and spirituals. Playwright and fellow student Eli Jaffe said that Hays "was deeply religious and extremely creative and imaginative and firmly believed in the Brotherhood of Man." Waldemar Hille, who was the dean of music at Elmhurst College near Chicago and who had spent Christmas of 1937 at Commonwealth, thought that Hays was the most talented person at the college and was particularly enchanted with the folk songs and singing he encountered there. By the next year, however, another observer noted that the "brilliant" and hitherto energetic Hays appeared "disheveled" and was "sick all the time". Doris Willens, his biographer, speculates that Hays's physical and mental states were possibly a response to the ongoing tribulations of his mentor and of Commonwealth College. Long subject to the virulent hostility of its neighbors and in dire financial straits, the embattled school was riven by internecine struggles between its more radical members and the more moderate socialists on its board. 
In 1940 the board expelled the avowedly Marxist Claude Williams for allegedly allowing Communist infiltration and for being excessively preoccupied with the issue of racial discrimination, and soon after, the institution was disbanded. The Almanacs and World War II As the clouds gathered around Commonwealth College, Hays headed north to New York, taking with him his collection of labor songs, which he planned to turn into a book. But a short stayover in Philadelphia with the poet Walter Lowenfels and his hospitable family turned into a long visit. The German-born Lowenfels, a highly cultured man and a modernist poet who was fascinated by Walt Whitman and edited a book of his poetry, became another surrogate father to Hays, influencing him deeply. (Together the two men later wrote the politically-charged song "Wasn't That a Time?") Under Lowenfels' influence, Hays also began to write modernist poems, one of which was published in Poetry Magazine in 1940. He also had pieces based on Arkansas folklore published in The Nation. Publication of these pieces led to his forming a friendship with another Nation contributor, Millard Lampell. Arriving in New York, Hays and Lampell became roommates. They were soon joined by Pete Seeger, who like Hays was also contemplating putting together an anthology of labor songs. Together the trio began to sing at left-wing functions and to call themselves the Almanac Singers. It was a somewhat fluid group that included Josh White and Sam Gary and later Sis Cunningham (a fellow Commonwealth College alumna), Woody Guthrie (with whom Hays collaborated on his 1940 debut album, Dust Bowl Ballads), and Bess Lomax Hawes, among others. The Almanac's first album, issued in May 1941, was the controversial Songs for John Doe, comprising six pacifist songs, two of them co-written by Hays and Seeger and four by Lampell. The songs attacked the peacetime draft and the big U.S. corporations which were then receiving lucrative defense contracts from the federal government while practicing racial segregation in hiring. Since at that time isolationism was associated with right-wing conservatives and business interests, the pro-business but interventionist Time Magazine lost no time in accusing the left-wing Almanacs of "scrupulously echoing" what it called "the mendacious Moscow tune" that "Franklin Roosevelt is leading an unwilling people into a J. P. Morgan war" (Time, June 16, 1941). Concurrently, in the Atlantic Monthly Carl Joachim Friedrich, a German-born but anti-Nazi professor of political science at Harvard, deemed the Almanacs treasonous and their album "a matter for the Attorney General" because it seemed to him to be subversive of military recruitment and morale. On June 22, Hitler unexpectedly broke the Hitler-Stalin non-aggression pact and attacked Russia. Three days later, Franklin Roosevelt, threatened by black labor leaders with a huge march on Washington protesting segregation in defense hiring and the army, issued Executive Order 8802 banning racial and religious discrimination in hiring by recipients of federal defense contracts. The army, however, refused to desegregate. Somewhat mollified, nevertheless, labor leaders canceled the march and ordered union members to get behind the war and to refrain from strikes; copies of the isolationist Songs for John Doe were destroyed (a month after being issued). 
Asked by an interviewer in 1979 about his support of the Hitler-Stalin Pact, Hays said: "I do remember that the signing of the Hitler-Stalin pact was a very hard pill to swallow. . . . To this day I don't quite follow the line of reasoning behind that one, except to give Stalin more time." According to Hays's biographer, Doris Willens: That the pact gave Stalin more time was the story then put out; millions around the world didn't buy it [in part because of Stalin's 1939 attack on Finland] and at that point lost faith in the Soviet Union . . . (Many others had lost faith earlier, during the Moscow purge trials.) But as a disciple of Claude [Williams], Lee in 1940 held firm with those who continued to believe that America and Britain were maneuvering not to defeat Nazi Germany, or rather, not just yet, but first to turn Hitler to their desired end of destroying the Soviet Union...In short, 1940 was a bad time to say a good word for "peace." Worse, the only other voices opposing the war emanated from the extreme right, particularly America Firsters, a group suspected of harboring the hope that Hitler would eventually triumph . . . . Whatever uneasiness the Hitler-Stalin pact churned up, Lee hoped to submerge by throwing his vast energies into the service of the dynamic Congress of Industrial Organizations [(CIO)]—the challenger to the fat and lazy and bureaucratic old American Federation of Labor. A singing labor movement, that was the goal. If you got the unions singing, peace and brotherhood had to follow. It seemed so clear and simple. The Almanacs, who now included Sis Cunningham, Woody Guthrie, Cisco Houston, and Bess Lomax Hawes discarded their anti-war material with no regrets and continued to perform at union halls and at hootenanies. In June 1941 they embarked on a CIO tour of the United States, playing in Detroit, Chicago, and Seattle. They also issued several additional albums, including one, Dear Mr. President (recorded c. January 1942, issued in May), strongly supporting the war. Bad publicity, however, pursued them because of their reputation as former isolationists who had become pro-war "prematurely" (i.e., six months before Pearl Harbor). As key members, Pete Seeger, Cisco Houston, and Woody Guthrie joined the war effort (Seeger in the army and Guthrie and Houston in the Merchant Marine) the group disbanded. Hays was rejected from the Armed Forces because of a mild case of tuberculosis and he indeed felt sick all the time, missed performances, and developed a reputation for hypochondria. Even before this, Seeger and the other Almanacs found Hays difficult to work with and so erratic that they had asked him to leave the group. People's Songs When the war ended, however, a group of songwriters gathered in Pete Seeger's in-laws' apartment in Greenwich Village and founded People's Songs, "organized to create, promote and distribute songs of labor and the American people". They elected Pete Seeger president and Lee Hays executive secretary. Corporate counsel was Joseph R. Brodsky. In his new position Hays found some of his old energy returning. He wrote to friends, old and new (a new one was Fred Hellerman, later of the Weavers), who he thought might be interested. He brought in his old friend Waldemar Hille to be music editor of the People's Songs Bulletin and solicited songs and stories from Zilphia Horton, who sent in her new favorite, "We Shall Overcome". In its first year every issue of the People's Songs Bulletin featured a new song by Hays. 
One, written with Walter Lowenfels after a disastrous accident in a coal mine contained this verse: Do you know how the coalminers die To bring you coal from the earth? They die by the hundreds and they die by the thousands And that is what your coal is worth. Bernard Asbell, a member of People's Songs, who in 1961 wrote the best-selling book, When FDR Died, recalled: When I think of that period I think of Pete and Lee. Lee and Pete. Lee's deep bass singing "Roll the Union On". He and Pete are the two guys who made folk music serve political purposes. .. . Lee was the one with the sense of history, who tied it all together. He was the one who brought the sharecroppers in, and the union songs based on hymns. His images inspired us... convinced us that the Left was the great continuum of the American tradition, or at least that it was part of the mainstream of the American tradition. Lee thought in terms of events, history; he saw large, and that rubbed off on the rest of us. He was the philosopher of the folk music movement. He stretched the canvas. And he was funny—and God, we needed that. There wasn't much humor around. Although the first year of People's Songs was very successful, once again his co-workers found Hays "difficult" and indecisive. At a board meeting in late 1946, Pete Seeger proposed Hays be replaced as executive secretary with energetic young friend of his, Felix Landau, whom Pete had met during his army days in Saipan. In retrospect, Pete confessed "I think it was a mistake. Lee's perceptions were probably truer than mine." Crushed, Hays returned to Philadelphia to stay with Walter Lowenfels and family. From there he began contributing a weekly column to the People's Songs Bulletin aiming to educate younger people about Claude Williams and the labor and civil rights struggles of the 1930s. In 1948, People's Songs put all of its efforts into supporting the 1948 presidential campaign of Henry Wallace on the Progressive Party ticket. Not long after Wallace's decisive defeat, People's Songs went bankrupt and disbanded. A spinoff, however, People's Artists, showed somewhat more vitality. The Thanksgiving after Wallace's defeat, People's Songs decided to put on a fundraising hootenanny that included folk dances from many lands. A group of People's Artists, comprising Seeger, Hays, Fred Hellerman, and Ronnie Gilbert, worked up a musical accompaniment to the dances, which they called (in the "One World" spirit of the Progressive movement) "Around the World". It featured an Israeli song, the Appalachian "Flop-eared mule", and "Hey-lally-lally-lo" from the Bahamas. The audience went wild. In 1949 the new quartet began appearing at leftist functions and soon they were featured on Oscar Brand's WNYC radio show as "The No Name Quartet". Four months later they settled on a name: the Weavers. People's Artists sponsored the concert given by Paul Robeson and classical pianists Leonid Hambro and Ray Lev in Peekskill, New York, that sparked the Peekskill Riots on September 4, 1949. The Weavers were present. Hays escaped in a car with Guthrie and Seeger after a mob claiming to be anti-communist patriots attacked the cars of audience and performers after the show. Hays wrote a song, "Hold the Line", about the experience, that the Weavers recorded on Charter records with Robeson and writer Howard Fast. If I Had a Hammer", written with Pete Seeger and also recorded on the Charter label, dates from this embattled period. 
A few months later, in December, the Weavers began an incredibly successful run at the Village Vanguard. One fan, Gordon Jenkins, a bandleader who had had numerous hits under his belt and was a director of Decca records, returned night after night. Born in Missouri, Jenkins was especially entranced with Lee Hays' folksy stage patter, laced with colorful Ozark anecdotes. Jenkins convinced his reluctant fellow executives at Decca to record the group. Jenkins backed them up with his own lush string orchestra and huge chorus, but tactfully and with care, so as not to obscure the words and musical personalities of the groups' personnel. To everyone's surprise, the Weavers, who seemed to fit into no musical category, produced billboard hit after billboard hit, selling millions of singles. However, the Korean War had begun and the red scare was in full swing. In September 1950, Time magazine reviewed them this way: The Weavers and the Red Scare In 1950, Pete Seeger was listed as a probable subversive in the anti-communist pamphlet Red Channels and was placed on the entertainment industry blacklist along with other members of the Weavers. Lee Hays was denounced as a member of the Communist Party during testimony to the House Committee on Un-American Activities by Harvey Matusow, a former Communist Party member (he later recanted). Their records dropped from Decca's catalog and from radio broadcasts, and unable to perform live on television, radio, or in most music venues, the Weavers broke up in 1952. Subsequently, Hays liked to maintain that another entertainer, called Lee Hayes, spelled with an "e", was also banned from entertaining because of the similarity of his name. "Hayes couldn't get a job the whole time I was blacklisted," he claimed. Hays spent the blacklist years rooming with the family of fellow blacklist victim Earl Robinson (composer of "The House I Live In", "Ballad for Americans", and "Joe Hill"), in a brownstone in Brooklyn Heights. He wrote reviews and short stories, one of which, "Banquet and a Half", published in Ellery Queen's Mystery Magazine and drawing on his experiences in the South in the 1930s, was the recipient of a prize and was reprinted in the U.S. and Britain. In 1953, Hays' mother, whom he had seen only once since her entry into custodial care, died. In 1955 he was subpoenaed by the House Committee on Un-American Activities: he declined to testify, pleading the Fifth Amendment. 1955 was also the year of a sold-out Weavers Carnegie Hall reunion concert. The Weavers had not lost their audience appeal—the LP of the concert (The Weavers at Carnegie Hall) issued two years later by Vanguard, was one of the three top-selling albums of the year. This led to a tour (made difficult by Hays' invalidism and anxieties), another album, and more tours, including one to Israel. Later life In 1958, Hays began recording a series of children's albums with the Baby Sitters, a group that included a young Alan Arkin, the son of a family friend of the Robinsons. After the great financial success of Peter, Paul and Mary's cover of "If I Had a Hammer" in the mid-1960s, Hays, whose mental and physical health had been shaky for years, lived mostly on income from royalties. In 1967, he moved to Croton-on-Hudson, New York where he devoted himself to tending his organic vegetable garden, cooking, writing, and socializing. 
He wrote to a friend that in his new surroundings he had no idea how to earn new money but that, "Having a listed number with no fear of Trotskyite crank calls is a huge relief". At the insistence of his old friend Woody's son, Arlo Guthrie, however, he did appear, playing himself as a preacher at a 1960 evangelical meeting, in the film Alice's Restaurant (1969), based on Arlo's hit song of that name. Hays, who had always been overweight, had been diagnosed in 1960 with diabetes, a condition the doctors thought he had probably suffered from, along with TB, for many years previously. This led to a heart condition and he was fitted with a pacemaker. Both his legs eventually had to be amputated. Younger friends, among them Lawrence Lazare and Jimmy Callo, helped to take care of him. His bad health notwithstanding, Hays performed in several Weavers reunion concerts, the last of which was in November 1980 at New York City's Carnegie Hall. His last public performance with the group took place in June 1981 at the Hudson River Revival in Croton Point Park. Two months later he was dead. The documentary film The Weavers: Wasn't That a Time!, for which Hays had written the script, was released in 1982. Near the end of his life Hays, wrote a farewell poem, "In Dead Earnest", inspired perhaps by Wobbly organizer Joe Hill's lyrical "Last Testament" but with an earthy Ozark frankness: In Dead Earnest If I should die before I wake, All my bone and sinew take: Put them in the compost pile To decompose a little while. Sun, rain, and worms will have their way, Reducing me to common clay. All that I am will feed the trees And little fishes in the seas. When corn and radishes you munch, You may be having me for lunch. Then excrete me with a grin, Chortling, "There goes Lee again!" 'Twill be my happiest destiny To die and live eternally. He died on August 26, 1981, from diabetic cardiovascular disease at home in Croton, and, in accordance with his wishes, his ashes were mixed with his compost pile. References External sources Coogan, Harold. "Lee Elhardt Hays (1914–1981)", Encyclopedia of Arkansas History and Culture. Courtney, Steve. "So long to Lee Hays" (Obituary). North County News, September 2–8, 1981. P. 7. Hays, Lee and Koppelman, Robert Steven, Editor. Sing out, warning! Sing out, Love!: The Writings of Lee Hays. Amherst, Mass., University of Massachusetts Press, 2003. Smithsonian Center for Folklife and Cultural Heritage: Lee Hays Collection Houston, Cisco. Interviewed by Lee Hays in 1961. Website. Stambler, Irwin, and Grelun Landon, eds. The Encyclopedia of Folk, Country and Western Music. New York: St. Martin's Press, 1983. [Wilson, John S.] "Singer Lee Hays, Founder of the Weavers Quartet" (Obituary). Pittsburgh Post Gazette. (New York Times News Service, August 27, 1981. p.27) Willens, Doris. The Lonesome Traveler: A Biography of Lee Hays. Introduction by Pete Seeger. New York: W. W. Norton & Company, Inc., 1988. The Weavers: Wasn't That a Time! Warner Brothers, 1982. Film. 1914 births 1981 deaths Musicians from Little Rock, Arkansas The Weavers members Folk musicians from Arkansas American folk singers American pacifists Hollywood blacklist American socialists People from Croton-on-Hudson, New York 20th-century American singers Singer-songwriters from Arkansas 20th-century American male singers People from Brooklyn Heights American male singer-songwriters
2005331
https://en.wikipedia.org/wiki/UMAC
UMAC
In cryptography, a message authentication code based on universal hashing, or UMAC, is a type of message authentication code (MAC) calculated by choosing a hash function from a class of hash functions according to some secret (random) process and applying it to the message. The resulting digest or fingerprint is then encrypted to hide the identity of the hash function used. As with any MAC, it may be used to simultaneously verify both the data integrity and the authenticity of a message. A specific type of UMAC, also commonly referred to simply as UMAC, is specified in RFC 4418; it has provable cryptographic strength and is usually a lot less computationally intensive than other MACs. UMAC's design is optimized for 32-bit architectures with SIMD support, with a performance of 1 CPU cycle per byte (cpb) with SIMD and 2 cpb without SIMD. A closely related variant of UMAC that is optimized for 64-bit architectures is given by VMAC, which has been submitted to the IETF as a draft but never gathered enough attention to become a standardized RFC. Background Universal hashing Let's say the hash function is chosen from a class of hash functions H, which maps messages into D, the set of possible message digests. This class is called universal if, for any distinct pair of messages, there are at most |H|/|D| functions that map them to the same member of D. This means that if an attacker wants to replace one message with another and, from his point of view, the hash function was chosen completely randomly, the probability that the UMAC will not detect his modification is at most 1/|D|. But this definition is not strong enough — if the possible messages are 0 and 1, D={0,1} and H consists of the identity operation and negation (NOT), H is universal. But even if the digest is encrypted by modular addition, the attacker can change the message and the digest at the same time and the receiver wouldn't know the difference. Strongly universal hashing A class of hash functions H that is good to use will make it difficult for an attacker to guess the correct digest d of a fake message f after intercepting one message a with digest c. In other words, the probability Pr[h(f) = d | h(a) = c], taken over the random choice of the hash function h, needs to be very small, preferably 1/|D|. It is easy to construct a class of hash functions when D is a field. For example, if |D| is prime, all the operations are taken modulo |D|. The message a is then encoded as an n-dimensional vector over D (a1, a2, ..., an). H then has |D|^(n+1) members, each corresponding to an (n + 1)-dimensional vector over D (h0, h1, ..., hn). If we let h(a) = h0 + h1·a1 + h2·a2 + ... + hn·an (with all operations carried out in D), we can use the rules of probabilities and combinatorics to prove that Pr[h(f) = d | h(a) = c] = 1/|D|. If we properly encrypt all the digests (e.g. with a one-time pad), an attacker cannot learn anything from them and the same hash function can be used for all communication between the two parties. This may not be true for ECB encryption because it may be quite likely that two messages produce the same hash value. Then some kind of initialization vector should be used, which is often called the nonce. It has become common practice to set h0 = f(nonce), where f is also secret. Notice that having massive amounts of computer power does not help the attacker at all. If the recipient limits the amount of forgeries it accepts (by sleeping whenever it detects one), |D| can be 2^32 or smaller.
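To make this construction concrete, the following short C sketch implements it over a prime field. The prime modulus, the function name and the assumption that the key words h0..hn were drawn uniformly at random are all choices made for this illustration, not part of any standard; in practice h0 would be derived from the nonce as described above. The worked example in the next section uses a different, GF(2^24)-based hash in the same spirit.

#include <stdint.h>
#include <stddef.h>

#define PRIME 2147483647UL  /* illustrative prime modulus p = 2^31 - 1, so |D| = p */

/* Strongly universal hash h(a) = h0 + h1*a1 + ... + hn*an (mod p).
   key must hold n + 1 words, each assumed chosen uniformly at random in 0..p-1;
   key[0] plays the role of h0 (e.g. derived from a nonce). */
uint32_t strongly_universal_hash(const uint32_t *msg, size_t n, const uint32_t *key)
{
    uint64_t acc = key[0] % PRIME;
    for (size_t i = 0; i < n; i++) {
        uint64_t term = (uint64_t)(key[i + 1] % PRIME) * (msg[i] % PRIME);
        acc = (acc + term) % PRIME;     /* all arithmetic in the field Z_p */
    }
    return (uint32_t)acc;               /* digest; must still be encrypted before sending */
}

Picking a fresh random function from the family for every message, or equivalently encrypting the digest and varying h0 with a nonce, is what gives the forgery probability of 1/|D| discussed above.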
Example The following C function generates a 24-bit UMAC. It assumes that secret is a multiple of 24 bits in length, that msg is not longer than secret, and that result already contains the 24 secret bits, e.g. f(nonce); nonce does not need to be contained in msg. (The hash shown here is only an illustration of the general idea of universal hashing; it is not the NH construction specified in RFC 4418.)

#include <stdint.h>
#include <stddef.h>

#define uchar uint8_t

void UHash24 (const uchar *msg, const uchar *secret, size_t len, uchar *result)
{
  uchar r1 = 0, r2 = 0, r3 = 0;       /* running 24-bit hash, low byte first */
  uchar s1 = 0, s2 = 0, s3 = 0;       /* current 24-bit secret, low byte first */
  uchar byteCnt = 0, byte;

  while (len-- > 0)
  {
    /* Fetch a fresh secret word for every three message bytes. */
    if (byteCnt-- == 0)
    {
      s1 = *secret++;                 /*  sec        & 0xff */
      s2 = *secret++;                 /* (sec >>  8) & 0xff */
      s3 = *secret++;                 /* (sec >> 16) & 0xff */
      byteCnt = 2;
    }
    byte = *msg++;

    /* Each message bit decides whether the current secret value is XORed
       into the hash; after every bit the secret is multiplied by x. */
    for (uchar bitCnt = 0; bitCnt < 8; bitCnt++)
    {
      if (byte & 1)                   /* the lowest remaining bit of the message byte */
      {
        r1 ^= s1;
        r2 ^= s2;
        r3 ^= s3;
      }
      byte >>= 1;                     /* next bit */

      /* Multiply the secret by x in GF(2^24): shift left by one and, when a bit
         is carried out of position 23, subtract (XOR) the reduction polynomial
         to keep the degree below 24. */
      uchar doSub = s3 & 0x80;
      s3 <<= 1;
      if (s2 & 0x80) s3 |= 1;
      s2 <<= 1;
      if (s1 & 0x80) s2 |= 1;
      s1 <<= 1;
      if (doSub)
        s1 ^= 0x1B;                   /* x^24 + x^4 + x^3 + x + 1 */
    } /* for each bit in the message byte */
  } /* for each byte in the message */

  *result++ ^= r1;
  *result++ ^= r2;
  *result++ ^= r3;
}

#define swap32(x) (((x) & 0xff) << 24 | ((x) & 0xff00) << 8 | \
                   ((x) & 0xff0000) >> 8 | ((x) & 0xff000000) >> 24)  /* byte-swap helper (unused here) */

/* The same hash, processing the message and the secret 24 bits at a time so
   that the state can be kept in machine registers. */
void UHash24Ex (const uchar *msg, const uchar *secret, size_t len, uchar *result)
{
  uchar read;
  uint32_t sec, content, ret = 0;

  while (len > 0)
  {
    /* Read up to three message bytes into one 24-bit chunk. */
    content = 0;
    switch (read = (uchar)(len >= 3 ? 3 : len))
    {
      case 3: content |= (uint32_t) msg[2] << 16;  /* FALLTHRU */
      case 2: content |= (uint32_t) msg[1] << 8;   /* FALLTHRU */
      case 1: content |= (uint32_t) msg[0];
    }
    len -= read;
    msg += read;

    /* Fetch a fresh 24-bit secret for every three message bytes. */
    sec = (uint32_t) secret[2] << 16 | (uint32_t) secret[1] << 8 | (uint32_t) secret[0];
    secret += 3;

    /* The compression loop. Note the hard data dependency: each step uses the
       updated secret, so the table-driven tricks used for CRCs do not apply. */
    for (uchar bitCnt = 0; bitCnt < 24; bitCnt++)
    {
      if (content & 1)
        ret ^= sec;
      content >>= 1;                  /* next bit */

      /* Shift register: multiply the secret by x in GF(2^24). */
      sec <<= 1;
      if (sec & 0x01000000)
        sec ^= 0x0100001B;            /* reduction by x^24 + x^4 + x^3 + x + 1 */
      sec &= 0x00ffffff;
    } /* for each bit in the chunk */
  } /* for each 3 bytes of the message */

  result[0] ^= ret & 0xff;
  result[1] ^= (ret >> 8) & 0xff;
  result[2] ^= (ret >> 16) & 0xff;
}

NH and the RFC UMAC NH Functions in the above unnamed strongly universal hash-function family use n multiplications to compute a hash value. The NH family halves the number of multiplications, which roughly translates to a two-fold speed-up in practice. For speed, UMAC uses the NH hash-function family. NH is specifically designed to use SIMD instructions, and hence UMAC is the first MAC function optimized for SIMD. The following hash family is 2^(-w)-almost-Δ-universal: NH_K(M) = ( Σ_{i=0..n/2-1} ((m_{2i} + k_{2i}) mod 2^w) · ((m_{2i+1} + k_{2i+1}) mod 2^w) ) mod 2^{2w},
where the message M is encoded as an n-dimensional vector of w-bit words (m_0, m_1, m_2, ..., m_{n-1}) and the intermediate key K is encoded as an n-dimensional vector of w-bit words (k_0, k_1, k_2, ..., k_{n-1}). A pseudorandom generator generates K from a shared secret key. Practically, NH is done in unsigned integer arithmetic: all additions wrap around mod 2^w and the products are accumulated mod 2^{2w}, so a message of n words is hashed with only n/2 multiplications, i.e. at a "rate" of one multiplication per pair of input words. RFC 4418 RFC 4418 does a lot to wrap NH to make it a good UMAC. The overall UHASH ("Universal Hash Function") routine produces tags of variable length, corresponding to the number of iterations (and the total length of keys) needed in all three layers of its hashing. Several calls to an AES-based key derivation function are used to provide keys for all three keyed hashes. Layer 1 (1024-byte chunks -> 8-byte hashes, concatenated) uses NH because it is fast. Layer 2 hashes everything down to 16 bytes using a POLY function that performs prime-modulus arithmetic, with the prime changing as the size of the input grows. Layer 3 hashes the 16-byte string to a fixed length of 4 bytes. This is what one iteration generates. In RFC 4418, NH is rearranged to take the form: Y = 0 for (i = 0; i < t; i += 8) do Y = Y +_64 ((M_{i+0} +_32 K_{i+0}) *_64 (M_{i+4} +_32 K_{i+4})) Y = Y +_64 ((M_{i+1} +_32 K_{i+1}) *_64 (M_{i+5} +_32 K_{i+5})) Y = Y +_64 ((M_{i+2} +_32 K_{i+2}) *_64 (M_{i+6} +_32 K_{i+6})) Y = Y +_64 ((M_{i+3} +_32 K_{i+3}) *_64 (M_{i+7} +_32 K_{i+7})) end for This arrangement is designed to encourage programmers to use SIMD instructions for the accumulation: operands that are four indices apart naturally fall into different SIMD registers, so the four multiplications can be performed in bulk. On a hypothetical machine, it could simply translate to: movq $0, regY ; Y = 0 movq $0, regI ; i = 0 loop: add reg1, regM, regI ; reg1 = &M[i] add reg2, regK, regI ; reg2 = &K[i] vldr.4x32 vec1, reg1 ; load M[i..i+3] vldr.4x32 vec2, reg2 ; load K[i..i+3] vadd.4x32 vec1, vec1, vec2 ; vec1 = M[i..i+3] +_32 K[i..i+3] vldr.4x32 vec2, reg1, $4 ; load M[i+4..i+7] vldr.4x32 vec3, reg2, $4 ; load K[i+4..i+7] vadd.4x32 vec2, vec2, vec3 ; vec2 = M[i+4..i+7] +_32 K[i+4..i+7] vmul.4x64 vec3, vec1, vec2 ; four widening 32x32 -> 64-bit products uaddv.4x64 reg3, vec3 ; horizontally sum vec3 into reg3 add regY, regY, reg3 ; Y = Y +_64 reg3 add regI, regI, $8 cmp regI, regT jlt loop See also Poly1305 is another fast MAC based on strongly universal hashing. References External links Message authentication codes
1151084
https://en.wikipedia.org/wiki/ANSI.SYS
ANSI.SYS
ANSI.SYS is a device driver in the DOS family of operating systems that provides extra console functions through ANSI escape sequences. It is partially based upon a subset of the text terminal control standard proposed by the ANSI X3L2 Technical Committee on Codes and Character Sets (the "X3 Committee"). As it was not installed by default, and was notoriously slow, little software took advantage of it and instead resorted to directly manipulating the IBM PC hardware. A number of third-party alternatives that ran at reasonable speed, such as NANSI.SYS and ANSIPLUS, were created to attempt to change this. Usage To use ANSI.SYS under DOS, a line is added to the CONFIG.SYS file (or CONFIG.NT under Windows NT-based versions of Windows) that reads: DEVICE=drive:\path\ANSI.SYS options where drive: and path are the drive letter and path to the directory in which the file is found, and options can be a number of optional switches to control the behaviour. ANSI.SYS may also be loaded into upper memory via DEVICEHIGH/HIDEVICE. The optional switches include ones to: use extended keyboard BIOS functions (INT 16h) rather than standard ones; force the number of lines; adjust line scrolling to support screen readers or set the screen size; and support redefinition of extended key codes independently of standard codes. Functionality Once loaded, ANSI.SYS enables escape-code sequences to apply various text formatting features. Using this driver, programs that write to the standard output can make use of the 16 text foreground colors and 8 background colors available in VGA-compatible text mode, make text blink, change the location of the cursor on the screen, and blank the screen. It also allows for the changing of the video mode from standard 80×25 text mode to a number of different graphics modes (for example, 320×200 graphics mode with text drawn as pixels, though ANSI.SYS does not provide calls to turn individual pixels on and off). The standard ANSI.SYS driver is relatively slow, as it maps escape sequences to the equivalent BIOS calls. Several companies made third-party replacements that interface directly with the video memory, in a similar way to most DOS programs that have a full-screen user interface. By default, the internal DOS command CLS works by directly calling the corresponding BIOS function to clear the screen, thereby prominently violating the hardware abstraction model otherwise maintained. However, if an ANSI driver is detected by the DR-DOS COMMAND.COM, it will instead send the control sequence defined in the reserved environment variable $CLS to the attached console device. If the environment variable is undefined, it falls back to sending the standard ANSI clear-screen sequence instead. Specifying other sequences can be used to control various screen settings after a CLS. Due to the difficulty of defining environment variables containing binary data, COMMAND.COM also accepts a special notation for octal numbers. For example, to send an alternative control sequence like ESC + (as used by some ASCII terminals), one could define the variable as follows: SET $CLS=\033+ These features are supported by COMMAND.COM in all versions of DOS Plus and DR-DOS, but not in MS-DOS or PC DOS. They are also supported by the command interpreters in Concurrent DOS, Multiuser DOS and REAL/32, although these use VT52 rather than ANSI control sequences by default. Keyboard remapping An interesting feature of ANSI.SYS is the ability to remap any key on the keyboard in order to perform shortcuts or macros for complex instructions. Using special escape sequences, the user can redefine any keystroke that has a character-code mapping so that it produces an arbitrary sequence of such keystrokes.
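As a rough illustration of the sequences described in this and the previous section, the following C sketch writes a few ANSI.SYS control sequences to standard output. It assumes that ANSI.SYS (or a compatible replacement driver) is loaded and that output goes through DOS rather than directly to video memory; the colour, cursor and key-reassignment codes shown are drawn from common DOS documentation and should be treated as illustrative rather than authoritative.

#include <stdio.h>

int main(void)
{
    printf("\x1B[2J");          /* clear the screen */
    printf("\x1B[5;10H");       /* move the cursor to row 5, column 10 */
    printf("\x1B[1;33;44m");    /* bright yellow text on a blue background */
    printf("Hello from ANSI.SYS\n");
    printf("\x1B[0m");          /* reset text attributes */

    /* Keyboard reassignment as described above: this sequence is intended to
       make the key with ASCII code 65 ('A') produce the string "dir" followed
       by Enter (code 13). The key codes here are assumptions for illustration;
       check the DOS documentation for the exact codes on a given system. */
    printf("\x1B[65;\"dir\";13p");
    return 0;
}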
This feature was also used maliciously to create simple trojans out of text files laced with nefarious keyboard remaps, known as "ANSI bombs". A number of products were released to protect users against this: Some versions of ANSI.SYS support a command-line switch to disable the key remapping feature, e.g. the "secure" option in the ANSI.SYS of Datalight ROM-DOS or the NANSI.SYS of FreeDOS. Other ANSI drivers like ANSIPLUS can be configured to disable the redefinition of keys as well. A CONFIG.SYS setting in PTS-DOS enables a built-in ANSI driver that does not support the keyboard remapping functions. Some of the third-party ANSI.SYS replacements were deliberately designed never to support the keyboard remapping functions. PKWARE produced a TSR program, PKSFANSI (PK Safe ANSI), which filters out keyboard remapping escape codes as they are written to the standard output. This has the advantage that the user can load some useful remappings from a text file and then run PKSFANSI to prevent further, possibly malicious remappings. Occurrence ANSI.SYS appeared in MS-DOS 2.0, the first version of the operating system supporting device drivers, and it was supported by all following versions of MS-DOS. It is also present in many non-Microsoft DOS systems, e.g. IBM PC DOS and DR-DOS. ANSI.SYS was required to run some software that used its cursor and color control functions. It could also be used to enable elaborate color codes in the command prompt. These uses were overshadowed by the use of ANSI.SYS on BBSes; ANSI escape sequences were used to enable BBSes to send text graphics more elaborate than ASCII art, and to control the cursor in ways that were used in a number of online games and similar features. Most versions of Windows did not support ANSI escape codes in any useful way (they could be used through MS-DOS emulation in some versions). In Windows 10, support for similar escape sequences was built into the Win32 console (the text terminal window), but it must be activated by calling the Windows API function SetConsoleMode with the ENABLE_VIRTUAL_TERMINAL_PROCESSING flag set. Features CSI (Control Sequence Introducer) is a placeholder for the common two-byte escape lead-in sequence "ESC [" (that is, the bytes 1Bh 5Bh). The ANSI standard also defines an alternative single-byte CSI code (9Bh), which is not supported by ANSI.SYS. Standard DOS drivers support only a sub-set of the ANSI escape sequences. There are also some escape sequences specific to the ANSI.SYS implementation; they are not generally supported by ANSI consoles in other operating systems. In some DOS implementations, video modes above 7 are not documented. Under Multiuser DOS, the only valid argument in conjunction with PCTERM is 7. See also ANSI escape sequence Notes References External links DOS drivers DOS files DOS technology
52164321
https://en.wikipedia.org/wiki/Joan%20Ball
Joan Ball
Joan Ball is a computer dating pioneer who started the first computer dating service in England, in 1964. Ball's computer dating service also pre-dated the earliest American computer dating services, like Operation Match at Harvard. Early life Joan Ball was born in 1934 and was the 6th child in her family. She was an unwanted child born to a poor, working-class family. She was briefly abandoned by her mother when she was very young. World War II started when she was only five years old, resulting in her being evacuated from London to the countryside to escape the aerial bombardments of London three times during the war. Although this may have saved her life, each foster family differed greatly and she was sexually harassed by one of the foster families with whom she lived. When the war was over she was able to go home to her family in London again. Joan Ball was dyslexic and struggled in school. She went through most of her life with dyslexia, before it was known as such. She was not officially diagnosed until 1973, at the age of 39. As a coping mechanism she became the class clown when she was in school, so that she could make sure people "laughed with [her]" and not at her. During her school years she had a difficult home life: her mother often called her "a pig-headed bitch", and blamed Joan for her failing marriage. In 1949, Ball finished her last year of school and got a job as a shop assistant at The London Co-operative Society. Because of her dyslexia she had problems with writing and counting money. Career and later life In 1953, Ball was hospitalized after a suicide attempt and when she got out she went to live with her aunt Maud and uncle Ted. The same year, at the age of 19, she got hired at Bourne & Hollingsworth. In 1954, she left and started working in a store's dress department. She found this string of jobs unfulfilling and difficult: at the time, the most interesting parts of the fashion industry—in Ball's view—were still a man's world and she could not do the kind of work she was interested in, like design. Shortly thereafter, however, she was able to start working for Berkertex, a leading fashion house in London. Eros Friendship Bureau Ltd In 1961, when she was 27, she decided to leave Berkertex. Though she had intended to manage a shop in Cambridge she found herself out of a job until the shop was ready to open. Needing to pay rent, she took a job at a marriage bureau. It was here that she decided to start her own marriage bureau. She founded the Eros Friendship Bureau Ltd in 1962 and discovered she had a knack for helping people make connections. Though her company would go on to be successful for a decade, she had trouble advertising her service early on because of the fact marriage bureaus were seen as slightly suspect at the time: There was a widespread belief that marriage bureaus were actually fronts for prostitution. Because she could not advertise in print easily Ball relied on placing radio ads with the "Pop Pirates"—the pirate radio stations that operated just off the coast of Britain in the 1960s playing rock and roll music that the BBC had banned. Ball's company focused on long term match-ups and relationships—primarily trying to achieve marriages for clients—and catered to an older crowd who were looking to settle down or who had been previously divorced. In 1961, she met a man she refers to in her memoir as Kenneth. Kenneth would later become her sexual partner and would help her in her business, though they were never married. St. 
James Computer Dating Service Joan changed the name of her marriage bureau to the St. James Computer Dating Service in 1964 and the bureau ran its first set of computer match ups in 1964. This made Ball's service the first commercially successful computer dating service in either the UK or the US, as historian Marie Hicks points out in a recent article on the history of computer dating. Com-Pat In 1965, Ball merged her company with another marriage bureau run by a woman and together they formed Com-Pat, or Computer Dating Services Ltd. Shortly after the merger, the owner of the other marriage bureau sold out her share in the company to Joan and Joan became the sole proprietor of Com-Pat. Competition from rival companies Dateline, founded by John Richard Patterson in 1966, was a rival to Com-Pat. With this new rivalry, Ball saw the greater need for more advertising. Looking at questionnaires from Dateline and Operation Match in the US, she learned that they emphasized questions about sex, which she and her employees thought would not lead to good matches. By 1969, her company was receiving a good response to their ads in News of the World, but Ball still felt she had to overcome the mindset that people had about computer dating being slightly odd or untoward. Newspapers at the time implied the people who used these services were lonely, sad or dysfunctional. Joan believed that this type of computer dating service was, on the contrary, a fun and intelligent way to meet people. Though newspapers sometimes painted a negative picture, Joan's company was generally well received by the public. This led her to advertise in The Sunday Express, Evening Standard, and The Observer—all major British newspapers at the time. At this time, Ball was running both Com-Pat and Eros. Soon she decided to sell Eros and focus on Com-Pat. She realized how important the future of computerized dating was and saw the potential growth of a service like Com-Pat. Com-Pat II In 1970, Com-Pat Two was launched. Joan and her company were ahead of the game, because they were using the most advanced matching system created at the time. They were able to change the whole system with 50,000 members in a single weekend without any problems. The system used a questionnaire, and gave a list of four of the top matches at the end. Though Joan had success with Com-Pat Two, she and her partner Ken began to run into economic and personal problems. Because Joan and Ken weren't married, Ball felt she had no sense of security with him. She eventually moved into her own flat after living with Ken for eight years. This new place gave her a sense of independence, security, and pride because of what she had needed to accomplish to get it. Problems with advertisements Unfortunately troubled times were ahead for her company. Ball realized that their telephone number and address had been printed incorrectly in one of her major advertisements. Their telephone number had also been removed from the directory. This forced her to get a new phone number. At the same time, Dateline was becoming more and more successful and was able to leverage the fact the newspapers already took Com-Pat's advertisements to place its own ads in the same papers without the difficulties that Ball had faced earlier. In 1971, there was a Post Office strike which halted all mail. It lasted almost eight weeks and Ball's business couldn't do anything during that time. 
Everything hit an all-time low when the Daily Telegraph, the company's most successful advertising venue, refused to continue printing ads for Com-Pat because the paper had changed its advertising policy. Joan became depressed and felt unable to cope. At the same time, the UK was wracked with major strikes and economic problems: the miners' strike was causing chaos nationwide by disrupting the country's ability to produce electricity and power the government, businesses and industries that kept the economy functioning. Sale of Com-Pat to Dateline In 1973, when she was 39, she was finally diagnosed with dyslexia. Not many people knew the word, so she stopped using it when no one else knew what she was talking about. She had trouble coming to terms with her own condition. By this time, the recession had worsened. She had been fighting to keep her company afloat, but by 1974 she was in debt and decided to sell her company. She called John Patterson of Dateline and offered Com-Pat to him if he would agree to pay all of the company's debts as part of the purchase. Seeing that this was a way to monopolize the computer dating market in the UK by doing away with Dateline's only major competitor, Patterson quickly agreed. Later life After a series of personal difficulties, Ball converted to Buddhism and began to come to terms with her illnesses and setbacks. Ball found herself with many regrets and came to the conclusion that she had locked herself away in her own emotional dungeon even though she had run a company focused on making new emotional connections between other people. Impact and importance in the history of technology Ball was a successful entrepreneur who was the first person of any gender to run a commercially viable computer dating service in either the UK or the USA. Her company predated Harvard's "Operation Match" by a year and preceded the other major British computer dating company, Dateline (run by John Patterson). Com-Pat operated under Ball's management for nearly a decade, until she was eventually bought out by Dateline in 1974. Ball's experience shows that, contrary to popular narratives on the web, women were in fact early pioneers in the field of computer dating and social networking by computer. The fact that Ball has remained mostly unknown until now also reflects how gendered stereotypes have resulted in the historical submersion of women's contributions in computing. Janet Abbate, a historian of computing and professor at Virginia Tech, theorized in her book Recoding Gender, a history of women in computing, that "women who did make significant contributions were not always inclined, by temperament or socialization, to trumpet their accomplishments". Historians Nathan Ensmenger, Mar Hicks, Margot Lee Shetterly, and Jennifer Light have all shown, in their scholarship on gender in the history of computing, how structural inequality in both the present and the past has altered our view of historical reality when it comes to computing. The existence of these social dynamics in both the past and present has left many parts of computing history untold. Currently, historians are beginning to correct these oversights and to show how women like Joan Ball were important in the history of computer dating, and computing more generally. References Women Internet pioneers Online dating services of the United Kingdom 1934 births Living people People with dyslexia Businesspeople from London 20th-century English businesswomen
4210106
https://en.wikipedia.org/wiki/Abraham%20Silberschatz
Abraham Silberschatz
Avi Silberschatz (born in Haifa, Israel) is an Israeli computer scientist and researcher. He graduated in 1976 with a Ph.D. in Computer Science from the State University of New York (SUNY) at Stony Brook. He became the Sidney J. Weinberg Professor of Computer Science at Yale University, USA, in 2005. He was the chair of the Computer Science department at Yale from 2005 to 2011. Prior to coming to Yale in 2003, he was the Vice President of the Information Sciences Research Center at Bell Labs. He previously held an endowed professorship at the University of Texas at Austin, where he taught until 1993. His research interests include database systems, operating systems, storage systems, and network management. Silberschatz was elected an ACM Fellow in 1996 and received the Karl V. Karlstrom Outstanding Educator Award in 1998. He was elected an IEEE Fellow in 2000 and received the IEEE Taylor L. Booth Education Award in 2002 for "teaching, mentoring, and writing influential textbooks in the operating systems and database systems areas". He was elected an AAAS Fellow in 2009. Silberschatz is a member of the Connecticut Academy of Science and Engineering. His work has been cited over 34,000 times. Books Operating System Concepts, 10th Edition, published in 2019 by Avi Silberschatz, Peter Galvin and Greg Gagne Operating System Concepts Essentials, 2nd Edition, published in 2013 by Avi Silberschatz, Peter Galvin and Greg Gagne Database System Concepts, 7th Edition, published in 2020 by Avi Silberschatz, Henry F. Korth and S. Sudarshan References External links Biography Yale University faculty American computer scientists Fellows of the Association for Computing Machinery Fellow Members of the IEEE Stony Brook University alumni Living people Year of birth missing (living people)
63758101
https://en.wikipedia.org/wiki/Pop%21%20OS
Pop! OS
Pop!_OS is a free and open-source Linux distribution, based upon Ubuntu, and featuring a GTK-based desktop environment known as COSMIC, which is based on GNOME. The distribution is developed by American Linux computer manufacturer System76. Pop!_OS is primarily built to be bundled with the computers built by System76, but can also be downloaded and installed on most computers. Pop!_OS provides full out-of-the-box support for both AMD and Nvidia GPUs. It is regarded as an easy distribution to set up for gaming, mainly due to its built-in GPU support. Pop!_OS provides default disk encryption, streamlined window and workspace management, and keyboard shortcuts for navigation, as well as built-in power management profiles. The latest releases also have packages that allow for easy setup of TensorFlow and CUDA. Pop!_OS is maintained primarily by System76, with the release version source code hosted in a GitHub repository. Unlike many other Linux distributions, it is not community-driven, although outside programmers can contribute, view and modify the source code. They can also build custom ISO images and redistribute them under another name. Features Pop!_OS primarily uses free software, with some proprietary software used for hardware drivers for Wi-Fi, discrete GPUs and media codecs. It comes with a wide range of default software, including LibreOffice, Firefox and Geary. Additional software can be downloaded using its software store, the Pop!_Shop. Pop!_OS uses APT as its package manager and initially did not use Snaps or Flatpak, but Flatpak support was added in version 20.04 LTS. Software packages are available from the Ubuntu repositories, as well as Pop!_OS's own repositories. Pop!_OS features a customized GNOME Shell interface, with a Pop!_OS theme. There is a GUI toggle in the GNOME system menu for switching between different video modes on dual-GPU laptops. There are three display modes: hybrid, discrete and iGPU only. There is a power management package developed from the Intel Clear Linux distribution. Pop!_OS uses Xorg as its display server, with Wayland available optionally, as Ubuntu has done. Wayland lacks support for proprietary device drivers, in particular Nvidia's, while Xorg supports them. To enable use of the Nvidia proprietary drivers for best performance and GPU switching, Pop!_OS has to date used only Xorg. TensorFlow- and CUDA-enabled programs can be added by installing packages from the Pop!_OS repositories, with no additional configuration required. It provides a Recovery Partition that can be used to 'refresh' the system while preserving user files. It can be used only if it is set up during initial installation. From the 21.04 release, Pop!_OS included a new GNOME-based desktop environment called COSMIC, an acronym for "Computer Operating System Main Interface Components", developed by System76. It features separate views for workspaces and applications, includes a dock by default, and supports both mouse-driven and keyboard-driven workflows. System76 has stated it will be creating a new desktop environment not based on GNOME. This desktop environment will be written in Rust and developed to be similar to the COSMIC desktop used since version 21.04. System76 cites limitations with GNOME extensions, as well as disagreements with GNOME developers on the desktop experience, as reasons to build a new desktop environment. Installation Pop!_OS provides two ISO images for download: one with AMD video drivers and another with Nvidia drivers.
The appropriate ISO file may be downloaded and written to either a USB flash drive or a DVD using tools such as Etcher or UNetbootin. Pop!_OS initially used an Ubuntu-themed installer. Later it switched to a custom software installer built in partnership with elementary OS called Pop Shop which comes pre-installed with Pop!_OS. Release history 17.10 Prior to offering Pop!_OS, System76 had shipped all its computers with Ubuntu pre-installed. Development of Pop!_OS was commenced in 2017, after Ubuntu decided to halt development of Unity and move back to GNOME as its desktop environment. The first release of Pop!_OS was 17.10, based upon Ubuntu 17.10. In a blog post explaining the decision to build the new distribution, the company stated that there was a need for a desktop-first distribution. The first release was a customized version of Ubuntu GNOME, with mostly visual differences. Some different default applications were supplied and some settings were changed. The initial Pop theme was a fork of the Adapta GTK theme, plus other upstream projects. 17.10 also introduced the Pop!_Shop software store, which is a fork of the elementary OS app store. Bertel King of Make Use Of reviewed version 17.10, in November 2017 and noted, "System76 isn’t merely taking Ubuntu and slapping a different name on it." King generally praised the release, but did fault the "visual inconsistencies" between applications that were optimized for the distribution and those that were not and the application store, Pop!_Shop, as incomplete. For users who may want to try it on existing hardware he concluded, "now that Ubuntu 17.10 has embraced GNOME, that’s one less reason to install Pop!_OS over Ubuntu." 18.04 LTS Version 18.04 added power profiles; providing easy GPU switching, especially for Nvidia Optimus equipped laptops; HiDPI support; full disk encryption and access to the Pop!_OS repository. In 2018, reviewer Phillip Prado described Pop!_OS 18.04 as "a beautiful looking Linux distribution". He concluded, "overall, I think Pop!_OS is a fantastic distribution that most people could really enjoy if they opened up their workflow to something they may or may not be used to. It is clean, fast, and well developed. Which I think is exactly what System 76 was going for here." 18.10 Release 18.10 was released in October 2018. It included a new Linux kernel, graphic stack, theme changes and updated applications, along with improvements to the Pop!_Shop software store. 19.04 Version 19.04 was mostly an incremental update, corresponding to the same Ubuntu version. It incorporated a "Slim Mode" option to maximize screen space, through reducing the height of application window headers, a new dark mode for nighttime use and a new icon set. Joey Sneddon of OMG! Ubuntu! reviewed Pop!_OS 19.04 in April 2019 and wrote, "I don’t see any appreciable value in Pop OS. Certainly nothing that would make me recommend it over regular Ubuntu 19.04 ..." 19.10 In addition to incremental updates, version 19.10 introduced Tensorman, a custom TensorFlow toolchain management tool, multilingual support and a new theme based on Adwaita. In a 2019 comparison between Pop!_OS and Ubuntu, Ankush Das of It's FOSS found that while both distributions have their advantages, "the overall color scheme, icons, and the theme that goes on in Pop!_OS is arguably more pleasing as a superior user experience." 20.04 LTS Pop!_OS 20.04 LTS was released on 30 April 2020 and is based upon Ubuntu 20.04 LTS. 
It introduced selectable auto-tiling, expanded keyboard shortcuts and workspaces management. It also added Pop!_Shop application store support for Flatpak and introduced a "hybrid graphics mode" for laptops, allowing operation using the power-saving Intel GPU and then providing switching to the NVidia GPU for applications that require it. Firmware updates became automatic and operating system updates could be downloaded and later applied while off-line. In examining Pop!_OS 20.04 beta, FOSS Linux editor, Divya Kiran Kumar noted, "with its highly effective workspaces, advanced window management, ample keyboard shortcuts, out-of-the-box disk encryption, and myriad pre-installed apps. It would be an excellent pick for anyone hoping to use their time and effort effectively." Jason Evangelho reviewed Pop!_OS in FOSS Linux January 2020 and pronounced it the best Ubuntu-based distribution. A review of Pop!_OS 20.04 by Ankush Das in It's FOSS in May 2020 termed it "the best Ubuntu-based distribution" and concluded, "with the window tiling feature, flatpak support, and numerous other improvements, my experience with Pop!_OS 20.04 has been top-notch so far." OMG! Ubuntu! reviewer Joey Sneddon wrote of Pop!_OS 20.04, "it kinda revolutionises the entire user experience". He further noted, "The fact this distro doesn't shy away from indulging power users, and somehow manages to make it work for everyone, underlines why so-called 'fragmentation' isn't a bad thing: it's a chameleonic survival skill that allows Linux to adapt to whatever the task requires. It is the T-1000 of computing, if you get the reference. And I can't lie: Ubuntu could really learn a few things from this approach." In a 19 October 2020 review in FOSS Bytes by Mohammed Abubakar termed it, "The Best Ubuntu-based Distro!" and said it is, "an Ubuntu-based Linux distro that strikes a perfect balance between being beginner-friendly and professional or gaming use". 20.10 Pop!_OS 20.10 was released on 23 October 2020 and is based upon Ubuntu 20.10. It introduced stackable tiled windows and floating window exceptions in auto-tiling mode. Fractional scaling was also introduced, as well as external monitor support for hybrid graphics. Beta News reviewer Brian Fagioli in particular praised the availability of fractional scaling and stacking and noted "what the company does with Pop!_OS, essentially, is improve upon Ubuntu with tweaks and changes to make it even more user friendly. Ultimately, Pop!_OS has become much better than the operating system on which it is based." 21.04 Pop!_OS 21.04 was released on 29 June 2021 and is based upon Ubuntu 21.04. It included the COSMIC (Computer Operating System Main Interface Components) desktop, based on GNOME, but with a custom dock and shortcut controls. Writing in OMG Ubuntu, Joey Sneddon noted, "COSMIC puts a dock on the desktop; separates workspace and applications into individually accessible screens; adds a new keyboard-centric app launcher (that isn’t trying to search all the things™ by default); plumbs in some much-needed touchpad gestures; and — as if all of that wasn’t enough — makes further refinements to its unique window tiling extension (which you’re free to toggle on/off at any point)." He continued, "Pop!_OS 21.04 is sort of what Ubuntu could — some might say ‘should’ — be: a distro that doesn’t patronise its potential users by fixating on an idealised use case drawn up in a meeting. 
COSMIC wants to help its users work more efficiently on their terms, not impose a predetermined workflow upon them." 21.10 Pop!_OS 21.10 was released on 14 December 2021 and is based upon Ubuntu 21.10. It includes GNOME 40, a new "Vertical Overview" extension, a new Applications menu and support for Raspberry Pi. Release table Pop!_OS is based upon Ubuntu and its release cycle is the same as Ubuntu's, with new releases every six months in April and October. Long-term support releases are made every two years, in April of even-numbered years. Each non-LTS release is supported for three months after the release of the next version, similar to Ubuntu. Support for LTS versions is provided until the next LTS release. This is considerably shorter than Ubuntu, which provides five years of support for LTS releases. See also Debian List of Ubuntu-based distributions References External links Official website Pop!_OS at DistroWatch 2017 software Computer-related introductions in 2017 Free software operating systems Linux distributions Ubuntu derivatives X86-64 Linux distributions
17115475
https://en.wikipedia.org/wiki/Anastasios%20Venetsanopoulos
Anastasios Venetsanopoulos
Anastasios (Tas) Venetsanopoulos (June 19, 1941 – November 17, 2014) was a Professor of Electrical and Computer Engineering at Ryerson University in Toronto, Ontario, and a Professor Emeritus with the Edward S. Rogers Sr. Department of Electrical and Computer Engineering at the University of Toronto. In October 2006, Professor Venetsanopoulos joined Ryerson University and served as the Founding Vice-President of Research and Innovation. His portfolio included oversight of the university's international activities, research ethics, Office of Research Services, and Office of Innovation and Commercialization. He retired from that position in 2010, but continued as a distinguished advisor to the office. Tas Venetsanopoulos continued to actively supervise his research group at the University of Toronto, and was a highly sought-after consultant throughout his career. Education Tas Venetsanopoulos received a Bachelor of Electrical and Mechanical Engineering degree from the National Technical University of Athens, and an M.S., an M.Phil., and a PhD in Electrical Engineering from Yale University, New Haven, Connecticut. Tas Venetsanopoulos was a Fulbright Scholar and a Schmitt Scholar, and in 1994 was awarded an Honorary Doctorate from his alma mater, the National Technical University of Athens. Personal life In 1986, Tas Venetsanopoulos married Vasiliki Koronakis in a Greek Orthodox service in Toronto, Ontario. They have two daughters: Elizabeth Venetsanopoulos (born 1987) and Dominique Venetsanopoulos (born 1988). Research interests Tas Venetsanopoulos' research interests included: biometrics research; multimedia (image compression, image and video retrieval); digital signal/image processing (multichannel image processing, nonlinear, adaptive and M-D filtering, knowledge-based image processing and recognition, 3-D imaging, biomedical applications); pattern classification and telecommunications. Research record Professor Anastasios (Tas) Venetsanopoulos had a long and productive career in research, education and university administration. He was an internationally renowned researcher in the fields of multimedia systems, digital signal and image processing, digital communications, biometrics and neural networks. Over a period of four decades, he established himself in the worldwide telecommunications and signal processing community as an outstanding researcher, scholar, professor and consultant. He made contributions to telecommunications, signal and image processing, multimedia and biometrics research by authoring and co-authoring many journal papers and books. His pioneering and fundamental research contributions, along with the writing of numerous graduate-level books, opened up new vistas in several fields, including telecommunications; multidimensional filter theory and design; the design of non-linear filters; multimedia neural networks; biometrics applications and WLAN positioning systems. According to a 2020 Google Scholar count, his work has been cited in over 22,600 research papers and 400 textbooks. He was a mentor for over 160 graduate students and post-doctoral fellows. He motivated a generation of engineers in North America and around the world to take up careers in research and teaching in the areas of signal and image processing, telecommunications, multimedia, and biometrics. 
Telecommunications Professor Venetsanopoulos' early work dealt with the problem of optimal detection and signal design, to facilitate communication over purely random, general, linear, time-varying, very noisy, undersea acoustic channels. His results contributed to the improvement of SONAR systems for undersea communications over fading dispersive channels and were later applied to ionospheric and tropospheric channels. Subsequent publications focused on the issue of image and video compression and made contributions in the area of progressive image transmission (PIT). PIT refers to the coding of still images at increasing levels of precision. Through PIT, it is possible to expedite activities such as browsing through remote databases of images. Professor Venetsanopoulos developed and tested a number of first and second generation morphological pyramidal techniques, which achieved compression ratios of around 100:1 for good quality, lossy, still image transmission. He contributed to the study of vector quantization for lossy image compression and developed a number of hierarchical coding techniques for still images. Wavelet techniques for still image compression were also addressed by him, as well as fractal-based techniques for compressing and coding still images and video sequences. His later contributions in telecommunications were in the area of mobility management, and he developed cost-effective algorithms for mobile terminal location determination and WLAN positioning systems. This area has attracted interest for its applications in emergency communications, location-sensitive browsing, and resource allocation. Signal and image processing Professor Venetsanopoulos was one of the first Canadian researchers to make a contribution to the foundations of two-dimensional and multi-dimensional digital filtering. These techniques are widely used in image and video processing. His early contributions in these areas provided the basis for a variety of techniques that led to efficient two-dimensional filter design. In the eighties, his interest was focused on the area of nonlinear filters. Nonlinear filters are more complex than linear filters but allow additional flexibility and speed in complex applications. In the area of nonlinear filters, Professor Venetsanopoulos contributed theoretical results, including the introduction of new filter families. The "Nonlinear Order Statistics Filters" included as special cases the linear, median, order statistics, homomorphic, α-trimmed mean, generalized mean, nonlinear mean and fuzzy nonlinear filters. New versions of polynomial filters, such as quadratic filters, were also studied by Professor Venetsanopoulos. He designed new morphological filters, which led to various detection and recognition applications. Finally, he conducted extensive research in the area of adaptive filters. Professor Venetsanopoulos developed Adaptive Order Statistics filters, Adaptive LMS/RLS filters, Adaptive L-filters and Adaptive morphological filter algorithms. These filters are extensively used in numerous biomedical applications, such as in radiology, mammography and tomography. Among other applications, they are also applied to financial data processing and remote sensing. In the nineties, Professor Venetsanopoulos contributed to the field of color image processing and analysis, where he introduced a number of techniques for color image enhancement, filtering and analysis. 
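The practical difference between a linear smoother and the simplest order-statistics filter, the median filter, can be illustrated with a short sketch (a generic Python/NumPy example with made-up signal values, not code from Venetsanopoulos' publications):

import numpy as np

def mean_filter_1d(signal, k=3):
    # Linear smoothing: each sample is replaced by the average of its k-neighbourhood.
    pad = k // 2
    padded = np.pad(signal, pad, mode="edge")
    return np.array([padded[i:i + k].mean() for i in range(len(signal))])

def median_filter_1d(signal, k=3):
    # Nonlinear order-statistics smoothing: each sample is replaced by the median
    # of its k-neighbourhood, suppressing impulsive noise while keeping edges sharp.
    pad = k // 2
    padded = np.pad(signal, pad, mode="edge")
    return np.array([np.median(padded[i:i + k]) for i in range(len(signal))])

# A step edge corrupted by a single impulse ("salt" noise):
x = np.array([0, 0, 0, 9, 0, 5, 5, 5, 5], dtype=float)
print(mean_filter_1d(x))    # the impulse is smeared over its neighbours
print(median_filter_1d(x))  # the impulse is removed and the step edge is preserved

The same idea extends to two-dimensional windows over images, the setting in which such filters are applied in biomedical imaging.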
He also introduced the so-called vector directional filter family, which operates along the direction of the color vectors. A new class of adaptive nonlinear filters was developed. Fuzzy membership functions based on different distance measures were adopted to determine the weights of new nonlinear, adaptive filters. The new filters encompassed different classes of existing nonlinear filters as special cases. For the first time, the color image was treated as a vector field and edge information carried directly by the color vectors was exploited using vector order statistics. Multimedia signal processing In 1999 Professor Venetsanopoulos became the Inaugural Chair of the Bell Canada Multimedia Systems Laboratory at the University of Toronto. From that year, he contributed to the area of multimedia data mining and information retrieval by addressing two key technical challenges: a) the problem of similarity determination within the visual data domain, and b) the interactive learning of user intentions and automatic adjustment of system parameters for improved retrieval accuracy. He developed still image and video retrieval systems that utilized color content queries. The system implemented a new vector-based approach to image retrieval using an angular-based similarity measure. The scheme he developed addresses the drawbacks of histogram techniques, is flexible, and outperforms established retrieval systems. He also developed an interactive learning algorithm for resolving ambiguities arising due to the mismatch between machine-representation of images and human context-dependent interpretation of visual content. His proposed solution exploited feedback from users during retrieval sessions to adapt to their query intentions and improve the accuracy of the retrieved results. Biometrics research For thousands of years, humans have used visually-perceived body characteristics such as face and gait to recognize one another. This remarkable ability of the human visual system led Professor Venetsanopoulos to build automated systems to recognize individuals from digitally captured facial images and gait sequences. Face and gait recognition belong to the field of biometrics, a very active area of research in computer science, mainly motivated by government and security-related considerations. Face and gait are typical examples of physiological and behavioral biometrics, respectively. Venetsanopoulos contributed to both areas and his research has been extensively cited. There are two general approaches to the subject: the appearance-based approach and the model-based approach. Appearance-based face recognition processes a 2-D facial image as a 2-D holistic pattern. The whole face region is the raw input to a recognition system and each face image is commonly represented by a high-dimensional vector consisting of the pixel intensity values in the image. Thus, face recognition is transformed into a multivariate, statistical pattern recognition problem. In a similar fashion to appearance-based face recognition, an appearance-based gait recognition approach considers gait as a holistic pattern and uses a full-body representation of a human subject as silhouettes or contours. Gait video sequences are naturally three-dimensional objects, formally named tensor objects, and they are very difficult to deal with using traditional vector-based learning algorithms. 
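In outline, the appearance-based approach described above flattens every image into a long pixel vector and learns a linear subspace over those vectors. A minimal eigenface-style sketch, given here only as an illustration (Python/NumPy, with assumed array shapes rather than code from the cited work), looks as follows:

import numpy as np

def pca_subspace(images, n_components=20):
    # Appearance-based ("eigenface"-style) modelling: flatten each image into a
    # pixel-intensity vector and find the principal directions of variation.
    # images: array of shape (n_samples, height, width)
    X = images.reshape(len(images), -1).astype(float)
    mean = X.mean(axis=0)
    Xc = X - mean                                    # centre the data
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return mean, vt[:n_components]                   # mean vector and subspace basis

def project(image, mean, basis):
    # Low-dimensional feature vector used for nearest-neighbour matching.
    return basis @ (image.reshape(-1).astype(float) - mean)

Multilinear subspace learning, discussed next, avoids this flattening step and works with the image or gait data in its natural tensor form.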
In order to deal with these tensor objects effectively, Venetsanopoulos and his research team developed a framework of multilinear subspace learning, so that computation and memory demands are reduced, natural structure and correlation in the original data are preserved, and more compact and useful features can be obtained. The model-based gait recognition approach considers a human subject as an articulated object, represented by various body poses. Professor Venetsanopoulos proposed a full-body, layered deformable model (LDM) inspired by the manually labeled body-part-level silhouettes. The LDM has a layered structure to model self-occlusion between body parts and it is deformable, so simple limb deformation is taken into consideration. In addition, it also models shoulder swing. The LDM parameters can be recovered from automatically extracted silhouettes and then used for recognition. Publications and grants Tas Venetsanopoulos authored or co-authored nine books; contributed chapters to 35 books; and published over 870 academic papers in refereed journals and conference proceedings. Venetsanopoulos' best known contributions to electrical engineering are: "Nonlinear Digital Filters: Principles and Applications", "Artificial Neural Networks: Learning Algorithms, Performance Evaluation, and Applications", "Color Image Processing and Applications", "WLAN Positioning Systems", "Multilinear Subspace Learning: Dimensionality Reduction of Multidimensional Data". He was supported by grants from the Natural Sciences and Engineering Research Council of Canada (NSERC); the Centers of Excellence of the Province of Ontario; the Ontario Research Fund; the Canadian Space Agency; Spar Aerospace; Ontario Hydro; the Department of Fisheries and Oceans, Canada; the Department of Communications, Canada; and the Province of Ontario. Career Professor Venetsanopoulos joined the Department of Electrical and Computer Engineering (ECE) at the University of Toronto in September 1968 as a Lecturer. He was promoted to Assistant Professor in 1970, Associate Professor in 1973, and Professor in 1981. Venetsanopoulos served as Chair of the Communications Group and Associate Chair of the Department of Electrical Engineering. Between July 1997 and June 2001, he was Associate Chair for Graduate Studies of the Department of Electrical and Computer Engineering and was Acting Chair during the spring term of 1998–99. In 1999, a Chair in Multimedia was established in the ECE Department, made possible by a donation of $1.25 million from Bell Canada, matched with $1 million of university funds. Venetsanopoulos served as Inaugural Chairholder between 1999 and 2005 and two Assistant Professors were hired in the same field. During the period 2001–2006, he served as the twelfth Dean of the Faculty of Applied Science and Engineering at the University of Toronto. Venetsanopoulos' five-year term as the twelfth Dean of the University of Toronto Faculty of Applied Science and Engineering – the largest and most prominent Faculty of Engineering in Canada – was characterized by an ambitious record of achievement. During his tenure, the "Great Minds" campaign of the Faculty raised $124 million in external donations, matched by an equal amount of funds from granting agencies and foundations. There were two major buildings constructed: the Bahen Centre for Information Technology with the Faculty of Arts and Sciences, and the Terrence Donnelly Centre for Cellular and Biomolecular Research with the Faculty of Medicine. 
A Strategic Plan for 2004–2010 set the direction for Faculty-wide revitalization. The undergraduate curriculum was revised to offer greater flexibility and enrichment. The Office of the Vice-Dean, Research and Graduate Studies was introduced to enhance the research of the Faculty. An exceptional number of citations of the Faculty's professors resulted, while administrative units across the Faculty were streamlined. The Faculty of Engineering increased its focus on teaching and on the quality of the student experience. There was greater multi-disciplinary collaboration. Professor Venetsanopoulos went on research leave at the Imperial College of Science and Technology, the National Technical University of Athens, the Swiss Federal Institute of Technology, the University of Florence, the Federal University of Rio de Janeiro and the University of Grenoble, France. He also served as Adjunct Professor at Concordia University. During 2003–06 he served as a member of the advisory board of the Faculty of Engineering of the National University of Singapore. In April 2009, he was appointed as the Distinguished Guest Professor of the Communications University of China. He served as a lecturer in 138 short courses for industry and continuing education programs and as a consultant to numerous organizations. On 1 October 2006, Professor Venetsanopoulos joined Ryerson University as the Founding Vice-President of Research and Innovation. In that position, Venetsanopoulos accepted oversight of Ryerson's international activities, research ethics, the Office of Research Services, and the Office of Innovation and Commercialization. In this role, he implemented four strategies to transform Ryerson University, which had been a polytechnic institution only a few years before. First, the provision of stimulus and support to the quality and quantity of scholarly research and creative activity (with the stated goal of delivering a research enterprise of over $20 million by the 2010–11 fiscal year). Second, the facilitation of the transfer of new knowledge to the community, industry and the marketplace. Third, the pursuit of partnerships and collaborations that supported the overall scholarly research and creative activities plan. Fourth, the provision of research opportunities to both undergraduate and graduate students throughout the University. Under his leadership, the research trajectory at Ryerson included international competitions for outstanding postdoctoral fellows; a focus on increasing innovation; international and commercial activity; and university support for excellence and ingenuity among graduate and undergraduate students. On 30 June 2010, Professor Venetsanopoulos retired from the position of Vice-President of Research and Innovation at Ryerson, took a one-year administrative leave, and subsequently joined the Department of Electrical and Computer Engineering. In the words of Ryerson's President Sheldon Levy, "As the first ever Vice-President, Research and Innovation at Ryerson, Tas brought to the position an immediate credibility and presence based on his own international research record. He advanced research in ways that established the university as active and competitive in Scholarly Research and Creative Activity. Under his leadership Ryerson has attracted scholars and postdoctoral fellows with unprecedented momentum, and made great progress in visibility, perception and objective rankings related to research... 
Externally funded research has more than doubled in the past four years, and Ryerson now ranks in the top half of non-medical universities in Canada for research... Under Tas' leadership the research trajectory at Ryerson has been one of extraordinary growth and success." In December 2011, Venetsanopoulos was appointed "Distinguished Advisor to the Vice President Research and Innovation" and continued his full-time academic duties as a Professor of Electrical and Computer Engineering at Ryerson University until his death. Professional service and awards Professor Venetsanopoulos served as chairperson on numerous boards, councils and technical conference committees of the Institute of Electrical and Electronics Engineers (IEEE). He served as the Chair of the Toronto Section from 1977 to 1979 and the IEEE Central Canada Council from 1980 to 1982. He was President of the Canadian Society for Electrical Engineering and Vice President of the Engineering Institute of Canada from 1983 to 1986. He was a Guest Editor or Associate Editor for several IEEE journals and the Editor of the Canadian Electrical Engineering Journal (1981–1983). He was a member of the Communications, Circuits and Systems, Computer, and Signal Processing Societies of IEEE, as well as a member of Sigma Xi, the Association for Computing Machinery, the American Society for Engineering Education, the Technical Chamber of Greece, and the Association of Professional Engineers of Ontario (APEO) and Greece. In 1994, Professor Venetsanopoulos was awarded an Honorary Doctorate from the National Technical University of Athens, Greece. In 1996, he was awarded the "Excellence in Innovation" Award from the Information Technology Research Centre of Ontario and the Royal Bank of Canada for his work in image processing. Venetsanopoulos was also awarded the "Millennium Medal of IEEE" and the "MacNaughton Medal". In March 2006, he was a joint recipient of the IEEE Transactions on Neural Networks Outstanding Paper Award. He was a Fellow of the Engineering Institute of Canada, the IEEE, and the Canadian Academy of Engineering. In 2008, A.N. Venetsanopoulos, along with Rastislav Lukac, Bogdan Smolka and Konstantinos N. Plataniotis, was awarded the "Most Cited Paper Award" by the Journal of Visual Communication and Image Representation for their work in artificial neural networks. In 2010, Dr. Venetsanopoulos was elected as Fellow of the Royal Society of Canada. References External links Venetsanopoulos CV (October, 2012) Digital Image and Signal Processing Lab, Ryerson University Multimedia Lab, University of Toronto University of Toronto faculty 1941 births 2014 deaths National Technical University of Athens alumni National Technical University of Athens faculty Fellows of the Engineering Institute of Canada Fellows of the Royal Society of Canada IEEE Centennial Medal laureates Fellow Members of the IEEE Ryerson University faculty Engineers from Ontario Yale University alumni 20th-century Canadian engineers 21st-century Canadian engineers Engineers from Toronto Scientists from Toronto Greek electrical engineers Canadian electrical engineers Canadian mechanical engineers Greek computer scientists Canadian computer scientists 20th-century Greek scientists 21st-century Greek scientists 20th-century Canadian scientists 21st-century Canadian scientists
44992633
https://en.wikipedia.org/wiki/I.CX
I.CX
I.CX is a messaging and file-sharing web application providing end-to-end encryption without any download or installation. It was developed by the Toronto firm EveryBit, and relies on the open-source EveryBit.js framework. All encryption is done client-side in the user's web browser. All files and messages stored and sent using I.CX are protected with 256-bit AES encryption. References Cryptographic software Free software Internet privacy software
394780
https://en.wikipedia.org/wiki/Physics%20Analysis%20Workstation
Physics Analysis Workstation
The Physics Analysis Workstation (PAW) is an interactive, scriptable computer software tool for data analysis and graphical presentation in High Energy Physics (HEP). The development of this software tool started at CERN in 1986; it was optimized for the processing of very large amounts of data. It was based on and intended for inter-operation with components of CERNLIB, an extensive collection of Fortran libraries. PAW had been a standard tool in high energy physics for decades, yet was essentially unmaintained. Despite continuing popularity as of 2008, it has been losing ground to the C++-based ROOT package. Conversion tutorials exist. In 2014, development and support were stopped. Sample script PAW uses its own scripting language. Here is sample code which can be used to plot data gathered in files.

* read data
vector/read X,Y input_file.dat
* eps plot
fort/file 55 gg_ggg_dsig_dphid_179181.eps
meta 55 -113
opt linx | linear scale
opt logy | logarithmic scale
* here goes plot
set plci 1 | line color
set lwid 2 | line width
set dmod 1 | line type (solid, dotted, etc.)
graph 32 X Y AL | 32 stands for input data lines in input file
* plot title and comments
set txci 1
atitle '[f] (deg)' 'd[s]/d[f]! (mb)'
set txci 1
text 180.0 2e1 '[f]=179...181 deg' 0.12
close 55

References External links PAW (at CERN) The PAW History Seen by the CERN Computer News Letters CERNLIB (at CERN) ROOT (at CERN) Free science software Free software programmed in Fortran Physics software CERN software
28290
https://en.wikipedia.org/wiki/Slackware
Slackware
Slackware is a Linux distribution created by Patrick Volkerding in 1993. Originally based on Softlanding Linux System, Slackware has been the basis for many other Linux distributions, most notably the first versions of SUSE Linux distributions, and is the oldest distribution that is still maintained. Slackware aims for design stability and simplicity and to be the most "Unix-like" Linux distribution. It makes as few modifications as possible to software packages from upstream and tries not to anticipate use cases or preclude user decisions. In contrast to most modern Linux distributions, Slackware provides no graphical installation procedure and no automatic dependency resolution of software packages. It uses plain text files and only a small set of shell scripts for configuration and administration. Without further modification it boots into a command-line interface environment. Because of its many conservative and simplistic features, Slackware is often considered to be most suitable for advanced and technically inclined Linux users. Slackware is available for the IA-32 and x86_64 architectures, with a port to the ARM architecture. While Slackware is mostly free and open-source software, it does not have a formal bug tracking facility or public code repository, with releases periodically announced by Volkerding. There is no formal membership procedure for developers and Volkerding is the primary contributor to releases. Name The name "Slackware" stems from the fact that the distribution started as a private side project with no intended commitment. To prevent it from being taken too seriously at first, Volkerding gave it a humorous name, which stuck even after Slackware became a serious project. Slackware refers to the "pursuit of Slack", a tenet of the Church of the SubGenius, a parody religion. Certain aspects of Slackware graphics reflect this—the pipe that Tux is smoking, as influenced by the image of J. R. "Bob" Dobbs' head. A humorous reference to the Church of the SubGenius can be found in many versions of the install.end text files, which indicate the end of a software series to the setup program. In recent versions, including Slackware release 14.1, the text is ROT13 obfuscated. History Birth Slackware was originally derived from the Softlanding Linux System (SLS), the most popular of the original Linux distributions and the first to offer a comprehensive software collection that comprised more than just the kernel and basic utilities, including X11 graphical interface, TCP/IP and UUCP networking and GNU Emacs. Patrick Volkerding started with SLS after needing a LISP interpreter for a school project at the then named Moorhead State University (MSU). He found CLISP was available for Linux and downloaded SLS to run it. A few weeks later, Volkerding was asked by his artificial intelligence professor at MSU to show him how to install Linux at home and on some of the computers at school. Volkerding had made notes describing fixes to issues he found after installing SLS and he and his professor went through and applied those changes to a new installation. However, this took almost as long as it took to just install SLS, so the professor asked if the install disks could be adjusted so the fixes could be applied during installation. This was the start of Slackware. Volkerding continued making improvements to SLS: fixing bugs, upgrading software, automatic installation of shared libraries and the kernel image, fixing file permissions, and more. 
In a short time, Volkerding had upgraded around half the packages beyond what SLS had available. Volkerding had no intentions to provide his modified SLS version for the public. His friends at MSU urged him to put his SLS modifications onto an FTP server, but Volkerding assumed that "SLS would be putting out a new version that included these things soon enough", so he held off for a few weeks. During that time, many SLS users on the internet were asking SLS for a new release, so eventually Volkerding made a post titled "Anyone want an SLS-like 0.99pl11A system?", to which he received many positive responses. After a discussion with the local sysadmin at MSU, Volkerding obtained permission to upload Slackware to the university's FTP server. This first Slackware release, version 1.00, was distributed on July 17, 1993, at 00:16:36 (UTC), and was supplied as twenty-four 3½" floppy disk images. After the announcement was made, Volkerding watched as the flood of FTP connections continually crashed the server. Soon afterwards, Walnut Creek CDROM offered additional archive space on their FTP servers. Development The size of Slackware quickly increased with the addition of included software, and by version 2.1, released October 1994, it had more than tripled to comprise seventy-three 1.44M floppy disk images. In 1999, Slackware saw its version jump from 4 to 7. Slackware version numbers were lagging behind other distributions, and this led many users to believe it was out of date even though the bundled software versions were similar. Volkerding made the decision to bump the version as a marketing effort to show that Slackware was as up-to-date as other Linux distributions, many of which had release numbers of 6 at the time. He chose 7, estimating that most other distributions would soon be at this release number. In April 2004, Patrick Volkerding added X.Org Server packages into the testing/ directory of -current as a replacement for the XFree86 packages currently being used, with a request for comments on what the future of the X Window System in Slackware should be. A month later, he switched from XFree86 to X.Org Server after stating that the opinions were more than 4 to 1 in favor of using the X.org release as the default version of X. He stated the decision was primarily a technical one, as XFree86 was proving to cause compatibility problems. Slackware 10.0 was the first release with X.Org Server. In March 2005, Patrick Volkerding announced the removal of the GNOME desktop environment in the development ChangeLog. He stated this had been under consideration for more than four years and that there were already projects that provided a more complete version of GNOME for Slackware than what Slackware itself provided. Volkerding stated future GNOME support would rely on the community. The community responded and as of October 2016, there are several active GNOME projects for Slackware. These include Cinnamon, Dlackware, Dropline GNOME, MATE, and SlackMATE. The removal was deemed significant by some in the Linux community due to the prevalence of GNOME in many distributions. In May 2009, Patrick Volkerding announced the public (development) release of an official x86_64 variant, called Slackware64, maintained in parallel with the IA-32 distribution. Slackware64 is a pure 64-bit distribution in that it does not support running or compiling 32-bit programs, however, it was designed as "multilib-ready". 
Eric Hameleers, one of the core Slackware team members, maintains a multilib repository that contains the necessary packages to convert Slackware64 to multilib to enable running of 32-bit software. Hameleers started the 64-bit port as a diversion from the pain of recovering from surgery in September 2008. Volkerding tested the port in December 2008, and was impressed when he saw speed increases between 20 and 40 percent for some benchmarks compared to the 32-bit version. To minimize the extra effort of maintaining both versions in parallel, Slackware's build scripts, called SlackBuilds, were slowly transitioned to supporting either architecture, allowing for one set of sources for both versions. Slackware64 saw its first stable release with version 13.0. Between the November 2013 release of 14.1 and June 2016, Slackware saw a 31-month gap between releases, marking the longest span in release history. During this time the development branch went without updates for 47 days. However, on April 21, 2015, Patrick Volkerding apologized on the ChangeLog for the absence of updates and stated that the development team used the time to get "some good work done." There were over 700 program changes listed on that ChangeLog entry, including many major library upgrades. In January 2016, Volkerding announced the reluctant addition of PulseAudio, primarily due to BlueZ dropping direct ALSA support in v5.x. while various other projects were in turn dropping support for BlueZ v4.x. Knowing some users would not be happy with the change, he stated that "Bug reports, complaints, and threats can go to me." These changes culminated in the release of Slackware 14.2 in June 2016. Design philosophy The design philosophy of Slackware is oriented toward simplicity, software purity, and a core design that emphasizes lack of change to upstream sources. Many design choices in Slackware can be seen as a heritage of the simplicity of traditional Unix systems and as examples of the KISS principle. In this context, "simple" refers to the simplicity in system design, rather than system usage. Thus, ease of use may vary between users: those lacking knowledge of command line interfaces and classic Unix tools may experience a steep learning curve using Slackware, whereas users with a Unix background may benefit from a less abstract system environment. In keeping with Slackware's design philosophy, and its spirit of purity, most software in Slackware uses the original configuration mechanisms supplied by the software's authors; however, for some administrative tasks, distribution-specific configuration tools are delivered. Development model There is no formal issue tracking system and no official procedure to become a code contributor or developer. The project does not maintain a public code repository. Bug reports and contributions, while being essential to the project, are managed in an informal way. All the final decisions about what is going to be included in a Slackware release strictly remain with Slackware's benevolent dictator for life, Patrick Volkerding. The first versions of Slackware were developed by Patrick Volkerding alone. Beginning with version 4.0, the official Slackware announce files list David Cantrell and Logan Johnson as part of the "Slackware team". Later announce statements, up to release version 8.1, include Chris Lumens. Lumens, Johnson and Cantrell are also the authors of the first edition of "Slackware Linux Essentials", the official guide to Slackware Linux. 
The Slackware website mentions Chris Lumens and David Cantrell as being "Slackware Alumni", who "worked full-time on the Slackware project for several years." In his release notes for Slackware 10.0 and 10.1 Volkerding thanks Eric Hameleers for "his work on supporting USB, PCI, and Cardbus wireless cards". Starting with version 12.0 there is, for a second time, a team building around Volkerding. According to the release notes of 12.2, the development team consists of seven people. Future versions added people. Since version 13.0, the Slackware team seems to have core members. Eric Hameleers gives an insight into the core team with his essay on the "History of Slackware Development", written on October 3–4, 2009 (shortly after the release of version 13.0). Packages Management Slackware's package management system, collectively known as pkgtools, can administer (pkgtool), install (installpkg), upgrade (upgradepkg), and remove (removepkg) packages from local sources. It can also uncompress (explodepkg) and create (makepkg) packages. The official tool to update Slackware over a network or the internet is slackpkg. It was originally developed by Piter Punk as an unofficial way to keep Slackware up-to-date. It was officially included in the main tree in Slackware 12.2, having been included in the extra/ directory since Slackware 9.1. When a package is upgraded, it will install the new package over the old one and then remove any files that no longer exist in the new package. When running upgradepkg, it only confirms that the version numbers are different, thus allowing downgrading the package if desired. Slackware packages are tarballs compressed using various methods. Starting with 13.0, most packages are compressed using xz (based on the LZMA compression algorithm), utilizing the .txz filename extension. Prior to 13.0, packages were compressed using gzip (based on the DEFLATE compression algorithm), using the .tgz extension. Support for bzip2 and lzip compression was also added, using the filename extensions .tbz and .tlz respectively, although these are not commonly used. Packages contain all the files for that program, as well as additional metadata files used by the package manager. The package tarball contains the full directory structure of the files and is meant to be extracted in the system's root directory during installation. The additional metadata files, located under the special install/ directory within the tarball, usually include a slack-desc file, which is a specifically formatted text file that is read by the package manager to provide users with a description of the packaged software, as well as a doinst.sh file, which is a post-unpacking shell script allowing creation of symbolic links, preserving permissions on startup files, proper handling of new configuration files, and any other aspects of installation that can't be implemented via the package's directory structure (a minimal example of reading this format is shown below). During the development of 15.0, Volkerding introduced support for a douninst.sh uninstall script that can be launched when removing or upgrading a package. This allows package maintainers to run commands when a package is uninstalled. The package manager maintains a local database on the computer, stored in multiple folders. On 14.2 and older systems, the main database of installed packages was maintained in /var/log/; however, during the development of 15.0, Volkerding moved two of the directories to a dedicated location under /var/lib/pkgtools/ to prevent accidental deletion when clearing system logs. Each Slackware installation will contain a packages and a scripts directory in the main database location. 
The former is where each package installed will have a corresponding install log file (based on the package name, version, arch, and build) that contains the package size, both compressed and uncompressed, the software description, and the full path of all files that were installed. If the package contained an optional post-installation script, the contents of that script will be added to a file in the scripts directory matching the filename of the corresponding package in the packages directory, allowing the administrator to view the post-installation script at a future point. When a package is removed or upgraded, the old install logs and scripts found under packages and scripts are moved to removed_packages and removed_scripts, making it possible to review any previous packages and see when they were removed. These directories can be found in /var/log/ on 14.2 and earlier, but were moved to /var/lib/pkgtools/ during the development of 15.0. On systems supporting the uninstall script, those scripts are likewise stored in their own directory while the package is installed and are moved to a corresponding removed-scripts location once the package is removed. Dependency resolution The package management system does not track or manage dependencies; however, when performing the recommended full install, all dependencies of the stock packages are met. For custom installations or 3rd-party packages, Slackware relies on the user to ensure that the system has all the supporting system libraries and programs required by the program. Since no official lists of dependencies for stock packages are provided, if users decide to install a custom installation or install 3rd-party software, they will need to work through any possible missing dependencies themselves. Since the package manager doesn't manage dependencies, it will install any and all packages, whether or not dependencies are met. A user may find out that dependencies are missing only when attempting to use the software. While Slackware itself does not incorporate official tools to resolve dependencies, some unofficial, community-supported software tools do provide this function, similar to the way APT does for Debian-based distributions and yum does for Red Hat-based distributions. They include slapt-get, a command line utility that functions in a similar way to APT. While slapt-get does provide a framework for dependency resolution, it does not provide dependency resolution for packages included within the Slackware distribution. However, several community package sources and Slackware based distributions take advantage of this functionality. Gslapt is a graphical interface to slapt-get. Swaret is a package management tool featuring dependency resolution. It was originally included in Slackware version 9.1 as an optional package, but did not contain dependency resolution at that time. It was removed from the distribution with Slackware 10.0 and turned over to the community. It eventually added dependency resolution and roll-back functionality; however, as of May 2014, there are no active developers. NetBSD's pkgsrc provides support for Slackware, among other Unix-like operating systems. pkgsrc provides dependency resolution for both binary and source packages. Repositories There are no official repositories for Slackware. The only official packages Slackware provides are available on the installation media. However, there are many third-party repositories for Slackware; some are standalone repositories and others are for distributions that are Slackware-based but retain package compatibility with Slackware. 
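As an aside, the package layout described above (an ordinary compressed tar archive whose install/slack-desc file holds the description text) can be inspected with standard tools. The following minimal Python sketch is illustrative only, not an official Slackware utility, and the package file name in the usage comment is hypothetical:

import tarfile

def read_slack_package(path):
    # Inspect a Slackware package tarball (e.g. a .tgz or .txz file).
    # Returns the list of files it would install and the text of the
    # install/slack-desc description file, if the package ships one.
    with tarfile.open(path, "r") as tar:            # auto-detects gzip/bzip2/xz
        names = [m.name for m in tar.getmembers()]
        desc = ""
        for member in tar.getmembers():
            # paths inside the package are usually stored relative to "./"
            if member.name.lstrip("./") == "install/slack-desc":
                desc = tar.extractfile(member).read().decode("utf-8", "replace")
                break
        return names, desc

# Hypothetical usage:
# files, description = read_slack_package("foo-1.0-x86_64-1.txz")
# print(description)

Because the format is a plain archive, a package can be examined in this way without installing it or consulting the package database.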
Many of these repositories can be searched at once using pkgs.org, which is a Linux package search engine. However, mixing and matching dependencies from multiple repositories can lead to two or more packages that require different versions of the same dependency, which is a form of dependency hell. Slackware itself won't provide any dependency resolution for these packages; however, some projects will provide a list of dependencies that are not included with Slackware in a separate file shipped alongside the package files. Due to the possibility of dependency issues, many users choose to compile their own programs using community-provided SlackBuilds. SlackBuilds are shell scripts that will create an installable Slackware package from a provided software tarball. Since SlackBuilds are scripts, they aren't limited to just compiling a program's source; they can also be used to repackage pre-compiled binaries provided by projects or other distributions' repositories into proper Slackware packages. SlackBuilds that compile sources have several advantages over pre-built packages: since they build from the original author's source code, the user does not have to trust a third-party packager; furthermore, the local compilation process allows for machine-specific optimization. In comparison to manual compilation and installation of software, SlackBuilds provide cleaner integration to the system by utilizing Slackware's package manager. Some SlackBuilds will come with an additional file with metadata that allows automated tools to download the source, verify the source is not corrupt, and calculate additional dependencies that are not part of Slackware. Some repositories will include both SlackBuilds and the resulting Slackware packages, allowing users to either build their own or install a pre-built package. The only officially endorsed SlackBuilds repository is SlackBuilds.org, commonly referred to as SBo. This is a community-supported project offering SlackBuilds for building software not included with Slackware. Users are able to submit new SlackBuilds for software to the site and, once approved, they become the "package maintainer". They are then responsible for providing updates to the SlackBuild, either to fix issues or to build newer versions provided by upstream. To ensure all programs can be compiled and used, any required dependencies of the software not included with Slackware are required to be documented and be available on the site. All submissions are tested by the site's administrators before being added to the repository. The administrators intend for the build process to be nearly identical to the way Slackware's official packages are built, mainly to ensure Volkerding was "sympathetic of our cause". This allows SlackBuilds that Volkerding deems worthy to be pulled into regular Slackware with minimal changes to the script. It also prevents users from suggesting that Volkerding change his scripts to match SBo's. SBo provides templates for SlackBuilds and the additional metadata files and they encourage package maintainers to not deviate unless necessary. Two Slackware team members, Eric Hameleers and Robby Workman, each have their own repository of pre-compiled packages along with the SlackBuilds and source files used to create the packages. 
While most packages are just additional software not included in Slackware that they felt was worth their time to maintain, some packages are used as a testbed for future upgrades to Slackware, most notably, Hameleers provides "Ktown" packages for newer versions of KDE. He also maintains Slackware's "multilib" repository, enabling Slackware64 to run and compile 32-bit packages. Releases Slackware's release policy follows a feature and stability based release cycle, in contrast to the time-bound (e.g., Ubuntu) or rolling release (e.g., Gentoo Linux) schemes of other Linux distributions. This means there is no set time on when to expect a release. Volkerding will release the next version after he feels a suitable number of changes from the previous version have been made and those changes lead to a stable environment. As stated by Patrick Volkerding, "It's usually our policy not to speculate on release dates, since that's what it is — pure speculation. It's not always possible to know how long it will take to make the upgrades needed and tie up all the related loose ends. As things are built for the upcoming release, they'll be uploaded into the -current tree." Throughout Slackware's history, they generally tried to deliver up-to-date software on at least an annual basis. From its inception until 2014, Slackware had at least one release per year. Release activity peaked in 1994, 1995, 1997 and 1999, with three releases each year. Starting with version 7.1 (June 22, 2000) the release progression became more stable and typically occurred once per year. After that point, the only years with two releases were 2003, 2005 and 2008. However, since the release of Slackware 14.1 in 2013, new releases have slowed down drastically. There was a more than 2-year gap between 14.1 and 14.2 and over a 5 year gap to 15.0. Upon the release of 15.0, Volkerding stated that Slackware 15.1 will hopefully have a far shorter development cycle since the "tricky parts" were resolved during the development of 15.0. Slackware's latest 32-bit x86 and 64-bit x86_64 stable releases are at version 15.0 (released on February 2, 2022), which include support for Linux 5.15.19. Volkerding also maintains a testing/developmental version of Slackware called "-current" that can be used for a more bleeding edge configuration. This version will eventually become the next stable release, at which point Volkerding will start a new -current to start developing for the next release of Slackware. While this version is generally known to be stable, it is possible for things to break, so -current tends to not be recommended for production systems. Support Currently, Slackware has no officially stated support term policy. However, on June 14, 2012, notices appeared in the changelogs for versions 8.1, 9.0, 9.1, 10.0, 10.1, 10.2, 11.0, and 12.0 stating that, effective August 1, 2012, security patches would no longer be provided for these versions. The oldest release, version 8.1, was released on June 18, 2002 and had over 10 years of support before reaching EOL. Later, on August 30, 2013, announcements were made on the changelogs of 12.1 and 12.2 stating their EOL on December 9, 2013. It was stated in the changelog entries that they had at least 5 years of support. On April 6, 2018, versions of 13.0, 13.1 and 13.37 were declared reaching their EOL on July 5, 2018. It was stated in the changelog entries that they had at least 7 years of support (13.0 had been supported almost 9 years). 
To date, there have been no announcements from the Slackware team on when any versions of Slackware from 14.0 and up will be EOL. While there have been no official announcements for versions prior to 8.1, they are no longer maintained and are effectively EOL. Hardware architectures Historically, Slackware concentrated solely on the IA-32 architecture and releases were available as 32-bit only. However, starting with Slackware 13.0, a 64-bit x86_64 variant is available and officially supported in symmetrical development with the 32-bit platform. Prior to the release of Slackware64, users wanting 64-bit were required to use unofficial ports such as slamd64. Slackware is also available for the IBM S/390 architecture in the form of Slack/390 and for the ARM architecture under Slackware ARM (originally known as 'ARMedslack'). Both ports have been declared "official" by Patrick Volkerding. However, the S/390 port is still at version 10.0 for the stable version and 11.0 for the testing/developmental version, and has had no updates since 2009. Also, on May 7, 2016, the developer of Slackware ARM announced that 14.1 would be EOL on September 1, 2016 and that development of -current would cease with the release of 14.2; however, support for 14.2 will be maintained for the foreseeable future. The EOL announcement for 14.1 was added to the changelog on June 25, 2016. In July 2016, the developer of Slackware ARM announced that the development and build tools had been enhanced to reduce the manual effort involved in maintaining the ARM port, and proceeded to announce that a 32-bit hardware floating point port was in development. The port was released in August 2016 in "current" form. Slackintosh is a port of Slackware Linux for the Macintosh New World ROM PowerPC architecture, used by Apple's Power Macintosh, PowerBook, iMac, iBook, and Xserve lines from 1994 until 2006. The last version of Slackintosh was 12.1, released on June 7, 2008. 
However, since DistroWatch only tracks visitors to the various distributions' pages, they state that their ranking does not always correlate with the usage of a distribution; rather, it measures the popularity of that distribution on their site. Because of this, their rankings "should not be used to measure the market share of distributions." As with most Linux distributions, Slackware has no official system for tracking total unique installs or active users. References External links ARM Linux distributions Articles which contain graphical timelines IA-32 Linux distributions IBM ESA/390 Linux distributions X86-64 Linux distributions 1994 software Linux distributions without systemd Linux distributions
383687
https://en.wikipedia.org/wiki/NEC
NEC
NEC Corporation is a Japanese multinational information technology and electronics corporation, headquartered in Minato, Tokyo. The company was known as the Nippon Electric Company, Limited, before rebranding in 1983 as NEC. It provides IT and network solutions, including cloud computing, artificial intelligence (AI), Internet of Things (IoT) platforms, and telecommunications equipment and software, to business enterprises, communications services providers, and government agencies, and has also been the biggest PC vendor in Japan since the 1980s, when it launched the PC-8000 series. NEC was the world's fourth-largest PC manufacturer by 1990. Its semiconductors business unit was the world's largest semiconductor company by annual revenue from 1985 to 1992, the second largest in 1995, one of the top three in 2000, and one of the top 10 in 2006. NEC spun off its semiconductor business to Renesas Electronics and Elpida Memory. Once Japan's major electronics company, NEC has largely withdrawn from manufacturing since the beginning of the 21st century. NEC was #463 on the 2017 Fortune 500 list. NEC is a member of the Sumitomo Group. History NEC Kunihiko Iwadare and Takeshiro Maeda established Nippon Electric Limited Partnership on August 31, 1898 by using facilities that they had bought from Miyoshi Electrical Manufacturing Company. Iwadare acted as the representative partner; Maeda handled company sales. Western Electric, which had an interest in the Japanese phone market, was represented by Walter Tenney Carleton. Carleton was also responsible for the renovation of the Miyoshi facilities. It was agreed that the partnership would be reorganized as a joint-stock company when the treaty allowed it. On July 17, 1899, the revised treaty between Japan and the United States went into effect. Nippon Electric Company, Limited was organized the same day with Western Electric Company to become the first Japanese joint venture with foreign capital. Iwadare was named managing director. Ernest Clement and Carleton were named as directors. Maeda and Mototeru Fujii were assigned to be auditors. Iwadare, Maeda, and Carleton handled the overall management. The company started with the production, sales, and maintenance of telephones and switches. NEC modernized the production facilities with the construction of the Mita Plant in 1901 at Mita Shikokumachi. It was completed in December 1902. The Japanese Ministry of Communications adopted a new technology in 1903: the common battery switchboard supplied by NEC. The common battery switchboards powered the subscriber phone, eliminating the need for a permanent magnet generator in each subscriber's phone. The switchboards were initially imported, but were manufactured locally by 1909. NEC started exporting telephone sets to China in 1904. In 1905, Iwadare visited Western Electric in the U.S. to see their management and production control. On his return to Japan, he discontinued the "oyakata" system of sub-contracting and replaced it with a new system where managers and employees were all direct employees of the company. Inefficiency was also removed from the production process. The company paid higher salaries with incentives for efficiency. New accounting and cost controls were put in place, and time clocks installed. Between 1899 and 1907 the number of telephone subscribers in Japan rose from 35,000 to 95,000. NEC entered the China market in 1908 with the implementation of the telegraph treaty between Japan and China. 
They also entered the Korean market, setting up an office in Seoul in January 1908. During the period of 1907 to 1912, sales rose from 1.6 million yen to 2 million yen. The expansion of the Japanese phone service had been a key part of NEC's success during this period. This expansion was about to take a pause. The Ministry of Communications delayed a third expansion plan of the phone service in March 1913, despite having 120,000 potential telephone subscribers waiting for phone installations. NEC sales fell sixty percent between 1912 and 1915. During the interim, Iwadare started importing appliances, including electric fans, kitchen appliances, washing machines, and vacuum cleaners. Electric fans had never been seen in Japan before. The imports were intended to prop up company sales. In 1916, the government resumed the delayed telephone-expansion plan, adding 75,000 subscribers and 326,000 kilometers of new toll lines. Thanks to this third expansion plan, NEC expanded at a time when much of the rest of Japanese industry contracted. 1919 to 1938 In 1919, NEC started its first association with Sumitomo, engaging Sumitomo Densen Seizosho to manufacture cables. As part of the venture, NEC provided cable manufacturing equipment to Sumitomo Densen. Rights to Western Electric's duplex cable patents were also transferred to Sumitomo Densen. The Great Kantō earthquake struck Japan in 1923, killing 140,000 people and leaving 3.4 million homeless. Four of NEC's factories were destroyed, killing 105 of NEC's engineers and workers. Thirteen of Tokyo's telephone offices were destroyed by fire. Telephone and telegraph service was interrupted by damage to telephone cables. In response, the Ministry of Communications accelerated major programs to install automatic telephone switching systems and enter radio broadcasting. The first automatic switching systems were the Strowger-type model made by Automatic Telephone Manufacturing Co. (ATM) in the United Kingdom. NEC participated in the installation of the automatic switching systems, ultimately becoming the general sales agent for ATM. NEC developed its own Strowger-type automatic switching system in 1924, a first in Japan. One of the plants almost leveled during the Kanto earthquake, the Mita Plant, was chosen to support expanding production. A new three-story steel-reinforced concrete building was built, starting in 1925. It was modeled after the Western Electric Hawthorne Works. NEC started its radio communications business in 1924. Japan's first radio broadcaster, Radio Tokyo, was founded in 1924 and started broadcasting in 1925. NEC imported the broadcasting equipment from Western Electric. The expansion of radio broadcasting into Osaka and Nagoya marked the emergence of radio as an industry. NEC established a radio research unit in 1924. NEC started developing electron tubes in 1925. By 1930, they were manufacturing their first 500 W radio transmitter. They provided the Chinese Xinjing station with a 100 kW radio broadcasting system in 1934. Photo-telegraphic equipment developed by NEC transmitted photos of the accession ceremony of Emperor Hirohito. The ceremony was held in Kyoto in 1928. The newspapers Asahi Shimbun and Mainichi Shimbun were competing to cover the ceremony. The Asahi Shimbun was using a Siemens device. The Mainichi was planning to use French photo-telegraphic equipment. In the end, both papers acquired and used the NEC product, due to its faster transmission rate and higher picture quality. 
In 1929 Nippon Electric provided Japan's Ministry of Communications with the A-type switching system, the first of these systems to be developed in Japan. Nippon supplied Japan's Ministry of Communications with nonloaded line carrier equipment for long-distance telephone channels in 1937. 1938 to 1945 World War II was described by the company as the blackest days of its history. In 1938 the Mita and Tamagawa plants were placed under military control, with direct supervision by military officers. In 1939, Nippon Electric established a research laboratory in the Tamagawa plant. It became the first Japanese company to successfully test microwave multiplex communications. On December 22, 1941, the enemy property control law was passed. NEC shares owned by International Standard Electric Corporation (ISE), an ITT subsidiary and Western Electric affiliate, were seized. Capital and technical relations were abruptly severed. The "Munitions Company Law" was passed in October 1943, placing overall control of NEC plants under military jurisdiction. The Ueno plant was leveled by a military attack in March 1945. Fire bombings in April and May heavily damaged the Tamagawa Plant, reducing its capacity by forty percent. The Okayama Plant was totally destroyed by a bombing attack in June of the same year. At the end of the war, NEC's production had been substantially reduced by damage to its facilities, and by material and personnel shortages. 1945 to 1980 After the war, production was slowly returned to civilian use. NEC re-opened its major plants by the end of January 1946. NEC began transistor research and development in 1950. It started exporting radio-broadcast equipment to Korea under the first major postwar contract in 1951. NEC received the Deming Prize for excellence in quality control in 1952. Computer research and development began in 1954. NEC produced the first crossbar switching system in Japan. It was installed at Nippon Telegraph and Telephone Public Corporation (currently Nippon Telegraph and Telephone Corporation; NTT) in 1956. NEC began joint research and development with NTT of electronic switching systems the same year. NEC established Taiwan Telecommunication Company as its first postwar overseas joint venture in 1958. It completed the NEAC-1101 and NEAC-1102 computers the same year. In September 1958, NEC built its first fully transistorized computer, the NEAC-2201, with parts made solely in Japan. One year later, it demonstrated the machine at the UNESCO AUTOMATH show in Paris. The company began integrated circuit research and development in 1960. In 1963 NEC started trading as American Depositary Receipts, with ten million shares being sold in the United States. Nippon Electric New York (now NEC America Inc.) was incorporated in the same year. NEC supplied KDD with submarine cable systems for laying in the Pacific Ocean in 1964. It supplied short-haul 24-channel PCM carrier transmission equipment to NTT in 1965. NEC de Mexico, S. A. de C. V., NEC do Brasil, S. A., and NEC Australia Pty. Ltd. were established between 1968 and 1969. NEC supplied Comsat Corporation with the SPADE satellite communications system in 1971. In 1972, Switzerland ordered an NEC satellite communications earth station. The same year, a small transportable satellite communications earth station was set up in China. Shares of NEC common stock were listed on the Amsterdam Stock Exchange in 1973. NEC also designed an automated broadcasting system for the Japan Broadcasting Corporation the same year. 
NEC Electronics (Europe) GmbH was also established. In 1974, the ACOS series computer was introduced. The New Central Research Laboratories were completed in 1975. In 1977, Japan's National Space Development Agency launched the NEC geostationary meteorological satellite, named Himawari. During this period NEC introduced the concept of "C&C", the integration of computers and communications. NEC America Inc. opened a plant in Dallas, Texas, to manufacture PABX and telephone systems in 1978. It also acquired Electronic Arrays, Inc. of California the same year to start semiconductor chip production in the United States. 1980 to 2000 In 1980, NEC created the first digital signal processor, the NEC µPD7710. NEC Semiconductors (UK) Ltd. was established in 1981, producing VLSIs and LSIs. NEC introduced the 8-bit PC-8800 series personal computer in 1981, followed by the 16-bit PC-9800 series in 1982. In 1983 NEC stock was listed on the Basel, Geneva, and Zurich exchanges in Switzerland. NEC quickly became the dominant leader of the Japanese PC industry, holding an 80% market share. NEC changed its English company name to NEC Corporation the same year. NEC Information Systems, Inc. started manufacturing computers and related products in the United States in 1984. NEC also released the V-series processor the same year. In 1986, NEC delivered its SX-2 supercomputer to the Houston Advanced Research Center, The Woodlands, Texas. In the same year, the NEAX61 digital switching system went into service. In 1987, NEC Technologies (UK) Ltd. was established in the United Kingdom to manufacture VCRs, printers, computer monitors, and mobile telephones for Europe. Also that year, NEC licensed technology from Hudson Soft, a video game manufacturer, to create a video game console called the PC-Engine (later released in 1989 as the TurboGrafx-16 in the North American market). Its successor, the PC-FX, was released in Japan in 1994. While the PC-Engine achieved a considerable following, it has been said that NEC held a much stronger influence on the video game industry through its role as a leading semiconductor manufacturer than through any of its direct video game products. NEC USA, Inc. was established in 1989 as a holding company for North American operations. In 1983, NEC Brasil, the Brazilian subsidiary of NEC, was forced to nationalize its corporate stock under orders of the Brazilian military government, whereby shareholder control of NEC Brasil was ceded to the private equity group Brasilinvest of Brazilian investment banker Mário Garnero. Since NEC Brasil's foundation in 1968, it had become the major supplier of telecommunications equipment to the Brazilian government. In 1986, the then Minister of Communications, Antônio Carlos Magalhães, put NEC Brasil in financial difficulties by suspending all government contract payments to the company, whose main client was the federal government. With the subsidiary in crisis, the NEC Corporation in Japan sold NEC Brasil to Organizações Globo for only one million US dollars (US$1,000,000). Shortly thereafter, Magalhães resumed the government contracts and corresponding payments, and NEC Brasil became valued at over 350 million US dollars (US$350,000,000). 
Suspicions regarding the NEC-Globo deal, which included among other things the unilateral breach of contract by Globo founder Roberto Marinho regarding the management of a regional television station in the Brazilian state of Bahia, took to the national stage only in 1992 during the first corruption charges against the impeached Brazilian president Fernando Collor de Mello. Organizações Globo subsequently sold their shares in NEC Brazil, which hit their all-time high during the state monopoly years, back to NEC Corporation in 1999 following the break-up and privatization of the Brazilian state-owned telephone monopoly Telebrás. In 1990, the new head office building, known as the "Super Tower", was completed in Shiba, Tokyo. Additionally, joint-venture agreements were established to manufacture and market digital electronic switching systems and LSIs in China. In 1993 NEC's asynchronous transfer mode (ATM) switching system, the NEAX61 (Nippon Electronic Automatic Exchange) ATM Service Node, went into service in the United States. NEC Europe, Ltd. was established as a holding company for European operations the same year. The NEC C&C Research Laboratories, NEC Europe, Ltd. were opened in Germany in 1994. NEC (China) Co, Ltd. was established as a holding company for Chinese operations in 1996. In 1997 NEC developed 4Gbit DRAM, and their semiconductor group was honored with one of the first Japan Quality Awards. In 1998, NEC opened the world's most advanced semiconductor R&D facility. NEC had been the no. 1 personal computer vendor in Japan during the 1980s, but it faced increasing competition from Fujitsu, Seiko Epson and IBM Japan. Nevertheless, by the early 1990s, NEC was still the largest, having well over 50% market share in the Japanese market. Competition heated up later as rival Fujitsu started to aggressively market its computers, which were industry standard (x86) instead of NEC's indigenous models. In June 1995 NEC purchased the California-based Packard Bell company to produce desktop PCs in a common manufacturing plant for the North American market. As a result, NEC Technologies (USA) was merged with Packard Bell to create Packard Bell NEC Inc. By 1997 NEC's share was reduced to about 35%. NEC celebrated their 100th anniversary in 1999. 2000 to present In 2000, NEC formed a joint-venture with Samsung SDI to manufacture OLED displays. Around this time, NEC also collaborated with the UK Government to provide schools in the country with projectors for use in classrooms, most of which are still in use to this day. NEC Electronics Corporation was separated from NEC in 2002 as a new semiconductor company. NEC Laboratories America, Inc. (NEC Labs) started in November 2002 as a merger of NEC Research Institute (NECI) and NEC USA's Computer and Communications Research Laboratory (CCRL). NEC built the Earth Simulator Computer (ESC), the fastest supercomputer in the world from 2002 to 2004, and since produced the NEC N343i in 2006. In 2003 NEC had a 20.8% market share in the personal computer market in Japan, slightly ahead of Fujitsu. In 2004, NEC abandoned not only the OLED business, but the display business as a whole, by selling off their plasma display business and exiting from the joint-venture with Samsung SDI. Samsung bought all of the shares and related patents owned by NEC, incorporating Samsung OLED, which subsequently merged with Samsung Display. In 2007, NEC and Nissan Co. Corp. started evaluating a joint venture to produce lithium ion batteries for hybrid and electric cars. 
The two companies established Automotive Energy Supply Corporation as a result. On April 23, 2009, Renesas Technology Corp and NEC Electronics Corp struck a basic agreement to merge by around April 2010. On April 1, 2010, NEC Electronics and Renesas Technology merged to form Renesas Electronics, which, according to data published by iSuppli, was set to become the world's fourth-largest semiconductor company. By Q3 2010, NEC held a 19.8% market share in the PC market in Japan. On January 27, 2011, NEC formed a joint venture with Chinese PC maker Lenovo, the fourth largest PC maker in the world. As part of the deal, the companies said in a statement that they would establish a new company called Lenovo NEC Holdings B.V., registered in the Netherlands. NEC would receive US$175 million from Lenovo through the issuance of Lenovo's shares. Lenovo, through a unit, would own a 51% stake in the joint venture, while NEC would hold a 49% stake. In February 2011, Bloomberg News said the joint venture would allow Lenovo to expand in the field of servers, and NEC's Masato Yamamoto said NEC would be able to grow in China. On January 26, 2012, NEC Corporation announced that it would cut 10,000 jobs globally due to a large loss on its consolidated financial statements, attributed to the economic crisis in Europe and to NEC lagging behind Apple and Samsung in the development of smartphones for the domestic market. Previously, in January 2009, NEC had cut about 20,000 jobs, mainly in its sluggish semiconductor and liquid crystal display related businesses. In 2013 NEC was the biggest PC server manufacturer in Japan, with a 23.6% share. In August 2014, NEC Corporation was commissioned to build a super-fast undersea data transmission cable linking the United States and Japan for a consortium of international companies consisting of China Mobile International, China Telecom Global, Global Transit, Google, KDDI and SingTel. The cable went online on June 30, 2016. NEC exited the smartphone market in 2015 by dissolving NEC Mobile Communications, bailing out the other participants in the smartphone joint venture. In April 2017, KEMET Corporation announced it would purchase a 61% controlling interest in NEC Tokin from NEC, making NEC Tokin its wholly owned subsidiary; once the purchase was complete, the company would change its name to "Tokin Corporation". In July 2018, NEC established its subsidiary, NEC X, in Silicon Valley, to fast-track technologies and business ideas selected from inside and outside NEC. NEC X created a corporate accelerator program that works with entrepreneurs, start-ups and existing companies to help them develop new products that leverage NEC's emerging technologies. In August 2018, Envision Energy struck an agreement with Nissan and NEC to acquire their automotive battery joint venture. In December 2018, NEC announced that it would acquire KMD, the largest Danish IT company, for $1.2 billion to strengthen its digital government business. NEC sold its sixty-year-old lighting business in April 2019. As of September 2019, NEC was the largest supplier of AI surveillance technology in the world. In the first half of 2020, NEC sold a majority stake in NEC Display Solutions, its professional display subsidiary, to Sharp Corporation and decided to gradually curtail the money-losing energy storage business over the course of the decade. 
Upon the suggested banning of Huawei's 5G equipment led by the United States in 2020, being a diminished supplier, NEC was galvanized to ramp up its relatively small 5G network business to fill the void in the telecommunications equipment markets of the United States and the United Kingdom. NTT, the largest carrier in Japan, invested $596 million for a 4.8 percent stake in NEC to assist this move. In December 2020, NEC acquired Swiss digital banking solution developer Avaloq for US$2.2 billion. Operations As of July 2018, NEC has 6 larger business segments - Public, Enterprise, Network Services, System Platform, Global, and Others. It has renamed its Telecom Carrier business to Network Service. Principal subsidiaries of NEC include: NEC Corporation of America Netcracker Technology NEC Display Solutions of America (A Sharp-owned company as of July 2020) NEC Europe KMD Avaloq Products NEC MobilePro - a handheld computer running Windows CE NEC Aspire hybrid small business phone system Electric vehicle batteries (Automotive Energy Supply Corporation, a joint-venture between Nissan, NEC Corporation and NEC TOKIN) NEC mobile phone (see NEC e616) NEC America MultiSync Monitors and Fax devices1 NEC digital cinema projector NEC Home Electronics (USA), Inc. / NEC MultiSpeed laptop PCs, MultiSync series PC monitors and Data Projectors NEC Home Electronics (USA), Inc. / TV, Projection TV, VCRs and Home Audio (CD, Amplifiers, Receivers) NEC Technologies, Inc. / VERSA series notebook PCs NEC Information Systems, Inc. POWERMATE desktop PCs NEC Information Systems, Inc. Valuestar / NEC POWERMATE hybrid computer NEC (Division unknown) Car Stereos and Keyless Entry Systems PC Engine (TurboGrafx-16 in US) and all related hardware and successors; co-produced by Hudson Soft. PC-FX NEC V20 NEC V25 TurboExpress Defense products include: J/TPS-102 Self-propelled ground-based early warning 3D radar (JGSDF) Broadband multipurpose radio system (JGSDF) Advanced Combat Infantry Equipment System [ACIES] (JSDF) - Major subcontractor Howa rifle system (JSDF) - Major subcontractor as part of ACIES Laptops ProSpeed Versa pro type VB December 2016 Supercomputers 1983 Announced the SX-1 and SX-2 supercomputers 1989 Introduction of SX-3 1994 First announcement of SX-4 1999 Delivery of SX-5 2002 Introduced SX-6 2002 Installation of the Earth Simulator, the world's fastest supercomputer from 2002 to 2004 reaching a speed of 35,600 gigaflops 2005 NEC SX-8 in production 2006 Announced the SX-8R 2007 Announced the SX-9 2011 First announcement of the NEC SX-9's successor 2013 Announced the SX-ACE 2017 Announced the SX-Aurora TSUBASA, computing platform that expands the horizons of supercomputing, Artificial Intelligence and Big Data analytics. Achievements Achievements of NEC include: the discovery of single-walled carbon nanotubes by Sumio Iijima the invention of the widely used MUX-scan design for test methodology (contrast with the IBM-developed level-sensitive scan design methodology) the world's first demonstration of the one-qubit rotation gate in solid state devices. As for mobile phones, NEC pioneered key technologies like color displays, 3G support, dual screens and camera modules. Developed a facial recognition system able to detect and distinguish human faces through medical masks. Sponsorships NEC was the main (title) sponsor of the Davis Cup competition until 2002, when BNP Paribas took over the sponsorship. NEC between 1982 and 2012 sponsored the NEC Cup, a Go tournament in Japan. 
NEC between 1986 and 2003 sponsored the NEC Shun-Ei, a Go tournament for young players in Japan. NEC sponsored the English football club Everton from 1985 to 1995. The 1995 FA Cup Final triumph was Everton's final game of the decade-long NEC sponsorship, and Danka took over as sponsors. NEC signed a deal to sponsor the Sauber F1 Team from the 2011 season until the 2014 season. NEC sponsored the Sahara Force India F1 Team for the 2015 season until its demise during the 2018 season. Since then NEC has sponsored its successor Racing Point. In April 2013, NEC became the umbrella sponsor for PGA Tour Latinoamérica, a third-tier men's professional golf tour. Sports teams These started as works teams, but over the years came to include professional players: NEC Red Rockets (women's volleyball) NEC Green Rockets (men's rugby union) NEC also used to own Montedio Yamagata of the football (soccer) J. League, but just sponsors them along with other local companies. The following team is defunct. NEC Blue Rockets (men's volleyball) See also List of computer system manufacturers Turbografx 16 Footnotes References Mark Mason, Foreign Direct Investment and Japanese Economic Development, 1899–1931, Business and Economic History, Second Series, Volume Sixteen, 1987. NEC Corporation, NEC Corporation, The First 80 Years, 1984, . External links Japanese companies established in 1899 Electronics companies established in 1899 Companies formerly listed on the Nasdaq Companies formerly listed on the London Stock Exchange Companies listed on the Fukuoka Stock Exchange Companies listed on the Osaka Exchange Companies listed on the Tokyo Stock Exchange Computer printer companies Computer security companies Conglomerate companies of Japan Consumer electronics brands Defense companies of Japan Electric vehicle battery manufacturers Electronics companies of Japan Japanese brands Mitsui Mobile phone manufacturers Multinational companies headquartered in Japan Point of sale companies Public safety communications Sumitomo Group
27765443
https://en.wikipedia.org/wiki/Swiss%20National%20Supercomputing%20Centre
Swiss National Supercomputing Centre
The Swiss National Supercomputing Centre (; CSCS) is the national high-performance computing centre of Switzerland. It was founded in Manno, canton Ticino, in 1991. In March 2012, the CSCS moved to its new location in Lugano-Cornaredo. The main function of the Swiss National Supercomputing Centre is a so-called National User Lab. It is open to all Swiss researchers and their assistants, who can get free access to CSCS' supercomputers in a competitive scientific evaluation process. In addition, the centre operates dedicated computing facilities for specific research projects and national mandates, e.g. weather forecasting. It is the national competence centre for high-performance computing and serves as a technology platform for Swiss research in computational science. CSCS is an autonomous unit of the Swiss Federal Institute of Technology in Zurich (ETH Zurich) and closely collaborates with the local University of Lugano (USI). Building The building at the new location Lugano-Cornaredo has a pillar-free machine hall of 2000 m² and can be powered with up to 20 MW electricity. Water for cooling the supercomputers is taken from Lake Lugano in 45m depth and pumped over a distance of 2.8 km to the centre. Thus, little energy is consumed for providing the cooling and the computer centre achieves a high energy efficiency with a PUE < 1.25. Supercomputers Supercomputer procurements at CSCS can be categorised into two phases: In the first phase from 1991 to 2011, the centre focused on proven technologies in order to facilitate user access to its services. This strategy was centred on the SX vector processor architecture of NEC. The IBM SP4, installed 2002, was the first production system of CSCS with a massively-parallel computer architecture. The procurement of the first Cray XT3 in Europe in 2005 marked the beginning of the second phase. Since then, CSCS concentrates on early technologies, preferably before they become a generally available product. Current computing facilities Previous computing facilities National Supercomputing Service Run as a user lab, CSCS promotes and encourages top-notch research. Simulations created on supercomputers yield completely new insights in science. Consequently, CSCS operates cutting-edge computer systems as an essential service facility for Swiss researchers. These computers aid scientists with diverse issues and requirements - from the pure calculation of complex problems to analysis of complex data. The pool of national high-performance computers is available to its users as a so-called user lab: all researchers in and out of Switzerland can use the supercomputer infrastructure. Dedicated HPC Services In addition to the computers of the User Lab, CSCS operates dedicated compute resources for strategic research projects and tasks of national interest. Since 2001, the calculations for the numerical weather prediction of the Swiss meteorological survey MeteoSwiss take place at the Swiss National Supercomputing Centre. In January 2008, the first operational high-resolution weather forecasting suite in Europe was taken in production on a massively-parallel supercomputer at CSCS. Another dedicated computer resource operated by CSCS is the Swiss tier-2 computer cluster for the Computing Grid of the CERN LHC accelerator. CSCS also provides storage services for massive data sets of the Swiss systems biology initiative SystemsX and the Centre for Climate Systems Modelling C2SM at ETH Zurich. 
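The building figures quoted above (a 20 MW electrical supply, lake-water cooling, and a PUE below 1.25) can be connected by a short calculation: PUE is the ratio of total facility power to IT equipment power. The Python sketch below is illustrative only; the load and overhead numbers are assumptions, not published CSCS measurements.

# Power Usage Effectiveness (PUE) = total facility power / IT equipment power.
# Illustrative numbers only: the overhead for cooling, lake-water pumping and
# power distribution is an assumption, not a CSCS figure.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Return the PUE ratio for a data centre."""
    return total_facility_kw / it_equipment_kw

# Hypothetical example: 10 MW of IT load plus 2 MW of facility overhead.
it_load_kw = 10_000.0
overhead_kw = 2_000.0
ratio = pue(it_load_kw + overhead_kw, it_load_kw)
print(f"PUE = {ratio:.2f}")                       # 1.20, within the PUE < 1.25 target
print(f"Overhead share = {1 - 1/ratio:.0%} of total power")

Lake-water cooling mainly reduces the overhead term, which is what keeps the ratio close to 1.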
Research and development For supporting the further development of its supercomputing services, CSCS regularly evaluates relevant new technologies (technology scouting) and publishes the results as white papers on its website. In 2009, CSCS and the University of Lugano jointly launched the platform HP2C with the goal to prepare the application codes of Swiss researchers for upcoming supercomputer architectures. Notes and references See also Science and technology in Switzerland Supercomputing in Europe TOP500 External links Current Computers Supercomputer sites Science and technology in Switzerland ETH Zurich Buildings and structures in Ticino 1991 establishments in Switzerland Institutes associated with CERN
14071714
https://en.wikipedia.org/wiki/WHAT%20IF%20software
WHAT IF software
WHAT IF is a computer program used in a wide variety of computational (in silico) macromolecular structure research fields. The software provides a flexible environment to display, manipulate, and analyze small and large molecules, proteins, nucleic acids, and their interactions. History The first version of the WHAT IF software was developed by Gert Vriend in 1987 at the University of Groningen, Groningen, Netherlands. Most of its development occurred during 1989–2000 at the European Molecular Biology Laboratory (EMBL) in Heidelberg, Germany. Other contributors include Chris Sander, and Wolfgang Kabsch. In 2000, maintenance of the software moved to the Dutch Center for Molecular and Biomolecular Informatics (CMBI) in Nijmegen, Netherlands. It is available for in-house use, or as a web-based resource. , the original paper describing WHAT IF has been cited more than 4,000 times. Software WHAT IF provides a flexible environment to display, manipulate, and analyze small molecules, proteins, nucleic acids, and their interactions. One notable use was detecting many millions of errors (often small, but sometimes catastrophic) in Protein Data Bank (PDB) files. WHAT IF also provides an environment for: homology modeling of protein tertiary structures and quaternary structures; validating protein structures, notably those deposited in the PDB; correcting protein structures; visualising macromolecules and their interaction partners (for example, lipids, drugs, ions, and water), and manipulating macromolecules interactively. WHAT IF is compatible with several other bioinformatics software packages, including YASARA and Jmol. See also List of molecular graphics systems Molecule editor External links References Molecular modelling software Bioinformatics software Protein structure
23687503
https://en.wikipedia.org/wiki/Imc%20FAMOS
Imc FAMOS
FAMOS (short for fast analysis and monitoring of signals) is a graphical data analysis program for image analysis, evaluating and visually displaying measurement results. The program was introduced in 1987 by the German company imc Test & Measurement GmbH (integrated measurement & control) in Berlin for Windows 3.11. According to its manufacturer, FAMOS offers high speed display and processing of data sets of any size. Import of a wide variety of data formats FAMOS can import data from different file formats, e.g. Excel-, Binary-, or ASCII-files. With a file assistant it is also possible to create different import filters. It is possible to present the data in different graphical ways. The information can be combined, labeled and processed. FAMOS is able to store data in a proprietary as well as in ASCII or Excel format. Data analysis Imported data can be processed with a variety of mathematical operations, either manually or in automated procedures. FAMOS offers expansion modules for special operations such as electronic filters, for spectral analysis and for synchronized display of data and video sequences and for the ASAM-ODS data model. It is also possible to play data back audibly with the PC sound card. Documentation By means of its "Report Generator", FAMOS enables creation of documentations / lab reports consisting of a variety of dialog elements and plots as well as graphics with controls which can be automatically hidden when printing. The reports generated can be subject to post-processing using various input data, and there are templates for partially or fully automated composition of reports. Literature (German language) 20 Jahre imc und ADDITIVE, sensor report 4/2008 FAMOS 6.0 - Mehr als nur Signalanalyse, Physik Journal 6/2008 Neue Software-Version für die Analyse von Messsignalen, ATZ 4/2008 Signalanalyse für den Messtechniker, TECHNICA 23-24/2005 Signalanalysesoftware mit neuen Funktionen, Maschinenmarkt 17/2008 Wie ein Taschenrechner, MSR Magazin 11/2002 FAMOS - Taschenrechner für die Meßtechnik, Addison-Wesley, 1997, References External links - FAMOS website - FAMOS Script for Strain Rosette calculations Science software Physics software Science software for Windows Computer-related introductions in 1987 Data analysis software
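As a language-neutral illustration of the workflow described above (importing a column-oriented ASCII data set and running a spectral analysis), the following Python/NumPy sketch performs a comparable computation. It is not FAMOS script; the file name, column layout and sampling assumptions are made up for the example.

# Illustrative only, not FAMOS script. Reads a two-column ASCII file
# (time, amplitude) and computes a single-sided amplitude spectrum,
# a task comparable to the import and spectral-analysis steps described above.
import numpy as np

def amplitude_spectrum(samples: np.ndarray, sample_rate: float):
    """Return (frequencies, approximate single-sided amplitude spectrum)."""
    n = len(samples)
    spectrum = np.abs(np.fft.rfft(samples)) / n * 2.0   # rough amplitude scaling
    freqs = np.fft.rfftfreq(n, d=1.0 / sample_rate)
    return freqs, spectrum

if __name__ == "__main__":
    # "measurement.txt" is a hypothetical ASCII export: time [s], signal [V]
    data = np.loadtxt("measurement.txt")
    t, signal = data[:, 0], data[:, 1]
    rate = 1.0 / np.mean(np.diff(t))          # estimate sampling rate from the time column
    freqs, amps = amplitude_spectrum(signal, rate)
    peak = freqs[np.argmax(amps[1:]) + 1]     # ignore the DC bin
    print(f"Dominant component near {peak:.1f} Hz")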
22779232
https://en.wikipedia.org/wiki/Red%20Bend%20Software
Red Bend Software
Red Bend Software is a software company providing mobile software management technology to mobile phone manufacturers and operators. Red Bend's software has been deployed by handset manufacturers, including Kyocera, LG Electronics, Motorola, Sharp, Sony Mobile, and ZTE, as well as by companies in the mobile, M2M, automotive and enterprise markets. The company's initial funding was provided by Carmel Ventures. History The company was first established in April 1999, in Rosh Ha'ayin, Israel, and by the summer of 2000, the company had finalised the beta of its future technology, vCurrent, and signed its first agreement with BackWeb Technologies. Red Bend completed its first round of venture funding in the autumn of 2000. By December 2003, the corporate headquarters were located in Boston, Massachusetts, while research and development continued in Israel. Red Bend Software acquired VirtualLogix, a developer of mobile virtualisation software, in September 2010. Red Bend was acquired by Harman International Industries in January 2015. Red Bend has been a member of the Open Mobile Alliance since March 2010. Locations Red Bend is headquartered in Waltham, Massachusetts, in the United States. The company has offices in Israel, the United Kingdom, France, China, Japan, and South Korea. Products Red Bend's vRapid Mobile(R) supports FOTA updating as well as software component management, also known as SCOTA (Software Components OTA). Red Bend's Mobile Software Management solutions have shipped in devices worldwide since December 2014. vRapid Mobile was the most widely deployed FOTA software as of November 2009, with a market share of 71% according to technology analyst firm Ovum Ltd. Red Bend's vDirect Mobile(TM) is an OMA DM-based device management client. 
35186218
https://en.wikipedia.org/wiki/Wealth%20Lab
Wealth Lab
Wealth Lab is technical analysis software and an electronic trading platform owned by Fidelity Investments. It was created by Dion Kurczek, who founded the original Wealth-Lab, Inc. corporation in 2000. Fidelity acquired the Wealth-Lab software assets in 2004. The client runs on the Microsoft Windows .NET Framework v4.0 and requires internet access to function properly. Licensed users can program and backtest trading strategies for stocks and futures. Fidelity premium account holders can use the platform to place trades produced by their trading strategies directly into their brokerage accounts and even set up auto-trading systems. Software Wealth-Lab has an integrated programming environment based on C# syntax, with added versatility derived from its own Pascal-like programming language, WealthScript. Although it is geared toward programmers, it has a drag-and-drop feature that allows non-programmers to create their own trading strategies based on technical analysis without needing to edit or even view any source code. This ability to custom-build trading strategies by dragging and dropping basic entry and exit modes and their indicators not only allows non-programmers to create and use strategy scripts, it also expedites the process of programming for both experienced and novice developers. Wealth-Lab requires market data in order to perform the majority of its operations. In its standard installation, several market data sources are provided, such as Yahoo! Finance's free end-of-day data. Users can also lease real-time market data from reputable sources such as Commodity System Inc. Displaying the market data in meaningful ways, i.e. charting, is one of the user's primary activities. Wealth-Lab displays market data in all the typical formats, namely candlesticks, line, and OHLC, and even non-typical formats such as kagi, Renko, and equicandle charts, to name a few. It also allows users to simply drag and drop one or more indicators from its vast library right onto a chart, subsequently creating panels and annotations for each indicator. In addition to standard technical analysis indicators, users will also find a substantial number of fundamental analysis indicators to apply to their charts. Backtesting strategies is at the heart of Wealth-Lab (a simplified, language-agnostic sketch of what such a backtest does appears at the end of this entry). Users can either "Explore & Backtest" or "Build & Backtest". "Build" refers to the drag-and-drop method of strategy creation, whereas "Explore" refers to prebuilt strategies that are pre-installed or can be downloaded from the support website, Wealth-Lab.com, managed by WL Systems, Inc. The advantage of the prebuilt strategies is the ability to optimize their parameters. The installation comes with two optimization methods: exhaustive search and Monte Carlo, which uses random numbers to create simulations. Many more Wealth-Lab optimizers can be installed from Wealth-Lab's extension manager. Developers can also program and share their own indicators, optimizers and strategy scripts, and it has been this open-platform philosophy that has contributed to the establishment of a supportive developer community. Client availability Wealth-Lab 7 is available by subscription to customers worldwide without exception, including the U.S. and Canada. Version 6 Prior to August 2020, two legacy versions of Wealth-Lab existed. Wealth-Lab "Pro" was available to Fidelity premium account holders in the US (only). 
Consumers outside of the US (and Canada), however, could obtain a version of the software known as Wealth-Lab "Developer" via the support website. The difference between the two versions resided in their use of market data streams, custom software extensions, and technical support. In August 2020, Fidelity discontinued the "Pro" version and transitioned customers to Wealth-Lab Developer 6, and in March 2021 Wealth-Lab 7 was launched, supporting customers worldwide. Technical support Technical support is given to Wealth-Lab Pro users via Fidelity phone support. Wealth-Lab Developer's technical support is provided through the support website. Additionally, all Wealth-Lab users can get supplemental help designing and debugging their strategies via the developer community, which is accessible through the forums and wiki found on the support website. Extensions Wealth-Lab is noted for its extensibility, allowing seamless integration of custom broker and historical/real-time data providers, optimization techniques, position-sizing methods, compiled strategies, reusable method libraries, performance visualizers, commission plans, chart drawing tools, Strategy Wizard rules, and more. Data providers: Google, CBOE, Quandl, Zacks, QuoteMedia, Finam, AlfaDirect, QUIK, ASCII, Forexite, Excel, Market Sentiment, Morningstar, Dukascopy Bank, IQFeed, YCharts, Multi Quote, Yahoo, Random, COTCollector, Taipan, TeleChart, Reuters and Google News, Norgate Data, and more Indicator libraries: Community Indicators, TASC Magazine Indicators Strategy and Rule libraries: ActiveTrader Strategy Pack, Community.Rules Performance Visualizers libraries: MS123 Performance Visualizers, HeatMap Various addins: Community Components Library, MS123 PosSizer Library, MS123 IndexDefinitions Library, Data Tool, CandlePattern Rules, Community Commissions Optimizers: Genetic Optimizer, Particle Swarm Optimizer Tools: Neuro-Lab, Monte Carlo-Lab Past versions Versions previous to 6.0 were freely available for purchase by US and Canadian citizens up until Fidelity bought the rights from Wealth-Lab, Inc. Starting with Version 6.0, Wealth-Lab was only available to US and Canadian citizens if they opened an account with Fidelity. Wealth-Lab Version 6.0 Included native integration of the legacy add-on product Index-Lab, 64-bit compatibility, and a "Multi-Condition" rules dimension. Wealth-Lab Version 6.1 Primarily a maintenance release with minor enhancements and behind-the-scenes improvements (API and other transparent changes). Wealth-Lab Version 6.2 Combination Strategies and Regular Expressions for the Symbol Info Manager. Wealth-Lab Version 6.3 Prepared for Wealth-Lab Pro streaming provider integration for the Strategy Monitor tool. Wealth-Lab Version 6.4 Completed Wealth-Lab Pro streaming provider integration with the Fidelity back-end data center to feed the Strategy Monitor for much more responsive and reliable intraday operations. Also provided the ability to save strategies on a network drive. Wealth-Lab Version 6.5 Introduced the WealthSignals Trader tool, which downloads trading signals from Wealth-Lab.com's WealthSignals service to the user's Wealth-Lab desktop client. This version is built on the .NET 4.0 framework. Wealth-Lab Version 6.6 Released in late November 2013, it added an integrated tool for walk-forward optimization backtesting and analysis. Wealth-Lab Version 6.8 Released in late 2014, it is a maintenance release fixing known issues. 
This version is built on the .NET 4.5 framework and for this reason is incompatible with older operating systems i.e. Windows XP and Vista. Wealth-Lab Version 6.9 Released in late 2015, version 6.9 brings ability to backtest synthetic option contracts. Includes other usability enhancements and minor fixes. Current version Wealth-Lab Version 7 Released on 9 March 2021, Version 7 is a modern platform built for Windows 10 on .NET Core 3.1. Wealth-Lab 7 improved on Version 6's Strategy Builder,  now called Building Block Strategies, streamlining its drag-and-drop interface making it more versatile to use all indicators, events data, candlestick patterns, and other condition qualifiers. The backtesting engine was overhauled to process bar-by-bar, which allows strategies to dynamically access and interact with the equity curve and other simulation aspects. Automated strategy trading is possible by installing extensions for one or more brokerages.  Although many of the core tools have the same functions as in prior versions, two unique tools are also available as extensions: the Candlestick Genetic Evolver and Indicator Profiler. References External links Technical analysis software Online brokerages
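As noted above, backtesting is at the heart of Wealth-Lab. The sketch below is not WealthScript and does not use Wealth-Lab's API; it is a deliberately simplified Python illustration of what a strategy backtest does conceptually: compute an indicator (here a moving-average crossover), derive entry and exit signals from it, and tally the hypothetical profit over an assumed series of closing prices.

# Conceptual sketch only, not WealthScript and not the Wealth-Lab API.
# Illustrates the idea behind "Build & Backtest": derive signals from an
# indicator and simulate the resulting trades over historical closes.

def sma(prices, period):
    """Simple moving average; None where there is not enough history."""
    return [None if i + 1 < period else sum(prices[i + 1 - period:i + 1]) / period
            for i in range(len(prices))]

def backtest_crossover(closes, fast=3, slow=5):
    """Go long when the fast SMA is above the slow SMA; stay flat otherwise."""
    fast_ma, slow_ma = sma(closes, fast), sma(closes, slow)
    position, entry, profit = False, 0.0, 0.0
    for i in range(1, len(closes)):
        if fast_ma[i] is None or slow_ma[i] is None:
            continue
        if not position and fast_ma[i] > slow_ma[i]:
            position, entry = True, closes[i]      # enter at the crossover bar
        elif position and fast_ma[i] < slow_ma[i]:
            profit += closes[i] - entry            # exit and book the trade
            position = False
    if position:                                   # mark any open trade to market
        profit += closes[-1] - entry
    return profit

# Hypothetical daily closes, purely for illustration.
closes = [10, 10.5, 10.2, 10.8, 11.1, 11.0, 10.6, 10.4, 10.9, 11.4, 11.8]
print(f"Hypothetical net profit per share: {backtest_crossover(closes):.2f}")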
49419562
https://en.wikipedia.org/wiki/Hexadecimal%20floating%20point
Hexadecimal floating point
Hexadecimal floating point may refer to: IBM hexadecimal floating point in the IBM System 360 and 370 series of computers and others since 1964 Hexadecimal floating-point arithmetic in the Illinois ILLIAC III computer in 1966 Hexadecimal floating-point arithmetic in the SDS Sigma 7 computer in 1966 Hexadecimal floating-point arithmetic in the SDS Sigma 5 computer in 1967 Hexadecimal floating-point arithmetic in the Xerox Sigma 9 computer in 1970 Hexadecimal floating-point arithmetic in the Interdata 8/32 computer in the 1970s Hexadecimal floating-point arithmetic in the Manchester MU5 computer in 1972 Hexadecimal floating-point arithmetic in the Data General Eclipse S/200 computer in ca. 1974 Hexadecimal floating-point arithmetic in the Gould Powernode 9080 computer in the 1980s Hexadecimal floating-point arithmetic in the HEP computer in 1982 Hexadecimal floating-point arithmetic in the SEL System 85 computer Hexadecimal floating-point arithmetic in the SEL System 86 computer See also Hexadecimal Floating-point arithmetic References Floating point 16
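To make the first entry in the list above concrete: IBM System/360 hexadecimal floating point stores, in its 32-bit single-precision form, a sign bit, a 7-bit excess-64 exponent that is a power of 16, and a 24-bit fraction, so the value is (-1)^sign x (fraction/16^6) x 16^(exponent-64). The following Python sketch decodes such a word; the sample words are chosen only to illustrate the encoding.

# Decode a 32-bit IBM System/360 hexadecimal (base-16) floating-point word.
# Layout: 1 sign bit | 7-bit excess-64 exponent (power of 16) | 24-bit fraction.
# value = (-1)^sign * (fraction / 16**6) * 16**(exponent - 64)

def ibm_hfp_single_to_float(word: int) -> float:
    sign = -1.0 if (word >> 31) & 1 else 1.0
    exponent = (word >> 24) & 0x7F            # excess-64, base 16
    fraction = word & 0x00FFFFFF              # six hexadecimal digits
    return sign * (fraction / float(16 ** 6)) * (16.0 ** (exponent - 64))

# Example: 0x42640000 encodes +100.0
#   exponent 0x42 = 66, fraction 0x640000 -> 0.390625, and 0.390625 * 16**2 = 100.0
print(ibm_hfp_single_to_float(0x42640000))    # 100.0
print(ibm_hfp_single_to_float(0xC2640000))    # -100.0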
31118692
https://en.wikipedia.org/wiki/Justin%20Wilcox%20%28American%20football%29
Justin Wilcox (American football)
Justin Draper Wilcox (born November 12, 1976) is an American football coach and former player. Since 2017, he has been the head football coach of the California Golden Bears. Early years Born in Eugene, Oregon, Wilcox grew up as the younger of two sons on a family farm (wheat and cherries) in nearby Junction City. He played quarterback at Junction City High School and led the team to the 3A state title as a junior in 1993. He graduated in 1995 and considered Stanford and Arizona but followed family tradition and accepted a scholarship to Oregon under head coach Mike Bellotti. Playing career After redshirting his first year at Oregon, Wilcox found himself buried on the depth chart and switched to defensive back. A nickel back as a redshirt freshman, he lost most of the 1996 season to a knee injury. Wilcox became a fixture at safety until his senior season of 1999, when he was asked to fill a void at cornerback. He was invited to an NFL training camp with the Washington Redskins in 2000, but did not make the final roster. Wilcox graduated from Oregon in 1999 with a degree in anthropology. Coaching career Assistant coaching career Wilcox began his career as a college football coach in 2001 as a graduate assistant at Boise State, under new head coach Dan Hawkins. After two seasons as a graduate assistant, he left for the Bay Area to coach the linebackers at California under head coach Jeff Tedford. After three seasons at Cal, Wilcox returned to Boise State in 2006 as the defensive coordinator under new head coach Chris Petersen. In four years the teams lost only four games, with a record, and his defenses were statistically among the highest-rated in the nation. Following the 2009 season, Wilcox accepted the defensive coordinator job at Tennessee under new head coach Derek Dooley. In late December 2010, it was reported that Wilcox was a candidate to replace Will Muschamp, who left Texas for Florida. On New Year's Day, Wilcox announced that he would return to Tennessee for the 2011 season. Early on January 2, 2012, reports emerged that Wilcox was to become the new defensive coordinator at Washington in Seattle, under head coach Steve Sarkisian. The position was vacant due to Nick Holt's termination days earlier, and the announcement was made official later that night. The Huskies were 7–6 in 2012 and lost in the Las Vegas Bowl. Washington was 9–4 in 2013 and won the Fight Hunger Bowl; Sarkisian left after the regular season for USC. Wilcox followed Sarkisian to USC and was the defensive coordinator; the Trojans went 9–4 in 2014 and won the Holiday Bowl. After five games in 2015, Sarkisian was fired and succeeded by Clay Helton. The Trojans finished 50th nationally in scoring defense (25.7 points per game) and 65th in total defense (400.8 yards per game) in 2015, and Wilcox was terminated the day after the loss to Stanford in the Pac-12 championship game. On January 28, 2016, Wilcox became the defensive coordinator at Wisconsin, under head coach Paul Chryst. The Badgers went 11–3 and won the Cotton Bowl with a defense ranked in the top ten in a number of categories. California On January 14, 2017, Wilcox was introduced as the 34th head coach of the California Golden Bears. The Bears went 5–7 during Wilcox's first year in 2017, with wins over North Carolina, Ole Miss, and #8 Washington State, and three losses by three points or fewer. The Bears went 7–6 during Wilcox's second year in 2018. 
The Bears upset #15 Washington 12–10 and defeated USC 15–14 at the Coliseum in Los Angeles to snap a 14-year losing streak to the Trojans. The Bears lost 10–7 in overtime to TCU in the 2018 Cheez-It Bowl. In contrast to his predecessor, Sonny Dykes, Wilcox emphasized a strong defense, cutting Cal's points allowed per game from 42.6 (2016) to 20.4 (2018). However, the Bears’ offensive efficiency ranked as the second worst among all Power Five teams. After the regular season, Wilcox signed a new five-year contract to coach the Bears through the 2023 season. The Bears improved to an 8–5 record under Wilcox in 2019. They achieved their highest ranking since 2009 when they were ranked No. 15 after a 4–0 start to the season. After defeating Stanford in the Big Game for the first time since 2009, the Bears earned bowl-eligibility two years in a row, again for the first time since 2009. The Bears defeated Illinois 35–20 in the 2019 Redbox Bowl. The Bears finished 1–3 in a COVID-shortened 2020 season, with their lone win coming against #21 Oregon. In 2021, the Bears went 5–7, including wins over USC and Stanford. Cal notched a Big Game record 636 total yards of offense in a 41–11 victory over Stanford. Following the season, Wilcox signed a new contract extension keeping him at Cal through the 2027 season. Family Wilcox is the son of Dave Wilcox, an All-Pro linebacker for the San Francisco 49ers and a member of the Pro Football Hall of Fame. Inducted in 2000, he played 11 seasons in the National Football League (NFL), from 1964 to 1974, all with the 49ers. From Vale in eastern Oregon, Dave played college football at Boise Junior College, then transferred to Oregon in 1962. Justin's brother, Josh Wilcox, was three years ahead in school and played tight end for the Ducks and two seasons in the NFL with the New Orleans Saints. Justin's uncle John Wilcox also played in the NFL, in the early 1960s. Head coaching record References External links California profile 1976 births Living people American football defensive backs Boise State Broncos football coaches California Golden Bears football coaches Oregon Ducks football coaches Oregon Ducks football players Tennessee Volunteers football coaches USC Trojans football coaches Washington Huskies football coaches Sportspeople from Eugene, Oregon People from Junction City, Oregon Coaches of American football from Oregon Players of American football from Oregon
7412287
https://en.wikipedia.org/wiki/Channel%20allocation%20schemes
Channel allocation schemes
In radio resource management for wireless and cellular networks, channel allocation schemes allocate bandwidth and communication channels to base stations, access points and terminal equipment. The objective is to achieve maximum system spectral efficiency in bit/s/Hz/site by means of frequency reuse, while still assuring a certain grade of service by avoiding co-channel interference and adjacent channel interference among nearby cells or networks that share the bandwidth. Channel-allocation schemes follow one of two types of strategy: Fixed: FCA, fixed channel allocation: manually assigned by the network operator Dynamic: DCA, dynamic channel allocation DFS, dynamic frequency selection Spread spectrum Static Channel Allocation In Fixed Channel Allocation or Fixed Channel Assignment (FCA), each cell is given a predetermined set of frequency channels. FCA requires manual frequency planning, which is an arduous task in time-division multiple access (TDMA) and frequency-division multiple access (FDMA) based systems, since such systems are highly sensitive to co-channel interference from nearby cells that are reusing the same channel. Another drawback of TDMA and FDMA systems with FCA is that the number of channels in a cell remains constant irrespective of the number of customers in that cell. This results in traffic congestion and some calls being lost when traffic gets heavy in some cells, and in idle capacity in other cells. If FCA is combined with conventional FDMA, and perhaps also TDMA, a fixed number of voice channels can be carried by the cell. A new call can only be connected on an unused channel; if all the channels are occupied, the new call is blocked. There are, however, several dynamic radio-resource management schemes that can be combined with FCA. A simple form is a traffic-adaptive handover threshold, whereby calls from cell phones situated in the overlap between two adjacent cells can be forced to hand over to the cell with the lowest load at the moment. If FCA is combined with spread spectrum, the maximum number of channels is not fixed in theory, but in practice a maximum limit is applied, since too many calls would cause an excessively high co-channel interference level, making the quality problematic. Spread spectrum allows cell breathing to be applied, by allowing an overloaded cell to borrow capacity (maximum number of simultaneous calls in the cell) from a nearby cell that is sharing the same frequency. FCA can be extended into a DCA system by using a borrowing strategy in which a cell can borrow channels from a neighboring cell, supervised by the Mobile Switching Center (MSC). Dynamic Frequency Selection Dynamic Frequency Selection (DFS) is a mechanism specified for wireless networks with non-centrally controlled access points, such as wireless LAN (commonly Wi-Fi). It is designed to prevent interference with other usages of the frequency band, such as military radar, satellite communication, and weather radar. Access points automatically select frequency channels with low interference levels. For wireless LANs, DFS was standardized in 2003 as part of IEEE 802.11h. The actual frequency bands for DFS vary by jurisdiction. DFS is often enforced for the frequency bands used by Terminal Doppler Weather Radar and C-band satellite communication. Misconfiguration of DFS has caused significant disruption in weather radar operation during early deployments of 5 GHz Wi-Fi in a number of countries. 
For example, DFS is also mandated in the 5470-5725 MHz U-NII band for radar avoidance in the United States. Dynamic Channel Allocation A more efficient approach to channel allocation is Dynamic Channel Allocation or Dynamic Channel Assignment (DCA), in which voice channels are not allocated to cells permanently; instead, for every call request, the base station requests a channel from the MSC. The channel is allocated following an algorithm that takes into account the following criteria: Future blocking probability in neighboring cells and reuse distance Usage frequency of the candidate channel Average blocking probability of the overall system Instantaneous channel occupancy distribution It requires the MSC to collect real-time data on channel occupancy, traffic distribution and Received Signal Strength Indications (RSSI). DCA schemes are suggested for TDMA/FDMA based cellular systems such as GSM, but are currently not used in any products. OFDMA systems, such as the downlink of 4G cellular systems, can be considered as carrying out DCA for each individual sub-carrier as well as each timeslot. DCA can be further classified into centralized and distributed schemes. Some of the centralized DCA schemes are: First available (FA): the first available channel satisfying the reuse distance requirement is assigned to the call (a minimal sketch of this scheme is given below) Locally optimized dynamic assignment (LODA): a cost function is based on the future blocking probability in the neighboring cells Selection with maximum usage on the reuse ring (RING): a candidate channel is selected which is in use in the most cells in the co-channel set DCA and DFS eliminate the tedious manual frequency planning work. DCA also handles bursty cell traffic and utilizes the cellular radio resources more efficiently. DCA allows the number of channels in a cell to vary with the traffic load, hence increasing channel capacity at little cost. Spread spectrum Spread spectrum can be considered as an alternative to complex DCA algorithms. Spread spectrum avoids co-channel interference between adjacent cells, since the probability that users in nearby cells use the same spreading code is insignificant. Thus the frequency channel allocation problem is relaxed in cellular networks based on a combination of spread spectrum and FDMA, for example IS-95 and 3G systems. Spread spectrum also allows centrally controlled base stations to dynamically borrow resources from each other depending on the traffic load, simply by increasing the maximum allowed number of simultaneous users in one cell (the maximum allowed interference level from the users in the cell) and decreasing it in an adjacent cell. Users in the overlap between the base station coverage areas can be transferred between the cells (called cell breathing), or the traffic can be regulated by admission control and traffic shaping. However, spread spectrum gives lower spectral efficiency than non-spread-spectrum techniques if the channel allocation in the latter case is optimized by a good DCA scheme. OFDM modulation in particular is an interesting alternative to spread spectrum because of its ability to combat multipath propagation for wideband channels without complex equalization. OFDM can be extended with OFDMA for uplink multiple access among users in the same cell. For avoidance of inter-cell interference, FDMA with DCA or DFS is once again of interest. One example of this concept is the above-mentioned IEEE 802.11h standard. OFDM and OFDMA with DCA are often studied as alternatives for 4G wireless systems. 
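A minimal Python sketch of the "first available" (FA) scheme listed above: the controller scans the channel pool and assigns the lowest-numbered channel that is not in use within the reuse distance of the requesting cell site. The planar coordinates, pool size and distance threshold are illustrative assumptions rather than parameters of any particular cellular standard.

# Minimal "first available" (FA) dynamic channel assignment sketch.
# A channel may be reused only by cell sites farther apart than the reuse distance.
# Requires Python 3.8+ for math.dist.
import math

def first_available(requesting_cell, assignments, num_channels, reuse_distance):
    """Return the lowest-numbered channel usable at requesting_cell, or None.

    assignments: dict mapping channel -> list of (x, y) cell sites using it.
    """
    for channel in range(num_channels):
        ok = all(math.dist(requesting_cell, site) >= reuse_distance
                 for site in assignments.get(channel, []))
        if ok:
            assignments.setdefault(channel, []).append(requesting_cell)
            return channel
    return None  # every channel is blocked within the reuse distance: call is blocked

# Illustrative scenario: 3 channels, reuse distance of 2 cell-radius units.
assignments = {}
print(first_available((0.0, 0.0), assignments, 3, 2.0))  # 0 (pool is empty)
print(first_available((1.0, 0.0), assignments, 3, 2.0))  # 1 (too close to reuse channel 0)
print(first_available((5.0, 0.0), assignments, 3, 2.0))  # 0 (far enough to reuse it)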
DCA on a packet-by-packet basis In packet based data communication services, the communication is bursty and the traffic load rapidly changing. For high system spectrum efficiency, DCA should be performed on a packet-by-packet basis. Examples of algorithms for packet-by-packet DCA are Dynamic Packet Assignment (DPA), Dynamic Single Frequency Networks (DSFN) and Packet and resource plan scheduling (PARPS). See also Cellular traffic Cognitive radio Dynamic bandwidth allocation (DBA) References External links Channel Assignment Schemes, JPL's Wireless Communication Reference Website Radio resource management
4005528
https://en.wikipedia.org/wiki/Common%20Vulnerabilities%20and%20Exposures
Common Vulnerabilities and Exposures
The Common Vulnerabilities and Exposures (CVE) system provides a reference-method for publicly known information-security vulnerabilities and exposures. The United States' National Cybersecurity FFRDC, operated by The Mitre Corporation, maintains the system, with funding from the US National Cyber Security Division of the US Department of Homeland Security. The system was officially launched for the public in September 1999. The Security Content Automation Protocol uses CVE, and CVE IDs are listed on Mitre's system as well as in the US National Vulnerability Database. Background A vulnerability is a weakness in a piece of computer software which can be used to access things one should not be able to gain access to. For example, software which processes credit cards should not allow people to read the credit card numbers it processes, but hackers might use a vulnerability to steal credit card numbers. Talking about one specific vulnerability is hard because there are many pieces of software, sometimes with many vulnerabilities. CVE Identifiers give each vulnerability one different name, so people can talk about specific vulnerabilities by using their names. CVE identifiers MITRE Corporation's documentation defines CVE Identifiers (also called "CVE names", "CVE numbers", "CVE-IDs", and "CVEs") as unique, common identifiers for publicly known information-security vulnerabilities in publicly released software packages. Historically, CVE identifiers had a status of "candidate" ("CAN-") and could then be promoted to entries ("CVE-"), however this practice was ended in 2005 and all identifiers are now assigned as CVEs. The assignment of a CVE number is not a guarantee that it will become an official CVE entry (e.g. a CVE may be improperly assigned to an issue which is not a security vulnerability, or which duplicates an existing entry). CVEs are assigned by a CVE Numbering Authority (CNA). While some vendors acted as a CNA before, the name and designation was not created until February 1, 2005. there are three primary types of CVE number assignments: The Mitre Corporation functions as Editor and Primary CNA Various CNAs assign CVE numbers for their own products (e.g. Microsoft, Oracle, HP, Red Hat, etc.) A third-party coordinator such as CERT Coordination Center may assign CVE numbers for products not covered by other CNAs When investigating a vulnerability or potential vulnerability it helps to acquire a CVE number early on. CVE numbers may not appear in the MITRE or NVD CVE databases for some time (days, weeks, months or potentially years) due to issues that are embargoed (the CVE number has been assigned but the issue has not been made public), or in cases where the entry is not researched and written up by MITRE due to resource issues. The benefit of early CVE candidacy is that all future correspondence can refer to the CVE number. Information on getting CVE identifiers for issues with open source projects is available from Red Hat and GitHub CVEs are for software that has been publicly released; this can include betas and other pre-release versions if they are widely used. Commercial software is included in the "publicly released" category, however custom-built software that is not distributed would generally not be given a CVE. Additionally services (e.g. a Web-based email provider) are not assigned CVEs for vulnerabilities found in the service (e.g. an XSS vulnerability) unless the issue exists in an underlying software product that is publicly distributed. 
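A small Python sketch of how the identifier format can be recognized mechanically. It follows the modern variable-length syntax described under "Changes to syntax" below (the prefix "CVE-", a four-digit year, and a sequence number of at least four digits); the candidate strings are format examples only, not statements about real entries.

# Recognize and split CVE identifiers of the modern form CVE-YYYY-NNNN...,
# where the sequence number has at least four digits (per the 2015 syntax change).
import re

CVE_ID = re.compile(r"^CVE-(\d{4})-(\d{4,})$")

def parse_cve_id(text: str):
    """Return (year, sequence_number) for a syntactically valid CVE ID, or None."""
    match = CVE_ID.match(text.strip())
    if not match:
        return None
    # Keep the sequence number as a string so leading zeros are preserved.
    return int(match.group(1)), match.group(2)

# Format examples only; official identifiers use the uppercase "CVE-" prefix.
for candidate in ["CVE-2020-1234", "CVE-2021-345678", "CVE-99-1", "cve-2020-0001"]:
    print(candidate, "->", parse_cve_id(candidate))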
CVE data fields The CVE database contains several fields: Description This is a standardized text description of the issue(s). One common entry is: ** RESERVED ** This candidate has been reserved by an organization or individual that will use it when announcing a new security problem. When the candidate has been publicized, the details for this candidate will be provided. This means that the entry number has been reserved by Mitre for an issue or a CNA has reserved the number. So in the case where a CNA requests a block of CVE numbers in advance (e.g. Red Hat currently requests CVEs in blocks of 500), the CVE number will be marked as reserved even though the CVE itself may not be assigned by the CNA for some time. Until the CVE is assigned, Mitre is made aware of it (i.e., the embargo passes and the issue is made public), and Mitre has researched the issue and written a description of it, entries will show up as "** RESERVED **". References This is a list of URLs and other information. Record Creation Date This is the date the entry was created. For CVEs assigned directly by Mitre, this is the date Mitre created the CVE entry. For CVEs assigned by CNAs (e.g. Microsoft, Oracle, HP, Red Hat, etc.), this is also the date the entry was created by Mitre, not by the CNA. In the case where a CNA requests a block of CVE numbers in advance (e.g. Red Hat currently requests CVEs in blocks of 500), the entry creation date is the date the CVE was assigned to the CNA. Obsolete fields The following fields were previously used in older CVE records, but are no longer used. Phase: The phase the CVE is in (e.g. CAN, CVE). Votes: Previously board members would vote yea or nay on whether or not the CAN should be accepted and turned into a CVE. Comments: Comments on the issue. Proposed: When the issue was first proposed. Changes to syntax In order to support CVE IDs beyond CVE-YEAR-9999 (also known as the CVE10k problem), a change was made to the CVE syntax in 2014 which took effect on January 13, 2015. The new CVE-ID syntax is variable length and includes: CVE prefix + Year + Arbitrary Digits NOTE: The variable-length arbitrary digits will begin at four fixed digits and expand with arbitrary digits only when needed in a calendar year, for example, CVE-YYYY-NNNN and if needed CVE-YYYY-NNNNN, CVE-YYYY-NNNNNN, and so on. This also means there will be no changes needed to previously assigned CVE-IDs, which all include a minimum of four digits. CVE SPLIT and MERGE CVE attempts to assign one CVE per security issue; however, in many cases this would lead to an extremely large number of CVEs (e.g. where several dozen cross-site scripting vulnerabilities are found in a PHP application due to lack of use of htmlspecialchars() or the insecure creation of files in /tmp). To deal with this, there are guidelines (subject to change) that cover the splitting and merging of issues into distinct CVE numbers. As a general guideline, one should first consider whether issues can be merged; issues should then be split by the type of vulnerability (e.g. buffer overflow vs. stack overflow), then by the software version affected (e.g. if one issue affects version 1.3.4 through 2.5.4 and the other affects 1.3.4 through 2.5.8 they would be SPLIT), and then by the reporter of the issue (e.g. if Alice reports one issue and Bob reports another issue, the issues would be SPLIT into separate CVE numbers). 
Another example is Alice reports a /tmp file creation vulnerability in version 1.2.3 and earlier of ExampleSoft web browser, in addition to this issue several other /tmp file creation issues are found, in some cases this may be considered as two reporters (and thus SPLIT into two separate CVEs, or if Alice works for ExampleSoft and an ExampleSoft internal team finds the rest it may be MERGE'ed into a single CVE). Conversely, issues can be merged, e.g. if Bob finds 145 XSS vulnerabilities in ExamplePlugin for ExampleFrameWork regardless of the versions affected and so on they may be merged into a single CVE. Search CVE identifiers The Mitre CVE database can be searched at the CVE List Search, and the NVD CVE database can be searched at Search CVE and CCE Vulnerability Database. CVE usage CVE identifiers are intended for use with respect to identifying vulnerabilities: Common Vulnerabilities and Exposures (CVE) is a dictionary of common names (i.e., CVE Identifiers) for publicly known information security vulnerabilities. CVE’s common identifiers make it easier to share data across separate network security databases and tools, and provide a baseline for evaluating the coverage of an organization’s security tools. If a report from one of your security tools incorporates CVE Identifiers, you may then quickly and accurately access fix information in one or more separate CVE-compatible databases to remediate the problem. Users who have been assigned a CVE identifier for a vulnerability are encouraged to ensure that they place the identifier in any related security reports, web pages, emails, and so on. CVE assignment issues Per section 7.1 of the CNA Rules, a vendor which received a report about a security vulnerability has full discretion in regards to it. This can lead to a conflict of interest as a vendor may attempt to leave flaws unpatched by denying a CVE assignment at first place – a decision which Mitre can't reverse. See also Common Vulnerability Scoring System (CVSS) Common Weakness Enumeration (CWE) Computer security References External links National Vulnerability Database (NVD) Common Configuration Enumeration (CCE) at NVD vFeed the Correlated and Aggregated Vulnerability Database - SQLite Database and Python API Cyberwatch Vulnerabilities Database, third party What Enterprises need to know about IT Security Audit Services? Computer security exploits Mitre Corporation Security vulnerability databases