7403047
https://en.wikipedia.org/wiki/C-squares
C-squares
C-squares (acronym for the concise spatial query and representation system) is a system of spatially unique, location-based identifiers (geocodes) for areas on the surface of the earth, represented as cells from a latitude-longitude based Discrete Global Grid at a hierarchical set of resolution steps, obtained by progressively subdividing 10×10 degree World Meteorological Organization squares; the term "c-square" is also available for use to designate any component cell of the grid. Individual cell identifiers incorporate literal values of latitude and longitude in an interleaved notation (producing grid resolutions of 10, 1, 0.1 degrees, etc.), together with additional digits that support intermediate grid resolutions of 5, 0.5, 0.05 degrees, etc. The system was initially designed to represent data "footprints" or spatial extents in a more flexible manner than a standard minimum bounding rectangle, and to support "lightweight", text-based spatial querying; it can also provide a set of identifiers for grid cells used for assembly, storage and analysis of spatially organised data, in a unified notation that transcends national or jurisdictional boundaries. Dataset extents expressed in c-squares notation can be visualised using a web-based utility, the c-squares mapper, an online instance of which is currently provided by CSIRO Oceans and Atmosphere in Australia. C-squares codes are free to use, and the associated published software is released under version 2 of the GNU General Public License (GPL), a licence of the Free Software Foundation. History The c-squares method was developed by Tony Rees at CSIRO Oceans and Atmosphere in Australia (then "CSIRO Marine Research") in 2001–2002, initially as a method for spatial indexing, rapid query, and compact storage and visualization of dataset spatial "footprints" in an agency-specific metadata directory (data catalogue); it was first publicly announced at the "EOGEO" Technical Workshop held at Ispra, Italy, in May 2002. A more complete description was published in the scientific literature in 2003, together with a web-accessible mapping utility entitled the "c-squares mapper" for visualisation of data extents expressed in the c-squares notation. Since that time, a number of projects and international collaborations have employed c-squares to support spatial indexing and/or map production, including FishBase (to map stored data points for any species), the Ocean Biogeographic Information System (OBIS), AquaMaps, data analysis to support the designation of marine biogeographic realms, multi-national fisheries data collation by the Scientific, Technical and Economic Committee for Fisheries (STECF) of the European Commission, and data reporting by ICES. For its application in displaying and modelling global biodiversity data, c-squares was one of four components cited in the award of the Ebbe Nielsen Prize to Rees by the Global Biodiversity Information Facility (GBIF) in 2014. The concept of representing dataset "footprints" as cells of spatial data of this nature and alignment was stated to have been inspired by the data addressing method in the U.S. 
National Oceanographic Data Center (NODC) "World Ocean Database" product, which uses 10 degree World Meteorological Organization squares (the starting point for c-squares hierarchical subdivision) for organising its data content, and the set of 1:100,000 topographic maps issued by the national mapping agency for Australia (coverage and index here); each map covers a 0.5 degree square and, with its associated mapsheet labels, can notionally be used as a unit of spatial identification. The method has been discussed further in texts on georeferencing, including those by Hill, 2006 and Guo et al., 2020. The system name "c-squares" was chosen because it can be represented as an acronym (for "concise spatial query and representation system") and also because it signals that this method belongs to a notional group of similarly named, latitude-longitude gridded subdivisions of the Globe that includes World Meteorological Organization Squares and Marsden squares, and contrasts with other tessellations of the Globe that use different shaped basic units such as rectangles, triangles, diamonds, and hexagons (for examples refer e.g. Sahr et al., 2003). It is also intended that any individual component cell of the grid can be referred to as a "c-square" (no initial capitalization required). Rationale Spatial data are inherently (at least) 2-dimensional; without additional indexing, a numeric range query in 2 dimensions (e.g. x and y, or latitude and longitude) is required to retrieve data items within a particular area. Such queries are computationally expensive, so it can be beneficial to pre-process (index) the data in some manner that reduces the inherent dimensionality from two to one dimension, for example as labelled cells of a grid. The grid labels can then be indexed by standard, one-dimensional methods for rapid search and retrieval, and/or searched by simple alphanumeric text searches. C-squares is an example of such a grid where the cell identifiers are designed to be human- as well as machine-readable, and to be concordant with recognizable and commonly used intervals of latitude and longitude. Additional areas where a grid-based approach to spatial indexing can be beneficial include the representation of data "footprints" in support of spatial search, data binning to reduce complex and potentially voluminous data into "blocks" which can then be more easily compared and summarised, and the potential for a hierarchical approach wherein finer resolutions of the grid are nested into coarser ones, with a shared notation (common identifiers for the larger portions of the relevant grid cells). A jurisdiction-independent (global) grid such as c-squares can also be used to integrate data across national boundaries, in contrast to (for example) the national grids of various countries such as those of the United Kingdom, Ireland, etc., which differ in their approach and may have discrepancies or gaps where such grids overlap or fail to meet (for example in marine regions between two areas). A potential disadvantage of "equal angle" grids (the class that includes c-squares), which are based on standardised units of latitude and longitude, is that the length of the "sides" and the shape (and area) of the grid cells are not constant on the ground (the height remains approximately constant but the width varies with latitude), and some particular effects are noticeable at the poles, where the cells become 3- rather than 4-sided in practice (refer illustration). 
These disadvantages can be offset by the advantages that data transformation in and out of grid notation can be accomplished by relatively straightforward steps, the results are congruent with conventional maps that show intervals of latitude and longitude, and the concepts of (for example) "1-degree squares" and "0.5 degree squares" may have familiarity and meaning to human users, in a way that non-square, purely mathematically derived shapes and sizes (based upon some form of spherical trigonometry) may not. The c-squares global grid notation Initial 10 degree squares 10-degree c-squares are specified as being identical to equivalent World Meteorological Organization (WMO) square codes, refer illustration at right. These squares are aligned with 10-degree subdivisions of the global latitude–longitude grid, which for c-squares use is specified as employing the WGS84 datum. WMO (10 degree) squares are encoded with four digits, in the series 1xxx, 3xxx, 5xxx and 7xxx. The leading digit indicates the "global quadrant", with 1 for north-east (latitude and longitude are both positive), 3 for south-east (latitude is negative and longitude positive), 5 for south-west (latitude and longitude are both negative) and 7 for north-west (latitude is positive and longitude negative). The next digit, 0 through 8, corresponds to the tens of latitude degrees either north or south, while the remaining 2 digits, 00 through 17, correspond to the tens of longitude degrees either east or west (by specification, 0 is treated as positive). Thus the 10 degree cell with its lower left corner at 0,0 (latitude,longitude) is encoded 1000, and acts as a bin to contain all spatial data between 0 and 10 degrees north (actually, 0 and 9.999...) and 0 and 9.999... degrees east; the 10 degree cell with its lower left corner at 80 N, 170 E is encoded 1817, and acts as a bin to contain all spatial data between 80 and 90 degrees north and 170 and 179.999... degrees east. Subsequent recursive subdivision C-squares extends the initial WMO 10×10 square notation via a recursive series of "cycles", each 3 digits long (the final one may be 1 digit), separated by the colon character, the number of characters (and cycles) indicating the resolution encoded, as per these examples: 1000 ... 10×10 degree square (up to 1000×1000 km nominal) 1000:1 ... 5×5 degree square (up to 500×500 km nominal) 1000:100 ... 1×1 degree square (up to 100×100 km nominal) 1000:100:1 ... 0.5×0.5 degree square (up to 50×50 km nominal) 1000:100:100 ... 0.1×0.1 degree square (up to 10×10 km nominal) 1000:100:100:1 ... 0.05×0.05 degree square (up to 5×5 km nominal) (etc.) Cell size is typically selected to suit the nature (granularity and volume) of the data to be encoded, the overall spatial extent of the area in question (e.g. global to local), the desired spatial resolution of the resulting grid (smallest features/areas that can be differentiated from each other), and the computing resources available (numbers of cells to cover the same area increase by either ×4 or ×25 with each decrease in square size, requiring an equivalent increase in computing resources or resulting in possibly slower addressing times). For example, relatively generalised, global compilations may be best suited to aggregate (index) data by 10- or 5-degree cells, while more local gridded areas may favour 1-, 0.5- or 0.1-degree cells, as appropriate. 
The nominal sizes given above reflect the fact that at the equator, 1 degree of both latitude and longitude corresponds to around 110 km, with the actual value for longitude declining between there and the poles, where it becomes zero (latitude actual: 110.567 km at the equator, 111.699 km at the poles; longitude actual: 111.320 km at the equator, 78.847 km at latitude ±45 degrees, 0 km at the poles); at a sample northern hemisphere latitude, e.g. that of London (51.5 degrees north), a 1×1 degree square measures approximately 111×69 km. To produce the 1 or 3 digits in any cycle following the initial 4-digit, 10-degree square identifier, first an "intermediate quadrant", 1 through 4, is designated (refer diagram at right), where 1 indicates low absolute values of both latitude and longitude (regardless of sign), 2 indicates low latitude and high longitude, 3 indicates high latitude and low longitude, and 4 indicates high values for both; "low" and "high" being taken from the relevant portion of the data to be gridded (for example within the 10 degree cell extending from 10 to 20 degrees, 10 is treated as low and 19 as high). This leading digit in a cycle is then followed simply by the next applicable digit for first latitude and then longitude: thus an input value of latitude +11.0, longitude +12.0 degrees will be encoded as the 5 degree c-square code 1101:1 and the 1 degree code 1101:112. Inspection of this code shows that the digits of the input latitude (1 and 1, i.e. 11) occupy the second position of the initial four-digit group and the middle position of the final cycle, while the digits of the longitude (01 and 2, i.e. 012) occupy the third and fourth positions of the initial group and the last position of the cycle; the sign of both is positive, as indicated by the first of the leading four digits (1 in this case, indicating the north-east global quadrant). From 2002 onwards (still current at 2020), an online "latlong to c-squares conversion page" has been available at the website of CSIRO Marine Research (now CSIRO Oceans and Atmosphere) which will convert input values of latitude and longitude to the equivalent c-square code at user-selectable resolutions from 10 to 0.1 degree cell size. Alternatively it is a comparatively simple task to program from first principles (or construct as, for example, a Microsoft Excel worksheet) according to the c-squares specification; an example is available here. 
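As noted above, the conversion is a comparatively simple task to program from first principles. The following Python sketch is an illustration only, written against the rules just described (global quadrant digit, WMO-style 10-degree square, then one "intermediate quadrant + latitude digit + longitude digit" cycle per decimal step); it is not the CSIRO conversion page or any published c-squares software, and the function name is purely illustrative.

```python
def csquare_code(lat, lon, resolution=1.0):
    """Encode a WGS84 latitude/longitude (decimal degrees) as a c-square code.
    Supported resolutions: 10, 5, 1, 0.5, 0.1, 0.05 ... degrees."""
    # Leading digit: global quadrant (1 NE, 3 SE, 5 SW, 7 NW; 0 treated as positive).
    quadrant = {(True, True): 1, (False, True): 3,
                (False, False): 5, (True, False): 7}[(lat >= 0, lon >= 0)]
    # Work in integer thousandths of a degree to avoid floating-point surprises.
    ilat, ilon = round(abs(lat) * 1000), round(abs(lon) * 1000)
    res = round(resolution * 1000)
    # Initial 4-digit, 10-degree (WMO-style) square: quadrant digit, tens of
    # latitude, then two digits for tens of longitude.
    code = f"{quadrant}{ilat // 10000}{ilon // 10000:02d}"
    step = 10000                        # current cell size, in thousandths of a degree
    while step > res:
        lat_digit = (ilat // (step // 10)) % 10   # next decimal digit of latitude
        lon_digit = (ilon // (step // 10)) % 10   # next decimal digit of longitude
        # Intermediate quadrant: 1 = both digits low (0-4), 2 = low lat / high lon,
        # 3 = high lat / low lon, 4 = both high (5-9).
        q = 1 + 2 * (lat_digit >= 5) + (lon_digit >= 5)
        if step // 2 <= res:            # stop at a 5-, 0.5- or 0.05-degree square
            return code + f":{q}"
        code += f":{q}{lat_digit}{lon_digit}"
        step //= 10
    return code

# The worked examples given above:
assert csquare_code(11.0, 12.0, 5) == "1101:1"
assert csquare_code(11.0, 12.0, 1) == "1101:112"
assert csquare_code(85.0, 175.0, 10) == "1817"
```

The assertions reproduce the latitude +11.0, longitude +12.0 example and the 10-degree square with its lower left corner at 80 N, 170 E described earlier.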
C-squares strings, and the c-squares mapper A set of c-squares (contiguous or non-contiguous) can be represented as a concatenated list of individual square codes, separated by the "pipe" (|) character, thus: 7500:110:3|7500:110:1|1500:110:3|1500:110:1 (etc.). This set of squares can then serve as an indication of a dataset extent, similar in function (but simpler to specify) to a MultiPolygon in the Well-known text representation of geometry, the functional difference being that defined points forming the boundary of a polygon can be continuously variable, while those for the c-square boundaries are constrained to fixed intervals determined by the grid square resolution in use. If these strings are stored, for example as "long text" within a field of a conventional text storage system (e.g. spreadsheet, database, etc.), they can be used for the operation of spatial searches (see following sections). C-squares strings can also be used directly as input to an instance of the "c-squares mapper", a web-based utility in operation since 2002 at CSIRO in Australia (under the domain obis.org.au) and also at other global locations. To visualize the position of any set of squares on a map, the current syntax to address an installation of the "c-squares mapper" is (e.g.): http://www.obis.org.au/cgi-bin/cs_map.pl?csq=3211:123:2|3211:113:4|3211:114:1|3211:206:2|3211:206:1|3111:496:3|3111:495:4|3111:495:1|3111:394:2|3111:495:2|3111:384:3|3111:383:1|3111:382:2|3111:372:3|3111:371:4|3112:371:1|3111:370:2| (etc.). The above call to the c-squares mapper is a simple one, with only a single parameter (a single c-squares string), which produces a simple "default map"; the mapper is in fact highly customizable, capable of accepting up to seven c-squares strings concurrently, plotting them in user-specified colours, with a choice of empty or filled squares, user-selectable base map, and so on; a full list of available input parameters is provided on the mapper "technical information" page. A more sophisticated map produced using a larger number of available parameters is the colour-coded example at right (AquaMap, i.e. modelled distribution, for the ocean sunfish). Commencing in 2006, an upgrade of the mapper incorporating the independently written Xplanet software also allows the plots of supplied c-squares to be displayed on a user-rotatable and zoomable globe, which can offer a more realistic view for either Pacific Ocean- or polar-centred data than is possible with a flat map (e.g. equirectangular) projection. The c-squares mapper is one of several options currently (2006–present) available for real-time mapping of fish point data records in FishBase, as per this example page for the species Salmo trutta (sea trout); similar options are also available for other (non-fish) marine species via SeaLifeBase as per this example. Since 2006, the mapper has also produced in excess of 100,000 species maps for the AquaMaps project (33,500 species × 4 "standard maps" per species as at 2021, additional user-generated maps available on demand). Spatial searching In a system that uses c-squares codes as units of spatial indexing, a text-based search on any of these square identifiers will retrieve data associated with the relevant square. If a wildcard search is supported (for example in the case that the wildcard character is a percent sign), a search on "7500%" will retrieve all data items in that ten degree square, a search on "7500:1%" will retrieve all data items in that five degree square, etc. The asterisk character "*" has a special (reserved) meaning in c-squares notation, being a "compact" notation indicating that all finer cells within a higher level cell are included, to the level of resolution indicated by the number of asterisks. In the example above, "7500:*" would indicate that all 4 five-degree cells within parent ten-degree cell "7500" are filled, "7500:***" would indicate that all 100 one-degree cells within parent ten-degree cell "7500" are filled, etc. This approach enables the filling of contiguous blocks of cells with an economy of characters in many cases (a form of data compression), which is useful for efficient storage and transfer of c-squares codes as required. 
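The wildcard, asterisk and mapper conventions described above require very little code to exploit. The Python sketch below is an illustration under stated assumptions: the function names are hypothetical, the prefix search is an in-memory stand-in for the SQL-style percent-wildcard query mentioned above, and the URL builder simply reproduces the single-parameter mapper call shown earlier.

```python
from urllib.parse import quote

def expand_asterisk_once(parent):
    """Expand the compact form '<10-degree square>:*' into its four
    5-degree child squares, per the asterisk convention described above."""
    return [f"{parent}:{q}" for q in (1, 2, 3, 4)]

def prefix_search(csquares_string, prefix):
    """Text-based spatial search over a pipe-separated c-squares string:
    the in-memory equivalent of a 'LIKE prefix%' wildcard query."""
    return [code for code in csquares_string.split("|") if code.startswith(prefix)]

def mapper_url(codes, base="http://www.obis.org.au/cgi-bin/cs_map.pl"):
    """Assemble a simple single-parameter call to the c-squares mapper."""
    return base + "?csq=" + quote("|".join(codes), safe=":|")

footprint = "7500:110:3|7500:110:1|1500:110:3|1500:110:1"
print(prefix_search(footprint, "7500:1"))  # data items within 5-degree square 7500:1
print(expand_asterisk_once("7500"))        # ['7500:1', '7500:2', '7500:3', '7500:4']
print(mapper_url(footprint.split("|")))    # a default-map request for the footprint
```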
Spatial data reporting, assembly, and analysis C-squares has been employed at a range of resolutions for data reporting, assembly and analysis on scales ranging from global to local, also incorporating multi-national data compilations where a gridded data system is required that is not tied to the boundaries of any single jurisdiction. Examples include: 5×5 degree squares: production of the first world-scale map of marine biogeographic realms based on distributions of 65,000 marine species, by Costello et al., 2017 1×1 degree squares: geographic presentation of marine mammal, seabird and sea turtle data by the OBIS-SEAMAP (Spatial Ecological Analysis of Megavertebrate Populations) project (which also offers additional options, i.e. 0.1×0.1 and 0.01×0.01 degree squares) 0.5×0.5 degree squares: modelling of marine (and some freshwater) species distributions by the AquaMaps project, plus associated spatial search; the AquaMaps front page https://www.aquamaps.org/ offers a "click on a map" spatial search facility based on 0.5×0.5 degree c-squares, example spatial search result here. AquaMaps have been further employed in subsequent studies such as the Ecological Assessment of the Sustainable Impacts of Fisheries (EASI-Fish) approach. reporting and collation of fishing activity by member states by the Scientific, Technical and Economic Committee for Fisheries (STECF) of the European Commission Data contributed by 23 member states is available as a data product "Fisheries landings & effort: data by c-square (2015-2019)", further discussed in a 2020 STECF Working Group Report (no. 20-10). analysing and forecasting fisheries time series data in the Indian Ocean delineating high-priority areas for marine biodiversity conservation in the Coral Triangle, bordered by both the Pacific and Indian Oceans AquaMaps makes available its base data coverages of global marine environmental variables as c-squares gridded data at 0.5 degree resolution 0.1×0.1 degree squares: fish catch reporting for the purpose of stock assessment by the New South Wales Department of Primary Industries/Fisheries New South Wales in Australia 0.05×0.05 degree squares: VMS (vessel monitoring systems) data and fishing logbook data for the International Council for the Exploration of the Sea (ICES) and others, as also implemented in the ICES "FishFrame" regional database identification of vulnerable marine ecosystems (VMEs) in the North-East Atlantic for the EU-funded Horizon 2020 ATLAS Project. According to Turner et al., 2021, "ATLAS partners helped develop a data aggregation approach, the VME Index, to help identify areas where vulnerable marine ecosystems are known or are likely to occur. The VME Index is a single metric based on a multi-criteria assessment method that combines VME indicator records within a C-square (i.e., a spatial unit used by ICES: 0.05 x 0.05 degree grid, equivalent to approximately 15 km² at 60°N latitude) based on the abundance/presence of VME indicator taxa and how reliable the underlying data are. ... ICES has been using the VME Index since 2018 to provide advice concerning the protection of vulnerable marine ecosystems." a 2019 ICES Report "Working Group on Spatial Fisheries Data (WGSFD)" contains a number of example maps plotted using 0.05×0.05 degree c-squares, and also a discussion of whether or not a move towards 0.01×0.01 degree square reporting would be beneficial (higher spatial data resolution) or detrimental (increased number of squares with no content) 0.01×0.01 degree squares: a survey of spatial patterns in deep-sea trawling off the Portuguese continental coast by Campos et al., 2021. 
C-squares labelled cells were adopted as the underlying grid for analysis by the European Union-funded MINOUW project (MINimisation Of UnWanted catches in European Waters), via their web application (MINOUWApp), in support of spatial data (notably fishing effort and density patches of potential unwanted catches) supplied by project researchers across different European countries in a range of formats, in combination with layers of spatial information from external sources. Target audience/potential users According to its design principles, the principal target audience for c-squares is data custodians who wish to organise spatial data by latitude-longitude grid squares at any of the resolutions supported by the system, namely any decimal subdivision of either 10×10 or 5×5 degree squares, to support associated data query, retrieval, analysis, representation (mapping), and potential external data exchange and aggregation. Fine-resolution c-squares may also be used as a general "location encoder", selected desirable attributes of which are discussed further by the developers of the Google Open Location Code method, since the c-squares method satisfies the majority of the criteria set out in that discussion document. As evidenced by the references cited in this article, principal adopters of the method to date have been concerned with marine data in particular; this most likely stems from the fact that the oceans are trans-national in their governance, and therefore established local or national grids are unsuitable for analysis of ocean or fisheries data on anything other than a local scale. Although initially deployed in marine-related systems (as per its description in the journal "Oceanography"), in essence the system is terrain-agnostic (as is the latitude-longitude grid upon which it is based) and is applicable equally to both marine and terrestrial data. An additional aspect of c-squares noted by Larsen et al., 2009 and either implicit or explicit in other equivalent "data aggregation methods" is the use of such frameworks to "allow general level analyses without exposing the precise coordinates of potentially sensitive information". For example, real-time data on the exact location of fishing vessels is frequently considered "commercial in confidence" to avoid release to competitors of the best fishing localities according to the nature of the resource, which may be continually moving, while for biodiversity data, the exact location of individuals or (for example) nests of rare species may again not be desirable to release to the public. The use of grid cells or similar methods to accurately represent the general location of data points without revealing their more exact location, while still rendering the data available for statistical analysis, is a recognised and useful approach in such situations; refer e.g. Chapman, 2020. Congruence with other latitude-longitude geocoding systems At the coarsest (10 degree) resolution, c-squares are congruent with both World Meteorological Organization squares (whose identifiers are re-used within the c-squares notation) and Marsden squares, which share the same boundaries but use a different notation. 
Both 1 degree and 0.5 degree c-squares are partially congruent with "standard resolution" ICES Statistical Rectangles, which utilize a grid cell size of 1×0.5 degrees (longitude × latitude) over a restricted portion of the Globe (north Atlantic region): 2 vertically adjacent ICES rectangles are exactly equivalent to a single 1 degree c-square, while, if needed, the content of a single ICES rectangle can be apportioned between 2 horizontally adjacent 0.5 degree c-squares for data interchange at that resolution (refer note). A separate system, QDGC or Quarter Degree Grid Cells, has been developed for interchange of some biodiversity data in Africa, and later extended to cope with data across the Equator and Prime Meridian. QDGC cells, at 0.25×0.25 degrees, lie between the 0.5×0.5 and 0.1×0.1 degree resolution steps of the c-squares system, and are thus not exactly compatible with it, although the "parent" squares of the QDGC grid from which they are derived, at 1×1 and 0.5×0.5 degrees, are congruent with equivalent c-squares grid cells, albeit using a different notation. In their proposal for an "extended" QDGC system, Larsen et al. additionally describe the potential subdivision of 0.25×0.25 degree QDGC cells by a recursive factor of 2, giving cell sizes of 0.125, 0.0625, 0.03125 degrees, etc., which progressively depart further from the "decimal degrees" concept incorporated into c-squares. Licensing and software availability There is no licence required to use the c-squares method, which has been openly published in the scientific literature since 2003. Source code for the mapper, etc., available via the SourceForge website, is released under the GNU General Public License version 2.0 (GPLv2), which permits free use, redistribution, and subsequent modification for any purpose so long as that licence is retained with the product and any subsequent modifications; in other words, all released improved versions will also be free software. See also List of geodesic-geocoding systems World Meteorological Organization squares Grid (spatial index) Geocode Geospatial metadata Notes References External links C-squares home page C-squares project page on SourceForge, including: lists of c-squares by ID, at resolutions from 10×10 to 0.5×0.5 degrees ESRI shapefiles containing equivalent information AquaMaps (demonstration of c-squares in real-world use) Tony Rees, 2014: "Selected Innovations in Biodiversity Informatics" 2014 GBIF Ebbe Nielsen Prize Presentation, New Delhi. (includes an introduction to, and overview of c-squares) Geocodes Geographic coordinate systems CSIRO
37802248
https://en.wikipedia.org/wiki/Joe%20Cormier
Joe Cormier
Joseph (Joe) Daily Cormier (born May 3, 1963) is a former American football linebacker and tight end for the Minnesota Vikings and Los Angeles Raiders of the National Football League (NFL). Early life Cormier was born in Los Angeles, California. He attended high school at Junípero Serra High School (Gardena, California), where he excelled in football, basketball, and track and field. Cormier currently serves as Serra's Alumni Relations and Development Director. USC Heavily recruited out of high school, Cormier starred as a three-year letterman for the University of Southern California Trojans football team. In 1984, Cormier caught a touchdown pass during the Trojans' Rose Bowl victory against the Ohio State Buckeyes. In the 1985 season, he made a team-leading 44 receptions for the Trojans, played in the Aloha Bowl, and was named an All-Pac-10 selection. NFL Cormier was selected in the 10th round of the 1985 NFL Draft by the Minnesota Vikings and played in the 1987–1990 seasons for the Los Angeles Raiders. References 1963 births American football linebackers Minnesota Vikings players Living people USC Trojans football players Players of American football from Los Angeles National Football League replacement players Junípero Serra High School (Gardena, California) alumni
2275
https://en.wikipedia.org/wiki/Apple%20II
Apple II
The Apple II (stylized as apple ][) is an 8-bit home computer and one of the world's first highly successful mass-produced microcomputer products. It was designed primarily by Steve Wozniak; Steve Jobs oversaw the development of Apple II's foam-molded plastic case and Rod Holt developed the switching power supply. It was introduced by Jobs and Wozniak at the 1977 West Coast Computer Faire, and marks Apple's first launch of a personal computer aimed at a consumer market—branded toward American households rather than businessmen or computer hobbyists. Byte magazine referred to the Apple II, Commodore PET 2001, and TRS-80 as the "1977 Trinity". The Apple II had the defining feature of being able to display color graphics, and this was why the Apple logo was redesigned to have a spectrum of colors. The Apple II is the first model in the Apple II series, followed by Apple II+, Apple IIe, Apple IIc, and the 16-bit Apple IIGS—all of which remained compatible. Production of the last available model, the Apple IIe, ceased in November 1993. History By 1976, Steve Jobs had convinced product designer Jerry Manock (who had formerly worked at Hewlett Packard designing calculators) to create the "shell" for the Apple II—a smooth case inspired by kitchen appliances that concealed the internal mechanics. The earliest Apple II computers were assembled in Silicon Valley and later in Texas; printed circuit boards were manufactured in Ireland and Singapore. The first computers went on sale on June 10, 1977, with an MOS Technology 6502 microprocessor running at 1.022727 MHz (2/7 of the NTSC color carrier frequency), two game paddles (bundled until 1980, when they were found to violate FCC regulations), 4 KiB of RAM, an audio cassette interface for loading programs and storing data, and the Integer BASIC programming language built into ROMs. The video controller displayed 24 lines by 40 columns of monochrome, uppercase-only text on the screen (the original character set matches ASCII characters 20h to 5Fh), with NTSC composite video output suitable for display on a TV monitor or on a regular TV set (by way of a separate RF modulator). The original retail price of the computer with 4 KiB of RAM was $1,298 and $2,638 with the maximum 48 KiB of RAM. To reflect the computer's color graphics capability, the Apple logo on the casing had rainbow stripes, which remained a part of Apple's corporate logo until early 1998. Perhaps most significantly, the Apple II was a catalyst for personal computers across many industries; it opened the doors to software marketed at consumers. Certain aspects of the system's design were influenced by Atari's arcade video game Breakout (1976), which was designed by Wozniak, who said: "A lot of features of the Apple II went in because I had designed Breakout for Atari. I had designed it in hardware. I wanted to write it in software now". This included his design of color graphics circuitry, the addition of game paddle support and sound, and graphics commands in Integer BASIC, with which he wrote Brick Out, a software clone of his own hardware game. Wozniak said in 1984: "Basically, all the game features were put in just so I could show off the game I was familiar with—Breakout—at the Homebrew Computer Club. It was the most satisfying day of my life [when] I demonstrated Breakout—totally written in BASIC. It seemed like a huge step to me. After designing hardware arcade games, I knew that being able to program them in BASIC was going to change the world." 
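The clock figure quoted above can be checked with a couple of lines of arithmetic. The short Python sketch below is an illustration only; it assumes the standard NTSC relationship in which the 14.31818 MHz master oscillator (discussed in the overview below) is four times the 3.579545 MHz color subcarrier.

```python
# All values in MHz; simple worked arithmetic for the clock figures above.
F_MASTER = 14.31818            # master oscillator (4 x the NTSC color subcarrier)

cpu_clock   = F_MASTER / 14    # 6502 clock, ~1.022727 MHz
color_burst = F_MASTER / 4     # NTSC color subcarrier, ~3.579545 MHz

print(f"CPU clock:    {cpu_clock:.6f} MHz")
print(f"Color burst:  {color_burst:.6f} MHz")
print(f"2/7 of burst: {color_burst * 2 / 7:.6f} MHz  # equals the CPU clock")
```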
Overview In the May 1977 issue of Byte, Steve Wozniak published a detailed description of his design; the article began, "To me, a personal computer should be small, reliable, convenient to use, and inexpensive." The Apple II used peculiar engineering shortcuts to save hardware and reduce costs, such as: Taking advantage of the way the 6502 processor accesses memory: it occurs only on alternate phases of the clock cycle, so having the video generation circuitry access memory on the otherwise unused phase avoids memory contention issues and interruptions of the video stream. This arrangement simultaneously eliminated the need for a separate refresh circuit for DRAM chips, as video transfer accessed each row of dynamic memory within the timeout period. In addition, it did not require separate RAM chips for video RAM, while the PET and TRS-80 had SRAM chips for video. Rather than use a complex analog-to-digital circuit to read the outputs of the game controller, Wozniak used a simple timer circuit whose period is proportional to the resistance of the game controller, and used a software loop to measure the timer. A single 14.31818 MHz master oscillator (fM) was divided by various ratios to produce all other required frequencies, including microprocessor clock signals (fM/14), video transfer counters, and color-burst samples (fM/4). The text and graphics screens have a complex arrangement. For instance, the scanlines were not stored in sequential areas of memory. This complexity was reportedly due to Wozniak's realization that the method would allow for the refresh of dynamic RAM as a side effect (as described above). This method imposed little overhead, since software could simply calculate or look up the address of the required scanline, and it avoided the need for significant extra hardware. Similarly, in high-resolution graphics mode, color is determined by pixel position and thus can be implemented in software, saving Wozniak the chips needed to convert bit patterns to colors. This also allowed for subpixel font rendering, since orange and blue pixels appear half a pixel-width farther to the right on the screen than green and purple pixels. The Apple II at first used data cassette storage, like most other microcomputers of the time. In 1978, the company introduced an external 5.25-inch floppy disk drive, called Disk II (stylized as Disk ][), attached through a controller card that plugs into one of the computer's expansion slots (usually slot 6). The Disk II interface, created by Wozniak, is regarded as an engineering masterpiece for its economy of electronic components. The approach taken in the Disk II controller is typical of Wozniak's designs. With a few small-scale logic chips and a cheap PROM (programmable read-only memory), he created a functional floppy disk interface at a fraction of the component cost of standard circuit configurations. Case design The first production Apple II computers had hand-molded cases; these had visible bubbles and other lumps in them from the imperfect plastic molding process, which was soon switched to machine molding. In addition, the initial case design had no vent openings, causing high heat buildup from the PCB and resulting in the plastic softening and sagging. Apple added vent holes to the case within three months of production; customers with the original case could have them replaced at no charge. PCB revisions The Apple II's printed circuit board (PCB) underwent several revisions, as Steve Wozniak made modifications to it. 
The earliest version was known as Revision 0, and the first 6,000 units shipped used it. Later revisions added a color killer circuit to prevent color fringing when the computer was in text mode, as well as modifications to improve the reliability of cassette I/O. Revision 0 Apple IIs powered up in an undefined mode and had garbage on-screen, requiring the user to press Reset. This was eliminated in later board revisions. Revision 0 Apple IIs could display only four colors in hi-res mode, but Wozniak was able to increase this to six hi-res colors on later board revisions. The PCB had three RAM banks for a total of 24 RAM chips. Original Apple IIs had jumper switches to adjust RAM size, and RAM configurations could be 4, 8, 12, 16, 20, 24, 32, 36, or 48 KiB. The three smallest memory configurations used 4kx1 DRAMs, with larger ones using 16kx1 DRAMs, or a mix of 4-kilobyte and 16-kilobyte banks (the chips in any one bank have to be the same size). The early Apple II+ models retained this feature, but after a drop in DRAM prices, Apple redesigned the circuit boards without the jumpers, so that only 16kx1 chips were supported. A few months later, they started shipping all machines with a full 48 KiB complement of DRAM. Unlike most machines, all integrated circuits on the Apple II PCB were socketed; although this cost more to manufacture and created the possibility of loose chips causing a system malfunction, it was considered preferable to make servicing and replacement of bad chips easier. The Apple II PCB lacks any means of generating an IRQ, although expansion cards may generate one. Program code had to stop everything to perform any I/O task; like many of the computer's other idiosyncrasies, this was due to cost reasons and Steve Wozniak assuming interrupts were not needed for gaming or using the computer as a teaching tool. Display and graphics Color on the Apple II series uses a quirk of the NTSC television signal standard, which made color display relatively easy and inexpensive to implement. The original NTSC television signal specification was black and white. Color was introduced later by adding a 3.58-megahertz subcarrier signal that was partially ignored by black-and-white TV sets. Color is encoded based on the phase of this signal in relation to a reference color burst signal. The result is that the position, size, and intensity of a series of pulses define color information. These pulses can translate into pixels on the computer screen, with the possibility of exploiting composite artifact colors. The Apple II display provides two pixels per subcarrier cycle. When the color burst reference signal is turned on and the computer is attached to a color display, it can display green by showing one alternating pattern of pixels, magenta with an opposite pattern of alternating pixels, and white by placing two pixels next to each other. Blue and orange are available by tweaking the pixel offset by half a pixel-width in relation to the color-burst signal. The high-resolution display offers more colors by compressing more (and narrower) pixels into each subcarrier cycle. The coarse, low-resolution graphics display mode works differently, as it can output a pattern of dots per pixel to offer more color options. These patterns are stored in the character generator ROM, and replace the text character bit patterns when the computer is switched to low-res graphics mode. The text mode and low-res graphics mode use the same memory region and the same circuitry is used for both. A single HGR page occupied 8 KiB of RAM; in practice this meant that the user had to have at least 12 KiB of total RAM to use HGR mode and 20 KiB to use two pages. Early Apple II games from the 1977–79 period often ran only in text or low-resolution mode in order to support users with small memory configurations; HGR was not near-universally supported by games until 1980. 
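The non-sequential scanline layout mentioned in the overview can be made concrete with the commonly documented hi-res row-address interleave. The Python sketch below is an illustration only (the constants are the standard Apple II hi-res memory-map values, not something taken from this article, and the function name is hypothetical).

```python
def hires_row_address(line, page_base=0x2000):
    """Base address of hi-res scanline 0-191 on hi-res page 1 ($2000).

    Illustrates the interleave: successive scanlines are 0x400 bytes apart,
    groups of eight scanlines are 0x80 bytes apart, and each third of the
    screen is offset by a further 0x28 (40) bytes."""
    if not 0 <= line < 192:
        raise ValueError("hi-res screens have 192 scanlines")
    return (page_base
            + 0x400 * (line % 8)          # line within a group of 8
            + 0x80 * ((line // 8) % 8)    # group of 8 within a third of the screen
            + 0x28 * (line // 64))        # which third of the screen

# The first few scanlines sit 1 KiB apart in memory rather than being adjacent:
print([hex(hires_row_address(l)) for l in range(4)])   # 0x2000, 0x2400, 0x2800, 0x2c00
```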
Sound Rather than a dedicated sound-synthesis chip, the Apple II has a toggle circuit that can only emit a click through a built-in speaker or a line-out jack; all other sounds (including two-, three- and, eventually, four-voice music and playback of audio samples and speech synthesis) are generated entirely by software that clicks the speaker at just the right times. Similar techniques are used for cassette storage: cassette output works the same as the speaker, and input is a simple zero-crossing detector that serves as a relatively crude (1-bit) audio digitizer. Routines in machine ROM encode and decode data in frequency-shift keying for the cassette. Programming languages Initially, the Apple II was shipped with Integer BASIC encoded in the motherboard ROM chips. Written by Wozniak, the interpreter enabled users to write software applications without needing to purchase additional development utilities. Written with game programmers and hobbyists in mind, the language only supported the encoding of numbers in 16-bit integer format. Since it only supported integers between -32768 and +32767 (signed 16-bit integer), it was less suitable for business software, and Apple soon received complaints from customers. Because Steve Wozniak was busy developing the Disk II hardware, he did not have time to modify Integer BASIC for floating-point support. Apple instead licensed Microsoft's 6502 BASIC to create Applesoft BASIC. Disk users normally purchased a so-called Language Card, which had Applesoft in ROM and sat below the Integer BASIC ROM in system memory. The user could switch between the two BASICs by typing FP or INT at the BASIC prompt. Apple also offered a different version of Applesoft for cassette users, which occupied low memory, and was started by using the LOAD command in Integer BASIC. As shipped, the Apple II incorporated a machine code monitor with commands for displaying and altering the computer's RAM, either one byte at a time, or in blocks of 256 bytes at once. This enabled programmers to write and debug machine code programs without further development software. The computer powers on into the monitor ROM, displaying a * prompt. From there, Ctrl+B enters BASIC, or a machine language program can be loaded from cassette. Disk software can be booted with Ctrl+P followed by 6, referring to Slot 6, which normally contained the Disk II controller. A 6502 assembler was soon offered on disk, and later the UCSD compiler and operating system for the Pascal language were made available. The Pascal system requires a 16 KiB RAM card to be installed in the language card position (expansion slot 0) in addition to the full 48 KiB of motherboard memory. Manual The first 1,000 or so Apple IIs shipped in 1977 with a 68-page mimeographed "Apple II Mini Manual", hand-bound with brass paper fasteners. This was the basis for the Apple II Reference Manual, which became known as the Red Book for its red cover, published in January 1978. All existing customers who sent in their warranty cards were sent free copies of the Red Book. 
The Apple II Reference Manual contained the complete schematic of the entire computer's circuitry, and a complete source listing of the "Monitor" ROM firmware that served as the machine's BIOS. Operating system The original Apple II provided an operating system in ROM along with a BASIC variant called Integer BASIC. The only form of storage available was cassette tape, which was both slow and, worse, unreliable. In 1977, when Apple decided against the popular but clunky CP/M operating system in favor of Wozniak's innovative disk controller design, it contracted Shepardson Microsystems for $13,000 to write an Apple DOS for the Apple II series. At Shepardson, Paul Laughton developed the crucial disk drive software in just 35 days, a remarkably short deadline by any standard. Apple's Disk II 5.25-inch floppy disk drive was released in 1978. The final and most popular version of this software was Apple DOS 3.3. Apple DOS was superseded by ProDOS, which supported a hierarchical filesystem and larger storage devices. With an optional third-party Z80-based expansion card, the Apple II could boot into the CP/M operating system and run WordStar, dBase II, and other CP/M software. With the release of MousePaint in 1984 and the Apple IIGS in 1986, the platform took on the look of the Macintosh user interface, including a mouse. Apple released Applesoft BASIC in 1977, a more advanced variant of the language which users could run instead of Integer BASIC for more capabilities. Some commercial Apple II software booted directly and did not use standard DOS disk formats. This discouraged the copying or modifying of the software on the disks, and improved loading speed. Third-party devices and applications When the Apple II initially shipped in June 1977, no expansion cards were available for the slots. This meant that the user did not have any way of connecting a modem or a printer. One popular hack involved connecting a teletype machine to the cassette output. Wozniak's open-architecture design and Apple II's multiple expansion slots permitted a wide variety of third-party devices, including peripheral cards, such as serial controllers, display controllers, memory boards, hard disks, networking components, and real-time clocks. There were plug-in expansion cards—such as the Z-80 SoftCard—that permitted the Apple II to use the Z80 processor and run programs for the CP/M operating system, including the dBase II database and the WordStar word processor. The Z80 card also allowed connection to a modem, and thereby to any networks that a user might have access to. In the early days, such networks were scarce. But they expanded significantly with the development of bulletin board systems in later years. There was also a third-party 6809 card that allowed OS-9 Level One to be run. Third-party sound cards greatly improved audio capabilities, allowing simple music synthesis and text-to-speech functions. Apple II accelerator cards doubled or quadrupled the computer's speed. Early Apple IIs were often sold with a Sup'R'Mod, which allowed the composite video signal to be viewed on a television. The Soviet radio-electronics industry designed the Apple II-compatible Agat computer. Roughly 12,000 Agat 7 and 9 models were produced and they were widely used in Soviet schools. Agat 9 computers could run in "Apple II" compatibility mode or in native mode. "Apple II" mode allowed a wider variety of (presumably pirated) Apple II software to run, but at the expense of less available RAM. 
Because of that, Soviet developers preferred the native mode over "Apple II" compatibility mode. Reception Jesse Adams Stein wrote, "As the first company to release a 'consumer appliance' micro-computer, Apple Computer offers us a clear view of this shift from a machine to an appliance." But the company also had "to negotiate the attitudes of its potential buyers, bearing in mind social anxieties about the uptake of new technologies in multiple contexts. The office, the home and the 'office-in-the-home' were implicated in these changing spheres of gender stereotypes and technological development." After seeing a crude, wire-wrapped prototype demonstrated by Wozniak and Steve Jobs in November 1976, Byte predicted in April 1977 that the Apple II "may be the first product to fully qualify as the 'appliance computer' ... a completed system which is purchased off the retail shelf, taken home, plugged in and used". The computer's color graphics capability especially impressed the magazine. The magazine published a favorable review of the computer in March 1978, concluding: "For the user that wants color graphics, the Apple II is the only practical choice available in the 'appliance' computer class." Personal Computer World in August 1978 also cited the color capability as a strength, stating that "the prime reason that anyone buys an Apple II must surely be for the colour graphics". While mentioning the "oddity" of the artifact colors that produced output "that is not always what one wishes to do", it noted that "no-one has colour graphics like this at this sort of price". The magazine praised the sophisticated monitor software, user expandability, and comprehensive documentation. The author concluded that "the Apple II is a very promising machine" which "would be even more of a temptation were its price slightly lower ... for the moment, colour is an Apple II". Although it sold well from launch, the initial market was hobbyists and computer enthusiasts. Sales expanded exponentially into the business and professional market when the spreadsheet program VisiCalc was launched in mid-1979. VisiCalc is credited as the defining killer app in the microcomputer industry. During the first five years of operations, revenues doubled about every four months. Between September 1977 and September 1980, annual sales grew from $775,000 to $118 million. During this period the sole products of the company were the Apple II and its peripherals, accessories, and software. References External links Additional documentation in Bitsavers PDF Document archive Apple II on Old-computers.com Online Apple II Resource Apple II computers Computer-related introductions in 1977 6502-based home computers 8-bit computers
63904523
https://en.wikipedia.org/wiki/Jaya%20Baloo
Jaya Baloo
Jaya Baloo is a cybersecurity expert who is currently the Chief Information Security Officer (CISO) at Avast Software. Baloo was named as one of the top 100 CISOs in 2017, and one of Forbes 100 Women Founders in Europe To Follow in 2018. Career Baloo studied at Tufts University between 1991 and 1995. She was inspired to study computers after receiving one for Christmas at the age of nine. Baloo's first job was working at a bank dealing with export cryptography problems. She was surprised at how cryptography was treated as a "weapon", with the USA hiding their security advances from the rest of the world. She had an interest in understanding the difference between mistakes in programming and malicious activity. After moving to the Netherlands, Baloo became a network services engineer and consultant at KPN International Consultancy before specialising in fraud and revenue assurance for France Telecom between 2005 and 2009. Baloo then worked at Verizon for nearly four years. Baloo believes that the goal of telecommunication attackers is not to bring down services but to shape and intercept traffic without discovery, notably different from attacks on other critical infrastructure like energy or water. In 2012 Baloo became the Chief Information Security Officer (CISO) at KPN Telecom, a Dutch internet service provider, in the same year that KPN was hacked. During this time Baloo was chairman of the Dutch Continuity Board, which is a collaboration tackling distributed denial-of-service (DDoS) cyberthreats through exchanging live attack information between competitors. In an interview with the podcast Cyber Security Dispatch, it was highlighted that Baloo's length of tenure at KPN was considerably longer than the 18-month to 2-year average. She was named as one of the top 100 CISOs in 2017, with only 9 other women named. In 2018, Forbes named Baloo as one of 100 Women Founders in Europe To Follow. In October 2019 Baloo took on her current role as CISO for Avast. One reason why she joined Avast is her love of their mission to ensure "that cybersecurity is a fundamental right. It’s not just for people who can afford to pay for a product – it's for everyone". Baloo holds a faculty position at the Singularity University. She is also a quantum ambassador of KPN Telecom and Vice Chair of the Quantum Flagship Strategic Advisory Board of the EU Commission. She considers quantum computers as inevitable tools that will disrupt current computing architectures, recommending that businesses and organisations prepare themselves for the impact of new quantum protocols. Among her recommendations are to increase the key length of current algorithms, use quantum key distribution in niche parts of the network, and look at post-quantum cryptographic algorithms. Baloo projects that the most exciting development in quantum communication will be the move beyond current point-to-point links to many-to-many connections, provided on demand and instantly. This requires quantum repeaters and other architecture in a managed service, which Baloo predicts could be achieved in 5–10 years' time. During the COVID-19 pandemic, Baloo has been providing tips for best home working practices on behalf of Avast. Interests and views Baloo is interested in the future of cyber security and how quantum computing may impact privacy. She is an expert in network architecture, security weaknesses in mobile and voice-over-IP, cryptography, and quantum communication networks. In 2019, the non-profit Inspiring Fifty selected Baloo as one of the fifty most inspiring women in the Netherlands. 
Baloo considers inequality and the distribution of assets to be one of the biggest global cyberthreats, with only a handful of countries able to detect, respond to, or defend against threats. On quantum computing, Baloo comments: "You see that happening at Microsoft, at Google, at IBM, the United States is investing heavily in it [quantum computing], China has billions of dollars in it...But the rest of the world certainly doesn’t. You’re not hearing of a quantum computer or post quantum cryptography being developed in Brazil or in Kenya. What I’m worried about from an infosec point of view, is that when we have a quantum computer, it’s going to effectively render our current encryption schemes for public key cryptography moot....So if we see an evolution where only certain countries will be able to possess this kind of technology, all of the other countries will be in this ‘digital divide’ that the UN always talks about." Baloo's advice for women in cybersecurity is to "Hold onto your passion, and don't shut yourself down. We need you in this industry. Help us keep the world safe". Personal life Baloo has three children. In her spare time she enjoys diving, having dived at the Great Barrier Reef and in the Bahamas, and would consider becoming a diving instructor as an alternative occupation. Baloo is also training for a pilot license. References Year of birth missing (living people) Living people Computer security specialists Tufts University alumni Women corporate executives
22187808
https://en.wikipedia.org/wiki/Pseudozarba
Pseudozarba
Pseudozarba is a genus of moths in the subfamily Eustrotiinae of the family Noctuidae. The genus was described by Warren in 1913. Species Pseudozarba abbreviata Rothschild, 1921 Niger Pseudozarba aethiops (Distant, 1898) Arabia, Sudan, Ethiopia, Kenya, Tanzania, Malawi, Botswana, Zambia, Zimbabwe, South Africa, Angola, Zaire, Gambia, Niger, Nigeria, Madagascar, Seychelles Pseudozarba bella Rothschild, 1921 Niger Pseudozarba bipartita (Herrich-Schäffer, [1850]) northern Africa, southern Europe, Sicily, Iran, Israel Pseudozarba carnibasalis (Hampson, 1918) Ghana, Ethiopia, Kenya, Uganda, Malawi, Tanzania Pseudozarba cupreofascia (Le Cerf, 1922) Burkina Faso, Yemen, Ethiopia, Kenya, Tanzania Pseudozarba excavata (Walker, 1865) southern India Pseudozarba expatriata (Hampson, 1914) Cape Verde, Senegal, Burkina Faso, northern Nigeria, Gambia Pseudozarba featheri Hacker, 2016 Kenya Pseudozarba fornax Hacker, 2016 Yemen, Oman Pseudozarba hemiplaca (Meyrick, 1902) Pseudozarba kaduna Hacker, 2016 Burkina Faso, northern Nigeria Pseudozarba leucopera (Hampson, 1910) Pseudozarba marmoreata Hacker, 2016 northern Nigeria Pseudozarba mesozona (Hampson, 1896) Arabia, Egypt, Djibouti, Eritrea, Sokotra Pseudozarba mianoides (Hampson, 1893) Sri Lanka Pseudozarba morosa Wiltshire, 1970 Burkina Faso, Nigeria, Gambia, Sudan Pseudozarba nilotica Hacker, 2016 Ethiopia Pseudozarba ochromaura Hacker, 2016 Namibia, South Africa, Ethiopia Pseudozarba opella (Swinhoe, 1885) Cape Verde, Ghana, Nigeria, Niger, Sudan, Eritrea, Somalia, Kenya, Zimbabwe, South Africa, India, Australia Pseudozarba orthopetes Meyrick, 1897 Pseudozarba ozarbica (Hampson, 1910) Pseudozarba plumbicilia (Draudt, 1950) Sichuan Pseudozarba poliochlora Hacker, 2016 Tanzania Pseudozarba reducta Warren, 1913 Mumbai Pseudozarba regula (Gaede, 1916) Togo, Ghana, Kenya, Burundi, Zaire, South Africa Pseudozarba rufigrisea Warren, 1913 Sumba Pseudozarba schencki (Strand, 1912) Angola, Namibia, South Africa, Arabia References Acontiinae
29351291
https://en.wikipedia.org/wiki/Cyberwarfare%20by%20China
Cyberwarfare by China
Cyberwarfare by China is the aggregate of all combative activities in cyberspace that are undertaken by the Chinese military against other countries. Organization While some details remain unconfirmed, it is understood that China organizes its resources as follows: “Specialized military network warfare forces” - military units specialized in network attack and defense. “PLA-authorized forces” - network warfare specialists in the Ministry of State Security (MSS) and the Ministry of Public Security (MPS). “Non-governmental forces” - civilian and semi-civilian groups that spontaneously engage in network attack and defense. Foreign Policy provided an estimated range for China's "hacker army" personnel, anywhere from 50,000 to 100,000 individuals. In response to claims that Chinese universities, businesses, and politicians have been subject to cyber espionage by the United States National Security Agency since 2009, the PLA announced a cyber security squad in May 2011 to defend their own networks. Accusations of espionage Australia In May 2013, ABC News claimed that China stole blueprints of the headquarters of the Australian Security Intelligence Organisation. Canada Officials in the Canadian government claimed that Chinese hackers compromised several departments within the federal government in early 2011, though the Chinese government has denied involvement. In 2014, Canada's Chief Information Officer claimed that Chinese hackers compromised computer systems within the National Research Council. India Officials in the Indian government believe that attacks on Indian government networks, such as the attack on the Indian National Security Council, have originated from China. According to the Indian government, Chinese hackers are experts in operating botnets, which were used in these attacks. Additionally, numerous other instances of Chinese cyberattacks against Indian cyberspace have been reported. Japan In April 2021, Japan claimed that the Chinese military instructed a hacker group to commit cyberattacks on about 200 companies and research institutes in Japan, which included JAXA. United States The United States of America has accused China of cyberwarfare attacks that targeted the networks of important American military, commercial, research, and industrial organisations. A Congressional advisory group has declared China "the single greatest risk to the security of American technologies" and "there has been a marked increase in cyber intrusions originating in China and targeting U.S. government and defense-related computer systems". In January 2010, Google reported targeted attacks on its corporate infrastructure originating from China "that resulted in the theft of intellectual property from Google." Gmail accounts belonging to two human rights activists were compromised in an attack on Google's password system. American security experts connected the Google attack to various other political and corporate espionage efforts originating from China, which included spying against military, commercial, research, and industrial corporations. Obama administration officials called the cyberattacks "an increasingly serious cyber threat to US critical industries." In addition to Google, at least 34 other companies have been attacked. Reported cases include Northrop Grumman, Symantec, Yahoo, Dow Chemical, and Adobe Systems. Cyber-espionage has been aimed at both commercial and military interests. 
Diplomatic cables highlight US concerns that China is exploiting its access to Microsoft source code to boost its offensive and defensive capabilities. A number of private computer security firms have stated that they have growing evidence of cyber-espionage efforts originating from China, including the "Comment Group". China has denied accusations of cyberwarfare, and has accused the United States of engaging in cyberwarfare against it, accusations which the United States denies. High-level discussions of these accusations continued during March 2013. In May 2014, a federal grand jury in the United States indicted five PLA Unit 61398 officers on charges of theft of confidential business information from U.S. commercial firms and planting malware on their computers. In September 2014, a Senate Armed Services Committee probe revealed that hackers associated with the Chinese government had committed various intrusions into computer systems belonging to U.S. airlines, technology companies and other contractors involved with the movement of U.S. troops and military equipment, and in October 2014 the FBI added that hackers, whom it believed to be backed by the Chinese government, had recently launched attacks on U.S. companies. In 2015, the U.S. Office of Personnel Management (OPM) announced that it had been the target of a data breach targeting the records of as many as 21.5 million people. The Washington Post reported that the attack came from China, citing unnamed government officials. FBI director James Comey explained "it is a very big deal from a national security perspective and a counterintelligence perspective. It's a treasure trove of information about everybody who has worked for, tried to work for, or works for the United States government." In 2019, a study showed continued attacks on the US Navy and its industrial partners. In February 2020, a US federal grand jury charged four members of China's People's Liberation Army with the 2017 Equifax hack. The FBI's official account stated on Twitter that the four played a role in "one of the largest thefts of personally identifiable information by state-sponsored hackers ever recorded", involving "145 million Americans". The Voice of America reported in April 2020 that "U.S. intelligence agencies concluded the Chinese hackers meddled in both the 2016 and 2018 elections" and said "there have already been signs that China-allied hackers have engaged in so-called "spear-phishing" attacks on American political targets" ahead of the 2020 United States elections. In March 2021, the United States intelligence community released an analysis finding that China had considered interfering with the election but decided against it over concerns that it would fail or backfire. In April 2021, FireEye said that suspected Chinese hackers had used a zero-day exploit against Pulse Connect Secure VPN devices to spy on dozens of government, defense industry and financial targets in the U.S. and Europe. Taiwan In the semiconductor industry today, Taiwan leads China in terms of overall competitiveness. On 6 August 2020, Wired published a report stating that "Taiwan has faced existential conflict with China for its entire existence and has been targeted by China's state-sponsored hackers for years. But an investigation by one Taiwanese security firm has revealed just how deeply a single group of Chinese hackers was able to penetrate an industry at the core of the Taiwanese economy, pillaging practically its entire semiconductor industry." 
The Vatican In July 2020 it was reported that Chinese state-sponsored hackers operating under the name RedDelta had hacked the Vatican's computer network ahead of negotiations between China and the Vatican. IP hijacking For 18 minutes on April 8, 2010, state-owned China Telecom advertised erroneous network routes that directed "massive volumes" of U.S. and other foreign Internet traffic through Chinese servers. A US Defense Department spokesman told reporters that he did not know if "we've determined whether that particular incident ... was done with some malicious intent or not", and China Telecom denied the charge that it "hijacked" U.S. Internet traffic. See also 2011 Canadian government hackings Beijing–Washington cyber hotline Chinese intelligence activity abroad Chinese information operations and information warfare Cyberwarfare by Russia Death of Shane Todd GhostNet Google China Honker Union List of cyber warfare forces - China Operation Aurora Operation Shady RAT Titan Rain People's Liberation Army Strategic Support Force PLA Unit 61398 Red Apollo 2021 Microsoft Exchange Cyberattack References Advanced persistent threat China–United States relations Cyberattacks Foreign relations of China Hacker groups Hacking (computer security)
61355589
https://en.wikipedia.org/wiki/2019%E2%80%9320%20USC%20Trojans%20men%27s%20basketball%20team
2019–20 USC Trojans men's basketball team
The 2019–20 USC Trojans men's basketball team represented the University of Southern California during the 2019–20 NCAA Division I men's basketball season. Led by seventh-year head coach Andy Enfield, they played their home games at the Galen Center in Los Angeles, California as members of the Pac-12 Conference. They finished the season 22–9, 11–7 in Pac-12 play to finish in a tie for third place. They were set to take on Arizona in the quarterfinals of the Pac-12 Tournament. However, the remainder of the Pac-12 Tournament, and all other postseason tournaments, were cancelled amid the COVID-19 pandemic. Previous season The 2018–19 USC Trojans finished the season 16–17, 8–10 in Pac-12 Conference play. As the No. 8 seed in the 2019 Pac-12 Conference Tournament, the Trojans defeated the No. 9 seed Arizona Wildcats in the first round before losing to the No. 1 seed Washington Huskies in the second round. The Trojans were not selected for any postseason play. Off-season Departures Incoming transfers 2019 Recruiting class Roster Dec. 16, 2019 – Redshirt sophomore forward Charles O'Bannon Jr. elected to transfer. Exhibition On July 31, 2019, it was announced that USC would play Villanova on October 18, 2019, in an exhibition game. The game, which did not count towards the regular season record, took place at Galen Center, USC's home arena. All proceeds went to the California Fire Foundation. It was USC's first exhibition game open to the public since 2014. Schedule and results The schedule consisted of an exhibition game, the non-conference regular season, the Pac-12 regular season and the Pac-12 Tournament. In the Pac-12 Tournament, the fourth-seeded Trojans were scheduled to face fifth-seeded Arizona in the quarterfinals on March 12, 2020 (2:30 pm, P12N) at T-Mobile Arena in Paradise, NV, but the game was cancelled due to the COVID-19 pandemic. References USC Trojans men's basketball seasons USC Trojans basketball, men
4353220
https://en.wikipedia.org/wiki/NCR%20VRX
NCR VRX
VRX is an acronym for Virtual Resource eXecutive, a proprietary operating system on the NCR Criterion series, and later the V-8000 series of mainframe computers manufactured by NCR Corporation during the 1970s and 1980s. It replaced the B3 Operating System originally distributed with the Century series, and inherited many of the features of the B4 Operating System from the high end of the NCR Century series of computers. VRX was upgraded in the late 1980s and 1990s to become VRX/E for use on the NCR 9800 (Criterion) series of computers. Edward D. Scott managed the development team of 150 software engineers who developed VRX, and James J. "JJ" Whelan was the software architect responsible for technical oversight and the overall architecture of VRX. Tom Tang was the Director of Engineering at NCR responsible for development of the entire Criterion family of computers. This product line achieved over $1B in revenue and $300M in profits for NCR. VRX was NCR's response to IBM's MVS virtual storage operating system and was NCR's first virtual storage system. It was based on a segmented paging architecture provided by the Criterion hardware. The Criterion series provided a virtual machine architecture which allowed different machine architectures to run under the same operating system. The initial offering provided a Century virtual machine, which was instruction-compatible with the Century series, and a COBOL virtual machine designed to optimize programs written in COBOL. Switching between virtual machines was provided by a virtual machine indicator in the subroutine call mechanism. This allowed programs written in one virtual machine to use subroutines written for another. The same mechanism was used to enter an "executive" state used for operating system functions and a "privileged system" state used for direct access to hardware. Proprietary operating systems NCR Corporation products
34363215
https://en.wikipedia.org/wiki/Sality
Sality
Sality is the classification for a family of malicious software (malware), which infects files on Microsoft Windows systems. Sality was first discovered in 2003 and has advanced over the years to become a dynamic, enduring and full-featured form of malicious code. Systems infected with Sality may communicate over a peer-to-peer (P2P) network to form a botnet for the purpose of relaying spam, proxying of communications, exfiltrating sensitive data, compromising web servers and/or coordinating distributed computing tasks for the purpose of processing intensive tasks (e.g. password cracking). Since 2010, certain variants of Sality have also incorporated the use of rootkit functions as part of an ongoing evolution of the malware family. Because of its continued development and capabilities, Sality is considered to be one of the most complex and formidable forms of malware to date. Aliases The majority of antivirus (A/V) vendors use the following naming conventions when referring to this family of malware: Sality SalLoad Kookoo SaliCode Kukacka Overview Sality is a family of polymorphic file infectors, which target Windows executable files with the extensions .EXE or .SCR. Sality utilizes polymorphic and entry-point obscuring (EPO) techniques to infect files using the following methods: not changing the entry point address of the host, and replacing the original host code at the entry point of the executable with a variable stub to redirect execution to the polymorphic viral code, which has been inserted in the last section of the host file; the stub decrypts and executes a secondary region, known as the loader; finally, the loader runs in a separate thread within the infected process to eventually load the Sality payload. Sality may execute a malicious payload that deletes files with certain extensions and/or beginning with specific strings, terminates security-related processes and services, searches a user’s address book for e-mail addresses to send spam messages, and contacts a remote host. Sality may also download additional executable files to install other malware, and for the purpose of propagating pay per install applications. Sality may contain Trojan components; some variants may have the ability to steal sensitive personal or financial data (i.e. information stealers), generate and relay spam, relay traffic via HTTP proxies, infect web sites, achieve distributed computing tasks such as password cracking, as well as other capabilities. Sality’s downloader mechanism downloads and executes additional malware from the URLs received via the peer-to-peer component. The distributed malware may share the same “code signature” as the Sality payload, which may indicate attribution to a single group and/or that the two share a large portion of code. The additional malware typically communicates with and reports to central command and control (C&C) servers located throughout the world. According to Symantec, the "combination of file infection mechanism and the fully decentralized peer-to-peer network [...] make Sality one of the most effective and resilient malware in today's threat landscape." Two versions of the botnet are currently active, versions 3 and 4. The malware circulated on those botnets is digitally signed by the attackers to prevent hostile takeover. In recent years, Sality has also included the use of rootkit techniques to maintain persistence on compromised systems and evade host-based detections, such as anti-virus software. 
Installation Sality infects files in the affected computer. Most variants use a DLL that is dropped once in each computer. The DLL file is written to disk in two forms, for example: %SYSTEM%\wmdrtc32.dll %SYSTEM%\wmdrtc32.dl_ The DLL file contains the bulk of the virus code. The file with the extension ".dl_" is the compressed copy. Recent variants of Sality, such as Virus:Win32-Sality.AM, do not drop the DLL, but instead load it entirely in memory without writing it to disk. This variant, along with others, also drops a driver with a random file name in the folder %SYSTEM%\drivers. Other malware may also drop Sality in the computer. For example, a Sality variant detected as Virus:Win32-Sality.AU is dropped by Worm:Win32-Sality.AU. Some variants of Sality, may also include a rootkit by creating a device with the name Device\amsint32 or \DosDevices\amsint32. Method of propagation File infection Sality usually targets all files in drive C: that have .SCR or .EXE file extensions, beginning with the root folder. Infected files increase in size by a varying amount. The virus also targets applications that run at each Windows start and frequently used applications, referenced by the following registry keys: HKCU\Software\Microsoft\Windows\ShellNoRoam\MUICache HKCU\Software\Microsoft\Windows\CurrentVersion\Run HKLM\Software\Microsoft\Windows\CurrentVersion\Run Sality avoids infecting particular files, in order to remain hidden in the computer: Files protected by System File Checker (SFC) Files under the %SystemRoot% folder Executables of several antivirus/firewall products by ignoring files that contain certain substrings Removable drives and network shares Some variants of Sality can infect legitimate files, which are then moved to available removable drives and network shares by enumerating all network share folders and resources of the local computer and all files in drive C: (beginning with the root folder). It infects the files it finds by adding a new code section to the host and inserting its malicious code into the newly added section. If a legitimate file exists, the malware will copy the file to the Temporary Files folder and then infect the file. The resulting infected file is then moved to the root of all available removable drives and network shares as any of the following: .pif .exe .cmd The Sality variant also creates an "autorun.inf" file in the root of all these drives that points to the virus copy. When a drive is accessed from a computer supporting the AutoRun feature, the virus is then launched automatically. Some Sality variants may also drop a file with a .tmp file extension to the discovered network shares and resources as well as drop a .LNK file to run the dropped virus. 
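The file-system indicators described above (the dropped wmdrtc32 DLL pair and the autorun.inf copies placed in drive roots) lend themselves to a very simple triage check. The following sketch is a hypothetical, simplified illustration for defenders, not part of any anti-virus product; a match only means the artefact names are present and still calls for proper analysis, and the drive letters scanned are an assumption.

import os
from pathlib import Path
from string import ascii_uppercase

# Artefact names taken from the description above; real detection relies on
# signatures and behavioural analysis, this is only a quick triage sketch.
SYSTEM32 = Path(os.environ.get("SystemRoot", r"C:\Windows")) / "System32"
DROPPED_DLLS = ("wmdrtc32.dll", "wmdrtc32.dl_")

def dropped_dll_hits():
    """Return paths of the dropped DLL forms that exist on this system."""
    return [str(SYSTEM32 / name) for name in DROPPED_DLLS if (SYSTEM32 / name).exists()]

def autorun_hits():
    """Return autorun.inf files found in the roots of drives D: through Z:."""
    hits = []
    for letter in ascii_uppercase[3:]:            # D..Z, i.e. likely removable drives
        candidate = Path(f"{letter}:\\") / "autorun.inf"
        if candidate.exists():                    # may be legitimate; inspect its contents
            hits.append(str(candidate))
    return hits

if __name__ == "__main__":
    findings = dropped_dll_hits() + autorun_hits()
    if findings:
        print("Artefacts matching the listed indicators:")
        for finding in findings:
            print(" -", finding)
    else:
        print("No listed indicators found.")

Because recent variants load the DLL purely in memory, an absence of these files proves nothing; the sketch only automates the file-name checks already listed in this section.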
Payload Sality may inject code into running processes by installing a message hook. Sality commonly searches for and attempts to delete files related to antivirus updates and terminate security applications, such as antivirus and personal firewall programs; attempts to terminate security applications containing the same strings as the files it avoids infecting; and may also terminate security-related services and block access to security-related websites that contain certain substrings. Sality variants may modify the computer registry to lower Windows security, disable the use of the Windows Registry Editor and/or prevent the viewing of files with hidden attributes. Some Sality variants recursively delete all registry values and data under the registry subkeys for HKCU\System\CurrentControlSet\Control\SafeBoot and HKLM\System\CurrentControlSet\Control\SafeBoot to prevent the user from starting Windows in safe mode. Some Sality variants can steal sensitive information such as cached passwords and logged keystrokes, which were entered on the affected computer. Sality variants usually attempt to download and execute other files, including pay per install executables, using a preconfigured list of up to 1000 peers; the goal of the P2P network is to exchange lists of URLs to feed to the downloader functionality; the files are downloaded into the Windows Temporary Files folder and decrypted using one of several hardcoded passwords. Most of Sality’s payload is executed in the context of other processes, which makes cleaning difficult and allows the malware to bypass some firewalls; to avoid multiple injections in the same process, a system-wide mutex is created for every process in which code is injected, which prevents more than one instance from running in memory at the same time. 
Some variants of Win32-Sality drop a driver with a random file name in the folder %SYSTEM%\drivers to perform similar functions, such as terminating security-related processes and blocking access to security-related websites, and may also disable any system service descriptor table (SSDT) hooks to prevent certain security software from working properly. Some Sality variants spread by moving to available removable/remote drives and network shares. Some Sality variants drop .LNK files, which automatically run the dropped virus. Some Sality variants may search a user's Outlook address book and Internet Explorer cached files for e-mail addresses and then send out spam messages based on information retrieved from a remote server. Sality may add a section to the configuration file %SystemRoot%\system.ini as an infection marker, contact remote hosts to confirm Internet connectivity, report a new infection to its author, receive configuration or other data, download and execute arbitrary files (including updates or additional malware), receive instruction from a remote attacker, and/or upload data taken from the affected computer; some Sality variants may open a remote connection, allowing a remote attacker to download and execute arbitrary files on the infected computer. Computers infected with recent versions of Sality, such as Virus:Win32-Sality.AT and Virus:Win32-Sality.AU, connect to other infected computers by joining a peer-to-peer (P2P) network to receive URLs pointing to additional malware components; the P2P protocol runs over UDP, all the messages exchanged on the P2P network are encrypted, and the local UDP port number used to connect to the network is generated as a function of the computer name. Sality may add a rootkit that includes a driver with capabilities such as terminating processes via NtTerminateProcess as well as blocking access to select anti-virus resources (e.g. anti-virus vendor web sites) by way of IP filtering; the latter requires the driver to register a callback function, which is used to determine whether packets should be dropped or forwarded (e.g. dropping packets if a string contains the name of an anti-virus vendor from a predefined list). Recovery Microsoft has identified dozens of files which are all commonly associated with the malware. Sality uses stealth measures to maintain persistence on a system; thus, users may need to boot to a trusted environment in order to remove it. Sality may also make configuration changes, such as to the Windows Registry, which makes it difficult to download, install and/or update virus protection. Also, since many variants of Sality attempt to propagate to available removable/remote drives and network shares, it is important to ensure that the recovery process thoroughly detects and removes the malware from any and all known/possible locations. See also Computer virus References Internet security Multi-agent systems Distributed computing projects Spamming Botnets
22405831
https://en.wikipedia.org/wiki/Quickoffice
Quickoffice
Quickoffice is a discontinued freeware proprietary productivity suite for mobile devices which allows viewing, creating and editing documents, presentations and spreadsheets. It consists of Quickword (a word processor), Quicksheet (a spreadsheet) and QuickPoint (a presentation program). The programs are compatible with Microsoft Office file formats, but not the OpenDocument file format. Quickoffice was commonly used on smartphones and tablets. It was the main office editing suite on Symbian OS, where it first appeared in 2005 and was last updated in 2011, and came pre-loaded on all devices. It was released for Android in 2010. There was a project to port Quickoffice to Chromebooks in February 2013, and the port was released as a Chrome extension named "Office Editing for Docs, Sheets, and Slides." History Quickoffice, Inc., a company in Plano, Texas, was founded as Cutting Edge Software Inc. by Jeff Musa in 1997, offering Microsoft Office and Excel compatibility for mobile devices. It developed the Quicksheet spreadsheet for Palm OS, and the free QuickOffice and paid-for QuickOffice Pro and QuickOffice Pro HD apps. Its flagship products Quicksheet and SmartDoc both won "Best in Class" honors for 1998 and 1999 from Tap Magazine. Cutting Edge Software was acquired by Mobility Electronics in 2002 for an undisclosed sum and operated as a wholly owned subsidiary until it was sold to Mobile Digital Media in 2004, when that firm changed the name to Quickoffice, Inc. prior to the sale of the Mobile Digital Media business in 2005. On June 5, 2012, Google acquired Quickoffice, Inc., along with its team of developers, for an undisclosed sum. Google re-released Quickoffice as a free app on September 19, 2013, and included it with its Android operating system from version 4.4 KitKat. After having integrated the features of Quickoffice into its own newly released Google Docs, Google Sheets and Google Slides apps, Google announced on 29 June 2014 that Quickoffice would be discontinued, and it has since been removed from the Google Play Store and App Store. See also Google Drive Google Docs Google Sheets Google Slides Polaris Office List of mobile and tablet office suites List of office suites Comparison of office suites List of word processors Comparison of word processors References Mobile software Office suites Android (operating system) software IOS software Google acquisitions Discontinued Google software Discontinued Google acquisitions 2012 mergers and acquisitions
15893148
https://en.wikipedia.org/wiki/Cold%20boot%20attack
Cold boot attack
In computer security, a cold boot attack (or to a lesser extent, a platform reset attack) is a type of side channel attack in which an attacker with physical access to a computer performs a memory dump of a computer's random-access memory (RAM) by performing a hard reset of the target machine. Typically, cold boot attacks are used for retrieving encryption keys from a running operating system for malicious or criminal investigative reasons. The attack relies on the data remanence property of DRAM and SRAM to retrieve memory contents that remain readable in the seconds to minutes following a power switch-off. An attacker with physical access to a running computer typically executes a cold boot attack by cold-booting the machine and booting a lightweight operating system from a removable disk to dump the contents of pre-boot physical memory to a file. An attacker is then free to analyze the data dumped from memory to find sensitive data, such as the keys, using various forms of key finding attacks (a toy illustration of this step is sketched below). Since cold boot attacks target random-access memory, full disk encryption schemes, even with a trusted platform module installed, are ineffective against this kind of attack. This is because the problem is fundamentally a hardware (insecure memory) issue and not a software issue. However, malicious access can be prevented by limiting physical access and using modern techniques to avoid storing sensitive data in random-access memory. Technical details DIMM memory modules gradually lose data over time as they lose power, but do not immediately lose all data when power is lost. Depending on temperature and environmental conditions, memory modules can potentially retain at least some data for up to 90 minutes after power loss. With certain memory modules, the time window for an attack can be extended to hours or even weeks by cooling them with freeze spray. Furthermore, as the bits disappear in memory over time, they can be reconstructed, as they fade away in a predictable manner. Consequently, an attacker can perform a memory dump of its contents by executing a cold boot attack. The ability to execute the cold boot attack successfully varies considerably across different systems, types of memory, memory manufacturers and motherboard properties, and may be more difficult to carry out than software-based methods or a DMA attack. While the focus of current research is on disk encryption, any sensitive data held in memory is vulnerable to the attack. Attackers execute cold boot attacks by forcefully and abruptly rebooting a target machine and then booting a pre-installed operating system from a USB flash drive, CD-ROM or over the network. In cases where it is not practical to hard reset the target machine, an attacker may alternatively physically remove the memory modules from the original system and quickly place them into a compatible machine under the attacker's control, which is then booted to access the memory. Further analysis can then be performed against the data dumped from RAM. A similar kind of attack can also be used to extract data from memory, such as a DMA attack that allows the physical memory to be accessed via a high-speed expansion port such as FireWire. A cold boot attack may be preferred in certain cases, such as when there is high risk of hardware damage. Using the high-speed expansion port can short out, or physically damage hardware in certain cases. Uses Cold boot attacks are typically used for digital forensic investigations, malicious purposes such as theft, and data recovery. 
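As noted above, once a memory image has been captured the attacker searches it for key material. The sketch below is a deliberately simplified, hypothetical illustration of that step: it slides a window across a dump file and flags 32-byte regions whose byte entropy looks closer to random key material than to ordinary data. Published key-finding tools use far stronger structural tests (for example, recognising AES key schedules), so this only conveys the general idea; the window size, step and threshold are arbitrary choices.

import math
import sys
from collections import Counter

def entropy_bits_per_byte(block: bytes) -> float:
    """Shannon entropy of the byte distribution (at most 5.0 for a 32-byte block)."""
    counts = Counter(block)
    n = len(block)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def candidate_key_regions(dump: bytes, window: int = 32, step: int = 16,
                          threshold: float = 4.8):
    """Yield (offset, block) pairs whose entropy is close to that of random bytes.

    Keys are short, so the estimate is noisy; offsets are hints, not answers."""
    for offset in range(0, len(dump) - window + 1, step):
        block = dump[offset:offset + window]
        if entropy_bits_per_byte(block) >= threshold:
            yield offset, block

if __name__ == "__main__":
    if len(sys.argv) != 2:
        sys.exit("usage: keyhunt.py <memory-dump-file>")
    with open(sys.argv[1], "rb") as fh:
        dump = fh.read()
    for offset, block in candidate_key_regions(dump):
        print(f"{offset:#010x}  {block.hex()}")

Compressed or already-encrypted data also scores high on this test, which is precisely why real key-finding attacks look for the structure of key schedules rather than entropy alone.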
Digital forensics In certain cases, a cold boot attack is used in the discipline of digital forensics to forensically preserve data contained within memory as criminal evidence. For example, when it is not practical to preserve data in memory through other means, a cold boot attack may be used to perform a dump of the data contained in random-access memory. This approach is used, for instance, in situations where a system is secured and it is not possible to access the computer. A cold boot attack may also be necessary when a hard disk is encrypted with full disk encryption and the disk potentially contains evidence of criminal activity. A cold boot attack provides access to the memory, which can provide information about the state of the system at the time, such as what programs are running. Malicious intent A cold boot attack may be used by attackers to gain access to encrypted information such as financial information or trade secrets for malicious intent. Circumventing full disk encryption A common purpose of cold boot attacks is to circumvent software-based disk encryption. Cold boot attacks when used in conjunction with key finding attacks have been demonstrated to be an effective means of circumventing full disk encryption schemes of various vendors and operating systems, even where a Trusted Platform Module (TPM) secure cryptoprocessor is used. In the case of disk encryption applications that can be configured to allow the operating system to boot without a pre-boot PIN being entered or a hardware key being present (e.g. BitLocker in a simple configuration that uses a TPM without a two-factor authentication PIN or USB key), the time frame for the attack is not limiting at all. BitLocker BitLocker in its default configuration uses a trusted platform module that requires neither a PIN nor an external key to decrypt the disk. When the operating system boots, BitLocker retrieves the key from the TPM, without any user interaction. Consequently, an attacker can simply power on the machine, wait for the operating system to begin booting and then execute a cold boot attack against the machine to retrieve the key. Due to this, two-factor authentication, such as a pre-boot PIN or a removable USB device containing a startup key together with a TPM, should be used to work around this vulnerability in the default BitLocker implementation. However, this workaround does not prevent an attacker from retrieving sensitive data from memory, nor from retrieving encryption keys cached in memory. 
Mitigation Since a memory dump can be easily performed by executing a cold boot attack, storage of sensitive data in RAM, like encryption keys for full disk encryption, is unsafe. Several solutions have been proposed for storing encryption keys in areas other than random-access memory. While these solutions may reduce the chance of breaking full disk encryption, they provide no protection of other sensitive data stored in memory. Register-based key storage One solution for keeping encryption keys out of memory is register-based key storage. Implementations of this solution are TRESOR and Loop-Amnesia. Both of these implementations modify the kernel of an operating system so that CPU registers (in TRESOR's case the x86 debug registers and in Loop-Amnesia's case the AMD64 or EMT64 profiling registers) can be used to store encryption keys, rather than RAM. Keys stored at this level cannot easily be read from userspace and are lost when the computer restarts for any reason. TRESOR and Loop-Amnesia both must use on-the-fly round key generation due to the limited space available for storing cryptographic tokens in this manner. For security, both disable interrupts to prevent key information from leaking to memory from the CPU registers while encryption or decryption is being performed, and both block access to the debug or profile registers. There are two potential areas in modern x86 processors for storing keys: the SSE registers, which could in effect be made privileged by disabling all SSE instructions (and necessarily, any programs relying on them), and the debug registers, which are much smaller but have no such issues. A proof of concept distribution called 'paranoix' based on the SSE register method has been developed. The developers claim that "running TRESOR on a 64-bit CPU that supports AES-NI, there is no performance penalty compared to a generic implementation of AES", and that it runs slightly faster than standard encryption despite the need for key recalculation. The primary advantage of Loop-Amnesia compared to TRESOR is that it supports the use of multiple encrypted drives; the primary disadvantages are a lack of support for 32-bit x86 and worse performance on CPUs not supporting AES-NI. Cache-based key storage "Frozen cache" (sometimes known as "cache as RAM") may be used to securely store encryption keys. It works by disabling a CPU's L1 cache and using it for key storage; however, this may significantly degrade overall system performance to the point of being too slow for most purposes. A similar cache-based solution was proposed by Guan et al. (2015) by employing the WB (Write-Back) cache mode to keep data in caches, reducing the computation times of public key algorithms. Mimosa, presented at IEEE S&P 2015, is a more practical solution for public-key cryptographic computations against cold-boot attacks and DMA attacks. It employs hardware transactional memory (HTM), which was originally proposed as a speculative memory access mechanism to boost the performance of multi-threaded applications. The strong atomicity guarantee provided by HTM is utilized to defeat illegal concurrent accesses to the memory space that contains sensitive data. The RSA private key is encrypted in memory by an AES key that is protected by TRESOR. On request, an RSA private-key computation is conducted within an HTM transaction: the private key is first decrypted into memory, and then RSA decryption or signing is conducted. Because a plain-text RSA private key only appears as modified data in an HTM transaction, any read operation to these data will abort the transaction and the transaction will roll back to its initial state. Note that the RSA private key is encrypted in the initial state, and that the plain-text key is the result of write operations (or AES decryption). Currently HTM is implemented in caches or store-buffers, both of which are located in CPUs, not in external RAM chips, so cold-boot attacks are prevented. Mimosa defends against attacks that attempt to read sensitive data from memory (including cold-boot attacks, DMA attacks, and other software attacks), and it only introduces a small performance overhead. Dismounting encrypted disks Best practice recommends dismounting any encrypted, non-system disks when not in use, since most disk encryption software is designed to securely erase keys cached in memory after use. This reduces the risk of an attacker being able to salvage encryption keys from memory by executing a cold boot attack. 
To minimize access to encrypted information on the operating system hard disk, the machine should be completely shut down when not in use to reduce the likelihood of a successful cold boot attack. However, data may remain readable from tens of seconds to several minutes depending upon the physical RAM device in the machine, potentially allowing some data to be retrieved from memory by an attacker. Configuring an operating system to shut down or hibernate when unused, instead of using sleep mode, can help mitigate the risk of a successful cold boot attack. Effective countermeasures Preventing physical access Typically, a cold boot attack can be prevented by limiting an attacker's physical access to the computer or by making it increasingly difficult to carry out the attack. One method involves soldering or gluing the memory modules onto the motherboard, so they cannot be easily removed from their sockets and inserted into another machine under an attacker's control. However, this does not prevent an attacker from booting the victim's machine and performing a memory dump using a removable USB flash drive. A mitigation such as UEFI Secure Boot or similar boot verification approaches can be effective in preventing an attacker from booting up a custom software environment to dump out the contents of soldered-on main memory. Full memory encryption Encrypting random-access memory (RAM) mitigates the possibility of an attacker being able to obtain encryption keys or other material from memory via a cold boot attack. This approach may require changes to the operating system, applications, or hardware. One example of hardware-based memory encryption was implemented in the Microsoft Xbox. Implementations on newer x86-64 hardware from AMD are available and support from Intel is forthcoming in Willow Cove. Software-based full memory encryption is similar to CPU-based key storage since key material is never exposed to memory, but is more comprehensive since all memory contents are encrypted. In general, only the immediately needed pages are decrypted and read on the fly by the operating system. Implementations of software-based memory encryption solutions include a commercial product from PrivateCore, and RamCrypt, a kernel patch for the Linux kernel that encrypts data in memory and stores the encryption key in the CPU registers in a manner similar to TRESOR. Since version 1.24, VeraCrypt supports RAM encryption for keys and passwords. More recently, several papers have been published highlighting the availability of security-enhanced x86 and ARM commodity processors. In that work, an ARM Cortex A8 processor is used as the substrate on which a full memory encryption solution is built. Process segments (for example, stack, code or heap) can be encrypted individually or in composition. This work marks the first full memory encryption implementation on a general-purpose commodity processor. The system provides both confidentiality and integrity protections of code and data which are encrypted everywhere outside the CPU boundary. Secure erasure of memory Since cold boot attacks target unencrypted random-access memory, one solution is to erase sensitive data from memory when it is no longer in use (a minimal application-level sketch of this principle appears later in this section). The "TCG Platform Reset Attack Mitigation Specification", an industry response to this specific attack, forces the BIOS to overwrite memory during POST if the operating system was not shut down cleanly. 
However, this measure can still be circumvented by removing the memory module from the system and reading it back on another system under the attacker's control that does not support these measures. An effective secure erase feature would be one in which, if power is interrupted, the RAM is wiped in less than 300 ms, before power is lost, in conjunction with a secure BIOS and a hard drive/SSD controller that encrypts data on the M-2 and SATAx ports. If the RAM itself contained no serial presence or other data and the timings were stored in the BIOS with some form of failsafe requiring a hardware key to change them, it would be nearly impossible to recover any data, and the system would also be immune to TEMPEST attacks, man-in-the-RAM and other possible infiltration methods. Some operating systems such as Tails provide a feature that securely writes random data to system memory when the operating system is shut down to mitigate against a cold boot attack. However, video memory erasure is still not possible, and as of 2021 it is still an open ticket on the Tails forum. Potential attacks which could exploit this flaw are: Generation of a GnuPG keypair and viewing the private key in a text editor could lead to the key being recovered. A cryptocurrency seed could be seen, bypassing the wallet (even if encrypted) and allowing access to the funds. Typing a password with visibility enabled might show parts of it or even the whole key. If a keyfile is used, it could be shown, reducing the time needed for a password attack. Traces of mounted or opened encrypted volumes with plausible deniability might be shown, leading to their discovery. If connected to a .onion service, the URL might be shown and lead to its discovery, which would otherwise be extremely difficult. Usage of a particular program could reveal a user's patterns; for instance, if a steganography program is used and opened, the assumption that the user has been hiding data could be made. Likewise, if an instant messenger is being used, a list of contacts or messages could be shown. External key storage A cold boot attack can be prevented by ensuring no keys are stored by the hardware under attack, for example by having the user enter the disk encryption key manually, or by using an enclosed fully encrypted hard disk drive where the encryption keys are held in hardware separate from the hard disk drive. 
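The erase-when-done principle referred to above can be sketched even in application code, although a garbage-collected runtime may keep hidden copies of the data, so this is strictly best-effort and no substitute for the firmware- and OS-level measures described in this section. The buffer name and key value below are hypothetical.

import ctypes

def best_effort_wipe(secret: bytearray) -> None:
    """Overwrite a mutable buffer in place so this particular allocation no
    longer holds the key material; copies made elsewhere are unaffected."""
    ctypes.memset((ctypes.c_char * len(secret)).from_buffer(secret), 0, len(secret))

key = bytearray(b"\x13\x37" * 16)     # stand-in for 32 bytes of key material
try:
    # ... use the key for encryption or decryption here ...
    pass
finally:
    best_effort_wipe(key)
    assert all(byte == 0 for byte in key)

Languages closer to the hardware expose dedicated primitives for the same purpose (for example explicit_bzero in C), which avoid the compiler optimizing the wipe away; the Python version above only illustrates the habit of scrubbing secrets promptly.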
In addition, most chipsets provide a recovery mechanism that allows the BIOS settings to be reset to default even if they are protected with a password. The BIOS settings can also be modified while the system is running to circumvent any protections enforced by it, such as memory wiping or locking the boot device. Smartphones The cold boot attack can be adapted and carried out in a similar manner on Android smartphones. Since smartphones lack a reset button, a cold boot can be performed by disconnecting the phone's battery to force a hard reset. The smartphone is then flashed with an operating system image that can perform a memory dump. Typically, the smartphone is connected to an attacker's machine using a USB port. Typically, Android smartphones securely erase encryption keys from random-access memory when the phone is locked. This reduces the risk of an attacker being able to retrieve the keys from memory, even if they succeeded in executing a cold boot attack against the phone. References External links McGrew Security's Proof of Concept Boffins Freeze Phone to Crack Android On-Device Crypto Disk encryption Side-channel attacks
1670481
https://en.wikipedia.org/wiki/Managed%20services
Managed services
Managed services is the practice of outsourcing the responsibility for maintaining, and anticipating need for, a range of processes and functions, ostensibly for the purpose of improved operations and reduced budgetary expenditures through the reduction of directly-employed staff. It is an alternative to the break/fix or on-demand outsourcing model where the service provider performs on-demand services and bills the customer only for the work done. Under this subscription model, the client or customer is the entity that owns or has direct oversight of the organization or system being managed, whereas the managed services provider (MSP) is the service provider delivering the managed services. The client and the MSP are bound by a contractual, service-level agreement that states the performance and quality metrics of their relationship. Advantages and challenges Adopting managed services is intended to be an efficient way to stay up-to-date on technology, have access to skills and address issues related to cost, quality of service and risk. As the IT infrastructure components of many SMB and large corporations are migrating to the cloud, with MSPs (managed services providers) increasingly facing the challenge of cloud computing, a number of MSPs are providing in-house cloud services or acting as brokers with cloud services providers. A recent survey claims that a lack of knowledge and expertise in cloud computing rather than offerors' reluctance, appears to be the main obstacle to this transition. For example, in transportation, many companies face a significant increase of fuel and carrier costs, driver shortages, customer service requests and global supply chain complexities. Managing day-to-day transportation processes and reducing related costs come as significant burdens that require the expertise of Transportation Managed Services (or managed transportation services) providers. History and evolution The evolution of MSP started in the 1990s with the emergence of application service providers (ASPs) who helped pave the way for remote support for IT infrastructure. From the initial focus of remote monitoring and management of servers and networks, the scope of an MSP's services expanded to include mobile device management, managed security, remote firewall administration and security-as-a-service, and managed print services. Around 2005, Karl W. Palachuk, Amy Luby (Founder of Managed Service Provider Services Network acquired by High Street Technology Ventures), and Erick Simpson (Managed Services Provider University) were the first advocates and the pioneers of the managed services business model. The first books on the topic of managed services: Service Agreements for SMB Consultants: A Quick-Start Guide to Managed Services and The Guide to a Successful Managed Services Practice were published in 2006 by Palachuk and Simpson, respectively. Since then, the managed services business model has gained ground among enterprise-level companies. As the value-added reseller (VAR) community evolved to a higher level of services, it adapted the managed service model and tailored it to SMB companies. In the new economy, IT manufacturers are currently moving away from a "box-shifting" resale to a more customized, managed service offering. In this transition, the billing and sales processes of intangible managed services, appear as the main challenges for traditional resellers. 
The global managed services market is expected to grow from an estimated $342.9 billion in 2020 to $410.2 billion by 2027, representing a CAGR of 2.6%. Types In the information technology area, the most common managed services appear to evolve around connectivity and bandwidth, network monitoring, security, virtualization, and disaster recovery. Beyond traditional application and infrastructure management, managed services may also include storage, desktop and communications, mobility, help desk, and technical support. In general, common managed services include the following applications. Provision Definition A managed IT services provider (MSP) is most often an information technology (IT) services provider that manages and assumes responsibility for providing a defined set of services to its clients, either proactively or as the MSP (not the client) determines that services are needed. Most MSPs bill an upfront setup or transition fee and an ongoing flat or near-fixed monthly fee, which benefits clients by providing them with predictable IT support costs. Sometimes, MSPs act as facilitators who manage and procure staffing services on behalf of the client. In such a context, they use an online application called a vendor management system (VMS) for transparency and efficiency. A managed service provider is also useful in creating disaster recovery plans, similar to a corporation's. Managed service providers tend to prove most useful to small businesses with a limited IT budget. The managed services model has been useful in the private sector, notably among Fortune 500 companies, and has an interesting future in government. Main players Main managed service providers originate from the United States (IBM, Accenture, Cognizant), Europe (Atos, Capgemini) and India (TCS, Infosys, Wipro). See also Managed service company Application service provider Customer service Enterprise architecture Service (economics) Service provider Service science, management and engineering Information technology outsourcing Technical support Web service Remote monitoring and management References Further reading Managed Services in a Month, 2nd edition. Great Little Book Publishing Co., Inc. 2013. The Guide to a Successful Managed Services Practice, January 2013. Intelligent Enterprise. Business-to-business Business occupations Business terms International trade Outsourcing Service industries Supply chain management
1098818
https://en.wikipedia.org/wiki/Gene%20expression%20programming
Gene expression programming
In computer programming, gene expression programming (GEP) is an evolutionary algorithm that creates computer programs or models. These computer programs are complex tree structures that learn and adapt by changing their sizes, shapes, and composition, much like a living organism. And like living organisms, the computer programs of GEP are also encoded in simple linear chromosomes of fixed length. Thus, GEP is a genotype–phenotype system, benefiting from a simple genome to keep and transmit the genetic information and a complex phenotype to explore the environment and adapt to it. Background Evolutionary algorithms use populations of individuals, select individuals according to fitness, and introduce genetic variation using one or more genetic operators. Their use in artificial computational systems dates back to the 1950s, when they were used to solve optimization problems (e.g. Box 1957 and Friedman 1959). But it was with the introduction of evolution strategies by Rechenberg in 1965 that evolutionary algorithms gained popularity. A good overview text on evolutionary algorithms is the book "An Introduction to Genetic Algorithms" by Mitchell (1996). Gene expression programming belongs to the family of evolutionary algorithms and is closely related to genetic algorithms and genetic programming. From genetic algorithms it inherited the linear chromosomes of fixed length; and from genetic programming it inherited the expressive parse trees of varied sizes and shapes. In gene expression programming the linear chromosomes work as the genotype and the parse trees as the phenotype, creating a genotype/phenotype system. This genotype/phenotype system is multigenic, thus encoding multiple parse trees in each chromosome. This means that the computer programs created by GEP are composed of multiple parse trees. Because these parse trees are the result of gene expression, in GEP they are called expression trees. Encoding: the genotype The genome of gene expression programming consists of a linear, symbolic string or chromosome of fixed length composed of one or more genes of equal size. These genes, despite their fixed length, code for expression trees of different sizes and shapes. An example of a chromosome with two genes, each of size 9, is the string (position zero indicates the start of each gene): 012345678012345678 L+a-baccd**cLabacd where “L” represents the natural logarithm function and “a”, “b”, “c”, and “d” represent the variables and constants used in a problem. Expression trees: the phenotype As shown above, the genes of gene expression programming all have the same size. However, these fixed-length strings code for expression trees of different sizes. This means that the size of the coding regions varies from gene to gene, allowing for adaptation and evolution to occur smoothly. For example, the mathematical expression √((a − b) · (c + d)) can also be represented as an expression tree, where “Q” represents the square root function. This kind of expression tree consists of the phenotypic expression of GEP genes, whereas the genes are linear strings encoding these complex structures. For this particular example, the linear string corresponds to: 01234567 Q*-+abcd which is the straightforward reading of the expression tree from top to bottom and from left to right. These linear strings are called k-expressions (from Karva notation). Going from k-expressions to expression trees is also very simple; a small sketch of this breadth-first decoding is given below. 
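The decoding works level by level: the number of nodes needed at each level equals the total number of arguments required by the nodes of the previous level. The following sketch is an illustrative decoder, not any particular reference implementation; it assumes the arities used in this article's examples (Q and L take one argument, the arithmetic operators two, and letters are terminals).

ARITY = {"Q": 1, "L": 1, "+": 2, "-": 2, "*": 2, "/": 2}   # letters: terminals, arity 0

def karva_to_tree(kexpr: str):
    """Decode a k-expression (Karva notation) level by level into nested nodes."""
    nodes = [(symbol, []) for symbol in kexpr]
    next_unused = 1
    level = [nodes[0]]                       # start from the root symbol
    while level and next_unused < len(nodes):
        following = []
        for symbol, children in level:
            for _ in range(ARITY.get(symbol, 0)):
                if next_unused < len(nodes):
                    child = nodes[next_unused]
                    children.append(child)
                    following.append(child)
                    next_unused += 1
        level = following
    return nodes[0]

def render(node) -> str:
    symbol, children = node
    return symbol if not children else f"{symbol}({', '.join(render(c) for c in children)})"

print(render(karva_to_tree("Q*-+abcd")))   # Q(*(-(a, b), +(c, d))), i.e. sqrt((a-b)*(c+d))

Decoding stops as soon as no node still needs arguments, so any trailing symbols are simply ignored.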
For example, the following k-expression: 01234567890 Q*b**+baQba is composed of two different terminals (the variables “a” and “b”), two different functions of two arguments (“*” and “+”), and a function of one argument (“Q”). Its expression gives the expression tree for √(b · ((b · a) · (√a + b))). K-expressions and genes The k-expressions of gene expression programming correspond to the region of genes that gets expressed. This means that there might be sequences in the genes that are not expressed, which is indeed true for most genes. The reason for these noncoding regions is to provide a buffer of terminals so that all k-expressions encoded in GEP genes correspond always to valid programs or expressions. The genes of gene expression programming are therefore composed of two different domains – a head and a tail – each with different properties and functions. The head is used mainly to encode the functions and variables chosen to solve the problem at hand, whereas the tail, while also used to encode the variables, provides essentially a reservoir of terminals to ensure that all programs are error-free. For GEP genes the length of the tail is given by the formula: t = h (nmax − 1) + 1 where h is the head's length and nmax is the maximum arity. For example, for a gene created using the set of functions F = {Q, *, /, −, +} and the set of terminals T = {a, b}, nmax = 2. And if we choose a head length of 15, then t = 15 × (2 − 1) + 1 = 16, which gives a gene length g of 15 + 16 = 31. The randomly generated string below is an example of one such gene: 0123456789012345678901234567890 *b+a-aQab+//+b+babbabbbababbaaa It encodes an expression tree which, in this case, only uses 8 of the 31 elements that constitute the gene (its expression corresponds to b · (a + (a − √a))). It's not hard to see that, despite their fixed length, each gene has the potential to code for expression trees of different sizes and shapes, with the simplest composed of only one node (when the first element of a gene is a terminal) and the largest composed of as many nodes as there are elements in the gene (when all the elements in the head are functions with maximum arity). It's also not hard to see that it is trivial to implement all kinds of genetic modification (mutation, inversion, insertion, recombination, and so on) with the guarantee that all resulting offspring encode correct, error-free programs. Multigenic chromosomes The chromosomes of gene expression programming are usually composed of more than one gene of equal length. Each gene codes for a sub-expression tree (sub-ET) or sub-program. Then the sub-ETs can interact with one another in different ways, forming a more complex program. The figure shows an example of a program composed of three sub-ETs. In the final program the sub-ETs could be linked by addition or some other function, as there are no restrictions to the kind of linking function one might choose. Some examples of more complex linkers include taking the average, the median, the midrange, thresholding their sum to make a binomial classification, applying the sigmoid function to compute a probability, and so on. These linking functions are usually chosen a priori for each problem, but they can also be evolved elegantly and efficiently by the cellular system of gene expression programming. Cells and code reuse In gene expression programming, homeotic genes control the interactions of the different sub-ETs or modules of the main program. The expression of such genes results in different main programs or cells, that is, they determine which genes are expressed in each cell and how the sub-ETs of each cell interact with one another. 
In other words, homeotic genes determine which sub-ETs are called upon and how often in which main program or cell and what kind of connections they establish with one another. Homeotic genes and the cellular system Homeotic genes have exactly the same kind of structural organization as normal genes and they are built using an identical process. They also contain a head domain and a tail domain, with the difference that the heads now contain linking functions and a special kind of terminals – genic terminals – that represent the normal genes. The expression of the normal genes results as usual in different sub-ETs, which in the cellular system are called ADFs (automatically defined functions). As for the tails, they contain only genic terminals, that is, derived features generated on the fly by the algorithm. For example, the chromosome in the figure has three normal genes and one homeotic gene and encodes a main program that invokes three different functions a total of four times, linking them in a particular way. From this example it is clear that the cellular system not only allows the unconstrained evolution of linking functions but also code reuse. And it shouldn't be hard to implement recursion in this system. Multiple main programs and multicellular systems Multicellular systems are composed of more than one homeotic gene. Each homeotic gene in this system puts together a different combination of sub-expression trees or ADFs, creating multiple cells or main programs. For example, the program shown in the figure was created using a cellular system with two cells and three normal genes. The applications of these multicellular systems are multiple and varied and, like the multigenic systems, they can be used both in problems with just one output and in problems with multiple outputs. Other levels of complexity The head/tail domain of GEP genes (both normal and homeotic) is the basic building block of all GEP algorithms. However, gene expression programming also explores other chromosomal organizations that are more complex than the head/tail structure. Essentially these complex structures consist of functional units or genes with a basic head/tail domain plus one or more extra domains. These extra domains usually encode random numerical constants that the algorithm relentlessly fine-tunes in order to find a good solution. For instance, these numerical constants may be the weights or factors in a function approximation problem (see the GEP-RNC algorithm below); they may be the weights and thresholds of a neural network (see the GEP-NN algorithm below); the numerical constants needed for the design of decision trees (see the GEP-DT algorithm below); the weights needed for polynomial induction; or the random numerical constants used to discover the parameter values in a parameter optimization task. The basic gene expression algorithm The fundamental steps of the basic gene expression algorithm are listed below in pseudocode:
1. Select function set;
2. Select terminal set;
3. Load dataset for fitness evaluation;
4. Create chromosomes of initial population randomly;
5. For each program in population: express the chromosome, execute the program, and evaluate its fitness;
6. Verify stop condition;
7. Select programs;
8. Replicate selected programs to form the next population;
9. Modify chromosomes using genetic operators;
10. Go to step 5.
The first four steps prepare all the ingredients that are needed for the iterative loop of the algorithm (steps 5 through 10); a schematic sketch of this generational loop is given below. 
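The generational loop referred to above can be outlined as follows. This is a schematic sketch, not a reference implementation; random_chromosome, express, evaluate, select and modify are placeholder callables standing for the chromosome creation, expression, fitness evaluation, selection and genetic-operator stages described in this article, and the population size and generation count are arbitrary defaults.

import random

def evolve(random_chromosome, express, evaluate, select, modify,
           pop_size=50, generations=100):
    """Schematic GEP main loop over fixed-length linear chromosomes."""
    population = [random_chromosome() for _ in range(pop_size)]
    best = None
    for _ in range(generations):
        # Step 5: express each chromosome, execute the program and evaluate its fitness.
        scored = [(evaluate(express(chromosome)), chromosome) for chromosome in population]
        scored.sort(key=lambda pair: pair[0], reverse=True)
        if best is None or scored[0][0] > best[0]:
            best = scored[0]
        # Step 6 (stop condition) is simplified here to a fixed number of generations.
        # Steps 7-9: select parents, replicate them and apply the genetic operators.
        parents = select(scored, pop_size)
        population = [modify(random.choice(parents)) for _ in range(pop_size)]
    return best   # (fitness, chromosome) of the best program found

Because expression of a GEP chromosome always yields a syntactically valid program, the modify stage never has to repair its offspring, which is what keeps this loop so simple.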
Of these preparative steps, the crucial one is the creation of the initial population, which is generated randomly from the elements of the function and terminal sets. Populations of programs Like all evolutionary algorithms, gene expression programming works with populations of individuals, which in this case are computer programs. Therefore, some kind of initial population must be created to get things started. Subsequent populations are descendants, via selection and genetic modification, of the initial population. In the genotype/phenotype system of gene expression programming, it is only necessary to create the simple linear chromosomes of the individuals without worrying about the structural soundness of the programs they code for, as their expression always results in syntactically correct programs. Fitness functions and the selection environment Fitness functions and selection environments (called training datasets in machine learning) are the two facets of fitness and are therefore intricately connected. Indeed, the fitness of a program depends not only on the cost function used to measure its performance but also on the training data chosen to evaluate fitness. The selection environment or training data The selection environment consists of the set of training records, which are also called fitness cases. These fitness cases could be a set of observations or measurements concerning some problem, and they form what is called the training dataset. The quality of the training data is essential for the evolution of good solutions. A good training set should be representative of the problem at hand and also well-balanced, otherwise the algorithm might get stuck at some local optimum. In addition, it is also important to avoid using unnecessarily large datasets for training as this will slow things down unnecessarily. A good rule of thumb is to choose enough records for training to enable a good generalization in the validation data and leave the remaining records for validation and testing. Fitness functions Broadly speaking, there are essentially three different kinds of problems based on the kind of prediction being made: Problems involving numeric (continuous) predictions; Problems involving categorical or nominal predictions, both binomial and multinomial; Problems involving binary or Boolean predictions. The first type of problem goes by the name of regression; the second is known as classification, with logistic regression as a special case where, besides the crisp classifications like "Yes" or "No", a probability is also attached to each outcome; and the last one is related to Boolean algebra and logic synthesis. Fitness functions for regression In regression, the response or dependent variable is numeric (usually continuous) and therefore the output of a regression model is also continuous. So it's quite straightforward to evaluate the fitness of the evolving models by comparing the output of the model to the value of the response in the training data. There are several basic fitness functions for evaluating model performance, with the most common being based on the error or residual between the model output and the actual value. Such functions include the mean squared error, root mean squared error, mean absolute error, relative squared error, root relative squared error, relative absolute error, and others. All these standard measures offer a fine granularity or smoothness to the solution space and therefore work very well for most applications.
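As an illustration of a regression fitness function, the sketch below computes the root mean squared error of a model's predictions and maps it onto a fitness value to be maximized. The 1000 / (1 + RMSE) mapping is just one common, assumed choice, not a formula prescribed by gene expression programming itself.

import math

def rmse(predictions, targets):
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(predictions, targets)) / len(targets))

def regression_fitness(predictions, targets):
    # Lower error means higher fitness; the constant 1000 only sets the scale.
    return 1000.0 / (1.0 + rmse(predictions, targets))

print(regression_fitness([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))   # 1000.0 for a perfect fit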
But some problems might require a coarser evolution, such as determining whether a prediction is within a certain interval, for instance within 10% of the actual value. However, even if one is only interested in counting the hits (that is, a prediction that is within the chosen interval), making populations of models evolve based on just the number of hits each program scores is usually not very efficient due to the coarse granularity of the fitness landscape. Thus the solution usually involves combining these coarse measures with some kind of smooth function such as the standard error measures listed above. Fitness functions based on the correlation coefficient and R-square are also very smooth. For regression problems, these functions work best by combining them with other measures because, by themselves, they only tend to measure correlation, not caring for the range of values of the model output. So by combining them with functions that work at approximating the range of the target values, they form very efficient fitness functions for finding models with good correlation and good fit between predicted and actual values. Fitness functions for classification and logistic regression The design of fitness functions for classification and logistic regression takes advantage of three different characteristics of classification models. The most obvious is just counting the hits, that is, if a record is classified correctly it is counted as a hit. This fitness function is very simple and works well for simple problems, but for more complex problems or highly unbalanced datasets it gives poor results. One way to improve this type of hits-based fitness function consists of expanding the notion of correct and incorrect classifications. In a binary classification task, correct classifications can be 00 or 11. The "00" representation means that a negative case (represented by "0") was correctly classified, whereas the "11" means that a positive case (represented by "1") was correctly classified. Classifications of the type "00" are called true negatives (TN) and "11" true positives (TP). There are also two types of incorrect classifications and they are represented by 01 and 10. They are called false positives (FP) when the actual value is 0 and the model predicts a 1; and false negatives (FN) when the target is 1 and the model predicts a 0. The counts of TP, TN, FP, and FN are usually kept in a table known as the confusion matrix. So by counting the TP, TN, FP, and FN and further assigning different weights to these four types of classifications, it is possible to create smoother and therefore more efficient fitness functions. Some popular fitness functions based on the confusion matrix include sensitivity/specificity, recall/precision, F-measure, Jaccard similarity, Matthews correlation coefficient, and the cost/gain matrix, which combines the costs and gains assigned to the four different types of classifications. These functions based on the confusion matrix are quite sophisticated and are adequate to solve most problems efficiently. But there is another dimension to classification models which is key to exploring the solution space more efficiently and therefore results in the discovery of better classifiers. This new dimension involves exploring the structure of the model itself, which includes not only the domain and range, but also the distribution of the model output and the classifier margin.
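Returning to the confusion-matrix-based measures mentioned above, the sketch below counts TP, TN, FP, and FN for a binary classifier and derives the F-measure from them; it is a generic, assumed helper rather than code from any GEP system.

def confusion_counts(predicted, actual):
    # predicted and actual are sequences of 0/1 class labels.
    tp = sum(1 for p, a in zip(predicted, actual) if p == 1 and a == 1)
    tn = sum(1 for p, a in zip(predicted, actual) if p == 0 and a == 0)
    fp = sum(1 for p, a in zip(predicted, actual) if p == 1 and a == 0)
    fn = sum(1 for p, a in zip(predicted, actual) if p == 0 and a == 1)
    return tp, tn, fp, fn

def f_measure_fitness(predicted, actual):
    tp, tn, fp, fn = confusion_counts(predicted, actual)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

print(f_measure_fitness([1, 0, 1, 1], [1, 0, 0, 1]))   # 0.8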
By exploring this other dimension of classification models and then combining the information about the model with the confusion matrix, it is possible to design very sophisticated fitness functions that allow the smooth exploration of the solution space. For instance, one can combine some measure based on the confusion matrix with the mean squared error evaluated between the raw model outputs and the actual values. Or combine the F-measure with the R-square evaluated for the raw model output and the target; or the cost/gain matrix with the correlation coefficient, and so on. More exotic fitness functions that explore model granularity include the area under the ROC curve and rank measure. Also related to this new dimension of classification models is the idea of assigning probabilities to the model output, which is what is done in logistic regression. It is then possible to evaluate the mean squared error (or some other similar measure) between these probabilities and the actual values and combine it with the confusion matrix to create very efficient fitness functions for logistic regression. Popular examples of fitness functions based on the probabilities include maximum likelihood estimation and hinge loss. Fitness functions for Boolean problems In logic there is no model structure (as defined above for classification and logistic regression) to explore: the domain and range of logical functions comprise only 0's and 1's, or false and true. So, the fitness functions available for Boolean algebra can only be based on the hits or on the confusion matrix as explained in the section above. Selection and elitism Roulette-wheel selection is perhaps the most popular selection scheme used in evolutionary computation. It involves giving each program a slice of the roulette wheel proportional to its fitness. Then the roulette is spun as many times as there are programs in the population in order to keep the population size constant. So, with roulette-wheel selection programs are selected both according to fitness and the luck of the draw, which means that sometimes the best traits might be lost. However, by combining roulette-wheel selection with the cloning of the best program of each generation, one guarantees that at least the very best traits are not lost. This technique of cloning the best-of-generation program is known as simple elitism and is used by most stochastic selection schemes. Reproduction with modification The reproduction of programs involves first the selection and then the reproduction of their genomes. Genome modification is not required for reproduction, but without it adaptation and evolution won't take place. Replication and selection The selection operator selects the programs for the replication operator to copy. Depending on the selection scheme, the number of copies one program originates may vary, with some programs getting copied more than once while others are copied just once or not at all. In addition, selection is usually set up so that the population size remains constant from one generation to another. The replication of genomes in nature is very complex and it took scientists a long time to discover the DNA double helix and propose a mechanism for its replication. But the replication of strings is trivial in artificial evolutionary systems, where only an instruction to copy strings is required to pass all the information in the genome from generation to generation.
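The roulette-wheel selection with simple elitism described above can be sketched as follows; this is an illustrative, assumed implementation (fitness values are taken to be non-negative and not all zero), not code from a specific GEP library.

import random

def roulette_spin(population, fitnesses, rng=random):
    # One spin of the wheel: each program owns a slice proportional to its fitness.
    pick = rng.uniform(0, sum(fitnesses))
    running = 0.0
    for program, fitness in zip(population, fitnesses):
        running += fitness
        if running >= pick:
            return program
    return population[-1]

def select_next_generation(population, fitnesses):
    best = population[fitnesses.index(max(fitnesses))]        # simple elitism: clone the best
    spins = [roulette_spin(population, fitnesses) for _ in range(len(population) - 1)]
    # The selected programs are then replicated and modified by the genetic operators.
    return [best] + spins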
The replication of the selected programs is a fundamental piece of all artificial evolutionary systems, but for evolution to occur it needs to be implemented not with the usual precision of a copy instruction, but rather with a few errors thrown in. Indeed, genetic diversity is created with genetic operators such as mutation, recombination, transposition, inversion, and many others. Mutation In gene expression programming mutation is the most important genetic operator. It changes genomes by replacing one element with another. The accumulation of many small changes over time can create great diversity. In gene expression programming mutation is totally unconstrained, which means that in each gene domain any domain symbol can be replaced by another. For example, in the heads of genes any function can be replaced by a terminal or another function, regardless of the number of arguments in this new function; and a terminal can be replaced by a function or another terminal. Recombination Recombination usually involves two parent chromosomes that are combined to create two new chromosomes, each containing parts from both parents. And as long as the parent chromosomes are aligned and the exchanged fragments are homologous (that is, occupy the same position in the chromosome), the new chromosomes created by recombination will always encode syntactically correct programs. Different kinds of crossover are easily implemented either by changing the number of parents involved (there's no reason for choosing only two); the number of split points; or the way one chooses to exchange the fragments, for example, either randomly or in some orderly fashion. For example, gene recombination, which is a special case of recombination, can be done by exchanging homologous genes (genes that occupy the same position in the chromosome) or by exchanging genes chosen at random from any position in the chromosome. Transposition Transposition involves the introduction of an insertion sequence somewhere in a chromosome. In gene expression programming insertion sequences might originate anywhere in the chromosome, but they are only inserted in the heads of genes. This ensures that even insertion sequences taken from the tails result in error-free programs. For transposition to work properly, it must preserve chromosome length and gene structure. So, in gene expression programming transposition can be implemented using two different methods: the first creates a shift at the insertion site, followed by a deletion at the end of the head; the second overwrites the local sequence at the target site and therefore is easier to implement. Both methods can be implemented to operate between chromosomes or within a chromosome or even within a single gene. Inversion Inversion is an interesting operator, especially powerful for combinatorial optimization. It consists of inverting a small sequence within a chromosome. In gene expression programming it can be easily implemented in all gene domains and, in all cases, the offspring produced is always syntactically correct. For any gene domain, a sequence (ranging from at least two elements to as big as the domain itself) is chosen at random within that domain and then inverted. Other genetic operators Several other genetic operators exist, and in gene expression programming, with its different genes and gene domains, the possibilities are endless.
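Two of the operators mentioned above, one-point recombination and inversion restricted to the head, are sketched below for chromosomes represented as strings; the representation and helper names are assumptions for illustration only.

import random

def one_point_recombination(parent1, parent2, rng=random):
    # Parents are aligned and of equal length, so swapping everything after a
    # random split point keeps the head/tail structure (and validity) intact.
    point = rng.randrange(1, len(parent1))
    return (parent1[:point] + parent2[point:],
            parent2[:point] + parent1[point:])

def head_inversion(chromosome, head_length, rng=random):
    # Invert a randomly chosen sequence of at least two symbols inside the head
    # (head_length is assumed to be at least 2).
    start = rng.randrange(0, head_length - 1)
    end = rng.randrange(start + 2, head_length + 1)
    return chromosome[:start] + chromosome[start:end][::-1] + chromosome[end:]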
Genetic operators such as one-point recombination, two-point recombination, gene recombination, uniform recombination, gene transposition, root transposition, domain-specific mutation, domain-specific inversion, domain-specific transposition, and so on, are all easily implemented and widely used. The GEP-RNC algorithm Numerical constants are essential elements of mathematical and statistical models and therefore it is important to allow their integration in the models designed by evolutionary algorithms. Gene expression programming solves this problem very elegantly through the use of an extra gene domain – the Dc – for handling random numerical constants (RNC). By combining this domain with a special terminal placeholder for the RNCs, a richly expressive system can be created. Structurally, the Dc comes after the tail, has a length equal to the size of the tail t, and is composed of the symbols used to represent the RNCs. For example, below is shown a simple chromosome composed of only one gene with a head size of 7 (the Dc stretches over positions 15–22): 01234567890123456789012 +?*+?**aaa??aaa68083295 where the terminal "?" represents the placeholder for the RNCs. This kind of chromosome is expressed exactly as described above, giving the corresponding expression tree. Then the ?'s in the expression tree are replaced from left to right and from top to bottom by the symbols (for simplicity represented by numerals) in the Dc. The values corresponding to these symbols are kept in an array. (For simplicity, the number represented by the numeral indicates its position in the array.) For instance, for the following 10-element array of RNCs: C = {0.611, 1.184, 2.449, 2.98, 0.496, 2.286, 0.93, 2.305, 2.737, 0.755} the expression tree above gives the final mathematical expression. This elegant structure for handling random numerical constants is at the heart of different GEP systems, such as GEP neural networks and GEP decision trees. Like the basic gene expression algorithm, the GEP-RNC algorithm is also multigenic and its chromosomes are decoded as usual by expressing one gene after another and then linking them all together by the same kind of linking process. The genetic operators used in the GEP-RNC system are an extension of the genetic operators of the basic GEP algorithm (see above), and they can all be straightforwardly implemented in these new chromosomes. The basic operators of mutation, inversion, transposition, and recombination are thus also used in the GEP-RNC algorithm. Furthermore, special Dc-specific operators such as mutation, inversion, and transposition are also used to aid in a more efficient circulation of the RNCs among individual programs. In addition, there is also a special mutation operator that allows the permanent introduction of variation in the set of RNCs. The initial set of RNCs is randomly created at the beginning of a run, which means that, for each gene in the initial population, a specified number of numerical constants, chosen from a certain range, are randomly generated. Their circulation and mutation are then enabled by the genetic operators. Neural networks An artificial neural network (ANN or NN) is a computational device that consists of many simple connected units or neurons. The connections between the units are usually weighted by real-valued weights. These weights are the primary means of learning in neural networks and a learning algorithm is usually used to adjust them. Structurally, a neural network has three different classes of units: input units, hidden units, and output units.
An activation pattern is presented at the input units and then spreads in a forward direction from the input units through one or more layers of hidden units to the output units. The activation coming into one unit from another unit is multiplied by the weights on the links over which it spreads. All incoming activation is then added together and the unit becomes activated only if the incoming result is above the unit's threshold. In summary, the basic components of a neural network are the units, the connections between the units, the weights, and the thresholds. So, in order to fully simulate an artificial neural network one must somehow encode these components in a linear chromosome and then be able to express them in a meaningful way. In GEP neural networks (GEP-NN or GEP nets), the network architecture is encoded in the usual structure of a head/tail domain. The head contains special functions/neurons that activate the hidden and output units (in the GEP context, all these units are more appropriately called functional units) and terminals that represent the input units. The tail, as usual, contains only terminals/input units. Besides the head and the tail, these neural network genes contain two additional domains, Dw and Dt, for encoding the weights and thresholds of the neural network. Structurally, the Dw comes after the tail and its length dw depends on the head size h and the maximum arity nmax, being given by the formula dw = h nmax (the head size multiplied by the maximum arity). The Dt comes after Dw and has a length dt equal to t. Both domains are composed of symbols representing the weights and thresholds of the neural network. For each NN-gene, the weights and thresholds are created at the beginning of each run, but their circulation and adaptation are guaranteed by the usual genetic operators of mutation, transposition, inversion, and recombination. In addition, special operators are also used to allow a constant flow of genetic variation in the set of weights and thresholds. For example, below is shown a neural network with two input units (i1 and i2), two hidden units (h1 and h2), and one output unit (o1). It has a total of six connections with six corresponding weights represented by the numerals 1–6 (for simplicity, the thresholds are all equal to 1 and are omitted). This representation is the canonical neural network representation, but neural networks can also be represented by a tree, shown in the figure for this case, where "a" and "b" represent the two inputs i1 and i2 and "D" represents a function with connectivity two. This function adds all its weighted arguments and then thresholds this activation in order to determine the forwarded output. This output (zero or one in this simple case) depends on the threshold of each unit, that is, if the total incoming activation is equal to or greater than the threshold, then the output is one, zero otherwise. The above NN-tree can be linearized as follows: 0123456789012 DDDabab654321 where the structure in positions 7–12 (Dw) encodes the weights. The values of each weight are kept in an array and retrieved as necessary for expression. As a more concrete example, below is shown a neural net gene for the exclusive-or problem. It has a head size of 3 and a Dw of size 6: 0123456789012 DDDabab393257 Its expression results in the neural network shown in the figure, which, for the set of weights W = {−1.978, 0.514, −0.465, 1.22, −1.686, −1.797, 0.197, 1.606, 0, 1.753}, is a perfect solution to the exclusive-or function.
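The exclusive-or gene above can be checked directly in code. The sketch below is an assumed reading of the example: weights are bound to the tree's connections from top to bottom and left to right, and every unit uses a threshold of 1; with those assumptions it reproduces the XOR truth table.

W = [-1.978, 0.514, -0.465, 1.22, -1.686, -1.797, 0.197, 1.606, 0, 1.753]

def unit(inputs, weights, threshold=1.0):
    # A "D" unit: weighted sum of the inputs, thresholded to 0 or 1.
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

def xor_net(a, b):
    # Dw = "393257": the first two symbols weight the output unit's connections,
    # the remaining four weight the two hidden units' connections to a and b.
    h1 = unit([a, b], [W[3], W[2]])       # weights 1.22 and -0.465
    h2 = unit([a, b], [W[5], W[7]])       # weights -1.797 and 1.606
    return unit([h1, h2], [W[3], W[9]])   # weights 1.22 and 1.753

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor_net(a, b))        # prints 0 0 0, 0 1 1, 1 0 1, 1 1 0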
Besides simple Boolean functions with binary inputs and binary outputs, the GEP-nets algorithm can handle all kinds of functions or neurons (linear neuron, tanh neuron, atan neuron, logistic neuron, limit neuron, radial basis and triangular basis neurons, all kinds of step neurons, and so on). Also interesting is that the GEP-nets algorithm can use all these neurons together and let evolution decide which ones work best to solve the problem at hand. So, GEP-nets can be used not only in Boolean problems but also in logistic regression, classification, and regression. In all cases, GEP-nets can be implemented not only with multigenic systems but also with cellular systems, both unicellular and multicellular. Furthermore, multinomial classification problems can also be tackled in one go by GEP-nets, both with multigenic systems and multicellular systems. Decision trees Decision trees (DT) are classification models where a series of questions and answers are mapped using nodes and directed edges. Decision trees have three types of nodes: a root node, internal nodes, and leaf or terminal nodes. The root node and all internal nodes represent test conditions for different attributes or variables in a dataset. Leaf nodes specify the class label for all different paths in the tree. Most decision tree induction algorithms involve selecting an attribute for the root node and then making the same kind of informed decision about all the other nodes in the tree. Decision trees can also be created by gene expression programming, with the advantage that all the decisions concerning the growth of the tree are made by the algorithm itself without any kind of human input. There are basically two different types of DT algorithms: one for inducing decision trees with only nominal attributes and another for inducing decision trees with both numeric and nominal attributes. This aspect of decision tree induction also carries over to gene expression programming and there are two GEP algorithms for decision tree induction: the evolvable decision trees (EDT) algorithm for dealing exclusively with nominal attributes and the EDT-RNC (EDT with random numerical constants) for handling both nominal and numeric attributes. In the decision trees induced by gene expression programming, the attributes behave like the function nodes of the basic gene expression algorithm, whereas the class labels behave as terminals. This means that attribute nodes also have associated with them a specific arity or number of branches that will determine their growth and, ultimately, the growth of the tree. Class labels behave like terminals, which means that for a k-class classification task, a terminal set with k terminals is used, representing the k different classes. The rules for encoding a decision tree in a linear genome are very similar to the rules used to encode mathematical expressions (see above). So, for decision tree induction the genes also have a head and a tail, with the head containing attributes and terminals and the tail containing only terminals. This again ensures that all decision trees designed by GEP are always valid programs.
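The same Karva-style decoding applies to decision-tree genes, with each attribute's arity given by its number of branches. The sketch below uses an invented gene and invented attributes X, Y, and Z (with 3, 2, and 2 branches) purely for illustration; it is not the example used later in this article.

BRANCHES = {'X': 3, 'Y': 2, 'Z': 2}    # attribute -> number of branches (assumed)
CLASSES = ('a', 'b')                   # class-label terminals, e.g. "Yes" and "No"

def decode_tree(gene):
    # Level-order decoding: attributes receive as many children as they have
    # branches, class labels receive none. Returns nested [symbol, children] lists.
    symbols = list(gene)
    root = [symbols.pop(0), []]
    queue = [root]
    while queue:
        node = queue.pop(0)
        for _ in range(BRANCHES.get(node[0], 0)):
            child = [symbols.pop(0), []]
            node[1].append(child)
            queue.append(child)
    return root

print(decode_tree("XYZbaaba"))
# ['X', [['Y', [['a', []], ['a', []]]], ['Z', [['b', []], ['a', []]]], ['b', []]]]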
The size of the tail t is likewise dictated by the head size h and the number of branches of the attribute with the most branches, nmax, being given by the same equation as before, t = h (nmax − 1) + 1. For example, consider the decision tree below to decide whether to play outside: It can be linearly encoded as: 01234567 HOWbaaba where "H" represents the attribute Humidity, "O" the attribute Outlook, "W" represents Windy, and "a" and "b" the class labels "Yes" and "No" respectively. Note that the edges connecting the nodes are properties of the data, specifying the type and number of branches of each attribute, and therefore don't have to be encoded. The process of decision tree induction with gene expression programming starts, as usual, with an initial population of randomly created chromosomes. Then the chromosomes are expressed as decision trees and their fitness evaluated against a training dataset. According to fitness they are then selected to reproduce with modification. The genetic operators are exactly the same as those used in a conventional unigenic system, for example, mutation, inversion, transposition, and recombination. Decision trees with both nominal and numeric attributes are also easily induced with gene expression programming using the framework described above for dealing with random numerical constants. The chromosomal architecture includes an extra domain for encoding random numerical constants, which are used as thresholds for splitting the data at each branching node. For example, the gene below with a head size of 5 (the Dc starts at position 16): 012345678901234567890 WOTHabababbbabba46336 encodes the decision tree shown below: In this system, every node in the head, irrespective of its type (numeric attribute, nominal attribute, or terminal), has associated with it a random numerical constant, which for simplicity in the example above is represented by a numeral 0–9. These random numerical constants are encoded in the Dc domain and their expression follows a very simple scheme: from top to bottom and from left to right, the elements in Dc are assigned one-by-one to the elements in the decision tree. So, for the following array of RNCs: C = {62, 51, 68, 83, 86, 41, 43, 44, 9, 67} the decision tree above results in the tree shown in the figure, which can also be represented more colorfully as a conventional decision tree. Criticism GEP has been criticized for not being a major improvement over other genetic programming techniques. In many experiments, it did not perform better than existing methods. Software Commercial applications GeneXproTools GeneXproTools is a predictive analytics suite developed by Gepsoft. GeneXproTools modeling frameworks include logistic regression, classification, regression, time series prediction, and logic synthesis. GeneXproTools implements the basic gene expression algorithm and the GEP-RNC algorithm, both used in all the modeling frameworks of GeneXproTools. Open-source libraries GEP4J – GEP for Java Project Created by Jason Thomas, GEP4J is an open-source implementation of gene expression programming in Java. It implements different GEP algorithms, including evolving decision trees (with nominal, numeric, or mixed attributes) and automatically defined functions. GEP4J is hosted at Google Code. PyGEP – Gene Expression Programming for Python Created by Ryan O'Neil with the goal of creating a simple library suitable for the academic study of gene expression programming in Python, aiming for ease of use and rapid implementation.
It implements standard multigenic chromosomes and the genetic operators mutation, crossover, and transposition. PyGEP is hosted at Google Code. jGEP – Java GEP toolkit Created by Matthew Sottile to rapidly build Java prototype codes that use GEP, which can then be written in a language such as C or Fortran for real speed. jGEP is hosted at SourceForge. Further reading See also Symbolic Regression Artificial intelligence Decision trees Evolutionary algorithms Genetic algorithms Genetic programming Grammatical evolution Linear genetic programming GeneXproTools Machine learning Multi expression programming Neural networks References External links GEP home page, maintained by the inventor of gene expression programming. GeneXproTools, commercial GEP software. Evolutionary algorithms Evolutionary computation Genetic algorithms Genetic programming
10183913
https://en.wikipedia.org/wiki/Mercer%20Mayer%20bibliography
Mercer Mayer bibliography
This is a list of the works of Mercer Mayer. The following is a partial list of books that Mercer Mayer has written and/or illustrated. It also includes books and items that are related to Mercer Mayer and his creations (like coloring books, sticker books, lacing cards, toys, etc.). Little Critter related books Little Critter is an anthropomorphic character created by Mercer Mayer. Although it's not specified what species the Little Critter is, he resembles a small and furry rodent-like creature such as a porcupine, hamster, hedgehog, capybara, or guinea pig. Little Critter first appeared in the 1975 book Just for You. This book is sometimes mis-titled Just for Yu because of the childlike mistake on the front cover (see picture). The following books feature Little Critter. Little Critter main series Published in the Golden Books "Look-Look Books" series Individual books may also be available in special editions There's A Nightmare In My Closet! (1968) (2nd Anniversary In 1970) (21st Anniversary In 1989, Since The Mid-1990s) (Embedded 1997 Version In 1997) Just For You (1975) (Embedded 1987 Version In 1987) (Embedded 1989 Version In 1989) (With A Spider And The Grasshopper) (first hardcover printing has 5 more pages of story and artwork than all subsequent printings, including "I wanted to build a beautiful house just for you, but I hurt myself") Just Me and My Dad (1977) (Embedded 1992 Version In 1992) (With A Spider And The Grasshopper) The New Baby (1980) (Embedded 1983 Version In 1983) (With 1 Mouse) All by Myself (1983) (Embedded 2000 Version In 2000) (With 1 Mouse) I Was So Mad (1983) (Embedded 2009 Version In August 5, 2009) (With 1 Mouse) Just Go To Bed (1983) Just Grandma and Me (1983) Just Grandpa and Me (1983) Me Too! (1983) Merry Christmas Mom and Dad (1983) When I Get Bigger (1985) (also released as a mini-hardback book) Just Me and My Puppy (1985) (Embedded 1992 Version In 1992) (With A Black Scary Spider And The Green Silly Cricket) Just Me and My Babysitter (1986) Just Me and My Little Sister (1986) Just a Mess (1987) Baby Sister Says No (1987) (Embedded 2015 Version In September 1, 2015) (With A Black Scary Spider And The Green Grasshopper) Happy Easter, Little Critter (1988) I Just Forgot (1988) (With 1 Mouse + Black Scary Spider!!!) Just My Friend and Me (1988) Just a Daydream (1989) (With Blue Sky And White Clouds) (Since The Mids 1990's) Just Shopping with Mom (1989) (The original version had the mother asking Little Sister if she wanted a spanking --or "beating"-- instead. In the original 1989 version it was "spanking". Reprints --since the mid 1990's-- replaced the spanking/beating line --spanking/beating references-- with "time out".) Just Me and My Mom (1990) (With 1 Frog) Just Going to the Dentist (1990) (With 1 Frog + Black Scary Spider) What a Bad Black And White Version (1990) Just Me and My Little Brother (1991) Little Critter at Scout Camp (1991) What a Bad Dream (1992) Just Me and My Cousin (1992; with Gina Mayer) This is My Family (1992; with Gina Mayer) Little Critter's Joke Book (1993) Trick or Treat, Little Critter (1993; with Gina Mayer) A Very Special Critter (1993; with Gina Mayer) Just Me in the Tub (1994; with Gina Mayer) Just Lost! 
(1994; with Gina Mayer) Just a Bully (1999; with Gina Mayer) Just a New Neighbor (1999; with Gina Mayer) Just a Toy (2000; with Gina Mayer) Just a Piggy Bank (2001; with Gina Mayer) Just a Secret (2001; with Gina Mayer) Just a Snowy Vacation (2001; with Gina Mayer) Just Not Invited (2002; with Gina Mayer) Just a Baseball Game (2003; with Gina Mayer) Just Fishing with Grandma (2003; with Gina Mayer) Just a Little Homework (2004; with Gina Mayer) The new adventures Continuation of the main series with HarperFestival same dimensions, may contain some stickers, or other items. Bye-Bye, Mom and Dad (2004; with Gina Mayer) (with pull-out poster Family Tree) Good for Me and You (2004; with Gina Mayer) (with more than 20 stickers) Happy Halloween, Little Critter! (2004) (with pull back flaps) Just a School Project (2004; with Gina Mayer) (with more than 20 stickers) Just a Snowman (2004; with Gina Mayer) (with more than 20 stickers) Just Big Enough (2004; with Gina Mayer) (with Pull out growth chart) (this book can be found as an oversized hardback) Merry Christmas, Little Critter (2004; with Gina Mayer) (with pull back flaps) My Trip to the Hospital (2005; with Gina Mayer) (with 5 adhesive bandages that feature Little Critter) Happy Valentine's Day, Little Critter (2005; with Gina Mayer) (with pull back flaps) Just so Thankful (2006; with Gina Mayer) (with four thank you cards) It's Easter, Little Critter! (2007; with Gina Mayer) (with pull back flaps) Grandma, Grandpa, and Me (2007; with Gina Mayer) Happy Father's Day! (2007; with Gina Mayer) The Lost Dinosaur Bone (December 2007) Snowball Soup (an I Can Read book) (September 2007) (hardcover) and (paperback) It's Earth Day (February 2008) (originally announced under the title My Earth Day Surprise) The Best Teacher Ever (May 2008) Going to the Firehouse (an I Can Read book) (June 2008) (hardcover) (paperback) Just a Day at the Pond (July 2008) To the Rescue! (an I Can Read book) (September 2008) (hardcover) (paperback) This Is My Town (an I Can Read book) (December 2008) (hardcover) (paperback) Happy Mother's Day! 
(March 2008) First Day of School (June 2009) The Fall Festival (an I Can Read book) (July 2009) Going to the Sea Park (an I Can Read book) (September 2009) Just a Little Music (December 2009) Just a Little Sick (December 2009) Just Saving My Money (an I Can Read book) (June 2010) The Best Yard Sale (July 2010) Just Critters Who Care (an I Can Read book) (August 2010) Just a Little Luck (February 2011) A Green, Green Garden (an I Can Read book) (March 2011) Just Helping My Dad (an I Can Read book) (April, 2011) The Best Show and Share (June, 2011) Just a Little Too Little (February, 2012) What a Good Kitty (May 2012) We Are Moving (October, 2012) Just a Big Storm (March, 2013) Just One More Pet (May, 2013) Just Big Enough (June 4, 2013) It's True (Inspired Kids series - publisher Thomas Nelson) (Oct 8, 2013) You Go First (Inspired Kids series - publisher Thomas Nelson) (Oct 8, 2013) Just a Little Love (an I Can Read book) (November 26, 2013) Just a Kite (an I Can Read book) (March 4, 2014) Just My Lost Treasure (May 27, 2014) Being Thankful (Inspired Kids series - publisher Thomas Nelson) (July 29, 2014) We All Need Forgiveness (Inspired Kids series - publisher Thomas Nelson) (Jul 29, 2014) Just a Special Day (an I Can Read book) (October 14, 2014) Just Fishing with Grandma by Mercer Mayer and Gina Mayer (March 10, 2015) Scholastic series Portrait shaped in different sizes I'm Sorry (1995; with Gina Mayer) At the Beach With Dad (1998; with Gina Mayer) It's Mine (2000; with Gina Mayer) Special publications Just a Snowy Day (1983) "Golden Touch and Feel Book" (republished by HarperCollins) Little Critter In Search of the Beautiful Princess (1993) Green Frog Publishers (oversized hardcover book in the style of the Where's Waldo series) Little Critter's Camp Out: A Golden Sound Story (1994) Little Critter: Just a Pirate (a "Magic Touch Talking Book" by Hasbro, Incorporated) (July 1996) Little Critter: Just Going to the Moon (a "Magic Touch Talking Book" by Hasbro, Incorporated (July 1996) Super Critter To The Rescue: A Golden Sound Story (1997) Just a Bubble Bath (1997) Inchworm Press, "Scrub-A-Dub Bath Book" (10 pages) Just My Camera and Me: Photo Fun Package (1998) Inchworm Press, (comes with a camera, a photo album, and the book Just My Camera and Me) Just a Garden (1999) (was sold as a kit with four small plastic gardening tools and the book Just a Garden) Little Golden Books A numbered series. 
These were re-released by Scholastic and as a part of Mercer Mayer's Little Critter Book Club Just a Bad Day (1993; with Gina Mayer) Taking Care of Mom (1993; with Gina Mayer) Just a Little Different (1993; with Gina Mayer) Just Like Dad (1993; with Gina Mayer) Just Say Please (1993; with Gina Mayer) This is My Body (1993; with Gina Mayer) I'm Sorry (1993; with Gina Mayer) Just A Gum Wrapper (1993; by Gina and Mercer Mayer) Just Me and My Bicycle (1993; by Gina and Mercer Mayer) Just Too Little (1993; by Gina and Mercer Mayer) Just Leave Me Alone (1995; with Gina Mayer) The School Play (1995; by Gina and Mercer Mayer) The Loose Tooth (1995; by Gina and Mercer Mayer) Just an Airplane (1995; by Gina and Mercer Mayer) I Was So Sick (1995; with Gina Mayer) Little Sister (of Little Critter) Published as Golden Books "Little Look-Look Books" Little Sister's Birthday (1988) Just a Nap (1989) (Since The Mid-1990's) Just a Rainy Day (1990) When I Grow Up (1991) Just Camping Out (1991) The New Potty (1992; with Gina Mayer) Just a Thunderstorm (1993; with Gina Mayer) My Big Sister (1995; with Gina Mayer) The Magic Pumpkin (1997; with Gina Mayer) Little Critter Storybooks featuring the "Critter Kids" These were initially published by Scholastic publishing as accordion-style fold-out board books. Most were republished by Random House and Green Frog as regular hardcover and softcover books. Malcom's Race (1983) Possum Child Goes Shopping (1983) Little Sister's Bracelet (originally titled Too's Bracelet) (1983) , , Bun Bun's Birthday (originally titled SweetMeat's Birthday) (1983) , , Bat Child's Haunted House (1983) , Gator Cleans House (1983) , , Readers Published by Random House, McGraw-Hill Children's Publishing, and by School Specialty Publishing Little Critter Sleeps Over (Road To Reading adaption of Little Critter's Staying Overnight from 1988) (1999) My Trip to the Zoo (2001) (Level 1) Country Fair (2002) (Level 1) Show and Tell (2002) (Level 1) Beach Day (2001) (Level 1) Tiger's Birthday (2001) (Level 2) A Day at Camp (2001) (Level 2) The New Fire Truck (2001) (Level 2) Grandma's Garden (2001) (Level 2) Our Tree House (2001) (Level 3) Goodnight, Little Critter (2001) (Level 3) Class Trip (2001) (Level 3) New Kid in Town (2001) (Level 3) Helping Mom (2000) Little Critters' The Best Present (2000) Our Park (2000) Field Day (2000) Camping Out (2001) The Mixed-up Morning (2001) Our Friend Sam (2001) My Trip to the Farm (2001) No One Can Play (2001) Play Ball (2001) A Yummy Lunch (2001) Surprise! 
(2002) Harvest Time (2003) We Love You, Little Critter (2003) The Little Christmas Tree (2003) Christmas for Miss Kitty (2003) Play It Safe (2004) Skating Day (2004) Boardbooks Published by Little Simon (Simon & Schuster), Random House, Golden Books, GT Publishing and HarperFestival Little Critter's Play with Me (1982) Astronaut Critter (1986) Construction Critter (1986) Cowboy Critter (1986) Fireman Critter (1986) Police Critter (1986) Mail Critter (1987) Doctor Critter (1987) Sailor Critter (1987) Little Critter's Day (1990) Little Critter at Play (1990) Little Critter (Booktivity) Little Critter Colors (1992) Little Critter Numbers (1992) Little Critter Shapes (1992) Little Critter's ABC's (1993) Little Critter Cowboy (1996) (edited 10 page version of the 14 page Cowboy Critter) Little Critter Doctor (1996) (edited 10 page version of the 14 page Doctor Critter) Little Critter Astronaut (1996) (edited 10 page version of the 14 page Astronaut Critter) Little Critter Policeman (1996)(edited 10 page version of the 14 page Police Critter) Little Critter Construction (1996) (edited 10 page version of the 14 page Construction Critter, also released as a part of the Little Critter Construction Playset) Little Critter Sailor (1998) (edited 10 page version of the 14 page Sailor Critter) Little Critter All Grown Up! (1999) (Collection containing the edited versions of the 4 books: Doctor, Sailor, Cowboy, and Construction) Just a Dump Truck (2004) Just a Tugboat (2004) Little Critter: A Busy Day (box set of 4 mini-board books) (August, 2011) Lift-a-Flap Books Published by multiple publishing houses. Some were originally released as hardcovers and then later re-released as Chunky Flap Board Books (two ISBN numbers are listed when this is the case). Where's Kitty (1991) , Where is My Frog? (1991) , Where's My Sneaker? (1991) , Little Critter Hansel & Gretel: A Lift the Flap Book (1991) , Little Critter's Jack and the Beanstalk (1991) , Little Critter's Little Red Riding Hood (1991) , Just an Easter Egg (1998; written by Erica Farber and John Sansevere) Just a Magic Trick (1998; written by Erica Farber and John Sansevere) Activity books Little Critter: My Stories: Write and Draw Your Own Stories (1991) Little Critter Stand Ups to Color and Share (1992) (comes with 6 stand ups and stickers) Little Critter Favorite Things (1994) (a coloring book) Little Critter's Day at the Farm (with reusable stickers) (1994) (and ) Little Critter's Holiday Fun Sticker Book (1994) Little Critter Shapes & Colors Coloring Book Little Critter Dots and Mazes (Golden First Fun) Little Critter's Song and Activity Book (1996) Little Critter's Halloween: A Coloring and Activity Book (1997) (also came with Spooky Halloween Kit which included the book The Magic Pumpkin (Little Sister), a Flashlight, and a Trick-Or-Treat Bag). 
Little Critter's Christmas: A Coloring and Activity Book (1997) Little Critter's Backseat Busy Book (1999) Painting the Seasons with Little Critter (2003) HarperCollins, Fun at School with Little Critter (2004) Collections Little Critter's Bedtime Storybook (1987) (includes: The Fussy Princess, The Grumpy Old Rabbit, The Day the Wind Stopped Blowing, The Bear Who Wouldn't Share, and some bumper "Bedtime" segments) Two-minute Little Critter Stories: Eight favorite stories (1990) (Includes: Just A Mess, Just Me and My Babysitter, I Just Forgot, Just Me and My Puppy, I Was So Mad, Just My Friends and Me, When I Get Bigger, and Just Go to Bed) Thrills and Spills (1991) (Early Bird Series Big Books: 19.5 × 16.2) (Contains four stories: Just for You by Mercer Mayer, Jamberry by Bruce Degen, The Gingerbread Boy by Paul Galdone, and Baby Days) Just Me and My Family (1997) (A box set of four separate Golden Look Look books) Just Me and My Family: Six Story Books in One (1999) (contains: Just Me and My Mom, Just Me and My Dad, Just Me and My Little Brother, Just Grandpa and Me, Just Grandma and Me, and Just Me and My Puppy) Little Critter Read-It-Yourself Storybook: Six Funny Easy to Read Stories (2000) (contains: Little Critter's This Is My House, Little Critter's These Are My Pets, Little Critter's Little Sister's Birthday, Little Critter's This Is My School, Little Critter's This Is My Friend, and Little Critter's Staying Overnight). Growing Every Day (A Little Critter Collection) (2002) (Contains: Just Go to Bed, When I Get Bigger, Just a Mess, Just Going to the Dentist, Just Lost and Just Me in the Tub) Feelings and Manners (2002) (Contains: All by Myself, I was So Mad, Me Too!, I Just Forgot, I'm Sorry, and Just a Bully) Little Critter Storybook Collection (2005) (Contains 7 stories) Just a Little Critter Collection: 7 Books Inside (2005) (contains: Just For You, When I Get Bigger, I Was So Mad, All By Myself, Just Go To Bed, Just A Mess, and I Just Forgot) Little Critter: Just a Storybook Collection (2012) Little Critter: Phonics Fun (a My First I Can Read box set of 12 mini-paperbacks) 1. Just Critters who care 2. Just saving my money 3. Just helping my Dad 4. A green, green garden 5. Just a little sick 6. Going to the firehouse 7. What a good kitty 8. Snowball soup 9. To the rescue 10. Fall festival 11. Going to the sea park 12. 
This is my town Little Critter: Bedtime Stories (a box set featuring stickers, a poster, and 6 paperback books) Little Critter workbooks By Spectrum & Brighter Child (for Homeschool) Little Critter Math: Grade Pre K (2001) or 1577685792 Little Critter Math: Grade K (2001) or 1577688007 Little Critter Math: Grade 1 (2001) Little Critter Math: Grade 2 (2001) Little Critter Phonics: Grade Pre K (2002) Little Critter Phonics: Grade K (2002) Little Critter Phonics: Grade 1 (2002) Little Critter Phonics: Grade 2 (2002) Little Critter Reading: Grade Pre K (2002) Little Critter Reading: Grade K (2002) Little Critter Reading: Grade 1 (2002) Little Critter Reading: Grade 2 (2002) Little Critter Language Arts: Grade Pre K (2002) Little Critter Language Arts: Grade K (2002) Little Critter Language Arts: Grade 1 (2002) Little Critter Language Arts: Grade 2 (2002) Little Critter Beginning Writing: Grade Pre K (2002) Little Critter Beginning Writing: Grade K (2002) Little Critter Beginning Writing: Grade 1 (2002) Little Critter Beginning Writing: Grade 2 (2002) Little Critter Basic Concepts: Grade Pre K (2002) Little Critter Basic Concepts: Grade K (2002) Little Critter Basic Concepts: Grade 1 (2002) Little Critter Basic Concepts: Grade 2 (2002) Critters of the Night AKA Creepy Critters, all written by Erica Farber and J. R. Sansevere (illustrated by Mercer Mayer) Werewolves for Lunch (1995) No Howling in the House (1996) The Headless Gargoyle (1996) To Catch a Little Fish (1996) If You Dream a Dragon (1996) Purple Pickle Juice (1996) Zombies Don't Do Windows (1996) The Vampire Brides (1996) The Goblin's Birthday Party(1996) Old Howl Hall Big Lift-And-look Book (1996) Pirate Soup (Pictureback Shape Books) Night of the Walking Dead Part 1 (1997) Night of the Walking Dead Part 2 (1997) Love You to Pieces: (24 Spooky Punch-out Valentines) (1997) Critters of the Night Glow-In-The-Dark Book (1997) Chomp Chomp! (1998) Ooey Gooey (1998) Roast and Toast (1998) Midnight Snack (1999) Kiss of the Mermaid (1999) Mummy Pancakes (Tattoo Tales) (with over 20 tattoos) (1997) Zoom on My Broom (2001) Mercer Mayer's LC + the Critter Kids All written by Erica Farber and J. R. Sansevere (illustrated by Mercer Mayer) My Teacher Is a Vampire (1994) The Secret Code (1994) The Purple Kiss (1994) The Mummy's Curse (1994) Top Dog (1994) Surf's Up (1994) Pizza War (1994) The Cat's Meow (1994) Showdown at the Arcade (1994) The Ghost of Goose Island (1995) Mystery at Big Horn Ranch (1995) The E-Mail Mystery (1995) The Swamp Thing (1995) Backstage Pass (1995) The Alien (1995) The Prince (1995) The Haunted House (1995) Jaguar Paw (1995) Golden Eagle (1995) Octopus Island (1996) Blue Ribbon Mystery (1996) Circus of Ghouls (1996) Lil Shop of Magic (1996) Kiss of the Vampire (1996) Other Little Critter titles I am Hiding (1992) I am Helping (1992) I am Playing (1992) I am Sharing (1992) I Smell Christmas: A Nose Tickler (1997) Little Critter's These Are My Pets (1988) Little Critter's The Trip (1988) (Originally published as an ABC style book, and then as an edited story with fewer pages in 1997). 
Little Critter's The Picnic (1988) Little Critter's Staying Overnight (1988) Little Critter's This Is My Friend (1989) Little Critter's This Is My School (1990) Little Critter's Christmas Book (1989) Little Critter's Spooky Halloween Party (1999) Little Critter's The Night Before Christmas (1995) The Grumpy Old Rabbit: Little Critter's Bedtime Storybook (1987) (Taco Bell Promotional Book) The Bear Who Wouldn't Share: Little Critter's Bedtime Storybook (1987) ASIN B00072HVVC (published by Western Publishing) The Fussy Princess: Little Critter's Bedtime Storybook: (1989) Little Critter's Picture Dictionary (2001) Little Critter's Favorite Things (1994) I Didn't Know That (by Gina and Mercer Mayer) Mercer Mayer's Little Critter Lacing Cards (1992) (toy) Little Monster series Little Monster is an anthropomorphic character created by Mercer Mayer. He is a dinosaur-like dragon. Little Monster first appeared in the 1977 book Little Monster's Word Book, but many characters in the Little Monster series were first introduced in the 1975 book One Monster After Another and the 1976 book Professor Wormbog in Search for the Zipperump-A-Zoo. Books that feature the character Little Monster: Little Monster's Word Book (1977) Little Monster's Alphabet Book (1978) Little Monster's Counting Book (1978) Little Monster's Neighborhood (1978) Little Monster at School (1978) Little Monster at Home (1978) Little Monster at Work (1978) Little Monster's Bedtime Book (1978) Little Monster's Library (box set containing the first 6 Little Monster books and 6 punch-out paper monsters) (1978) Little Monster's You-Can-Make-It Book (1978) Little Monster's Mother Goose (1979) Little Monster's Scratch and Sniff Mystery (1980) Little Monster's Sports Fun Sticker Book (with reusable stickers) (1985) Little Monster's Moving Day Sticker Book (with reusable stickers) (1995) Little Monster Private Eye: The Smelly Mystery (1998) (re-release edited version of Scratch and Sniff Mystery without Scratch and Sniff Spots, all dialogue balloons removed, major text changes, and 5 pages shorter) (also released as part of a Detective Kit gift set ) Little Monster Private Eye: The Lost Wish (by Erica Farber and J. R. Sansevere) (1998) Little Monster Private Eye: How The Zebra Lost His Stripes (by Erica Farber and J. R. Sansevere)(1998) (also released with the Little Monster Private Eye Goes on Safari gift set ) Mercer Mayer's Little Monster Private Eye: The Mummy Mystery (by Erica Farber) (also released with The Treasure of the Nile gift set ) Mercer Mayer's Little Monster Private Eye: 101 Penguins (by Erica Farber and J. R. Sansevere) (1998) (also came in a 101 Penguins A Polar Adventure gift set with a Snow Globe that has two penguins in it) Mercer Mayer's Little Monster Private Eye: The Bubble Gum Pirates (by Erica Farber and J. R. Sansevere) (1998) (also came as part of a pirate themed gift set featuring a sword and other items) Mercer Mayer's Little Monster Treasury Book (contains 11 previously released Little Monster stories, some edited) Professor Wormbog series Creatures in the Professor Wormbog series tend to also appear in the Little Monster series of books. 
Professor Wormbog in Search for the Zipperump-A-Zoo (1976) Professor Wormbog's Gloomy Kerploppus: A Book of Great Smells (and a Heart-Warming Story, Besides) (1977) Professor Wormbog's Cut It, Glue It, Tape It, Do It (1980) Professor Wormbog's Crazy Cut-Ups (1980) Other Little Monster related books Books that feature characters that also appear in the Little Monster and Professor Wormbog series. One Monster After Another (1974) How the Trollusk Got His Hat (1979) Mercer's Monsters (a "Golden Book of Picture Postcards" with verses by Seymour Reit) (1977) Boy, Dog, Frog series A series of 6 wordless books. These have been re-released in many formats, but they are usually smaller in size. A Boy, a Dog and a Frog (1967) Frog, Where Are You? (1969) A Boy, a Dog, a Frog, and a Friend (1971) Frog on His Own (1973) Frog Goes to Dinner (1974) One Frog Too Many (1975) Four Frogs In a Box (1976) (collection of the first four "Frog" mini-books in a box set) Tink Tonk series AKA A Tiny Tink! Tonk! Tale series published by Bantam Books. Also see the Mercer Mayer Computer Software section for the video game titles related to this series that were developed by Mercer Mayer. Tinka Bakes a Cake (1984) Tink Goes Fishing (1984) Tuk Takes a Trip (1984) Tonk Gives a Magic Show (1985) Teep and Beep Go to Sleep (1985) Zoomer Builds a Racing Car (1985) "There's a..." series There's a Nightmare in My Closet (AKA There's a Nightmare in my Cupboard - Australia) (1968) There's an Alligator Under My Bed (1987) There's Something in My Attic (AKA There's Something Spooky in My Attic) (1988) There's Something There: Three Bedtime Classics (1998) (Re-prints Nightmare, Alligator, and Attic) There Are Monsters Everywhere (2005) One word series A series of virtually wordless books featuring a male and a female anthropomorphic hippopotamus or elephant and the word that is in the title. Hiccup (1976) Ah-choo (1976) Oops (1977) Liverwurst series Both books in this series are written by Mercer Mayer, but illustrated by Steven Kellogg: Appelard and Liverwurst (1978) Liverwurst is Missing (1981) Fairy tale and classic story re-telling Beauty and the Beast (with Marianna Mayer) (1978) East of the Sun & West of the Moon (1980) Favorite Tales from Grimm (Retold by Nancy Garden) (1982) The Sleeping Beauty (1984) A Christmas Carol (1986) (retold with mice, originally by Charles Dickens) The Pied Piper of Hamelin (1987) Moral Tales series Wordless flip-books featuring two stories Two Moral Tales (1974) featuring: "Bird's New Hat" "Bear's New Clothes" Two More Moral Tales (1974) featuring: "Sly Fox's Folly" "Just a Pig at Heart" Other Mercer Mayer books Terrible Troll (1968) (re-released as The Bravest Knight in May, 2007 with ) If I Had (1968) (re-released as If I Had a Gorilla) I Am a Hunter (1969) A Special Trick (1970) Mine! (with Marianna Mayer) (1970) Me and My Flying Machine (1971) The Queen Always Wanted to Dance (1971) A Silly Story (1972) Bubble Bubble (1973) Mrs. Beggs and the Wizard (re-released as The Wizard Comes to Town) (1973) Walk, Robot, Walk (1974) You're the Scaredy-Cat (1974) What Do You Do with a Kangaroo? 
(1974) The Great Cat Chase: A Wordless Book (1975) (originally released with black and white illustrations, it was re-released as just The Great Cat Chase in the 1990s with added words and in color) Liza Lou and the Yeller Belly Swamp (1976) Herbert the Timid Dragon (1980) Whinnie the Lovesick Dragon (illustrated by Diane Dawson Hearn) (1986) Mercer Mayer's a Monster Followed Me to School (1991) Rosie's Mouse (1992) Shibumi and the Kitemaker (1999) The Rocking Horse Angel (2000) The Little Drummer Mouse (2006) (Mercer Mayer also narrates the audio version, and he wrote the music) Octopus Soup (a wordless book) (2011) Too Many Dinosaurs (2011) Illustrations for other authors' books The Master and Margarita - by Mikhail Bulgakov (1967 English edition by Harper & Row) (features a winking cat holding a gun on the front cover) Logan's Run - by William F. Nolan and George Clayton Johnson (Dial Press, 1967 first printing hardcover) Outside My Window - by Liesel Moak Skorpen (1968) (re-issued 2004) The Boy Who Made A Million - by Sidney Offit (1968) Golden Butter - by Sheila LaFarge (1969) Boy Was I Mad - by Kathryn Hitte (1969) The Mousechildren and the Famous Collector - by Warren Fine (1970) Jack Tar - by Jean Russell Larson (1970) The Bird of Time - by Jane Yolen (1971) Altogether, One At a Time - by E.L. Konigsburg (1971) Good-bye Kitchen - by Mildred Kantrowitz (1972) Kim Ann and the Yellow Machine - by Candida Palmer (1972) While the Horses Galloped to London - by Mabel Watts (1973) The Greenhouse - by Antonia Lamb (1974 paperback version) The Figure In the Shadows - by John Bellairs (1975) (Re-released in 2004 as A John Bellairs Mystery Featuring Lewis Barnavelt: The Figure in the Shadows) A Poison Tree and Other Poems - written by various poets, poems selected by and illustrated by Mercer Mayer (1977) A Book of Unicorns - by Welleran Poltarnees (1978) (various illustrators including a Mercer Mayer's Unicorn illustration from Amanda Dreaming) The Dictopedia: A - L - by Pleasant T. Rowland (1979) (an Addison-Wesley Reading Program anthology) - features "The Case of the Gingerbread Ghost" (AKA "The Gingerbread Ghost") by Shirleyann Costigan a seven-page story with five unique full color illustrations by Mercer Mayer The Dictopedia: M - Z - by Pleasant T. Rowland (an Addison-Wesley Reading Program anthology) - features one blue, black, and white illustration by Mercer Mayer for James A. Emanuel's "A Small Discovery" poem (the illustration also appeared in Mercer Mayer's A Poison Tree and Other Poems) (1979) Illustrations for George Mendoza's books Books written by George Mendoza that Mercer Mayer illustrated: The Crack in the Wall & Other Terribly Weird Tales (1968) The Gillygoofang (1968) Illustrations for Jan Wahl's books Book by Jan Wahl that Mercer Mayer illustrated: Margaret's Birthday (1971) Grandmother Told Me (1972) Illustrations for Jay Williams' books Book by Jay Williams that Mercer Mayer illustrated: Everyone Knows What a Dragon Looks Like (1976) The Reward Worth Having (1977) Illustrations for John D. Fitzgerald's Great Brain series Books from John D. Fitzgerald's The Great Brain series that were originally illustrated by Mercer Mayer. Some later releases had new front covers by a different illustrator, but were still illustrated by Mercer Mayer on the inside. The Great Brain series by John D. Fitzgerald The Great Brain (1967) (by John D. Fitzgerald) More Adventures of the Great Brain (1969) (by John D. Fitzgerald) Me and My Little Brain (1971) (by John D. 
Fitzgerald) The Great Brain at the Academy (1972) (by John D. Fitzgerald) The Great Brain Reforms (1973) (by John D. Fitzgerald) The Return of the Great Brain (1974) (by John D. Fitzgerald) The Great Brain Does it Again (1975) (by John D. Fitzgerald) Illustrations for Barbara Wersba's books Books by Barbara Wersba that Mercer Mayer illustrated: Let Me Fall Before I Fly (1971) Amanda Dreaming (1973) Magazine appearances Harper's Magazine, April 1967. Vol. 284. No. 1403. - features, The War with the Birds by Philip Wagner with drawings by Mercer Mayer) Harper's Magazine, June 1967. Vol. 234. No. 1405. - features, The Riddle of the Dangerous Bean: A Scientific Detective Story by Judith R. Marcus and Gerald Cohen with a drawing by Mercer Mayer) Harper's Magazine, August 1967. Vol. 235. No. 1407. - features, What Keeps Nixon Running by Stephen Hess and David S. Broder with a drawing by Mercer Mayer) Children's Digest, December 1968 - front cover illustration Penthouse: The International Magazine for Men, Vol. 5, #10, June 1974. Illustration for “Okay Tribesmen, Now Hear This” by Henry Morgan (page 98) Penthouse: The International Magazine for Men, Vol. 6, #5, January 1975. Illustration for “Good Eats” by Henry Morgan (page 80) Penthouse: The International Magazine for Men, Vol. 6, #6, February 1975. Illustration for “Another Damn Year is Under Way” by Henry Morgan (page 82) Penthouse: The International Magazine for Men, Vol. 6, #7, March 1975. Illustration for “The Irish” by Henry Morgan (page 80) Cricket: The Magazine for Children, Vol. 4 No. 7 (March, 1977) - reprints Hiccup. Cricket: The Magazine for Children, Vol. 4 No. 9 (May, 1977) - reprints Ah-Choo. Mercer Mayer recordings (audio books and other) Audio books What do You do with a Kangaroo (released on cassette, CD, and download) - read by Jane Casserly There's a Nightmare in My Closet (released as an Audible.com download) - read by Mo Godin Little Critter Most of these are from Disneyland Records and Little Golden Books (usually labeled as a "Little Golden Book & Cassette" or "Little Golden Book & Record"). These are books that came with a word for word audio recording on record (speed = 33, size = 7") or cassette tape of someone reading the story. They usually included music, sound effects, and original songs too. Sometimes the cassettes were labeled "Record your own story" on the B-side (with the original recording on the A-side). "SEE the pictures HEAR the record READ the book", was a catch phrase that was written on most of these books. Just For You (1984) ASIN B000JFJCIM Just Me and My Dad Merry Christmas Mom & Dad (Includes the original songs "Merry Christmas Mom and Dad" and "Dear Santa") Series # 226 (1983) ISBN per Amazon was 9-9963-6247-7 (out of print) Just Go To Bed (1986) Just Grandpa and Me (1986) Just Grandma and Me (1986) ISBN per Amazon was 9-9988-8357-1 Just Me and My Babysitter (1986) When I Get Bigger (1986) Little Critter's The Night Before Christmas Mercer Mayer recordings Audio CDs that are available on Mercer Mayer's official site. 
Mercer Mayer Alligator Under My Bed and Other Story Songs CD (featuring the songs: "What Do You Do With A Kangaroo," "Critters Of The Night (Theme)," "Alligator Under My Bed," "Let's Go Camping," "Me And My Mom," "If I Had A Gorilla," "Big Paw's Coming," "The World Goes Around") The Little Drummer Mouse A Christmas Story CD (featuring the story read by Mercer Mayer and the songs: "Three Kings From Far Away," "I Wish," "The New Baby King," "Me And My Drum," "The Blessing," "You Must Be From The City") Other known Mercer Mayer songs These songs are either mentioned on the official Mercer Mayer website or featured on it, but are not currently available otherwise. "Sunshine" (a.k.a. "Sunshine Makes You Sneeze") "My Momma Said" (a.k.a. "Clean Up Your Room") "Clean up My Doggie" (a.k.a. "My Doggie Lies in a Mud Puddle") Mercer Mayer computer software CD-Roms The CD-Roms usually included the original story and additional material (animations, audio) for fun and educational purposes (they were produced in association with Mercer Mayer's company Big Tuna New Media, LLC). The Tink! Tonk! series of games were educational and action video game style. Mercer Mayer's Just Grandma and Me (part of the Living Books series) (1992) Mercer Mayer's Little Monster at School (part of the Living Books series) CD-Rom (1994) Mercer Mayer's Just Me and My Dad CD-Rom (1996) Mercer Mayer's Just Me and My Mom CD-Rom (1996) The Smelly Mystery Starring Mercer Mayer's Little Monster, Private Eye CD-Rom (1997) The Mummy Mystery Starring Mercer Mayer's Little Monster, Private Eye CD-Rom (2001) Mercer Mayer's Little Critter Just Me and My Grandpa CD-Rom (1998) Mercer Mayer's Little Critter and the Great Race CD-Rom (2001) Tink! Tonk! Tink's Adventure Atari / Commodore 64 /Apple II (C64) (Sprout Software) Tink! Tonk! Tonk in the Land of Buddy Bots Atari / C64 /Apple II (Sprout Software) Tink! Tonk! Tinka's Mazes Atari / C64 / Apple II (Sprout Software) Tink! Tonk! Tuk Goes to Town Atari / C64 / Apple II (Sprout Software) Tink! Tonk! Tink's Subtraction Fair Atari / C64 / Apple II (Sprout Software) Tink! Tonk! Castle Clobber Atari / C64 / Apple II (Sprout Software) Forbidden Castle IBM PC / Apple II (Mindscape) (1985) Reception Computer Gaming World in 1993 stated that Just Grandma and Mes "quality ... is unmatched and goes with the highest recommendation". Announced but unreleased books Critter Kids This is a list of Critter Kids books with dates originally scheduled for late 2006 but they have yet to be released: Danger Down Under (by Erica Farber and Mercer Mayer) (Date unknown) The Return of the Dinosaurs (by Erica Farber and Mercer Mayer) (Date unknown) Canyon River Camp (by Erica Farber and Mercer Mayer) (Date unknown) The Secrets of Snowy Mountain (by Erica Farber and Mercer Mayer) (Date unknown) The Critter Kids Talent Show (by Erica Farber and Mercer Mayer) (Date unknown) The Mystery of the Missing Vase (by Erica Farber and Mercer Mayer) (Date unknown) References External links Little Critter, Mercer Mayer's official website Bibliographies of American writers Children's literature bibliographies
26920127
https://en.wikipedia.org/wiki/4715%20Medesicaste
4715 Medesicaste
4715 Medesicaste (prov. designation: ) is a dark Jupiter trojan from the Trojan camp, approximately in diameter. It was discovered on 9 October 1989, by Japanese astronomer Yoshiaki Oshima at the Gekko Observatory east of Shizuoka, Japan. The assumed C-type asteroid belongs to the 70 largest Jupiter trojans. It is possibly elongated in shape and has a rotation period of 8.8 hours. It was named from Greek mythology after Medesicaste, an illegitimate daughter of Trojan King Priam. Orbit and classification Medesicaste is orbiting in the trailing Trojan camp, at Jupiter's Lagrangian point, 60° behind the gas giant's orbit in a 1:1 resonance. It is also a non-family asteroid of the Jovian background population. It orbits the Sun at a distance of 4.9–5.4 AU once every 11 years and 7 months (4,218 days; semi-major axis of 5.11 AU). Its orbit has an eccentricity of 0.05 and an inclination of 19° with respect to the ecliptic. The body's observation arc begins with a precovery taken at Palomar Observatory in October 1954, or 35 years prior to its official discovery observation at Gekko. Numbering and naming This minor planet was numbered by the Minor Planet Center on 30 January 1991 (). On 14 May 2021, the object was named by the Working Group Small Body Nomenclature (WGSBN), after Medesicaste from Greek mythology, who was an illegitimate daughter of King Priam and wife of Imbrius. Before Medesicaste was named, it belonged to a small group of only 8 unnamed minor planets with a designated number smaller than 5000. (All of them are Jupiter trojans or near-Earth asteroids.) Since then, several have already been named: 3708 Socus – named in May 2021 4035 Thestor – named in May 2021 4489 Dracius – named in May 2021 4715 Medesicaste – named in May 2021 Physical characteristics Medesicaste is an assumed C-type asteroid. It has a V–I color index of 0.85, slightly below that seen for most Jovian D-type asteroids (also see table below). Rotation period A rotational lightcurve of Medesicaste was first obtained by Stefano Mottola in November 1991, using the Loiano 1.52-meter telescope at Bologna Observatory in Italy. Lightcurve analysis gave a rotation period of hours with a brightness amplitude of 0.46 magnitude (). In September 2012, it was also observed in the R-band by astronomers at the Palomar Transient Factory in California (). Since January 2015, several photometric observations by Robert Stephens at the Center for Solar System Studies in California confirmed Mottola's period determination from 1991, and measured a brightness amplitude of 0.50–0.53, which is indicative of a non-spherical, possibly elongated shape (). Diameter and albedo According to the surveys carried out by the Japanese Akari satellite and the NEOWISE mission of NASA's Wide-field Infrared Survey Explorer, Medesicaste measures between 62.10 and 65.93 kilometers in diameter and its surface has an albedo between 0.060 and 0.079. It has not been observed by the Supplemental IRAS Minor Planet Survey. The Collaborative Asteroid Lightcurve Link assumes a standard albedo for a carbonaceous asteroid of 0.057 and calculates a diameter of 63.91 kilometers based on an absolute magnitude of 9.7. Notes References External links Lightcurve Database Query (LCDB), at www.minorplanet.info Discovery Circumstances: Numbered Minor Planets (1)-(5000) – Minor Planet Center 004715 Discoveries by Yoshiaki Oshima Minor planets named from Greek mythology Named minor planets 19891009
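The diameter figure quoted above from the Collaborative Asteroid Lightcurve Link can be reproduced with the standard conversion between absolute magnitude, geometric albedo and diameter, D = 1329 / sqrt(p) * 10^(-H/5). The short Python sketch below uses only the values given in this article (H = 9.7, assumed albedo 0.057, lightcurve amplitude 0.50 mag); the amplitude-to-elongation bound is the conventional approximation for an ellipsoid seen roughly equator-on, not anything specific to this object's published analyses.

```python
import math

def diameter_km(abs_magnitude: float, albedo: float) -> float:
    """Standard conversion from absolute magnitude H and geometric albedo p_V
    to an effective diameter in kilometres: D = 1329 / sqrt(p_V) * 10**(-H/5)."""
    return 1329.0 / math.sqrt(albedo) * 10 ** (-abs_magnitude / 5.0)

def min_axis_ratio(amplitude_mag: float) -> float:
    """Commonly used lower bound on the equatorial axis ratio a/b implied by a
    lightcurve amplitude (in magnitudes), assuming an equatorial viewing geometry."""
    return 10 ** (0.4 * amplitude_mag)

# Values quoted in the article: H = 9.7, assumed carbonaceous albedo 0.057.
print(round(diameter_km(9.7, 0.057), 2))   # about 63.9 km, matching the CALL figure
# A brightness amplitude of 0.50 mag suggests an elongated shape:
print(round(min_axis_ratio(0.50), 2))      # axis ratio of at least ~1.58
```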
67589287
https://en.wikipedia.org/wiki/Epigeus
Epigeus
In Greek mythology, Epigeus or Epeigeus (Ancient Greek: Ἐπειγεύς Epeigeus) was one of the best soldiers in the Myrmidon army against Troy. He was the son of Agacles. Mythology Before the Trojan War, Epeigeus was a king of Budeum but he killed one of his kin and fled to Peleus and Thetis. They sent Epigeus to accompany their son Achilles to Troy. During the siege, he was eventually slain by the hero Hector.
And first the Trojans drave back the bright-eyed Achaeans, for smitten was a man in no wise the worst among the Myrmidons, even the son of great-souled Agacles, goodly Epeigeus, that was king in well-peopled Budeum of old, but when he had slain a goodly man of his kin, to Peleus he came as a suppliant, and to silver-footed Thetis; and they sent him to follow with Achilles, breaker of the ranks of men, to Ilios, famed for its horses, that he might fight with the Trojans. Him, as he was laying hold of the corpse, glorious Hector smote upon the head with a stone; and his head was wholly cloven asunder within the heavy helmet, and he fell headlong upon the corpse, and death, that slayeth the spirit, was shed about him.
Legacy A Trojan asteroid, 5259 Epeigeus, has been named after him. Notes References Homer, The Iliad with an English Translation by A.T. Murray, Ph.D. in two volumes. Cambridge, MA., Harvard University Press; London, William Heinemann, Ltd. 1924. Online version at the Perseus Digital Library. Homer, Homeri Opera in five volumes. Oxford, Oxford University Press. 1920. Greek text available at the Perseus Digital Library. Achaeans (Homer) Characters in the Iliad Characters in Greek mythology
1256194
https://en.wikipedia.org/wiki/V-Model
V-Model
The V-model is a graphical representation of a systems development lifecycle. It is used to produce rigorous development lifecycle models and project management models. The V-model falls into three broad categories: the German V-Modell, a general testing model, and the US government standard. The V-model summarizes the main steps to be taken in conjunction with the corresponding deliverables within a computerized system validation framework, or project life cycle development. It describes the activities to be performed and the results that have to be produced during product development. The left side of the "V" represents the decomposition of requirements, and creation of system specifications. The right side of the "V" represents integration of parts and their validation. However, requirements need to be validated first against the higher-level requirements or user needs. Furthermore, there is also such a thing as validation of system models. This can partially be done on the left side also. To claim that validation only occurs at the right side may not be correct. The easiest way is to say that verification is always against the requirements (technical terms) and validation always against the real world or the user needs. The aerospace standard RTCA DO-178B states that requirements are validated—confirmed to be true—and the end product is verified to ensure it satisfies those requirements. Validation can be expressed with the query "are you building the right thing?" and verification with "are you building it right?" Types There are three general types of V-model. V-Modell The German V-Model "V-Modell" is the official project management method of the German government. It is roughly equivalent to PRINCE2, but more directly relevant to software development. The key attribute of using a "V" representation was to require proof that the products from the left-side of the V were acceptable to the appropriate test and integration organization implementing the right-side of the V. General testing Throughout the testing community worldwide, the V-model is widely seen as a vaguer illustrative depiction of the software development process as described in the International Software Testing Qualifications Board Foundation Syllabus for software testers. There is no single definition of this model, which is more directly covered in the alternative article on the V-Model (software development). US government standard The US also has a government standard V-model which, like its German counterpart, dates back about 20 years. Its scope is a narrower systems development lifecycle model, but far more detailed and more rigorous than most UK practitioners and testers would understand by the V-model. Validation vs. verification It is sometimes said that validation can be expressed by the query "Are you building the right thing?" and verification by "Are you building it right?" In practice, the usage of these terms varies. The PMBOK guide, also adopted by the IEEE as a standard (jointly maintained by INCOSE, the Systems Engineering Research Council (SERC), and the IEEE Computer Society), defines them as follows in its 4th edition: "Validation. The assurance that a product, service, or system meets the needs of the customer and other identified stakeholders. It often involves acceptance and suitability with external customers. Contrast with verification." "Verification. The evaluation of whether or not a product, service, or system complies with a regulation, requirement, specification, or imposed condition.
It is often an internal process. Contrast with validation." Objectives The V-model provides guidance for the planning and realization of projects. The following objectives are intended to be achieved by a project execution: Minimization of project risks: The V-model improves project transparency and project control by specifying standardized approaches and describing the corresponding results and responsible roles. It permits an early recognition of planning deviations and risks and improves process management, thus reducing the project risk. Improvement and guarantee of quality: As a standardized process model, the V-Model ensures that the results to be provided are complete and have the desired quality. Defined interim results can be checked at an early stage. Uniform product contents will improve readability, understandability and verifiability. Reduction of total cost over the entire project and system life cycle: The effort for the development, production, operation and maintenance of a system can be calculated, estimated and controlled in a transparent manner by applying a standardized process model. The results obtained are uniform and easily retraced. This reduces the acquirer's dependency on the supplier and the effort for subsequent activities and projects. Improvement of communication between all stakeholders: The standardized and uniform description of all relevant elements and terms is the basis for the mutual understanding between all stakeholders. Thus, the frictional loss between user, acquirer, supplier and developer is reduced. V-model topics Systems engineering and verification The systems engineering process (SEP) provides a path for improving the cost-effectiveness of complex systems as experienced by the system owner over the entire life of the system, from conception to retirement. It involves early and comprehensive identification of goals, a concept of operations that describes user needs and the operating environment, thorough and testable system requirements, detailed design, implementation, rigorous acceptance testing of the implemented system to ensure it meets the stated requirements (system verification), measuring its effectiveness in addressing goals (system validation), on-going operation and maintenance, system upgrades over time, and eventual retirement. The process emphasizes requirements-driven design and testing. All design elements and acceptance tests must be traceable to one or more system requirements and every requirement must be addressed by at least one design element and acceptance test. Such rigor ensures nothing is done unnecessarily and everything that is necessary is accomplished. The two streams Specification stream The specification stream mainly consists of: User requirement specifications Functional requirement specifications Design specifications Testing stream The testing stream generally consists of: Installation qualification (IQ) Operational qualification (OQ) Performance qualification (PQ) The development stream can consist (depending on the system type and the development scope) of customization, configuration or coding. Applications The V-model is used to regulate the software development process within the German federal administration. Nowadays it is still the standard for German federal administration and defense projects, as well as software developers within the region. 
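The bidirectional traceability described above (every design element and acceptance test must trace to at least one system requirement, and every requirement must be addressed by at least one design element and acceptance test) amounts to a coverage check in both directions. A minimal Python sketch, with requirement and test identifiers invented purely for illustration:

```python
# Illustrative traceability check between requirements and acceptance tests.
# All identifiers below are invented for the example.
requirements = {"REQ-1", "REQ-2", "REQ-3"}

# Each acceptance test declares the requirement(s) it verifies.
tests = {
    "TEST-A": {"REQ-1"},
    "TEST-B": {"REQ-2", "REQ-3"},
    "TEST-C": set(),          # a test that traces to nothing
}

# Every test must trace to at least one known requirement ...
untraced_tests = [name for name, reqs in tests.items() if not reqs & requirements]
# ... and every requirement must be addressed by at least one test.
covered = set().union(*tests.values())
uncovered_requirements = requirements - covered

print("Tests with no requirement:", untraced_tests)          # ['TEST-C']
print("Requirements with no test:", uncovered_requirements)  # empty set here
```

In practice such checks are usually run from a requirements-management tool rather than written by hand, but the underlying logic is the same.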
The concept of the V-model was developed simultaneously, but independently, in Germany and in the United States in the late 1980s: The German V-model was originally developed by IABG in Ottobrunn, near Munich, in cooperation with the Federal Office for Defense Technology and Procurement in Koblenz, for the Federal Ministry of Defense. It was taken over by the Federal Ministry of the Interior for the civilian public authorities domain in summer 1992. The US V-model, as documented in the 1991 proceedings for the National Council on Systems Engineering (NCOSE; now INCOSE as of 1995), was developed for satellite systems involving hardware, software, and human interaction. The V-model first appeared at Hughes Aircraft circa 1982 as part of the pre-proposal effort for the FAA Advanced Automation System (AAS) program. It eventually formed the test strategy for the Hughes AAS Design Competition Phase (DCP) proposal. It was created to show the test and integration approach which was driven by new challenges to surface latent defects in the software. The need for this new level of latent defect detection was driven by the goal to start automating the thinking and planning processes of the air traffic controller as envisioned by the automated enroute air traffic control (AERA) program. The reason the V is so powerful comes from the Hughes culture of coupling all text and analysis to multi-dimensional images. It was the foundation of Sequential Thematic Organization of Publications (STOP) created by Hughes in 1963 and used until Hughes was divested by the Howard Hughes Medical Institute in 1985. The US Department of Defense puts the systems engineering process interactions into a V-model relationship. It has now found widespread application in commercial as well as defense programs. Its primary use is in project management and throughout the project lifecycle. One fundamental characteristic of the US V-model is that time and maturity move from left to right and one cannot move back in time. All iteration is along a vertical line to higher or lower levels in the system hierarchy, as shown in the figure. This has proven to be an important aspect of the model. The expansion of the model to a dual-Vee concept is treated in reference. As the V-model is publicly available, many companies also use it. In project management it is a method comparable to PRINCE2 and describes methods for project management as well as methods for system development. The V-Model, while rigid in process, can be very flexible in application, especially as it pertains to scope outside the normal parameters of the systems development lifecycle. Advantages These are the advantages the V-model offers over other systems development models: The users of the V-model participate in the development and maintenance of the V-model. A change control board publicly maintains the V-Model. The change control board meets anywhere from every day to weekly and processes all change requests received during system development and test. The V-model provides concrete assistance on how to implement an activity and its work steps, defining explicitly the events needed to complete a work step: each activity schema contains instructions, recommendations and detailed explanations of the activity. Limits The following aspects are not covered by the V-model; they must be regulated in addition, or the V-Model must be adapted accordingly: The placing of contracts for services is not regulated.
The organization and execution of operation, maintenance, repair and disposal of the system are not covered by the V-model. However, planning and preparation of a concept for these tasks are regulated in the V-model. The V-model addresses software development within a project rather than a whole organization. See also Engineering Information Management (EIM) IBM Rational Unified Process (as a supporting software process) Waterfall model of software development Systems architecture Systems design Theory U References External links Software project management Systems engineering
33247593
https://en.wikipedia.org/wiki/List%20of%20open-source%20mobile%20phones
List of open-source mobile phones
This is a list of mobile phones with open-source operating systems. Scope of the list Cellular modem and other firmware Some hardware components used in phones require drivers (or firmware) to run. For many components, only proprietary drivers are available (open source phones usually seek components with open drivers.) If firmware is not updatable and does not have control over any other part of the phone, it might be considered equivalent to part of the hardware. However, these conditions do not hold for cellular modems. , all available mobile phones have a proprietary baseband chip (GSM module, cellular modem), except for the Necuno, which has no such chip and communicates by peer-to-peer VOIP. The modem is usually integrated with the system-on-a-chip and the memory. This presents security concerns; baseband attacks can read and alter data on the phone remotely. The Librem 5 mobile segregates the modem from the system and memory, making it a separate module, a configuration rare in modern cellphones. There is an open-source baseband project, OsmocomBB. Operating system: middleware and user interface Generally, the phones included on this list contain copyleft software other than the Linux kernel, and minimal closed-source component drivers (see section above). Android-based devices do not appear on this list because of the heavy use of proprietary components, particularly drivers and applications. There are numerous versions of Android which seek to replace the proprietary components, such as LineageOS (successor to the now-defunct Cyanogenmod) and Replicant, that can be installed on a large number of phones after-market. Phones natively running these are included. WebOS was initially available only under a proprietary license but the source code was later released under a free permissive license by HP. Open WebOS will not run on all WebOS devices. Firefox OS was released under a permissive MIT license but its KaiOS successor is proprietary; the former is included. Maemo (mixed permissive and proprietary licenses) spawned Maemo Leste (permissive and protective) and MeeGo (permissive); MeeGo split into Tizen (proprietary) and Mer middleware (see diagram). All but Tizen are included. Sailfish OS is a proprietary user interface atop the Mer middleware; it is thus not included. Qt Extended had proprietary components and is not included, but its community fork QTMoko/OpenMoko is. Note that it is often possible to install a wide variety of open-source operating systems on any open-source phone; the higher-level software is designed to be largely interchangeable and independent of the hardware. List Distributions for existing phones postmarketOS, Ubports, and KDE Neon are open-source distributions running on existing smartphones originally running Android. Maemo Leste is available for Nokia N900 and Motorola Droid 4. There exists a database listing which older phones will run which open-source operating systems. Custom-made phones It is possible to home-build a phone from partially open hardware and software. The Arduinophone (touchscreen) and the MIT DIY Cellphone (segmented display) both use the Arduino open-hardware single-board computer, with added components. Circuitmess Ringo (previously MakerPhone) is another DIY Arduino phone with open source firmware and available schematics, focusing on education. The PiPhone and ZeroPhone are similar, but based on the Raspberry Pi. 
The main components to make an open mobile phone are: Back cover Touch screen Battery Logic board See also Comparison of open-source mobile phones (features) Mobile operating system (categorized by license) postmarketOS Greenphone Mobile device (mobile platform) OsmocomBB Blackphone Fairphone References Mobile Linux Mobile phone standards Lists of computer hardware Open-source mobile phones Open source
56221905
https://en.wikipedia.org/wiki/BlueBorne%20%28security%20vulnerability%29
BlueBorne (security vulnerability)
BlueBorne is a type of security vulnerability affecting Bluetooth implementations in Android, iOS, Linux and Windows. It affects many electronic devices such as laptops, smart cars, smartphones and wearable gadgets. One example is . The vulnerabilities were first reported by Armis, an IoT security firm, on 12 September 2017. According to Armis, "The BlueBorne attack vector can potentially affect all devices with Bluetooth capabilities, estimated at over 8.2 billion devices today [2017]." History The BlueBorne security vulnerabilities were first reported by Armis, an IoT security firm, on 12 September 2017. Technical Information The BlueBorne vulnerabilities are a set of 8 separate vulnerabilities. They can be broken down into groups based upon platform and type. There were vulnerabilities found in the Bluetooth code of the Android, iOS, Linux and Windows platforms: Linux kernel RCE vulnerability - CVE-2017-1000251 Linux Bluetooth stack (BlueZ) information leak vulnerability - CVE-2017-1000250 Android information leak vulnerability - CVE-2017-0785 Android RCE vulnerability #1 - CVE-2017-0781 Android RCE vulnerability #2 - CVE-2017-0782 The Bluetooth Pineapple in Android - Logical Flaw CVE-2017-0783 The Bluetooth Pineapple in Windows - Logical Flaw CVE-2017-8628 Apple Low Energy Audio Protocol RCE vulnerability - CVE-2017-14315 The vulnerabilities are a mixture of information leak, remote code execution, and logical flaw vulnerabilities. The Apple iOS vulnerability was a remote code execution vulnerability due to the implementation of LEAP (Low Energy Audio Protocol). This vulnerability was only present in older versions of Apple iOS. Impact In 2017, BlueBorne was estimated to potentially affect all of the 8.2 billion Bluetooth devices worldwide, although Armis clarified that 5.3 billion Bluetooth devices were at risk. Many devices are affected, including laptops, smart cars, smartphones and wearable gadgets. In 2018, one year after the original disclosure, Armis estimated that over 2 billion devices were still vulnerable. Mitigation Google provides a BlueBorne vulnerability scanner from Armis for Android. Procedures to help protect devices from the BlueBorne security vulnerabilities were reported by September 2017. References External links Computer security 2017 in computing
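The eight advisories listed above can be grouped by platform and vulnerability class, which is roughly what a patch-tracking or asset-inventory script would do. The Python sketch below only restates the identifiers given in this article; a real triage would of course also need per-device patch-level information, which is not shown here.

```python
# BlueBorne CVEs as listed above, keyed by identifier -> (platform, class).
blueborne = {
    "CVE-2017-1000251": ("Linux",   "remote code execution"),
    "CVE-2017-1000250": ("Linux",   "information leak"),
    "CVE-2017-0785":    ("Android", "information leak"),
    "CVE-2017-0781":    ("Android", "remote code execution"),
    "CVE-2017-0782":    ("Android", "remote code execution"),
    "CVE-2017-0783":    ("Android", "logical flaw"),
    "CVE-2017-8628":    ("Windows", "logical flaw"),
    "CVE-2017-14315":   ("iOS",     "remote code execution"),
}

def rce_for(platform: str):
    """Return the identifiers allowing remote code execution on a given platform."""
    return [cve for cve, (plat, kind) in blueborne.items()
            if plat == platform and kind == "remote code execution"]

print(rce_for("Android"))  # ['CVE-2017-0781', 'CVE-2017-0782']
```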
1212721
https://en.wikipedia.org/wiki/Release%20notes
Release notes
Release notes are documents that are distributed with software products or hardware products, sometimes when the product is still in the development or test state (e.g., a beta release). For products that have already been in use by clients, the release note is delivered to the customer when an update is released. Other terms for release notes include changelog, release log, software changes, revision history, updates, or README file. However, in some cases, the release notes and the changelog are published separately. This split is for clarity, differentiating feature highlights on one side from bugs, change requests (CRs) or improvements on the other. Purpose Release notes are documents that are shared with end users, customers and clients of an organization. The definitions of the terms 'End Users', 'Clients' and 'Customers' are relative and might have various interpretations based on the specific context. For instance, the Quality Assurance group within a software development organization can be interpreted as an internal customer. Content Release notes detail the corrections, changes or enhancements (functional or non-functional) made to the service or product the company provides. They might also be provided as an artifact accompanying the deliverables for System Testing and System Integration Testing and other managed environments, especially with reference to an information technology organization. Release notes can also contain test results and information about the test procedure. This kind of information gives readers of the release note more confidence in the fix/change done; this information also enables the implementer of the change to conduct rudimentary acceptance tests. They differ from an end-user license agreement, since they do not (should not) contain any legal terms of the software product or service. The focus should be on the software release itself, not for example legal conditions. Release notes can also be interpreted as describing how to install or build the software, instead of highlighting new features or resolved bugs. Another term often used in this context is System Requirements, meaning the required hardware and software for installing or building the software. Format style There is no standard format for release notes that is followed throughout different organizations. Organizations normally adopt their own formatting styles based on the requirement and type of the information to be circulated. The content of release notes also varies according to the release type. For products that are at the testing stage and that are newly released, the content is usually more descriptive compared to release notes for bug fixes and feature enhancements, which are usually brief. Release notes may include the following sections: Header – Document Name (i.e. Release Notes), product name, release number, release date, note date, note version, etc. Overview - A brief overview of the product and changes, in the absence of other formal documentation. Purpose - A brief overview of the purpose of the release note with a listing of what is new in this release, including bug fixes and new features. Issue Summary - A short description of the bug or the enhancement in the release. Steps to Reproduce - The steps that were followed when the bug was encountered. Resolution - A short description of the modification/enhancement that was made to fix the bug. End-User Impact - What different actions are needed by the end-users of the application.
This should include whether other functionality is impacted by these changes. Support Impacts - Changes required in the daily process of administering the software. Notes - Notes about software or hardware installation, upgrades and product documentation (including documentation updates) Disclaimers - Company and standard product related messages. e.g.; freeware, anti-piracy, duplication etc.. See also Disclaimer. Contact - Support contact information. A release note is usually a terse summary of recent changes, enhancements and bug fixes in a particular software release. It is not a substitute for user guides. Release notes are frequently written in the present tense and provide information that is clear, correct, and complete. A proposal for an open-specification exists and is called Release Notes Schema Specification. Prominent examples (mainly software) The following list is a selection of major software from different branches, such as software games, operating systems, automotive, CAD design, etc. Apache Maven Project Release Notes Apple iOS 14 Updates Apple macOS Release Notes Apple Xcode Release Notes FreeBSD Releases FXhome's Hitfilm Express GNOME Release Notes Gitlab Releases i.MX Linux® Release Notes (PDF by NXP Semiconductors) Atlassian Jira Software release notes Linux (Ubuntu) Linux Kernel 5.x Microsoft Visual Studio Release Notes Minecraft Release Changelogs Tesla Software Updates Unity3d 2020.1.0 Wikipedia MediaWiki software Windows 10 (see also Windows Release Health) Xilinx Release Notes (e.g. Vivado Design Suite) See also Changelog Configuration management End-user license agreement README Release management Release Candidate Software release life cycle SWEBOK Terms of service Further reading Laura Moreno et al. ARENA: An Approach for the Automated Generation of Release Notes, IEEE Transactions on Software Engineering (Volume: 43, Issue: 2, Feb. 1 2017) Casey Newton. I drank beer and wrote release notes with the Medium release notes team, The Verge (2016-02-10) GNU coding standards - 6.8 Change Logs References External links How to write release notes How should release notes be written? (Stackoverflow) The Strange Art of Writing App Release Notes Release Notes Hub (also https://www.release-notes.com) open-source on GitHub Release Notes Schema Specification Technical communication Software Configuration management Change management
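The section list given under "Format style" above can be captured in a very small template. The Python sketch below uses those section names directly, with example content invented purely for illustration; it is not a standard or an existing tool, just a rendering of the structure described in this article.

```python
# Minimal release-note renderer using the section names listed above.
# The example content is invented purely for illustration.
SECTIONS = ["Header", "Overview", "Purpose", "Issue Summary",
            "Steps to Reproduce", "Resolution", "End-User Impact",
            "Support Impacts", "Notes", "Disclaimers", "Contact"]

def render_release_note(fields: dict) -> str:
    lines = []
    for section in SECTIONS:
        if section in fields:            # omit sections that were not supplied
            lines.append(section)
            lines.append("-" * len(section))
            lines.append(fields[section])
            lines.append("")
    return "\n".join(lines)

print(render_release_note({
    "Header": "ExampleProduct 1.2.0, released 2021-06-01 (note v1)",
    "Purpose": "Maintenance release: one bug fix, no new features.",
    "Issue Summary": "Export failed when the file name contained spaces.",
    "Resolution": "File names are now quoted before being passed to the exporter.",
    "End-User Impact": "No action required; existing exports are unaffected.",
}))
```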
29736990
https://en.wikipedia.org/wiki/Criticism%20of%20Amazon
Criticism of Amazon
Amazon.com has drawn criticism from multiple sources, with the ethics of certain business practices and policies having been called into question. Amazon has faced numerous allegations of anti-competitive or monopolistic behavior, as well as criticism of its treatment of workers and consumers. Concerns have frequently been raised regarding the availability or unavailability of products and services on Amazon platforms, as Amazon.com is considered a monopoly due to its size. Anti-competitive practices One-click patent The company has been controversial for its alleged use of patents as a competitive hindrance. The "1-Click patent" is perhaps the best-known example of this. Amazon's use of the 1-click patent against competitor Barnes & Noble's website led the Free Software Foundation to announce a boycott of Amazon in December 1999. The boycott was discontinued in September 2002. On February 22, 2000, the company was granted a patent covering an Internet-based customer referral system, or what is commonly called an "affiliate program". Industry leaders Tim O'Reilly and Charlie Jackson spoke out against the patent, and O'Reilly published an open letter to Jeff Bezos, the CEO of Amazon, protesting the 1-click patent and the affiliate program patent, and petitioning him to "avoid any attempts to limit the further development of Internet commerce". O'Reilly collected 10,000 signatures with this petition. Bezos responded with his own open letter. The protest ended with O'Reilly and Bezos visiting Washington, D.C., to lobby for patent reform. On February 25, 2003, the company was granted a patent titled "Method and system for conducting a discussion relating to an item on Internet discussion boards". On May 12, 2006, the USPTO ordered a re-examination of the "1-Click" patent, based on a request filed by actor Peter Calveley, who cited the prior art of an earlier e-commerce patent and the Digicash electronic cash system. Canadian site Amazon has a Canadian site in both English and French, but until a ruling in March 2010, was prevented from operating any headquarters, servers, fulfillment centers or call centers in Canada by that country's legal restrictions on foreign-owned booksellers. Instead, Amazon's Canadian site originates in the United States, and Amazon has an agreement with Canada Post to handle distribution within Canada and for the use of the Crown corporation's Mississauga, Ontario shipping facility. The launch of Amazon.ca generated controversy in Canada. In 2002, the Canadian Booksellers Association and Indigo Books and Music sought a court ruling that Amazon's partnership with Canada Post represented an attempt to circumvent Canadian law, but the litigation was dropped in 2004. In January 2017, doormat products with the Indian flag on them went on sale on the Amazon Canada website. The use of the Indian flag in this way is considered offensive to the Indian community and in violation of the Flag Code of India. The Minister of External Affairs of India, Sushma Swaraj, threatened a visa embargo for Amazon officials if Amazon did not tender an unconditional apology and withdraw all such products. In January 2017, Amazon.ca was required by the Competition Bureau to pay a $1M penalty, plus $100,000 in costs, over pricing practices that failed to provide "truth in advertising", according to Josephine Palumbo, the deputy commissioner for deceptive marketing practices.
This fine was levied because some products on Amazon.ca were shown with an artificially high "list price", making the lower selling price appear to be very attractive, producing an unfair competitive edge over other retailers. This is a frequent practice among some retailers and the fine was intended to "send a clear message [to the industry] that unsubstantiated savings claims will not be tolerated". The Bureau also indicated that the company has made changes to ensure that regular prices are more accurately listed. BookSurge In March 2008, sales representatives of Amazon's BookSurge division started contacting publishers of print on demand (POD) titles to inform them that for Amazon to continue selling their POD books, they were required to sign agreements with Amazon's own BookSurge POD company. Publishers were told that eventually, the only POD titles that Amazon would be selling would be those printed by their own company, BookSurge. Some publishers felt that this ultimatum amounted to monopoly abuse, and questioned the ethics of the move and its legality under anti-trust law. Direct selling In 2008, Amazon UK came under criticism for attempting to prevent publishers from direct selling at discount from their own websites. Amazon's argument was that they should be able to pay the publishers based on the lower prices offered on their websites, rather than on the full recommended retail price (RRP). Also in 2008, Amazon UK drew criticism in the British publishing community following their withdrawal from sale of key titles published by Hachette Livre UK. The withdrawal was possibly intended to put pressure on Hachette to provide levels of discount described by the trade as unreasonable. Curtis Brown's managing director Jonathan Lloyd opined that "publishers, authors, and agents are 100% behind [Hachette]. Someone has to draw a line in the sand. Publishers have given 1% a year away to retailers, so where does it stop? Using authors as a financial football is disgraceful." In August 2013, Amazon agreed to end its price parity policy for marketplace sellers in the European Union, in response to investigations by the UK Office of Fair Trade and Germany's Federal Cartel Office. It is not yet clear if this ruling applies to direct selling by publishers. Price control Following the announcement of the Apple iPad on January 27, 2010, Macmillan Publishers entered into a pricing dispute with Amazon.com regarding electronic publications. Macmillan asked Amazon to accept a new pricing scheme it had worked out with Apple, raising the price of e-books from $9.99 to $15. Amazon responded by pulling all Macmillan books, both electronic and physical, from their website (although affiliates selling the books were still listed). On January 31, 2010, Amazon "capitulated" to Macmillan's pricing request. In 2014, Amazon and Hachette became involved in a dispute over agency pricing. Agency pricing is when the agent (such as Hachette) determines the price of a book; normally, however, Amazon dictates the discount level of a book. High-profile authors became involved; hundreds of writers, including Stephen King and John Grisham, signed a petition saying "We encourage Amazon in the strongest possible terms to stop harming the livelihood of the authors on whom it has built its business. None of us, neither readers nor authors, benefit when books are taken hostage." Author Ursula K. 
Le Guin commented on Amazon's practice of making Hachette books harder to buy on its site, stating "We're talking about censorship: deliberately making a book hard or impossible to get, 'disappearing' an author." Although her statement was met with some outrage and disbelief, Amazon's actions such as eliminating discounts, delaying the delivery time, and refusing pre-publication orders did make physical Hachette books harder to get. Plummeting sales of Hachette books on Amazon indicated that its policies likely succeeded in deterring customers. On August 11, 2014, Amazon removed the option to preorder Captain America: The Winter Soldier in an effort to gain control over the online pricing of Disney films. Amazon has previously used similar tactics with Warner Bros. and Hachette Book Group. The conflict was resolved in late 2014 with neither having to concede anything. Then in February 2017, Amazon again began to block preorders of Disney films, just before Moana and Rogue One were due to be released to the home market. The law firm Hagens Berman filed a lawsuit in district court in New York in January 2021, alleging that Amazon colluded with leading publishers to keep e-book prices artificially high. The state of Connecticut also announced it was investigating Amazon for potential anti-competitive behaviour in its sale of e-books. Removal of competitors' products On October 1, 2015, Amazon announced that Apple TV and Google Chromecast products were banned from sale on Amazon.com by all merchants, with no new listings allowed effective immediately, and all existing listings removed effective October 29, 2015. Amazon argued that this was to prevent "customer confusion", as these devices do not support the Amazon Prime Video ecosystem. This move was criticized, as commentators believed that it was meant primarily to suppress the sale of products deemed as competition to Amazon Fire TV products, given that Amazon itself had deliberately refused to offer software for its own streaming services on these devices, and the action contradicted the implication that Amazon.com was a general online retailer. In May 2017, it was reported that Apple and Amazon were nearing an agreement to offer Prime Video on Apple TV, and allow the product to return to the retailer. Prime Video launched on Apple TV December 6, 2017, with Amazon beginning to sell the Apple TV product again shortly thereafter. Amazon is known to remove products for trivial policy violations by third-party sellers that compete with Amazon's home-grown brands. To compete for product placement where Amazon's own brands are featured prominently, third-party sellers often need to resort to advertisement spends and list themselves with Amazon's expensive Prime program for which they are charged a premium on order fulfillment and returns, resulting in increased costs and lower profit margins. Amazon has since suppressed other Google products, including Google Home (which competes with Amazon Echo), Pixel phones, and recent products of Google subsidiary Nest Labs (despite the Nest Learning Thermostat having integration support for Amazon's voice assistant platform Alexa). In retaliation, Google announced on December 6, 2017, that it would block YouTube from the Amazon Echo Show and Amazon Fire TV products. In December 2017, Amazon stated that it intended to start offering Chromecast again (which it would do a year later). 
Meanwhile, Nest stated that it would no longer offer any of its future stock to Amazon until it commits to offering its entire product line. In April 2019, Amazon announced that it would add Chromecast support to the Prime Video mobile app and release its Android TV app more widely, while Google announced that it would, in return, restore access to YouTube on Fire TV (but not Echo Show). Prime Video for Chromecast and YouTube for Fire TV were both released July 9, 2019. In December 2019, following the acquisition of Honey—a browser extension that automatically applies online coupons on online stores—by PayPal, the Amazon website began to display warnings advising users to uninstall the software, claiming it was a security risk. Apple partnership In November 2018, Amazon reached an agreement with Apple Inc. to sell selected products through the service, via the company, selected Apple Authorized Resellers, and vendors who meet specific criteria. As a result of this partnership, only Apple Authorized Resellers and vendors who purchase $2.5 million in refurbished stock from Apple every 90 days (via the Amazon Renewed program) may sell Apple products on the service. The partnership has faced criticism from independent resellers, who believe that this deal has restricted their ability to sell refurbished Apple products on Amazon at a low cost. In August 2019, The Verge reported that Amazon was being investigated by the FTC over the deal. Marketplace participant and owner Amazon has raised concerns by being both the owner of a dominant marketplace and a retail seller in that marketplace. Amazon uses the data it gets from the entire marketplace (data not available to other retailers in the marketplace) to determine what products would be advantageous to produce in-house, at what price point. The company markets products under AmazonBasics, Lark & Ro, and various other private-label brands. U.S. presidential candidate Elizabeth Warren has proposed forcing Amazon to sell AmazonBasics and Whole Foods Market, where Amazon competes against other marketplace participants as a brick-and-mortar retailer. Tim O'Reilly, comparing Ingram's business with Amazon's, noted that Amazon's exclusive focus on just the customer debilitates the rest of the retail ecosystem, including sellers, manufacturers, and even its own employees, while Ingram seeks to innovate and build on behalf of all the stakeholders in the marketplace it operates in. O'Reilly adds that Amazon's ecosystem-crippling behaviour is driven by its insatiable need for growth at all costs. Third-party sellers have long accused Amazon of rent-seeking behaviour, such as steadily increasing the cost of doing business on its platform, abusing its dominant market position to manipulate pricing, copying popular products of third-party retailers, and unjustifiably promoting its own brands. EU antitrust charges The European Commission commenced an investigation in June 2015 regarding clauses in Amazon's e-book distribution agreements which potentially breached EU antitrust rules by making it harder for other e-book platforms to compete. This investigation was concluded in May 2017 when the Commission adopted a decision which rendered binding Amazon's commitments not to use or enforce these clauses.
In July 2019 and in November 2020, the European Commission opened two in-depth investigations into Amazon's use of marketplace seller data as well as possible preferential treatment of Amazon's own retail offers and those of marketplace sellers that use Amazon's logistics and delivery services. It charged that Amazon systematically relies on nonpublic data it gathers from third party sellers to unfairly compete against them, to the benefit of its own retail business, thus violating competition law in the European Economic Area. Treatment of workers Amazon has faced various critiques over the quality of its working environments and treatment of its workforce. A group known as The FACE (Former And Current Employees) of Amazon has regularly used social media to disseminate criticism of the company and allegations regarding negative work conditions. Employee mismanagement Amazon has been accused of mistakenly firing people on medical leave for no-shows, not fixing inaccuracy in their payroll systems resulting in a section of both its blue-collar and white-collar employees going under-paid for months, and violating employment laws by deliberately denying unpaid leaves. Opposition to trade unions Amazon has opposed efforts by trade unions to organize in both the United States and the United Kingdom. In 2001, 850 employees in Seattle were laid off by Amazon.com after a unionization drive. The Washington Alliance of Technological Workers (WashTech) accused the company of violating union laws and claimed Amazon managers subjected them to intimidation and heavy propaganda. Amazon denied any link between the unionization effort and layoffs. Also in 2001, Amazon.co.uk hired a US management consultancy organization, The Burke Group, to assist in defeating a campaign by the Graphical, Paper and Media Union (GPMU, now part of Unite the Union) to achieve recognition in the Milton Keynes distribution depot. It was alleged that the company victimized or sacked four union members during the 2001 recognition drive and held a series of captive meetings with employees. An Amazon training video that was leaked in 2018 stated "We are not anti-union, but we are not neutral either. We do not believe unions are in the best interest of our customers or shareholders or most importantly, our associates." Two years later, it was found that Whole Foods was using a heat map to track which of its 510 stores had the highest levels of pro-union sentiment. Factors including racial diversity, proximity to other unions, poverty levels in the surrounding community and calls to the National Labor Relations Board were named as contributors to "unionization risk". Data collected in the heat map suggest that stores with low racial and ethnic diversity, especially those located in poor communities, are more likely to unionize. Amazon also had a job listing for an Intelligence Analyst, whose role it would be to identify and tackle threats to Amazon, which included unions and organised labour. On 4 December 2020, Buzzfeed News reported that The National Labor Relations Board (NLRB) accused Amazon for illegally firing a worker who urged for better working conditions during the pandemic. According to Buzzfeed News the National Labor Relations Board filed a complaint against Amazon for firing former warehouse worker Courtney Bowde. The largest unionization drive by Amazon employees, and the first since 2014, occurred in February and March 2021 in Bessemer, Alabama. 
The move to join the Retail, Wholesale and Department Store Union was met with "anti-union" signs and mandatory "union education meetings" according to Amazon worker Jennifer Bates. During the voting, President Joe Biden made a speech acknowledging the organizing workers in Alabama and called for "no anti-union propaganda". This was followed by an increase of activity by public relations staff on Twitter, reportedly at the personal direction of Jeff Bezos. The tone used by some of the posts led one Amazon engineer to initially suspect that the accounts had been hacked. The exchanges went on to include several progressive politicians. Executive Dave Clark, for instance, compared Amazon's $15 wage favourably to Vermont's $11.75 minimum wage in a response to Bernie Sanders despite the fact that Sanders can only vote on bills related to the federal minimum wage. Some of the criticism of unions came from generic recently created accounts rather than known Amazon personalities. One account, which was quickly banned, had attempted to use the likeness of YouTube star Tyler Toney from Dude Perfect. Warehouse conditions in the US In September 2011, Allentown, Pennsylvania's Morning Call interviewed 20 past and present employees at Amazon's Breinigsville warehouse, all but one of whom criticized the company's warehouse conditions and employment practice. Specific investigatory concerns were: heat so extreme it required the regular posting of ambulances to take away workers who passed out, strenuous workloads in that heat, and first-person reports of summary terminations for health conditions such as breast cancer. The Morning Call also published, verbatim, Amazon.com's direct response to a query by OSHA, where amazon.com detailed its response when heat conditions reach as high as , including water and ice treatment, electrolyte drinks, nutrition advice, and extended breaks in air-conditioned rooms. Five days after the Morning Call article was published, Amazon stated that it had spent $2.4 million "urgently installing" air conditioning at four warehouses including the Breinigsville facility. However, the original investigator states that when he checked back with current employees for his September 23 follow-up story, "they told him nothing had changed since his original story ran." In June 2012, Amazon began the installation of a $52 million investment in cooling its warehouses around the country, a major cost for the company equivalent to 8.2 percent of Amazon's 2011 total earnings. Experts speculated Amazon made such a massive investment either to dampen negative publicity over worker conditions and/or to better protect goods in the warehouse such as food and electronics equipment. Sucharita Mulpuru, an analyst with Forrester Research, said: Amazon ships a lot of electronics and food now. It's not good to have that stuff in extreme temperatures. ... I would like to think there was an element of humanity to the decision but there's nothing in Amazon's history or in Jeff Bezos' public persona that would lead me to think that was the driver of the decision. … Rarely has Amazon made any business decisions that didn't affect the bottom line. In December 2014, the Supreme Court of the United States ruled unanimously against temporary staffing workers for Amazon warehouses in Nevada who were seeking compensation for time spent waiting to go through security screening checkpoints. 
A 2021 report by the National Employment Law Project found that working conditions at Amazon fulfillment centers in Minnesota are dangerous and unsustainable, with more than double the rate of injuries compared to non-Amazon warehouses for the years 2018 to 2020. In December 2021, after a tornado destroyed an Amazon warehouse in Illinois, the company and its policies were criticized on several fronts: making people work during an imminent tornado, cell phone ban preventing access to emergency alerts, and company founder Jeff Bezos' apparent insensitivity to the fatal catastrophe as he celebrated his space company's latest achievement and only belatedly acknowledged the loss of life. Warehouse conditions in the UK Complaints about Amazon's Marston Gate UK facility date back to 2001. prompting a threatened protest from Billy Bragg. These claims resurfaced in 2008 with fresh reports of "sweatshop conditions". A Channel 4 documentary broadcast on 1 August 2013 employed secret cameras within Amazon UK's Rugeley warehouse documenting worker abuses, calling the working practices 'horrendous and exhausting'. In November 2016 a BBC undercover report at Amazon's delivery depot in Avonmouth found that in some instances delivery drivers had no choice but to break the speed limit and use their van as a toilet to save time. It also exposed that after deductions (such as van hire and insurance) drivers could be paid as little as £2.59 per hour, less than half the UK minimum wage. In December 2016 Willie Rennie, the Liberal Democrat leader in Scotland, said that Amazon should be ashamed of both its working conditions and pay in Dunfermline after photographs were released showing workers camped outside in the winter to save the cost of commuting. In December 2017, it was reported that Amazon drivers in the U.K. are making less than the national minimum wage because they have to pay for van hire and insurance and did not have enough time to deliver the parcels that were ordered forcing them to urinate in plastic bottles in their vans. Working conditions for delivery drivers A September 11, 2018 article exposed poor working conditions for Amazon's delivery drivers, describing a variety of alleged abuses, including missing wages, lack of overtime pay, favoritism, intimidation, and time constraints that forced them to drive at dangerous speeds and skip meals and bathroom breaks. Amazon uses Netradyne artificial intelligence cameras in some partner vans to monitor safety incidents and driver behaviour, drawing criticism from some drivers. In 2019, NBC reported some contracted Amazon locations, against company policy, allowed people to make deliveries using other people's badges and passwords in order to circumvent employee background checks and avoid financial penalties or termination due to sub-standard performance. Amazon's performance quotas were criticized as unrealistic and as pressuring drivers to speed, run stop signs, carry overloaded vehicles, and urinate in bottles due to lack of time for bathroom stops; the company was generally able to avoid legal liability for resulting vehicle crashes by using independent contractors. In June 2020, subcontracted delivery drivers based in Canada launched a class action lawsuit against Amazon Canada, claiming that $200 million in unpaid wages were owed to them because Amazon retained "effective control" over their work and should therefore legally be considered their employer. 
2018 workers strike Spanish unions called on 1,000 Amazon workers to strike from July 10 through Amazon Prime Day, with calls for the strike to be taken up across the world and for customers to follow suit. The Spain-based strike was timed around Prime Day; a representative of the Comisiones Obreras (CCOO) union said complaints centred on wage cuts, working conditions, and restrictions on time off. Workers in other European countries raised further grievances, with Poland, Germany, Italy, Spain, England, and France all represented, as outlined below. Polish workers claim an anti-strike law has made it impossible to negotiate a better salary. German workers have been fighting for over two years for a collective bargaining agreement. Italian workers have highlighted claims that Amazon routinely hires contract workers who aren't required to have benefits. Spanish Amazon leaders have unilaterally imposed working conditions after previous collective bargaining agreements had expired. English and French Amazon leaders have imposed demanding measures on time and efficiency, leading to workers being expected to process 300 items per hour and to urinate in bottles, with penalties being given for sick days and pregnancies. Stop BEZOS Act On September 5, 2018, Senator Bernie Sanders (I-VT) and Representative Ro Khanna (D-CA-17) introduced the Stop Bad Employers by Zeroing Out Subsidies (Stop BEZOS) Act aimed at Amazon and other alleged beneficiaries of corporate welfare such as Walmart, McDonald's, and Uber. This followed several media appearances in which Sanders underscored the need for legislation to ensure that Amazon workers received a living wage. These reports cited a finding by New Food Economy that one third of fulfilment center workers in Arizona were on the Supplemental Nutrition Assistance Program (SNAP). Although Amazon initially released a statement which called statistics such as this "inaccurate and misleading", an October 2 announcement affirmed that its minimum wage for all employees would be raised to $15 per hour. Protest over coronavirus policies During the COVID-19 pandemic, Amazon warehouses in the United States raised their hourly wages by $2 and announced that employees testing positive would be entitled to 14 days of paid leave. After a company statement that two employees at the Staten Island warehouse had been infected, workers there claimed the actual number was 10. On March 30, 2020, between 15 and 60 people attended a walkout to demand that Amazon temporarily close the warehouse in order to disinfect it. The main organizer, Chris Smalls, was subsequently fired, allegedly for violating social distancing guidelines. Smalls countered that the incident cited, which brought him near an infected employee for 5 minutes, occurred on March 11, meaning that his 14-day quarantine would have already ended if it had been ordered at the proper time. He also stated that the purpose of his conversation was to urge his fellow employee to stay home even though the results of her test were still pending, which made the company's paid leave option unavailable to her. New York City Mayor Bill de Blasio ordered an investigation and the state's Attorney General Letitia James called the firing "disgraceful". Representative Jerry Nadler welcomed the investigation. Democratic senators from New York, New Jersey, Connecticut, and Ohio sent a letter to Amazon expressing their concerns. On April 9, 2021, U.S. 
District Judge Jed Rakoff ruled in favor of James' request to return her lawsuit to a New York state court instead of the Brooklyn federal court where Amazon had sued James. According to leaked emails, Amazon executives, including Jeff Bezos, held a meeting to discuss the implications for the company's image. One email from the general counsel described Smalls as "not smart or articulate". Similar protests continued into April, with one of them taking place at a Minnesota warehouse that had held a strike in 2019. This led to the firing of one organizer, Bashir Mohamed, with social distancing as the nominal reason. Supporters of Mohamed countered that the guidelines were set up to be difficult to follow and applied selectively. On April 10, two user interface designers, Emily Cunningham and Maren Costa, were fired after they tweeted support for striking Amazon workers and offered to match up to $500 worth of donations to them. Executives cited "violating internal policies" as justification, which has been interpreted as an invocation of a non-disparagement agreement that Amazon employees sign. Cunningham and Costa argued that the firings were retaliatory and partly motivated by their criticism of Amazon with regard to climate change. On April 16, they took part in a virtual meeting related to the crisis with approximately 400 colleagues and environmental activist Naomi Klein. There were reports from some Amazon workers that their calendar invitations to the event were being deleted. On May 1, the day of Amazon workers' May Day protest strike, Amazon Web Services VP Tim Bray resigned in protest of the company's treatment of workers. Weeks earlier, in mid-April, Bray became troubled by Amazon's attacks on, and firing of, warehouse workers for demanding safe working conditions and voiced those concerns among upper management. Bray previously supported the Amazon Employees for Climate Justice (AECJ) workers' campaign for shareholder support of Amazon climate action; he was one of 8000+ employees to sign that petition. Employee dissent In 2014, former Amazon employee Kivin Varghese went on a hunger strike over Amazon policies he considered unfair. In November 2016, an Amazon employee jumped from the roof of a headquarters office building as a result of unfair treatment at work. In 2020, Tim Bray, Vice President at AWS at the time, resigned in protest of Amazon's treatment of its activist employees involved with Amazon Employees for Climate Justice, who had led public agitation against unhealthy working conditions in Amazon's warehouses during the COVID-19 pandemic. Forced labor in China Amazon is one of the companies "potentially directly or indirectly benefiting" from forced Uighur labor according to a report by the Australian Strategic Policy Institute, a think tank partly funded by the US Department of Defense. Treatment of customers Differential pricing In September 2000, price discrimination potentially violating the Robinson–Patman Act was found on amazon.com. Amazon offered to sell a buyer a DVD for one price, but after the buyer deleted cookies that identified him as a regular Amazon customer, he was offered the same DVD for a substantially lower price. Jeff Bezos subsequently apologized for the differential pricing and vowed that Amazon "never will test prices based on customer demographics". The company said the difference was the result of a random price test and offered to refund customers who paid the higher prices. 
Amazon had also experimented with random price tests in 2000 as customers comparing prices on a "bargain-hunter" website discovered that Amazon was randomly offering the Diamond Rio MP3 player for substantially less than its regular price. Kindle content removal In July 2009, The New York Times reported that amazon.com deleted all customer copies of certain books published in violation of US copyright laws by MobileReference, including the books Nineteen Eighty-Four and Animal Farm, from users' Kindles. This action was taken with neither prior notification nor specific permission of individual users. Customers did receive a refund of the purchase price and, later, an offer of an Amazon gift certificate or a check for $30. The e-books were initially published by MobileReference on Mobipocket for sale in Australia only—owing to those works having fallen into the public domain in Australia. However, when the e-books were automatically uploaded to Amazon by MobiPocket, the territorial restriction was not honored, and the books were allowed to be sold in territories such as the United States where the copyright term had not expired. Author Selena Kitt fell victim to Amazon content removal in December 2010; some of her fiction had described incest. Amazon claimed "Due to a technical issue, for a short window of time three books were temporarily unavailable for re-download by customers who had previously purchased them. When this was brought to our attention, we fixed the problem..." in an attempt to defuse user complaints about the deletions. Late in 2013, the online blog The Kernel released multiple articles revealing "an epidemic of filth" on Amazon and other e-book storefronts. Amazon responded by blocking books dealing with incest, bestiality and child pornography, as well as topics such as virginity, monsters, and "barely legal" content. Sale of Wikipedia's material as books The German-speaking press and blogosphere have criticized Amazon for selling tens of thousands of print on demand books which reproduced Wikipedia articles. These books are produced by an American company named Books LLC and by three Mauritian subsidiaries of the German publisher VDM: Alphascript Publishing, Betascript Publishing and Fastbook Publishing. Amazon did not acknowledge the issue, which had been raised on blogs and by customers who asked the company to withdraw all these titles from its catalog. The collaboration between amazon.com and VDM Publishing began in 2007. Product substitution The British consumer organization Which? has published information about Amazon Marketplace in the UK which indicates that when small electrical products are sold on Marketplace the delivered product may not be the same as the product advertised. A test purchase is described in which eleven orders were placed with different suppliers via a single listing. Only one of the suppliers delivered the actual product displayed; two others delivered different but functionally equivalent products, and eight suppliers delivered products that were quite different and not capable of safely providing the advertised function. The Which? article also describes how the customer reviews of the product are actually a mix of reviews for all of the different products delivered, with no way to identify which product comes from which supplier. This issue was raised in evidence to the UK Parliament in connection with a new Consumer Rights bill. 
Items added onto baby registries In 2018 it was reported that Amazon had been selling sponsored ads pretending to be items on a baby registry. The ads looked very similar to the actual items on the list. Third-party sellers A 2019 Wall Street Journal (WSJ) investigation found third-party retailers selling over 4,000 unsafe, banned, or deceptively labeled products on Amazon.com. According to the WSJ article, when customers have sued Amazon for unsafe products sold by third-party sellers on Amazon.com, Amazon's legal defense has been that it is not the seller and therefore cannot be held liable. Wirecutter reported in 2020 that over several months they "were able to purchase items through Amazon Prime that were either confirmed counterfeits, lookalikes unsafe for use, or otherwise misrepresented." CNBC reported in 2019 that Amazon third-party sellers regularly sell expired food products, and that the sheer size of the Amazon Marketplace has made policing the platform exceptionally difficult for the company. Third-party sellers accounted for 54% of paid units sold on Amazon platforms. In 2019, Amazon earned $54 billion from the fees third-party retailers pay to Amazon for seller services. De-platforming WikiLeaks On December 1, 2010, Amazon stopped hosting the website associated with the whistle-blowing organization WikiLeaks. Amazon did not initially comment on whether it forced the site to leave. The New York Times reported: "Senator Joseph I. Lieberman, an independent of Connecticut, said Amazon had stopped hosting the WikiLeaks site on Wednesday after being contacted by the staff of the Homeland Security and Governmental Affairs Committee". In a later press release issued by Amazon.com, they denied that they had terminated Wikileaks.org because of either "a government inquiry" or "massive DDOS attacks". They claimed that it was because of "a violation of [Amazon's] terms of service" because Wikileaks.org was "securing and storing large quantities of data that isn't rightfully theirs, and publishing this data without ensuring it won't injure others." According to WikiLeaks founder Julian Assange, this demonstrated that Amazon (a US based company) was in a jurisdiction that "suffered a free speech deficit". Amazon's action led to a public letter from Daniel Ellsberg, famous for leaking the Pentagon Papers during the Vietnam war. Ellsberg stated that he was "disgusted by Amazon's cowardice and servility", likening it to "China's control of information and deterrence of whistle-blowing", and he called for a "broad" and "immediate" boycott of Amazon. Users' privacy On July 16, 2021, the Luxembourg National Commission for Data Protection fined Amazon Europe Core S.à.r.l. a record 746 million euros ($888 million) for processing personal data in violation of the EU General Data Protection Regulation (GDPR). The fine amounts to about 4.2 percent of Amazon's reported $21.3 billion income for 2020, and is the largest fine ever imposed for a violation of the GDPR. Amazon has announced it will appeal the decision. Competitive advantages Tax avoidance Amazon's tax affairs were investigated in China, Germany, Poland, South Korea, France, Japan, Ireland, Singapore, Luxembourg, Italy, Spain, the United Kingdom, the United States and Portugal. According to a report released by Fair Tax Mark in 2019, Amazon is the worst offender for tax avoidance, having paid a 12% effective tax rate between 2010 and 2018, in contrast with the 35% corporate tax rate in the US during the same period. 
Amazon countered that it had a 24% effective tax rate during the same period. Effects on small businesses Due to its size and economies of scale, Amazon is able to underprice local small-scale shopkeepers. Stacy Mitchell and Olivia Lavecchia, researchers with the Institute for Local Self-Reliance, argue that this has caused most local small-scale shopkeepers to close down in a number of cities and towns in the United States. Additionally, a merchant cannot have an item in the warehouse available to sell ahead of Amazon if Amazon chooses to list it as well. Merchants have also complained of unauthorized charges being made through the banking and financial details that Amazon keeps permanently on file, of Amazon refunding such charges only as Amazon credit rather than returning the money to the account it was taken from, and of the lack of merchant customer support for issues that need to be handled in real time. Government contracts In 2013, Amazon secured a contract with the CIA, which poses a potential conflict of interest involving the Bezos-owned The Washington Post and his newspaper's coverage of the CIA. Kate Martin, director of the Center for National Security Studies, said, "It's a serious potential conflict of interest for a major newspaper like The Washington Post to have a contractual relationship with the government and the most secret part of the government." This was later followed by a bid for a contract with the Department of Defense. Although critics initially considered the government's preference for Amazon to be a foregone conclusion, the contract was ultimately signed with Microsoft. The release of the Amazon Echo was met with concerns about Amazon releasing customer data at the behest of government authorities. According to Amazon, voice recordings of customer interactions with the assistant are stored with the possibility of being released later in the event of a warrant or subpoena. A police request for such data occurred during the investigation into the November 22, 2015 death of Victor Collins in the home of James Andrew Bates in Bentonville, Arkansas. Amazon refused to comply at first, but Bates later consented. While Amazon has publicly opposed secret government surveillance, as revealed by Freedom of Information Act requests it has supplied facial recognition support to law enforcement in the form of the Rekognition technology and consulting services. Initial testing included the city of Orlando, Florida, and Washington County, Oregon. Amazon offered to connect Washington County with other Amazon government customers interested in Rekognition and a body camera manufacturer. These ventures are opposed by a coalition of civil rights groups with concern that they could lead to expansion of surveillance and be prone to abuse. Specifically, they could automate the identification and tracking of anyone, particularly in the context of potential police body camera integration. Due to the backlash, the city of Orlando has publicly stated it will no longer use the technology. HQ2 bidding war The announcement of Amazon's plan to build a second headquarters, dubbed HQ2, was met with 238 proposals, 20 of which became finalist cities on January 18, 2018. In November 2018, Amazon was criticized for narrowing this down to "the two richest cities", namely Long Island City and Arlington, Virginia, which are in the New York metropolitan area and Washington metropolitan area respectively. 
Critics, including business professor Scott Galloway, described the bidding war as "a con" and stated that it was a pretext for gaining tax breaks and insider information for the company. Congresswoman Alexandria Ocasio-Cortez opposed the $1.5 billion in tax subsidies that had been given to Amazon as part of the deal. She stated that restoring the subway system would be a better use for the money, despite rebuttals from Andrew Cuomo and others that New York would benefit economically. Shortly afterward, Politico reported that 1,500 affordable homes had previously been slated for the land being occupied by Amazon's new office. The request by Amazon executives for a helipad at each location proved especially controversial, with multiple New York City Council members decrying the proposal as frivolous. Product availability Animal cruelty Amazon at one time carried two cockfighting magazines and two dog fighting videos, although the Humane Society of the United States (HSUS) contended that the sale of these materials violated U.S. federal law and filed a lawsuit against Amazon. A campaign to boycott Amazon in August 2007 gained attention after a dog fighting case involving NFL quarterback Michael Vick. In May 2008, Marburger Publishing agreed to settle with the Humane Society by requesting that Amazon stop selling their magazine, The Game Cock. The second magazine named in the lawsuit, The Feathered Warrior, remained available. Animal rights group Mercy for Animals has alleged that Amazon allows the listing of foie gras on its website, a product that has been banned in several countries as well as in California, and that is alleged to be produced through the mistreatment of ducks. The listing prompted animal rights groups to launch a movement called "Amazon cruelty". Items prohibited by UK law In December 2015 The Guardian newspaper published an exposé of sales that violated British law. These included a pepper-spray gun (sold directly by amazon.co.uk), acid, stun guns and a concealed cutting weapon (sold by Amazon Marketplace traders). All are classed as prohibited weapons in the UK. At the same time, The Guardian published a video describing some of the weapons. Antisemitic content An article published in the Czech weekly Tyden in January 2008 called attention to shirts sold by Amazon which were emblazoned with "I Love Heinrich Himmler" and "I Love Reinhard Heydrich", professing affection for the infamous Nazi officers and war criminals. Patricia Smith, a spokeswoman for Amazon, told Tyden, "Our catalog contains millions of items. With such a large number, unexpected merchandise may get onto the Web." Smith told Tyden that Amazon did not intend to stop cooperating with Direct Collection, the producer of the T-shirts. Following pressure from the World Jewish Congress (WJC), Amazon announced that it had removed from its website the aforementioned T-shirts as well as "I love Hitler" T-shirts that they were selling for women and children. After the WJC intervention, other items such as a Hitler Youth Knife emblazoned with the Nazi slogan "Blood and Honor" were also removed from Amazon.com, as well as a 1933 German SS Officer Dagger distributed by Knife-Kingdom. An October 2013 report in the British online magazine The Kernel revealed that Amazon.com was selling books that defend Holocaust denial, and shipped them even to customers in countries where Holocaust denial is prohibited by law. 
That month, the WJC called on Amazon CEO Jeff Bezos to remove from its offer books that deny the Holocaust and promote antisemitism, white supremacy, racism or sexism. "No one should profit from the sale of such vile and offensive hate literature. Many Holocaust survivors are deeply offended by the fact that the world's largest online retailer is making money from selling such material," WJC Executive Vice President Robert Singer wrote in a letter to Bezos. Although Nazi paraphernalia was still listed on Amazon in the US and Canada in 2016, on March 9, 2017, the WJC announced Amazon's compliance with the requests it and other Jewish organizations had submitted by removing from sale the Holocaust denial works complained of in the requests. The WJC offered ongoing assistance in identifying Holocaust denial works among Amazon's offerings in the future. In July 2019, the Central Council of Jews in Germany denounced Amazon for continuing to sell items that glorify the Nazis. Pedophile guide On November 10, 2010, a controversy arose over the sale by Amazon of an e-book by Phillip R. Greaves entitled The Pedophile's Guide to Love and Pleasure: a Child-lover's Code of Conduct. Readers threatened to boycott Amazon over its selling of the book, which was described by critics as a "pedophile guide". Amazon initially defended the sale of the book, saying that the site "believes it is censorship not to sell certain books simply because we or others believe their message is objectionable" and that the site "supported the right of every individual to make their own purchasing decisions". However, the site later removed the book. The San Francisco Chronicle wrote that Amazon "defended the book, then removed it, then reinstated it, and then removed it again". Christopher Finan, the president of the American Booksellers Foundation for Free Expression, argued that Amazon has the right to sell the book as it is not child pornography or legally obscene since it does not have pictures. On the other hand, Enough Is Enough, a child safety organization, issued a statement saying that the book should be removed and that it "lends the impression that child abuse is normal". People for the Ethical Treatment of Animals, citing the removal of The Pedophile's Guide from Amazon, urged the website to also remove books on dog fighting from its catalogue. Greaves was arrested on December 20, 2010, at his Pueblo, Colorado home on a felony warrant issued by the Polk County Sheriff's Office in Lakeland, Florida. Detectives from the county's Internet Crimes Division ordered a signed hard copy version of Greaves' book and had it shipped to the agency's jurisdiction, where it violated state obscenity laws. According to Sheriff Grady Judd, upon receipt of the book, Greaves violated local laws prohibiting the distribution of "obscene material depicting minors engaged in harmful conduct", a third-degree felony. Greaves pleaded no contest to the charges and was later released under probation with his previous jail time counting as time served. Counterfeit products American copyright lobbyists have accused Amazon of facilitating the sale of unlicensed CDs and DVDs particularly in the Chinese market. The Chinese government has responded by announcing plans to increase regulation of Amazon (along with Apple Inc. and Taobao.com) in relation to Internet copyright infringement issues. Amazon has already had to shut down third party distributors due to pressure from the NCAC (National Copyright Administration of China). 
On October 16, 2016, Apple filed a trademark infringement case against Mobile Star LLC for selling counterfeit Apple products to Amazon. In the suit, Apple provided evidence that Amazon was selling these counterfeit Apple products and advertising them as genuine. Through test purchases, Apple found that almost 90% of the supposedly genuine Apple products it bought were counterfeit. Amazon was sourcing and selling items without properly determining whether they were genuine. Mobile Star LLC settled with Apple for an undisclosed amount on April 27, 2017. In the years since, the sale of counterfeit products on Amazon has attracted widespread notice, with counterfeits found both among purchases marked as being fulfilled by third parties and among those shipped directly from Amazon warehouses. Counterfeit charging cables sold on Amazon as purported Apple products have been found to be a fire hazard. Items that have been sold as counterfeits include a widespread array of products, from big-ticket items to everyday items such as tweezers, gloves, and umbrellas. More recently, this has spread to Amazon's newer grocery services. As a result of these issues, companies such as Birkenstock and Nike have pulled their products from the website. Removal of LGBT works In April 2009, it was publicized that some lesbian, gay, bisexual, transgender, feminist, and politically liberal books were being excluded from Amazon's sales rankings. Various books and media were flagged as "Adult content", including children's books, self-help books, non-fiction, and non-explicit fiction. As a result, works by established authors E. M. Forster, Gore Vidal, Jeanette Winterson and D. H. Lawrence were unranked. The change first received publicity on the blog of author Mark R. Probst, who reproduced an e-mail from Amazon describing a policy of de-ranking "adult" material. However, Amazon later said that there was no policy of de-ranking lesbian, gay, bisexual, and transgender material and blamed the change first on a "glitch" and then on "an embarrassing and ham-fisted cataloging error" that had affected 57,310 books (a hacker also claimed to have been the cause of said metadata loss). Removal of other books In 2014, Amazon removed a book, described by critics as a "guide to rape", which claimed to reveal how women could be pressured into accepting sexual advances. Later, it removed a book by anti-Muslim activist Tommy Robinson. It temporarily banned a book promoting fringe claims about the COVID-19 pandemic, as well as books that promoted fake COVID-19 cures. In 2021, Amazon removed listings for a 2018 book by conservative philosopher Ryan T. Anderson because it criticized legal protections for transgender people. Partnerships and associations Hikvision Amazon has worked with the Chinese technology company Hikvision. According to The Nation, "The United States has considered sanctioning against Hikvision, which has provided thousands of cameras that monitor mosques, schools, and concentration camps in Xinjiang." Palantir hosting Amazon provides cloud web hosting services via Amazon Web Services (AWS) to Palantir. Palantir is a well-known data analysis company that has developed software used to gather data on undocumented immigrants. The software is hosted on Amazon's AWS cloud. In June 2018, Amazon employees signed a letter demanding that Amazon drop Palantir, a data collection company, as an AWS customer. 
According to Forbes, Palantir "has come under scrutiny because its software has been used by ICE agents to identify and start deportation proceedings against undocumented migrants." On July 7, 2019, local Jewish leaders connected with the organization Jews for Racial and Economic Justice, along with Make the Road New York, led a protest of more than 1,000 Jews and others in response to Amazon's financial ties to Palantir, and its $150 million in contracts with U.S. Immigration and Customs Enforcement (ICE). The direct action shut down Amazon's midtown Manhattan location of Amazon Books. The protest was held on the Jewish day of mourning and fasting, Tisha B'Av, which commemorates the destruction of the ancient temples in Jerusalem. Influence over local news In late May 2020, ahead of its May 27 shareholders' meeting, at least eleven local news stations aired identically worded segments which commented positively on Amazon's response to the coronavirus pandemic. Zach Rael, an anchor for the Oklahoma City station KOCO-TV, posted that Amazon had tried to send him the same prepared package. Senator and Amazon critic Bernie Sanders condemned the coverage and called it propaganda. The majority of the video provided was narrated by Amazon's public relations manager Todd Walker. Of the eleven identified channels, WTVG in Toledo, Ohio, was the only one that attributed the statements to him. Other legal action Trademark issues Amazon Bookstore In 1999, the Amazon Bookstore Cooperative of Minneapolis, Minnesota sued amazon.com for trademark infringement. The cooperative had been using the name "Amazon" since 1970, but reached an out-of-court agreement to share the name with the on-line retailer. Lush soap In 2014, UK courts declared that Amazon had infringed the trademark of Lush soap. The soap manufacturer, Lush, had previously made its products unavailable on Amazon. Despite this, Amazon advertised alternative products via Google searches for Lush soap. Alleged libel In September 2009, it emerged that Amazon was selling MP3 music downloads falsely suggesting a well-known Premier League football manager was a child sex offender. Despite a campaign urging the retailer to withdraw the item, they refused to do so, citing freedom of speech. The company eventually decided to withdraw the item from their UK website when legal action was threatened. However, they continued to sell the item on their American, German and French websites. Alleged release of personal details In October 2011, actress Junie Hoang filed Hoang v. Amazon.com, a $1 million lawsuit against Amazon in the Western District Court of Washington, for allegedly revealing her age on IMDb, which Amazon owns, by using personal details from her credit card. The lawsuit, which alleged fraud, breach of contract and violation of her private life and consumer rights, claimed that after she joined IMDbPro in 2008 to increase her chance of getting roles, her legal date of birth was added to her public profile, revealing that she is older than she looks and causing her to suffer a substantial decrease in acting work and earnings. The actress also stated that the site refused her request to remove the information in question. All claims against Amazon, and most claims against IMDb, were dismissed by Judge Marsha J. Pechman; the jury found for IMDb on the sole remaining claim. The case against IMDb remains under appeal. 
Amazon reviews As the customer review process has become more integral to Amazon.com marketing, reviews have been increasingly challenged for accuracy and ethics. In 2004, The New York Times reported that a glitch in the Amazon Canada website revealed that a number of book reviews had been written by authors of their own books or of competing books. In response, Amazon changed its policy of allowing anonymous reviews to one that gave an online credential marker to those reviewers registered with Amazon, though it still allowed them to remain anonymous through the use of pen names. In April 2010 the British historian Orlando Figes was found to have posted negative reviews of other authors' books. In June 2010, a Cincinnati news blog uncovered a group of 75 Amazon book reviews that had been written and posted by a public relations company on behalf of its clients. A study at Cornell University in that year asserted that 85% of Amazon's high-status consumer reviewers "had received free products from publishers, agents, authors and manufacturers." By June 2011, Amazon itself had moved into the publishing business and begun to solicit positive reviews from established authors in exchange for increased promotion of their own books and upcoming projects. Amazon.com's customer reviews are monitored for indecency, but negative comments are permitted. Robert Spector, author of the book amazon.com, describes how "when publishers and authors asked Bezos why amazon.com would publish negative reviews, he defended the practice by claiming that amazon.com was 'taking a different approach...we want to make every book available – the good, the bad, and the ugly...to let truth loose'" (Spector 132). Allegations have been made that Amazon has selectively deleted negative reviews of Scientology-related items despite compliance with comments guidelines. In November 2012, it was reported that Amazon.co.uk deleted "a wave of reviews by authors of their fellow writers' books in what is believed to be a response to [a] 'sock puppet' scandal." Following the listing for sale of Untouchable: The Strange Life and Tragic Death of Michael Jackson, a disparaging biography of Michael Jackson by Randall Sullivan, his fans, organized via social media as "Michael Jackson's Rapid Response Team to Media Attacks", bombarded Amazon with negative reviews and negative ratings of positive reviews. In 2017, Amazon removed a large number of 1-star reviews from the listing of former presidential candidate Hillary Clinton's book, What Happened. In 2018 and 2020, it was reported that Amazon had for some time allowed sellers to perform a bait-and-switch confidence trick: after reviewers had heaped praise on a particular product, the product would be replaced with a different product altogether while retaining the earlier positive reviews. Environmental impact Climate policy In September 2019, employees at Amazon's Seattle headquarters, organized under the name Amazon Employees for Climate Justice, walked out in protest over Amazon's climate policy. Specifically, they demanded that Amazon reach zero emissions by 2030, cut ties to oil and gas companies, and stop funding lobbyist groups accused of spreading climate denialism. 
Alleged destruction of unsold stock An undercover report from ITV News in June 2021 found that the company, at one of its 24 "fulfilment centres" in the UK, a warehouse in Dunfermline, Scotland, was destroying 130,000 items of unsold stock a week, often completely unused items such as smart TVs, laptops, hairdryers, computer drives, and books. A representative of Greenpeace, Sam Chetan Welsh, told ITV News: "It's an unimaginable amount of unnecessary waste, and just shocking to see a multi-billion pound company getting rid of stock in this way." Responding, Amazon itself said: "We are working towards a goal of zero product disposal" and rejected assertions that it sent unsold goods to landfill, although ITV journalists had followed lorries containing Amazon's discarded goods to such sites. The issue is not restricted to the UK. Legislation in France and Germany has been enacted to discourage retailers from destroying new goods after Amazon's policies were challenged. Notes References External links Amazon (company) Amazon.com Criticisms of software and websites Tech sector trade unions
39548304
https://en.wikipedia.org/wiki/Ghana%20Open%20Data%20Initiative
Ghana Open Data Initiative
Ghana Open Data Initiative (GODI) was started in January 2012 by the National Information Technology Agency (NITA) in partnership with the Web Foundation (WF), to make Government of Ghana data available to the public for re-use. The establishment of GODI is meant to promote efficiency, transparency and accountability in governance as well as to facilitate economic growth by means of the creation of mobile and web applications for the Ghanaian and world markets. The project was scheduled for completion in 2014 and aimed to create a sustainable open data ecosystem for Ghana. GODI was launched with 100 data sets categorized as political, legal, organizational, technical, social or economic. The vision of GODI is to develop an open data community involving the Government of Ghana, civil society organizations, industry, developer communities, academia, media practitioners, and the citizenry, interacting with one another with the aim of developing an open data portal to bring about transparency, accountability and efficiency in government. History At the close of 2011, the president of the Republic of Ghana, His Excellency Prof. J.E. Mills, signed the Open Government Partnership (OGP), a global initiative started by the United States government. The OGP is a multilateral initiative that aims to secure concrete commitments from governments to promote transparency, empower citizens, fight corruption, and harness new technologies to strengthen governance. In the spirit of multistakeholder collaboration, the OGP is overseen by a steering committee of governments, civil society organizations, academia and the developer community. Prior to Ghana signing onto the OGP, the World Wide Web Foundation (WF) had conducted feasibility studies in Ghana and Chile as special case studies for developing countries and published the report in February 2011. On the basis of the feasibility report, the National Information Technology Agency (NITA), an agency of the Ministry of Communications, which was created by an Act of Parliament to oversee and implement government policy on Information and Communication Technology (ICT), began discussions with the WF in April 2011 on how to develop an open government portal in Ghana. Over the preceding three years, NITA had been deploying a large government network dubbed GovNET across the country, together with data centers intended to serve as repositories of government data. NITA has also been mandated to connect the citizenry with government through e-services platforms offering services such as online passport applications, business registration, and birth and death registration, which it is executing with 11 pilot e-service platforms. On the initial signing of the OGP commitment by the embassy of Ghana in the US in September 2011, NITA intensified its discussions with the WF on developing a national plan to create an open data portal where government could make its data available in a format that civil society organizations (CSOs), the developer community, academia, the media and industry could re-use. These discussions culminated in a visit to Ghana by a team from the WF, during which a strategic plan was developed at a formal stakeholders' meeting in Accra, where the initiative was duly launched. The visiting WF team and NITA officials paid a courtesy call on the then Vice President, His Excellency John Dramani Mahama, to invite him to champion the Ghana Open Data Initiative. 
Relevance of open government data to Ghana
Open government data is particularly important to low and middle income countries like Ghana because:
Transparency and accountability are critical dimensions for foreign aid and investments, which are in turn essential for social and economic development.
The potential of ICT in developing countries to provide basic services in health, education, business and governance has been highlighted for more than a decade by the WSIS.
Easy access to government-held data reduces risks and transaction costs in the economic sector, thus reducing barriers to growth.
Citizen inclusion and participation in the government agenda have been historically low in developing countries, particularly due to lack of information and infrastructure. Increasing such citizen participation has proven to be essential for the establishment of stable democratic processes.
Data on government services is capable of attracting groups and organizations to form communities whose activities can improve social capital and economic growth.
Development of GODI
Executive level The WF feasibility report indicated that the Government of Ghana has the political will to make information transparently available to its citizens. In January 2012, then vice president of Ghana, John Mahama, informed a delegation from the WF that he was committed to championing the GODI at the cabinet level.
Public administration level The feasibility report of the WF indicated that government departments and agencies are in support of OGD initiatives. Substantial information is already available in digital format; however, end-user access still remains paper-based, which impacts negatively on the access and reuse of information. Since January 2012, when the GODI started, several public administrators have indicated their willingness to provide data in the format required to contribute to the project.
Civil society level The WF feasibility report indicated that there is already a movement towards reuse of information, driven by organizations like the Population Council as well as universities that are advocating access to raw data for their studies.
Implementation The National Information Technology Agency (NITA) is the implementing agency for the Ghana Open Data Initiative (GODI). The GODI portal was developed on the fourth thematic area of the Open Government Partnership. The objectives of the GODI project are to:
Provide a central platform for access to public government data.
Bring about the development of the open data community.
Promote participation between government, civil society organizations, academia, media practitioners, industry, the developer community and the citizen.
Serve as a concrete action plan of the fourth thematic area of the Open Government Partnership (technology and innovation) for the Republic of Ghana.
Promote transparency, accountability and efficiency in government through citizen feedback.
Benefits Ghana's decision to create an open data portal was informed by the fact that Open Government Data (OGD) programs around the world have demonstrated multiple benefits. 
GODI perceives that Ghana can gain the benefits of open data, which are grouped around three main themes:
Transparency and accountability
Increased transparency of governments
Greater accountability of officials by citizens being able to see and challenge individual spending and purchasing decisions
Behaviour change due to the possibility of greater scrutiny
Better understanding by civil society of the reasons for government decisions
Better democracy and increased civic capital
Better informed and balanced journalism, i.e. data journalism
Improved public services
Increased number of services to people due to an increased base of potential service providers
New synergies among government, public administration and civil society organizations
Increased citizen participation and inclusion through extended offers of services closer to people's needs
Closer cooperation between central and local government
Increased internal government efficiency and effectiveness
Economic growth
New business opportunities for services to businesses and citizens using government data
Better functioning of the economy through easy access to core reference information held by the government
Increased employment for application and service developers
New innovative uses of OGD that can help spur innovation and development in the IT sector
References External links NITA website GODI website Open data Access to Knowledge movement Open content Public domain Science and technology in Ghana Organizations established in 2012 2012 establishments in Ghana
18017448
https://en.wikipedia.org/wiki/NSDG
NSDG
NSDG (National e-Governance Services Delivery Gateway) is one of India's Mission Mode Projects (MMP). The initiative was taken by the Department of Information Technology (DIT), Ministry of Communications & IT. CDAC Mumbai has been entrusted with building the NSDG and the NSD (National Services Directory). NSDG is standards-based messaging middleware for e-Governance services. It is classified as an Integrated Mission Mode Project of the Department of Information Technology, Government of India, under the National e-Governance Plan (NeGP), and was the second Mission Mode Project to enter the operational phase, going live on 14 August 2008. Three months before its go-live, NSDG won The World is Open Award 2008 in the e-Governance category, at a function organized by Skoch Consultancy Services and Red Hat. The National e-Governance Plan (NeGP) of the Government of India aims to enable cooperation, collaboration and integration of information across different departments of central, state and local government. Government systems are characterized by islands of legacy systems using heterogeneous platforms and technologies, spread across diverse geographical locations and in varying states of automation, which makes this task very challenging. NSDG can simplify the task by acting as a standards-based messaging switch and providing seamless interoperability and exchange of data across the departments. Acting as a nerve centre, NSDG would handle a large number of transactions and would help in tracking and time-stamping all transactions of the Government.
Other benefits
Enable transaction logging and time stamping for tracking of transactions and centralized control
Departments which do not have complete automation or back-end workflow can still deliver e-services to citizens in a limited manner through the Gateway
Help protect legacy investments in software and hardware by easily integrating them with other technology platforms and software implementations
Can act as a shared services hub by supporting value-added service interfaces like the payment gateway and authentication interface
Organizations involved
Apart from the Department of IT, Government of India, which is the owner of the project, other parties involved are:
CDAC (Centre for Development of Advanced Computing), Mumbai
STQC (Standardisation Testing and Quality Certification)
NIC (National Informatics Centre)
National Institute for Smart Government
CDAC has shouldered the responsibility of designing and implementing the NSDG, and will be coordinating operations and maintenance for a period of 5 years starting from the date of go-live. STQC is the testing and certifying authority. The NSDG solution infrastructure is hosted at the NIC Data Centre and Data Recovery Centre. NISG has served as an advisory and consulting agency to DIT. DIT has also envisaged a Constellation of Gateways, which will include the NSDG at the centre, various SSDGs (State e-Governance Services Delivery Gateways) and the Domain Gateways. Each gateway will have a service directory called the Gateway Services Directory (GSD), which will keep information on all services available through that gateway. Apart from the GSD, there will be a National Services Directory (NSD), which will serve as a central registry for gateway address resolution to facilitate inter-gateway communication.
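As a purely illustrative sketch of the directory-based routing idea described above, the fragment below (written in Python) models the NSD as a registry that resolves a gateway identifier to an endpoint, and a GSD as a per-gateway list of services; every name, field and address in it is hypothetical and is not taken from the NSDG specifications, message formats or APIs.

# Hypothetical illustration of gateway address resolution through a central
# services directory; names and fields are invented for this sketch and do
# not reflect the actual NSDG/NSD message formats or interfaces.

# National Services Directory: maps a gateway identifier to its endpoint.
NSD = {
    "NSDG": "https://nsdg.example.gov.in/gateway",
    "SSDG-MH": "https://ssdg-mh.example.gov.in/gateway",
}

# Gateway Services Directory for one (hypothetical) state gateway: maps a
# service code to the department application that provides it.
GSD_SSDG_MH = {
    "BIRTH-CERT": "municipal-registrar-app",
    "PROPERTY-TAX": "revenue-dept-app",
}

def route(message, target_gateway, service_code):
    """Resolve the target gateway through the NSD, then look up the service
    in that gateway's GSD (sketched here for a single gateway only)."""
    endpoint = NSD.get(target_gateway)
    if endpoint is None:
        raise LookupError(f"gateway {target_gateway!r} is not registered in the NSD")
    provider = GSD_SSDG_MH.get(service_code, "unknown-service")
    # A real gateway would wrap `message` in the standard envelope defined by
    # the interoperability specifications and forward it to `endpoint`.
    return {"endpoint": endpoint, "provider": provider, "payload": message}

print(route({"applicant": "A. Citizen"}, "SSDG-MH", "BIRTH-CERT"))

In the deployed system, such a request would be wrapped in the standard envelope defined by the interoperability specifications listed below before being forwarded to the resolved gateway.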
Standards behind the Gateway Constellation
Interoperability Interface Protocol (IIP)
Interoperability Interface Specifications (IIS)
Inter-Gateway Interconnect Specifications (IGIS)
Gateway Common Services Specifications (GCSS)
Platform The project is built on the J2EE platform and is based on a service-oriented architecture (SOA). It uses web services extensively for publishing its services. Open-source technologies were given the highest priority when choosing the operating system, application server, database and other required tools. The security controls of the NSDG comply with ISO 17799:2005/BS7799-1:2005. Other Gateways NSDG will work at the central level. Apart from this, each state will have its own gateway, the SSDG (State e-Governance Services Delivery Gateway). See also My Gov Further reading http://www.nsdg.gov.in (Official Website) http://www.mit.gov.in/default.aspx?id=850 http://mutiny.in/2008/08/09/national-e-governance-services-delivery-gateway/ Notes NSDG and CDAC logos are trademarks of DIT, Govt. of India and C-DAC Mumbai, respectively. Ministry of Communications and Information Technology (India) Internet in India E-government in India
253848
https://en.wikipedia.org/wiki/Apple%20DOS
Apple DOS
Apple DOS is the family of disk operating systems for the Apple II series of microcomputers from late 1978 through early 1983. It was superseded by ProDOS in 1983. Apple DOS has three major releases: DOS 3.1, DOS 3.2, and DOS 3.3; each one of these three releases was followed by a second, minor "bug-fix" release, but only in the case of Apple DOS 3.2 did that minor release receive its own version number, Apple DOS 3.2.1. The best-known and most-used version is Apple DOS 3.3 in the 1980 and 1983 releases. Prior to the release of Apple DOS 3.1, Apple users had to rely on audio cassette tapes for data storage and retrieval. Version history When Apple Computer introduced the Apple II in April 1977, the new computer had no disk drive or disk operating system (DOS). Although Apple co-founder Steve Wozniak designed the Disk II controller late that year, and believed that he could have written a DOS, his co-founder Steve Jobs decided to outsource the task. The company considered using Digital Research's CP/M, but Wozniak sought an operating system that was easier to use. On 10 April 1978 Apple signed a $13,000 contract with Shepardson Microsystems to write a DOS and deliver it within 35 days. Apple provided detailed specifications, and early Apple employee Randy Wigginton worked closely with Shepardson's Paul Laughton as the latter wrote the operating system with punched cards and a minicomputer. There was no Apple DOS 1 or 2. Versions 0.1 through 2.8 were serially enumerated revisions during development, which might as well have been called builds 1 through 28. Apple DOS 3.0, a renamed issue of version 2.8, was never publicly released due to bugs. Apple published no official documentation until release 3.2. Apple DOS 3.1 was publicly released in June 1978, slightly more than one year after the Apple II was introduced, becoming the first disk-based operating system for any Apple computer. A bug-fix release came later, addressing a problem with the utility used to create Apple DOS master (bootable) disks: the built-in disk-initialization command created disks that could be booted only on machines with at least the same amount of memory as the one that had created them, whereas the revised master-creation utility includes a self-relocating version of DOS that boots on Apples with any memory configuration. Apple DOS 3.2 was released in 1979 to reflect changes in computer booting methods that were built into the successor of the Apple II, the Apple II Plus. New firmware included an auto-start feature which automatically found a disk controller and booted from it when the system was powered up—earning it the name "Autostart ROM". DOS 3.2.1 was then released in July 1979 with some minor bug fixes. Apple DOS 3.3 was released in 1980. It improves various functions of release 3.2, while also allowing for large gains in available floppy disk storage; the newer P5A/P6A PROMs in the disk controller enabled the reading and writing of data at a higher density, so instead of 13 sectors (3.25 KiB), 16 sectors (4 KiB) of data can be stored per disk track, increasing the capacity from 113.75 KB to 140 KB per disk side. On a DOS 3.3-formatted disk, 16 KB of this is used by filesystem overhead and a copy of DOS, leaving 124 KB for user programs and data. DOS 3.3 is, however, not backward compatible; it cannot read or write DOS 3.2 disks. To address this problem, Apple Computer released a utility called "MUFFIN" to migrate Apple DOS 3.2 files and programs to version 3.3 disks. Apple never offered a utility to copy in the other direction. 
To migrate Apple DOS 3.3 files back to version 3.2 disks, someone wrote a "NIFFUM" utility. There are also commercial utilities (such as Copy II Plus) that can copy files from and to either format (and eventually ProDOS as well). Release 3.3 also improves the ability to switch between Integer BASIC and Applesoft BASIC, if the computer has a language card (RAM expansion) or firmware card. Technical details Apple DOS 3.1 disks use 13 sectors of data per track, each sector being 256 bytes. It uses 35 tracks per disk side, and can access only one side of the floppy disk, unless the user flips the disk over. This gives the user a total storage capacity of 113.75 KB per side, of which about 10 KB are used to store DOS itself and the disk directory, leaving about 100 KB for user programs. The first layer of the operating system is called RWTS, which stands for "read/write track sector". This layer consists of subroutines for track seeking, sector reading and writing, and disk formatting. An API called the File Manager was built on top of this, and implements functions to open, close, read, write, delete, lock (i.e. write-protect), unlock (i.e. write-enable), and rename files, and to verify a file's structural integrity. There is also a catalog function, for listing files on the diskette, and an "init" function, which formats a disk for use with DOS, storing a copy of DOS on the first three tracks, and storing a startup program (usually called HELLO) that is auto-started when this disk is booted from. On top of the File Manager API, the main DOS routines are implemented; these hook into the machine's BASIC interpreter and intercept all disk commands. DOS provides BLOAD, BSAVE, and BRUN for storing, loading, and running binary executables. LOAD, RUN, and SAVE are provided for BASIC programs, and EXEC is provided for running text-based batch files consisting of BASIC and DOS commands. Finally, four types of files exist, identified by letters in a catalog listing:
I – Integer BASIC programs (stored in a compact format, not plain-text)
A – Applesoft BASIC programs (also stored in a packed, space-saving format)
B – Binary files, either executable machine-language programs, or data files
T – ASCII text files (or plain-text, unpacked batch files)
There are four additional file types: "R", "S", and an additional "A" and "B", none of which are fully supported. DOS recognizes these types for catalog listings only, and there are no direct ways to manipulate these types of files. The "R" type found some use for relocatable binary executable files. A few programs support the "S" type as data files. A call vector table in the region of $03D0–03FF allows programs to find DOS wherever it is loaded in the system memory. For example, if the DOS hooked into the BASIC CLI stops functioning, it can be reinitialized by calling location $03D0 (976), hence the traditional "3D0G" ("3D0 go") command to return to BASIC from the System Monitor.
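The capacity figures quoted above follow directly from this geometry. The short sketch below, written in Python purely as an illustration (Apple DOS itself was written in 6502 assembly language, and none of these identifiers come from the DOS source code), recomputes the raw and usable figures for the 13-sector and 16-sector formats and records the main catalog file-type letters.

# Illustrative recalculation of the Apple DOS disk capacities described above.
BYTES_PER_SECTOR = 256
TRACKS_PER_SIDE = 35

def raw_capacity_kb(sectors_per_track):
    # Raw capacity of one disk side in kilobytes (1 KB = 1024 bytes).
    return sectors_per_track * BYTES_PER_SECTOR * TRACKS_PER_SIDE / 1024

dos32_raw = raw_capacity_kb(13)   # 113.75 KB (DOS 3.1/3.2, 13-sector format)
dos33_raw = raw_capacity_kb(16)   # 140.0 KB (DOS 3.3, 16-sector format)

# Overheads as stated in the text: roughly 10 KB for DOS plus the catalog on
# a 13-sector disk, and 16 KB on a 16-sector DOS 3.3 disk.
print(f"13-sector side: {dos32_raw} KB raw, roughly {dos32_raw - 10:.0f} KB for user files")
print(f"16-sector side: {dos33_raw} KB raw, {dos33_raw - 16:.0f} KB for user files")

# Main catalog file-type letters, per the list above.
FILE_TYPES = {
    "I": "Integer BASIC program (packed format)",
    "A": "Applesoft BASIC program (packed format)",
    "B": "binary file (machine code or data)",
    "T": "ASCII text file",
}

Run in any Python interpreter, this reproduces the 113.75 KB, 140 KB and 124 KB figures given above.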
Boot loader The process of loading Apple DOS involves a series of very tiny programs, each of which carries the loading process forward a few steps before passing control to the next program in the chain. Originally, the Apple II ROM did not support disk booting at all. At power-up it would display the System Monitor prompt. Both the Monitor and Integer BASIC have commands to redirect printing to a printer driver in a designated slot, so the conventional way to boot from disk then was to command the computer to start "printing" to the disk interface card, typically installed in slot 6, using the command 6 Control-P (from the ML monitor) or PR#6 (from BASIC). When the monitor or BASIC issued the next prompt character, the computer would call the ROM routines on the disk card to "print" to it, which would then proceed with the boot sequence. (One could use input redirection to similar ends.) Alternatively, from the ML monitor, the user could invoke the controller's boot code directly by typing its slot address, for example C600G for a card in slot 6. When the Apple II Plus was introduced, it included the ability to scan each expansion slot (working downward from slot 7 to slot 1) for a bootable expansion card ROM, and automatically call it. The expansion card ROM boot code attempts to boot from drive 1 of the controller by moving the read/write arm to track zero and attempting to read 256 bytes from sector zero of that track. (If no readable disk is available, the drive spins indefinitely until one is provided and the drive door is closed.) Sector zero contains a small program which instructs the computer to read sectors 0 through 9 of track zero into memory using part of the ROM boot code (rereading sector 0 in the process). The program in sectors 1–9 of track 0, including the complete RWTS code, then proceeds to load tracks 1 and 2, which contain the rest of DOS. On a system master disk, code is also included to determine the computer's RAM configuration and relocate DOS as high into system memory as possible, up to the 48 KB limit of the Apple II's main memory ($BFFF). Once DOS is loaded into memory, it attempts to load and execute a startup program as indicated in the DOS program code. This is commonly a BASIC language program named HELLO (or some other name), but DOS can be modified to run other types of programs at startup, such as an executable binary file. The appearance of the right-hand bracket (]) on the screen is an indication to the user that an Applesoft BASIC startup program is loading, while a greater-than symbol (>) indicates that an Integer BASIC program is loading. (These are the prompts for the respective versions of BASIC, which are being initialized at this point.) The startup program then begins executing. Integer BASIC and Applesoft BASIC support The original Apple II included a BASIC interpreter in ROM, known originally as Apple BASIC and later as Integer BASIC. Variables in this language can only handle integer numbers ranging from −32,768 to +32,767 (16-bit binary values); floating point numbers are not supported. Apple commissioned Microsoft to develop Applesoft BASIC, capable of handling floating-point numbers. Applesoft BASIC cannot run Integer BASIC programs, causing some users to resist upgrading to it. DOS 3.3 was released when Applesoft BASIC was standard in ROM on the Apple II Plus, so Apple designed it to support switching back and forth between the two BASIC interpreters. Integer BASIC is loaded into RAM on the language card of an Apple II (if present), and by typing FP or INT from BASIC the user can switch between the two versions. Decline After 1980, Apple DOS entered into a state of stagnation as Apple concentrated its efforts on the ill-fated Apple III computer and its SOS operating system. 
Two more versions of Apple DOS, both still called DOS 3.3 but with some bug fixes and better support for the new Apple IIe model, were released in early and mid-1983. Without third-party patches, Apple DOS can only read floppy disks in a 5.25-inch Disk II drive and cannot access any other media, such as hard disk drives, virtual RAM drives, or 3.5-inch floppy disk drives. The structure of Apple DOS disks (particularly the free sector map, which was restricted to part of a single sector) is such that it is not possible to have more than 400 KB available at a time per drive without a major rewrite of almost all sections of the code; this is the main reason Apple abandoned this iteration of DOS in 1983, when Apple DOS was entirely replaced by ProDOS. ProDOS retains the 16-sector low-level format of DOS 3.3 for 5.25-inch disks, but introduces a new high-level format that is suitable for devices of up to 32 MB; this makes it suitable for hard disks from that era and 3.5-inch floppies. All the Apple computers from the II Plus onward can run both DOS 3.3 and ProDOS, the Plus requiring a "Language Card" memory expansion to use ProDOS; the IIe and later models have built-in Language Card hardware, and so can run ProDOS directly. ProDOS includes software to copy files from Apple DOS disks. However, many people who had no need for the improvements of ProDOS (and who did not like its much higher memory footprint) continued using Apple DOS or one of its clones long after 1983. The Apple convention of storing a bootable OS on every single floppy disk means that commercial software can be used no matter which OS the user owns. A program called DOS.MASTER enables users to have multiple virtual DOS 3.3 partitions on a larger ProDOS volume, which allows the use of many floppy-based DOS programs with a hard disk. Shortly after ProDOS came out, Apple withdrew permission from third parties to redistribute DOS 3.3, but granted one company, Syndicomm, an exclusive license to resell DOS 3.3. Commercial games usually did not use Apple DOS, instead having their own custom disk routines for copy protection purposes as well as for performance. Performance improvements DOS's RWTS routine can read or write a track in two revolutions with proper interleaving. A sector of the spinning disk passes under the read/write head while the RWTS routine is decoding the just-read sector (or encoding the next one to be written), and if this missed sector is the next one needed, DOS needs to wait nearly an entire revolution of the disk for the sector to come around again. This is called "blowing a rev" and is a well-understood performance bottleneck in disk systems. To avoid this, the sectors on a DOS disk are arranged in an interleaved order: 0 7 e 6 d 5 c 4 b 3 a 2 9 1 8 f. Later, ProDOS arranged the sectors in this order: 0 8 1 9 2 a 3 b 4 c 5 d 6 e 7 f. While sector 0 is being read and decoded, sector 8 passes by, so that sector 1, the next sector likely to be needed, is available without waiting. When reading sector 7, two unneeded sectors, f and 0, pass by before sector 8 is available, and when reading sector 15, the drive will always have to wait an extra revolution for sector 0 on the same track. However, the sector 0 actually needed in most cases will be on the next-higher track, and that track can be arranged relative to the last one to allow the needed time to decode the just-read sector and move the head before sector 0 comes around. On average, a full track can be read in two revolutions of the disk.
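The benefit of interleaving can be illustrated with a short simulation. The sketch below is only an illustration (it is not code from DOS or ProDOS, and the one-sector decode time is an assumption taken from the description above); it counts how many sector slots pass under the read head when logical sectors 0 through 15 of one track are requested in order, comparing a non-interleaved layout with the 2:1 ProDOS layout listed above:

```python
# Physical slot order of logical sectors around one track:
# slot i of the track holds logical sector LAYOUT[i].
SEQUENTIAL = list(range(16))                      # no interleave at all
PRODOS_2_TO_1 = [0x0, 0x8, 0x1, 0x9, 0x2, 0xA, 0x3, 0xB,
                 0x4, 0xC, 0x5, 0xD, 0x6, 0xE, 0x7, 0xF]

def slots_to_read_track(layout, decode_slots=1):
    """Sector slots elapsed while reading logical sectors 0-15 in order.

    After each sector is read, `decode_slots` more slots pass under the
    head while it is decoded (the situation described above).
    16 slots equal one revolution of the disk.
    """
    slot_of = {sector: slot for slot, sector in enumerate(layout)}
    head, elapsed = 0, 0
    for logical in range(16):
        wait = (slot_of[logical] - head) % 16     # unneeded slots passing by
        elapsed += wait + 1 + decode_slots        # wait, read, then decode
        head = (slot_of[logical] + 1 + decode_slots) % 16
    return elapsed

for name, layout in (("no interleave", SEQUENTIAL),
                     ("2:1 interleave", PRODOS_2_TO_1)):
    print(f"{name}: about {slots_to_read_track(layout) / 16:.1f} revolutions")
# -> no interleave: about 16.1 revolutions (a "blown rev" on nearly every sector)
# -> 2:1 interleave: about 2.1 revolutions
```

With no interleave the drive waits nearly a full revolution for every sector, the "blowing a rev" case described above, while the 2:1 layout brings a full track down to roughly two revolutions.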
Unfortunately, the early DOS File Manager subverted this efficiency by copying bytes read from or written to a file one at a time between a disk buffer and main memory, requiring more time and resulting in DOS constantly blowing revs when reading or writing files. Programs became available early on to format disks with modified sector interleaves; these disks give DOS more time between sectors to copy the data, ameliorating the problem. Later, programmers outside Apple rewrote the File Manager routines to avoid making the extra copy for most sectors of a file; RWTS was instructed to read or write sectors directly to or from main memory rather than from a disk buffer whenever a full sector was to be transferred. An early "patch" to provide this functionality was published in Call-A.P.P.L.E.. Speedups in the LOAD command of three to five times were typical. This functionality soon appeared in commercial products, such as Pronto-DOS, Diversi-DOS, Hyper-DOS, and David-DOS, along with additional features, but it was never used in an official Apple DOS release. Similar functionality was, however, employed by Apple's successor operating system, ProDOS. The Apple IIGS-specific operating system GS/OS would eventually employ an even more efficient "scatter read" technique that would read any sector that happened to be passing under the read head if it was needed for the file being read. Source code release In 2013, more than 35 years after the Apple II debuted, the original Apple DOS source code was released by the Computer History Museum on its website. It was donated by the original author, Paul Laughton. References Further reading External links Paul Laughton's account of writing DOS 3.1 Apple II History: DOS A2Central.com - Apple II news and downloads Everything2.com's DOS 3.1 Article Apple II DOS version 3.1 source code (1978, released in 2013 with the permission of Apple Inc.) DOS DOS, Apple Discontinued operating systems Disk operating systems 1978 software
2183606
https://en.wikipedia.org/wiki/Sleep%20mode
Sleep mode
Sleep mode (or suspend to RAM) is a low-power mode for electronic devices such as computers, televisions, and remote-controlled devices. These modes save significantly on electrical consumption compared to leaving a device fully on and, upon resume, allow the user to avoid having to reissue instructions or to wait for a machine to reboot. Many devices signify this power mode with a pulsed or red-colored LED power light. Computers In computers, entering a sleep state is roughly equivalent to "pausing" the state of the machine. When restored, the operation continues from the same point, having the same applications and files open. Sleep Sleep mode has gone by various names, including Stand By, Suspend and Suspend to RAM. Machine state is held in RAM and, when placed in sleep mode, the computer cuts power to unneeded subsystems and places the RAM into a minimum power state, just sufficient to retain its data. Because of the large power saving, most laptops automatically enter this mode when the computer is running on batteries and the lid is closed. If undesired, the behavior can be altered in the operating system settings of the computer. A computer must consume some energy while sleeping in order to power the RAM and to be able to respond to a wake-up event. A sleeping PC is on standby power, and this is covered by regulations in many countries; in the United States, for example, the One Watt Initiative has limited such power since 2010. In addition to a wake-up press of the power button, PCs can also respond to other wake cues, such as from the keyboard or mouse, an incoming telephone call on a modem, or a local area network signal. Hibernation Hibernation, also called Suspend to Disk on Linux, saves all computer operational data on the fixed disk before turning the computer off completely. On switching the computer back on, the computer is restored to its state prior to hibernation, with all programs and files open, and unsaved data intact. In contrast with standby mode, hibernation mode saves the computer's state on the hard disk, which requires no power to maintain, whereas standby mode saves the computer's state in RAM, which requires a small amount of power to maintain. Hybrid sleep Sleep mode and hibernation can be combined: the contents of RAM are first copied to non-volatile storage like for regular hibernation, but then, instead of powering down, the computer enters sleep mode. This approach combines the benefits of sleep mode and hibernation: The machine can resume instantaneously, but it can also be powered down completely (e.g. due to loss of power) without loss of data, because it is already effectively in a state of hibernation. This mode is called "hybrid sleep" in versions of Microsoft Windows other than Windows XP. A hybrid mode is supported by some portable Apple Macintosh computers, compatible hardware running Windows Vista or newer, and Linux distributions running kernel 3.6 or newer. ACPI ACPI (Advanced Configuration and Power Interface) is the current standard for power management, superseding APM (Advanced Power Management) and providing the backbone for sleep and hibernation on modern computers. Sleep mode corresponds to ACPI state S3. When a non-ACPI device is plugged in, Windows will sometimes disable stand-by functionality for the whole operating system. Without ACPI functionality, as seen on older hardware, sleep mode is usually restricted to turning off the monitor and spinning down the hard drive.
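On Linux systems that implement the ACPI sleep states described above, the kernel exposes them through the sysfs file /sys/power/state: writing "mem" requests suspend-to-RAM (ACPI S3 on most hardware) and "disk" requests hibernation. A minimal sketch, assuming a Linux machine and root privileges (desktop users would normally go through a front end such as systemd or their desktop environment instead):

```python
# Request an ACPI sleep state on Linux via the kernel's sysfs interface.
# Requires root; "mem" is suspend-to-RAM (typically ACPI S3), "disk" is
# hibernation (suspend-to-disk).  Illustrative sketch only.
from pathlib import Path

STATE_FILE = Path("/sys/power/state")

def supported_sleep_states():
    """Return the sleep states the running kernel advertises."""
    return STATE_FILE.read_text().split()

def suspend_to_ram():
    if "mem" not in supported_sleep_states():
        raise RuntimeError("suspend-to-RAM not supported on this system")
    # The write blocks until the machine resumes from sleep.
    STATE_FILE.write_text("mem")

if __name__ == "__main__":
    print("Supported sleep states:", supported_sleep_states())
    # suspend_to_ram()   # uncomment to actually suspend (needs root)
```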
Microsoft Windows Microsoft Windows 2000 and later support sleep at the operating system level (ACPI S3 state) without special drivers from the hardware manufacturer. Windows Vista's Fast Sleep and Resume feature (also known as hybrid sleep) saves the contents of volatile memory to the hard disk before entering sleep mode. If power to memory is lost, the system resumes from the copy on the hard disk. The user has the option of hibernating directly if they wish. In versions prior to Windows Vista, sleep mode was under-used in business environments as it was difficult to enable organization-wide without resorting to third-party PC power management software. As a result, these earlier versions of Windows were criticized for wasting energy. There remains a market in third-party PC power management software for newer versions of Windows, offering features beyond those built into the operating system. Most products offer Active Directory integration and per-user/per-machine settings, with the more advanced products offering multiple power plans, scheduled power plans, anti-insomnia features and enterprise power usage reporting. Vendors include 1E NightWatchman, Data Synergy PowerMAN (Software) and Verdiem SURVEYOR. macOS Sleep on Macs running macOS consists of the traditional sleep, Safe Sleep, and Power Nap. In System Preferences, Safe Sleep is referred to as sleep. Since Safe Sleep also allows the machine state to be restored in the event of a power outage, hibernation, unlike on other operating systems, was never offered as a separate option. In 2005, some Macs running Mac OS X v10.4 began to support Safe Sleep. The feature saves the contents of volatile memory to the system hard disk each time the Mac enters Sleep mode. The Mac can instantaneously wake from sleep mode if power to the RAM has not been lost. However, if the power supply was interrupted, such as when removing batteries without an AC power connection, the Mac would wake from Safe Sleep instead, restoring memory contents from the hard drive. Safe Sleep capability is found in Mac models starting with the October 2005 revision of the PowerBook G4 (Double-Layer SD). Mac OS X v10.4 or higher is also required. In 2012, Apple introduced Power Nap with OS X Mountain Lion (10.8) and select Mac models. Power Nap allows the Mac to perform tasks silently, such as iCloud syncing and Spotlight indexing. Only low-energy tasks are performed when on battery power, while higher-energy tasks are performed with AC power. Unicode Because of the widespread use of the power symbol, a campaign was launched to add a set of power characters to Unicode. In February 2015, the proposal was accepted by Unicode and the characters were included in Unicode 9.0. The characters are in the "Miscellaneous Technical" block, with code points U+23FB through U+23FE. The sleep symbol, ⏾ (U+23FE), is defined as "Power Sleep Symbol". See also Shutdown (computing) Modern Standby References Energy conservation Operating system technology Windows administration
19639
https://en.wikipedia.org/wiki/Marvin%20Minsky
Marvin Minsky
Marvin Lee Minsky (August 9, 1927 – January 24, 2016) was an American cognitive and computer scientist concerned largely with research in artificial intelligence (AI), co-founder of the Massachusetts Institute of Technology's AI laboratory, and author of several texts concerning AI and philosophy. Minsky received many accolades and honors, including the 1969 Turing Award. Biography Marvin Lee Minsky was born in New York City, to an eye surgeon father, Henry, and to a mother, Fannie (Reiser), who was a Zionist activist. His family was Jewish. He attended the Ethical Culture Fieldston School and the Bronx High School of Science. He later attended Phillips Academy in Andover, Massachusetts. He then served in the US Navy from 1944 to 1945. He received a B.A. in mathematics from Harvard University in 1950 and a Ph.D. in mathematics from Princeton University in 1954. His doctoral dissertation was titled "Theory of neural-analog reinforcement systems and its application to the brain-model problem." He was a Junior Fellow of the Harvard Society of Fellows from 1954 to 1957. He was on the MIT faculty from 1958 until his death. He joined the staff at MIT Lincoln Laboratory in 1958, and a year later he and John McCarthy initiated what is now named the MIT Computer Science and Artificial Intelligence Laboratory. He was the Toshiba Professor of Media Arts and Sciences, and professor of electrical engineering and computer science. Contributions in computer science Minsky's inventions include the first head-mounted graphical display (1963) and the confocal microscope (1957, a predecessor to today's widely used confocal laser scanning microscope). He developed, with Seymour Papert, the first Logo "turtle". Minsky also built, in 1951, the first randomly wired neural network learning machine, SNARC. In 1962, Minsky worked on small universal Turing machines and published his well-known 7-state, 4-symbol machine. Minsky's book Perceptrons (written with Seymour Papert) attacked the work of Frank Rosenblatt, and became the foundational work in the analysis of artificial neural networks. The book is the center of a controversy in the history of AI, as some claim it to have had great importance in discouraging research on neural networks in the 1970s and contributing to the so-called "AI winter". He also developed several other AI models. His paper "A Framework for Representing Knowledge" created a new paradigm in programming. While his Perceptrons is now more a historical than a practical book, the theory of frames is in wide use. Minsky also wrote of the possibility that extraterrestrial life may think like humans, permitting communication. In the early 1970s, at the MIT Artificial Intelligence Lab, Minsky and Papert started developing what came to be known as the Society of Mind theory. The theory attempts to explain how what we call intelligence could be a product of the interaction of non-intelligent parts. Minsky said that the biggest source of ideas about the theory came from his work in trying to create a machine that uses a robotic arm, a video camera, and a computer to build with children's blocks. In 1986, Minsky published The Society of Mind, a comprehensive book on the theory which, unlike most of his previously published work, was written for the general public. In November 2006, Minsky published The Emotion Machine, a book that critiques many popular theories of how human minds work and suggests alternative theories, often replacing simple ideas with more complex ones.
Recent drafts of the book are freely available from his webpage. Role in popular culture Minsky was an adviser on Stanley Kubrick's movie 2001: A Space Odyssey; one of the movie's characters, Victor Kaminski, was named in Minsky's honor. Minsky is mentioned explicitly in Arthur C. Clarke's derivative novel of the same name, where he is portrayed as achieving a crucial breakthrough in artificial intelligence in the then-future 1980s, paving the way for HAL 9000 in the early 21st century. Personal life In 1952, Minsky married pediatrician Gloria Rudisch; together they had three children. Minsky was a talented improvisational pianist who published musings on the relations between music and psychology. Opinions Minsky was an atheist. He was a signatory to the Scientists' Open Letter on Cryonics. He was a critic of the Loebner Prize for conversational robots, and argued that a fundamental difference between humans and machines was that while humans are machines, they are machines in which intelligence emerges from the interplay of the many unintelligent but semi-autonomous agents that comprise the brain. He argued that "somewhere down the line, some computers will become more intelligent than most people," but that it was very hard to predict how fast progress would be. He cautioned that an artificial superintelligence designed to solve an innocuous mathematical problem might decide to assume control of Earth's resources to build supercomputers to help achieve its goal, but believed that such negative scenarios are "hard to take seriously" because he felt confident that AI would go through a lot of testing before being deployed. Association with Jeffrey Epstein Minsky received a $100,000 research grant from Jeffrey Epstein in 2002, four years before Epstein's first arrest for sex offenses; it was the first from Epstein to MIT. Minsky received no further research grants from him. Minsky organized two academic symposia on Epstein's private island Little Saint James, one in 2002 and another in 2011, after Epstein had become a registered sex offender. Virginia Giuffre testified in a 2015 deposition in her defamation lawsuit against Epstein's associate Ghislaine Maxwell that Maxwell "directed" her to have sex with Minsky among others. There has been no allegation that sex between them took place, nor any lawsuit against Minsky's estate. Minsky's widow, Gloria Rudisch, says that he could not have had sex with any of the women at Epstein's residences, as the two of them were always together during all of their visits. Death In January 2016, Minsky died of a cerebral hemorrhage at the age of 88. Minsky was a member of Alcor Life Extension Foundation's Scientific Advisory Board. Alcor will neither confirm nor deny whether Minsky was cryonically preserved. Bibliography (selected) 1967 – Computation: Finite and Infinite Machines, Prentice-Hall 1986 – The Society of Mind 2006 – The Emotion Machine: Commonsense Thinking, Artificial Intelligence, and the Future of the Human Mind Awards and affiliations Minsky won the Turing Award (the greatest distinction in computer science) in 1969, the Golden Plate Award of the American Academy of Achievement in 1982, the Japan Prize in 1990, the IJCAI Award for Research Excellence for 1991, and the Benjamin Franklin Medal from the Franklin Institute for 2001.
In 2006, he was inducted as a Fellow of the Computer History Museum "for co-founding the field of artificial intelligence, creating early neural networks and robots, and developing theories of human and machine cognition." In 2011, Minsky was inducted into IEEE Intelligent Systems' AI Hall of Fame for the "significant contributions to the field of AI and intelligent systems". In 2014, Minsky won the Dan David Prize for "Artificial Intelligence, the Digital Mind". He was also awarded with the 2013 BBVA Foundation Frontiers of Knowledge Award in the Information and Communication Technologies category. Minsky was affiliated with the following organizations: United States National Academy of Engineering United States National Academy of Sciences Extropy Institute's Council of Advisors Alcor Life Extension Foundation's Scientific Advisory Board kynamatrix Research Network's Board of Directors Media appearances Future Fantastic (1996) Machine Dreams (1988) See also List of pioneers in computer science Notes References External links Oral history interview with Marvin Minsky at Charles Babbage Institute, University of Minnesota, Minneapolis. Minsky describes artificial intelligence (AI) research at the Massachusetts Institute of Technology (MIT). Topics include: the work of John McCarthy; changes in the MIT research laboratories with the advent of Project MAC; research in the areas of expert systems, graphics, word processing, and time-sharing; variations in the Advanced Research Projects Agency (ARPA) attitude toward AI. Oral history interview with Terry Winograd at Charles Babbage Institute, University of Minnesota, Minneapolis. Winograd describes his work in computer science, linguistics, and artificial intelligence at the Massachusetts Institute of Technology (MIT), discussing the work of Marvin Minsky and others. Scientist on the Set: An Interview with Marvin Minsky Marvin Minsky Playlist Appearance on WMBR's Dinnertime Sampler radio show November 26, 2003 Consciousness Is A Big Suitcase: A talk with Marvin Minsky Video of Minsky speaking at the International Conference on Complex Systems, hosted by the New England Complex Systems Institute (NECSI) "The Emotion Universe": Video with Marvin Minsky Marvin Minsky's thoughts on the Fermi Paradox at the Transvisions 2007 conference "Health, population and the human mind": Marvin Minsky talk at the TED conference "The Society of Mind" on MIT OpenCourseWare Marvin Minsky tells his life story at Web of Stories (video) 1927 births 2016 deaths Jewish American atheists American computer scientists 20th-century American Jews United States Navy personnel of World War II Artificial intelligence researchers Cognitive scientists Consciousness researchers and theorists Cryonically preserved people Ethical Culture Fieldston School alumni Fellows of the Association for the Advancement of Artificial Intelligence Harvard College alumni Harvard Fellows Massachusetts Institute of Technology faculty Members of the United States National Academy of Engineering Members of the United States National Academy of Sciences Phillips Academy alumni Princeton University alumni Scientists from New York City The Benjamin Franklin Medal in Computer and Cognitive Science laureates The Bronx High School of Science alumni Turing Award laureates Fellows of the Cognitive Science Society MIT Lincoln Laboratory people MIT Media Lab people Presidents of the Association for the Advancement of Artificial Intelligence Deaths by intracerebral hemorrhage 21st-century American Jews
242079
https://en.wikipedia.org/wiki/Harry%20Yount
Harry Yount
Henry S. Yount (March 18, 1839 – May 16, 1924) was an American Civil War soldier, mountain man, professional hunter and trapper, prospector, wilderness guide and packer, seasonal employee of the United States Department of the Interior, and the first game warden in Yellowstone National Park. He was nicknamed "Rocky Mountain Harry Yount". Yount served two terms in the Union Army during the American Civil War. He first enlisted for a six-month term in November 1861. He was wounded and taken prisoner by the Confederate States Army in an opening skirmish of the Battle of Pea Ridge in Arkansas in March 1862, and held as a prisoner of war for nearly a month until released in a prisoner exchange. He re-enlisted in August 1862 and served until the end of the war. He was promoted three times and was a company quartermaster sergeant when he was discharged in July 1865. He worked as a hunter and a prospector, and as a bullwhacker for the U.S. Army, in the years following the Civil War. For seven years in the 1870s he worked as a guide, hunter and wrangler for the expeditions of the Hayden Geological Surveys, which mapped vast areas of the Rocky Mountains. In 1880, Yount was hired by the United States Secretary of the Interior, Carl Schurz, to be the first gamekeeper in Yellowstone National Park, and during his 14 months in that job wrote two annual reports for Schurz, which were then submitted to Congress. His reports described the challenges of protecting the wildlife in the first U.S. national park and influenced the culture of the National Park Service, which was founded 35 years later in 1916. Horace Albright, the second director of the National Park Service, called Yount the "father of the ranger service, as well as the first national park ranger". Yount was a prospector during much of the last four decades of his life. Family background and early years Harry Yount's paternal ancestors, Hans George Jundt and Anna Marie Jundt, arrived in Philadelphia in 1731, immigrants from Alsace-Lorraine. Their son, Johannes or John Yount, later moved to Lincoln County, North Carolina. Their grandson, Harry's paternal grandfather, Jacob A. Yount, moved to present-day Missouri with several other families shortly after the Louisiana Purchase. Harry's parents were David Yount (1795–1881) and Catherine Shell Yount. Harry Yount's father, David Yount, was about 44 years old at the time of Harry's birth, and Harry was the couple's tenth child. Harry Yount's uncle, George C. Yount, was a trapper and explorer who moved on from Missouri to Santa Fe and then to California before Harry's birth. In the 1830s, George C. Yount became the first citizen of the United States to settle in Napa Valley in California, which was then Mexican territory. The town of Yountville, California, is named after him. Two of Harry's older brothers, Caleb and John Yount, also moved to the Napa Valley years later. There are conflicting accounts of Harry Yount's place and date of birth. Ernest Ingersoll wrote that he was born in Susquehanna County, Pennsylvania, and different birth years have been mentioned by various writers, such as the anonymous author of a published biographical sketch who wrote that Yount was born in 1847, and Thomas J. Bryant, who interviewed Yount in the latter years of his life and who speculated that 1837 was his birth date in an article published in the Annals of Wyoming, the journal of the Wyoming State Historical Society. However, research undertaken by William R.
Supernaugh, an employee of the National Park Service, found military enlistment papers, Yount's Army pension file, and the 1840 United States Census records, all of which indicate that Yount was born on March 18, 1839. These records also show that his legal name was Henry S. Yount. His lifelong nickname was "Harry", and his middle name is unknown. Although Yount's place of birth is uncertain, Supernaugh concludes it is highly unlikely he was born in Pennsylvania, but rather in Harmony Township, Washington County, Missouri, because the 1840 census shows his father living there with a baby son. Henry was listed as 11 years old in the 1850 census. Harmony Township is a rural area about southwest of St. Louis. Civil War military service Yount enlisted in the Union Army for a six-month term on November 9, 1861, and served in Company F of Phelps' Regiment of the Missouri Infantry. John S. Phelps was his commanding general. Yount was wounded in the leg in a skirmish just before the Battle of Pea Ridge began on March 6, 1862, and taken prisoner by the Confederates. As a captive, he was marched more than to Fort Smith in his bare feet on cold, wet roads, and was held there as a POW for 28 days before his release in a prisoner exchange. He was discharged from the Union Army in May 1862. Yount re-enlisted in Lebanon, Missouri, on August 9, 1862, and served as a private in Company H of the 8th Regiment Missouri Volunteer Cavalry, a unit involved in 11 combat engagements during his service. On April 14, 1863, he was promoted to corporal and then to sergeant on December 9, 1863; he became company quartermaster sergeant on June 13, 1864. He was discharged in Little Rock, Arkansas, on July 20, 1865, after the war had ended. As a result of his barefoot march to captivity, Yount developed rheumatism in both feet. When the Dependent and Disability Pension Act passed in 1890, he became eligible for a monthly partial disability pension of $6 in 1892, which was raised to $12 a month in 1900 and $25 in 1912. Yount was later an active member of the Grand Army of the Republic, the post-war organization of veterans of the Union Army. Bullwhacker, hunter and trapper After the Civil War ended, Yount became engaged to Estella Braun, a Western Union employee in Detroit, Michigan. She was killed in a train wreck before their wedding could take place, and he never married. Yount traveled to Fort Kearny, along the Oregon Trail in Nebraska. There he was hired as an Army bullwhacker, transporting supplies by wagon along the Bozeman Trail between Fort Laramie in modern-day Wyoming and Fort C.F. Smith in modern-day Montana. There was conflict with Native Americans in this region in those years, and Yount fought against Cheyenne and Sioux warriors several times while on the trail. In one incident, his ox wagon was harassed for four days by a party of Sioux warriors, until he fired his carbine at one warrior, hitting and probably killing the warrior's horse. Supernaugh comments that Yount believed that Indians would kill him if they could, but that he did not "blame the Indians for defending what was their country originally." Yount also worked as a buffalo hunter during this period. He sold buffalo tongues for $1 each to tourists in Cheyenne. Yount believed that "it was a pity to kill off the buffaloes, which were here in immense numbers, but it was the only way to get rid of the Indians, as the buffalo were their main source of subsistence." 
The Smithsonian Institution engaged Yount's services to collect specimens of animals for taxidermy display in the early 1870s. Because he was successful in this first assignment, Spencer F. Baird of the Smithsonian again retained Yount's services in 1875 to collect specimens of many species of Rocky Mountain mammals. It is likely that many of these were displayed at the Centennial Exposition in Philadelphia in 1876, as extant photos of the exhibit halls show similar specimens. During those years, Yount also began his long career as prospector, achieving some success. In 1877, Yount was the subject of a magazine profile written by Ernest Ingersoll and published in Appleton's Journal in New York. Ingersoll described Yount's expertise as a hunter, including a story that he once killed 70 antelope in one day during a competition with another hunter, but that Yount was ashamed of the accomplishment because "it went against his heart to kill so many innocent creatures just for the glory." Yount would fill a wagon full of freshly killed game, and then sell the meat in towns such as Laramie and Cheyenne. According to Ingersoll, Yount was quite careful about his personal appearance: "his belt, holster, knife-sheath, bridle, and saddle are all set off with a barbaric glitter." Yount paid a Shoshone woman to decorate his buckskin jacket, "a marvel of fringes, fur trimming and intricate embroidery of beads." Ingersoll wrote that Yount was "by nature a gentleman, and under his sinewy frame and tireless strength, there is a heart as tender as a girl's, which hates the cruelty his profession unavoidably occasions. His eye is open to every beautiful feature of the grand world in which he lives; his heart is alive to all the gentle influences of the original wilderness." Ingersoll's 1883 book, Knocking 'Round the Rockies, describes Yount in a similar fashion. Guide for the Hayden Geological Survey In 1872 or 1873, Yount was hired as a seasonal guide, wrangler and packer for geological survey expeditions with the aim of mapping broad swaths of the Rocky Mountains. The Hayden Survey, led by Ferdinand Vandeveer Hayden and funded by the Department of the Interior, was one of several regional survey projects that were combined to form the United States Geological Survey in 1879. Hayden had been one of the leading advocates for the creation of Yellowstone National Park, which President Ulysses S. Grant signed into law on March 1, 1872. Yount worked for Hayden's expeditions each summer for seven years during the 1870s, in what are now the states of New Mexico, Arizona, Utah, Wyoming and Colorado. Each winter during the period, Yount would hunt and trap in the Laramie Range in Wyoming. In 1874, Yount was assigned to hunt for a party led by the geologist Archibald Marvine. Yount was later charged by a "huge" grizzly bear, and fired four gunshots – one through the skull, two through the flanks, and a final shot into the heart – before the bear was brought down. During Hayden's expedition of 1877, Yount engaged in mountaineering with Ingersoll and the cartographer A. D. Wilson in the Wind River Range. They were the first to ascend the south slopes of Wind River Peak, and, with Wilson, Yount was the first to ascend West Atlantic Peak. Hayden's expedition of 1878 conducted surveys in Yellowstone National Park and surrounding areas in 1878. That expedition included the British mountaineer James Eccles and Eccles's favorite Swiss mountain guide, Michel Payot of Chamonix. 
Eccles wanted to attempt an ascent of the Grand Teton, then unclimbed. This was the third attempt to climb the Grand Teton by members of Hayden's expeditions. Yount served as the guide in a four-man party that included Eccles, Payot and Wilson. Eccles and Payot were held up by the disappearance of two mules carrying their gear, and so were unable to accompany Wilson and Yount on to the higher parts of the mountain. During the climb, Yount slipped on the ice and fell close to a deep chasm in the glacier, where water was streaming down from the cliff above. The hold of his buckskin pants on the ice reportedly prevented him from being carried down into the crevice. Wilson was quoted as saying that Yount was clinging to the rock like "a starfish hanging to a breakwater," and that he himself lowered a rope to assist Yount. Because of the delay and the absence of the experienced Alpine climbers, Yount and Wilson had to turn back a few hundred feet short of the summit, at a spur called The Enclosure. No previous party had come so close to reaching the summit. The undisputed first ascent of the Grand Teton took place 20 years later, in 1898. Gamekeeper in Yellowstone National Park Yount was hired as the first gamekeeper for Yellowstone National Park in 1880, at a salary of $1,000 per year, when the park's entire budget was just $15,000 per year. He was appointed by Carl Schurz, the Secretary of the Interior and a former Union Army general, on June 21, 1880, and reported for duty at Yellowstone on July 6. His supervisor was Philetus Norris, the second park superintendent. Shortly after reporting for duty, Yount escorted Schurz and his party on a tour of the park, and then conducted a survey of the park's wildlife. Yount began constructing a winter camp at the junction of the East Fork of the Yellowstone River and Soda Butte Valley, a location he chose because it allowed for the protection of herds of buffalo and elk against poachers. Yount submitted his first Report of Gamekeeper on November 25, 1880, which was included as Appendix A to the annual report of the Secretary of the Interior. His report described his activities since being hired and included his recommendations. In his report of September 30, 1881, Yount described how he spent the unusually severe winter of 1880–1881, and his efforts to prevent poaching by tourists and Indians, while still hunting to provide food for the park staff. Yount reported that snow had fallen on 66 of 90 days between December 1880 and February 1881. He described the range and habits of Yellowstone's large mammals, submitted a synopsis of the previous winter's weather, and expressed regret for "the unfortunate breakage of my thermometer when it could not be replaced." In this report, he resigned his position "to resume private enterprises now requiring my personal attention," and concluded with a clear recommendation. There are indications that Yount had a difference of opinion with park superintendent Norris, who wanted him to spend more of his time building roads for the convenience of tourists, while Yount preferred to concentrate on protecting the wildlife. Recognition as first National Park ranger Although Yount's official job title was "gamekeeper" rather than "park ranger", and although he only worked in Yellowstone National Park for 14 months, his two annual reports had a lasting impact on the administration of the national parks in the United States.
He is "securely positioned in the legend and culture" of the National Park Service, and is considered a figure of "historic proportion". In Oh, Ranger!, a book published in 1928, Horace Albright, who later became the second director of the National Park Service, wrote that "Harry Yount pointed out in a report that it was impossible for one man to patrol the park. He urged the formation of a ranger force. So Harry Yount is credited with being the father of the ranger service, as well as the first US Park Ranger." Stephen Mather, the first director of the National Park Service, wrote the foreword of the book. Prospecting and later years After Yount resigned from his job in Yellowstone, he lived for a while in Uva, Wyoming. He homesteaded in the area in 1887 but his claim was sold in a sheriff's sale in 1892. He spent nearly 40 years prospecting in the Laramie Mountains, and developed copper and graphite mining claims. He settled in Wheatland, Wyoming, and worked on developing a marble mining claim west of there. Yount was actively involved in prospecting until the day before his death, when he had been looking for a ride to inspect a possible gold deposit. On May 16, 1924, he walked into downtown Wheatland, as was his daily habit, where he collapsed and died of heart failure near a Lutheran church. He was buried in the Lakeview Cemetery in Cheyenne. Legacy Younts Peak, which is high and located in the Absaroka Range at the headwaters of the Yellowstone River, is named after Yount. The peak's name was bestowed by the Hayden Geological Survey. Long after his death, in the 1970s, Yount's marble mining claim finally went into production. It yielded crushed marble for use in landscaping and aquariums. In 1994, the National Park Service established the Harry Yount Award, given annually to an employee whose "overall impact, record of accomplishments, and excellence in traditional ranger duties have created an appreciation for the park ranger profession." This award is given both nationally and regionally. According to Supernaugh, Yount is "credited with setting the standards for performance and service by which the public has come to judge the rangers of today". See also Hayden Geological Survey of 1871 References External links Harry Yount (1837–1924) at the National Park Service 1839 births 1924 deaths American Civil War prisoners of war American hunters American prospectors Burials in Wyoming Explorers of North America Explorers of the United States Mountain men National Park Service personnel People of Missouri in the American Civil War Union Army soldiers Yellowstone region
45404358
https://en.wikipedia.org/wiki/Carbanak
Carbanak
Carbanak is an APT-style campaign targeting (but not limited to) financial institutions that was discovered in 2014 by the Russian cybersecurity company Kaspersky Lab. It utilizes malware that is introduced into systems running Microsoft Windows via phishing emails and is then used to steal money from banks. The hacker group is said to have stolen over 900 million dollars from the banks, as well as from over a thousand private customers. The criminals were able to manipulate their access to the respective banking networks in order to steal the money in a variety of ways. In some instances, ATMs were instructed to dispense cash without having to locally interact with the terminal. Money mules would collect the money and transfer it over the SWIFT network to the criminals’ accounts, Kaspersky said. The Carbanak group went so far as to alter databases and pump up balances on existing accounts, pocketing the difference unbeknownst to the user, whose original balance remained intact. Their intended targets were primarily in Russia, followed by the United States, Germany, China and Ukraine, according to Kaspersky Lab. One bank lost $7.3 million when its ATMs were programmed to spew cash at certain times that henchmen would then collect, while a separate firm had $10 million taken via its online platform. Kaspersky Lab is assisting in investigations and countermeasures that disrupt malware operations and cybercriminal activity. During the investigations it provides technical expertise, such as analysis of infection vectors, malicious programs, supported command and control infrastructure, and exploitation methods. FireEye published research tracking further activities, referring to the group as FIN7, including an SEC-themed spear phishing campaign. Proofpoint also published research linking the group to the Bateleur backdoor, and expanded the list of targets to U.S.-based chain restaurants, hospitality organizations, retailers, merchant services, suppliers and others beyond their initial financial services focus. On 26 October 2020, PRODAFT (Switzerland) started publishing internal details of the Fin7/Carbanak group and the tools they use during their operations. The published information is claimed to have originated from a single OPSEC failure on the threat actor’s side. On March 26, 2018, Europol claimed to have arrested the "mastermind" of the Carbanak and associated Cobalt or Cobalt Strike group in Alicante, Spain, in an investigation led by the Spanish National Police with the cooperation of law enforcement in multiple countries as well as private cybersecurity companies. The group's campaigns appear to have continued, however, with the Hudson's Bay Company breach using point-of-sale malware in 2018 being attributed to the group. Controversy Some controversy exists around the Carbanak attacks, as they were seemingly described several months earlier in a report by the Internet security companies Group-IB (Russia) and Fox-IT (The Netherlands) that dubbed the attack Anunak. The Anunak report shows also a greatly reduced amount of financial losses and according to a statement issued by Fox-IT after the release of The New York Times article, the compromise of banks outside Russia did not match their research.
Also, in an interview conducted by the Russian newspaper Kommersant, the controversy between the claims of Kaspersky Lab and Group-IB came to light: Group-IB claimed that no banks outside of Russia and Ukraine were hit, and that the activity outside of that region was focused on point-of-sale systems. Reuters issued a statement referencing a Private Industry Notification issued by the FBI and the USSS (United States Secret Service) claiming they had not received any reports that Carbanak had affected the financial sector. Two representative groups of the US banking industry, FS-ISAC and the ABA (American Bankers Association), said in an interview with Bank Technology News that no US banks had been affected. References Malware Hacking in the 2010s 2014 in computing Cyberattacks on banking industry Criminal advanced persistent threat groups
323715
https://en.wikipedia.org/wiki/Troy%20%28film%29
Troy (film)
Troy is a 2004 epic historical war film directed by Wolfgang Petersen and written by David Benioff. Produced by units in Malta, Mexico and Britain's Shepperton Studios, the film features an ensemble cast led by Brad Pitt, Eric Bana, and Orlando Bloom. It is loosely based on Homer's Iliad in its narration of the entire story of the decade-long Trojan War, condensed into little more than a couple of weeks, rather than just the quarrel between Achilles and Agamemnon in the ninth year. Achilles leads his Myrmidons along with the rest of the Greek army invading the historical city of Troy, defended by Hector's Trojan army. The end of the film (the sack of Troy) is not taken from the Iliad, but rather from Quintus Smyrnaeus's Posthomerica, as the Iliad concludes with Hector's death and funeral. Troy grossed over $497 million worldwide, making it the 60th highest-grossing film at the time of its release. It received a nomination for Best Costume Design at the 77th Academy Awards and was the eighth highest-grossing film of 2004. Plot In Ancient Greece, King Agamemnon of Mycenae finally unites the Greek kingdoms after decades of warfare, forming a loose alliance under his rule. Achilles, a heroic Greek warrior who has given Agamemnon many victories, deeply despises him. Meanwhile, Prince Hector of Troy and his younger brother Paris negotiate a peace treaty with Menelaus, King of Sparta. However, Paris is having an affair with Menelaus' wife, Queen Helen, and smuggles her aboard their home-bound vessel. Upon learning of this, Menelaus meets with Agamemnon, his elder brother, and asks him to help take Troy. Agamemnon agrees, as conquering Troy will give him control of the Aegean Sea. Agamemnon has Odysseus, King of Ithaca, persuade Achilles to join them. Achilles eventually decides to go after his mother, Thetis, tells him that though he will die, he will be forever glorified. In Troy, King Priam welcomes Helen when Hector and Paris return home, and decides to prepare for war. The Greeks eventually invade and take the Trojan beach, thanks largely to Achilles and his Myrmidons. Achilles has the temple of Apollo sacked, and claims Briseis, a priestess and the cousin of Paris and Hector, as a prisoner. He is angered when Agamemnon spitefully takes her from him, and decides that he will not aid Agamemnon in the siege. The Trojan and Greek armies meet outside the walls of Troy. During a parley, Paris offers to duel Menelaus for Helen's hand in exchange for the city being spared. Agamemnon, intending to take the city regardless of the outcome, accepts. Menelaus wounds Paris and almost kills him, but is himself killed by Hector. An enraged Agamemnon orders the Greeks to crush the Trojan army. In the ensuing battle, thousands of warriors engage in brutal combat. Hector spots and engages Ajax, killing him after a fierce duel. Many Greek soldiers fall to the Trojan defenses, forcing Agamemnon to retreat. He gives Briseis to the Greek soldiers for their amusement, but Achilles saves her. Later that night, Briseis sneaks into Achilles' quarters to kill him; instead, she falls for him and they become lovers. Achilles then resolves to leave Troy, much to the dismay of Patroclus, his cousin and protégé. Despite Hector's objections, Priam orders him to retake the Trojan beach and force the Greeks home, but the attack unifies the Greeks, and the Myrmidons enter the battle. Hector duels a man he believes to be Achilles and kills him, only to discover it was actually Patroclus.
Distraught, both armies agree to stop fighting for the day. Achilles is informed of his cousin's death and vows revenge. Wary of Achilles, Hector shows his wife Andromache a secret tunnel beneath Troy. Should he die and the city fall, he instructs her to take their child and any survivors out of the city to Mount Ida. The next day, Achilles arrives outside Troy and challenges Hector; the two duel until Hector is killed, and Achilles drags his corpse back to the Trojan beach. Priam sneaks into the camp and implores Achilles to return Hector's body for a proper funeral. Ashamed of his actions, Achilles agrees and allows Briseis to return to Troy with Priam, promising a twelve day truce so that Hector's funeral rites may be held in peace. He also orders his men to return home without him. Agamemnon declares that he will take Troy regardless of the cost. Concerned, Odysseus concocts a plan to infiltrate the city: he has the Greeks build a gigantic wooden horse as a peace offering and abandon the Trojan beach, hiding their ships in a nearby cove. Priam orders the horse be brought into the city. That night, Greeks hiding inside the horse emerge and open the city gates for the Greek army, commencing the Sack of Troy. While Andromache and Helen guide the Trojans to safety through the tunnel, Paris gives the Sword of Troy to Aeneas, instructing him to protect the Trojans and find them a new home. Agamemnon kills Priam and captures Briseis, who then kills Agamemnon. Achilles fights his way through the city and reunites with Briseis. Paris, seeking to avenge his brother, shoots an arrow through Achilles' heel and then several into his body. Achilles bids farewell to Briseis, and watches her flee with Paris before dying. In the aftermath, Troy is finally taken by the Greeks and a funeral is held for Achilles, where Odysseus personally cremates his body. Cast Production The city of Troy was built in the Mediterranean island of Malta at Fort Ricasoli from April to June 2003. Other important scenes were shot in Mellieħa, a small town in the north of Malta, and on the small island of Comino. The outer walls of Troy were built and filmed in Cabo San Lucas, Mexico. Film production was disrupted for a period after Hurricane Marty affected filming areas. The role of Briseis was initially offered to Bollywood actress Aishwarya Rai, but she turned it down because she was not comfortable doing the lovemaking scenes that were included. The role eventually went to Rose Byrne. Brad Pitt years later expressed disappointment with the film saying "I had to do Troy because [...] I pulled out of another movie and then had to do something for the studio. So I was put in Troy. It wasn't painful, but I realized that the way that movie was being told was not how I wanted it to be. I made my own mistakes in it. What am I trying to say about Troy? I could not get out of the middle of the frame. It was driving me crazy. I'd become spoiled working with David Fincher. It's no slight on Wolfgang Petersen. Das Boot is one of the all-time great films. But somewhere in it, Troy became a commercial kind of thing. Every shot was like, Here's the hero! There was no mystery." Music Composer Gabriel Yared originally worked on the score for Troy for over a year, having been hired by the director, Wolfgang Petersen. Tanja Carovska provided vocals on various portions of the music, as she later would on composer James Horner's version of the soundtrack. 
However, the reactions at test screenings, which used an incomplete version of the score, were negative, and in less than a day Yared was off the project without a chance to fix or change his music. James Horner composed a replacement score in about four weeks. He used Carovska's vocals again and also included traditional Eastern Mediterranean music and brass instruments. Horner also collaborated with American singer-songwriter Josh Groban and lyricist Cynthia Weil to write an original song for the film's end credits. The product of this collaboration, "Remember me", was performed by Groban with additional vocals by Carovska. The soundtrack for the film was released on May 11, 2004, through Reprise Records. Director's cut Troy: Director's Cut was screened at the 57th Berlin International Film Festival on February 17, 2007, and received a limited release in Germany in April 2007. Warner Home Video reportedly spent more than $1 million for the director's cut, which includes "at least 1,000 new cuts" or almost 30 minutes of extra footage (with a new running time of 196 minutes). The DVD was released on September 18, 2007, in the US. The score of the film was changed dramatically, with many of the female vocals being cut. An addition to the music is the use of Danny Elfman's theme for Planet of the Apes during the pivotal fight between Hector and Achilles in front of the Gates of Troy. Josh Groban's song was removed from the end credits as well. Various shots were recut and extended. For instance, the love scene between Helen and Paris was reframed to include more nudity of Diane Kruger. The love scene between Achilles and Briseis is also extended. Only one scene was removed: the scene in which Helen tends to Paris's wound. The battle scenes were also extended, depicting more violence and gore, including much more of Ajax's bloody rampage on the Trojans during the initial attack by the Greek Army. Perhaps most significant was the sacking of Troy, barely present in the theatrical cut, but shown fully here, depicting the soldiers raping women and murdering babies. Characters were given more time to develop, specifically Priam and Odysseus, the latter being given a humorous introduction scene. More emphasis is given to the internal conflict in Troy between the priests, who believe in omens and signs from the gods to determine the outcome of the war, and military commanders, who believe in practical battle strategies to achieve victory. Lastly, bookend scenes were added: the beginning being a soldier's dog finding its dead master and the end including a sequence where the few surviving Trojans escape to Mount Ida. There are frequent differences between The Iliad and Troy, most notably relating to the final fates of Paris, Helen, Agamemnon, Achilles and Menelaus. In one of the commentary sequences, the film's writer, David Benioff, said that when it came to deciding whether to follow The Iliad or to do what was best for the film, they always went with what was best for the film. Home media Troy was released on DVD and VHS on January 4, 2005. The director's cut was released on Blu-ray and DVD on September 18, 2007. The director's cut is the only edition of the film available on Blu-ray; however, the theatrical cut was released on HD DVD.
Reception Box office Troy grossed $133.4 million in the United States and Canada, and $364 million in other territories, for a worldwide total of $497.4 million, making the film one of the highest grossing films of 2004, alongside The Passion of the Christ, Spider-Man 2 and Shrek 2. When the film was completed, total production costs were approximately $185 million, making Troy one of the most expensive films produced at that time. It was screened out of competition at the 2004 Cannes Film Festival. The film made $46.9 million in its opening weekend, topping the box office, then $23.9 million in its second weekend falling to second. Critical reception On Rotten Tomatoes, Troy holds an approval rating of 54% based on 229 reviews, with an average rating of 6.00/10. The site's critics consensus reads, "A brawny, entertaining spectacle, but lacking emotional resonance." On Metacritic, the film has a weighted average score of 56 out of 100, based on 43 critics, indicating "mixed or average reviews". Audiences polled by CinemaScore gave the film an average grade of "B" on an A+ to F scale. Roger Ebert rated the film two out of four stars, saying "Pitt is modern, nuanced, introspective; he brings complexity to a role where it is not required." IGN critics Christopher Monfette and Cindy White praised the Director's cut as superior to the early version, evaluating it with eight stars out of ten. Peter O'Toole, who played Priam, spoke negatively of the film during an appearance at the Savannah Film Festival, stating he walked out of the film fifteen minutes into a screening, and criticized the director, slamming him as "a clown". Accolades See also Sword-and-sandal Epic film Greek mythology in popular culture List of films based on poems List of historical period drama films References Further reading Petersen, Daniel (2006). Troja: Embedded im Troianischen Krieg (Troy: Embedded in the Trojan War). HörGut! Verlag. . Winkler, Martin M. (2006). Troy: From Homer's Iliad to Hollywood Epic. Blackwell Publishing. . Proch, Celina/Kleu, Michael (2013). Models of Maculinities in Troy: Achilles, Hector and Their Female Partners, in: A.-B. Renger/J. Solomon (ed.): Ancient Worlds in Film and Television. Gender and Politics, Brill, pp. 175–193, . External links 2004 films 2000s adventure films 2000s romantic drama films 2000s war films American action drama films American romantic drama films American epic films American films American war drama films British epic films British films British war drama films Classical war films 2000s English-language films Films about brothers Films about death American films about revenge Films based on multiple works Films based on poems Films based on the Iliad Films directed by Wolfgang Petersen Films scored by James Horner Films set in ancient Greece Films set in Greece Films set in Turkey Films shot in Malta Films shot in Morocco Maltese films Plan B Entertainment films Romantic epic films Siege films Trojan War films Troy War adventure films War epic films Warner Bros. films Works based on the Aeneid Films based on classical mythology Cultural depictions of Helen of Troy 2000s action drama films 2004 drama films Films with screenplays by David Benioff Agamemnon
956336
https://en.wikipedia.org/wiki/Keith%20Bostic%20%28software%20engineer%29
Keith Bostic (software engineer)
Keith Bostic is an American software engineer and one of the key people in the history of Berkeley Software Distribution (BSD) Unix and open-source software. In 1986, Bostic joined the Computer Systems Research Group (CSRG) at the University of California, Berkeley. He was one of the principal architects of the Berkeley 2BSD, 4.4BSD and 4.4BSD-Lite releases. Among many other tasks, he led the effort at CSRG to create a free software version of BSD Unix, which helped allow the creation of FreeBSD, NetBSD and OpenBSD. Bostic was a founder of Berkeley Software Design Inc. (BSDi), which produced BSD/OS, a proprietary version of BSD. In 1993, the USENIX Association gave a Lifetime Achievement Award (Flame) to the Computer Systems Research Group, honoring 180 individuals, including Bostic, who contributed to the group's 4.4BSD-Lite release. Bostic and his wife Margo Seltzer founded Sleepycat Software in 1996 to develop and commercialize Berkeley DB, an open-source, key-value database. Sleepycat Software was the first company to develop dual-licensed open-source software. In February 2006, the company was acquired by Oracle Corporation, where Bostic worked until 2008. Bostic and Michael Cahill founded WiredTiger in 2010 to create a NoSQL database management system. In November 2014, the company was acquired by MongoDB, which employs Bostic. Bostic is the author of nvi – a re-implementation of the classic text editor vi – and many other standard BSD and Linux utilities. He is a past member of the Association for Computing Machinery, IEEE, and several POSIX working groups, and a contributor to POSIX standards. Publications M. McKusick, K. Bostic, M. Karels, J. Quarterman: The Design and Implementation of the 4.4BSD Operating System, Addison-Wesley, April 1996, . French translation published 1997, International Thomson Publishing, Paris, France, . References External links Free software programmers University of California, Berkeley people BSD people 1959 births Living people Place of birth missing (living people) American technology company founders Businesspeople in software American computer businesspeople
3263128
https://en.wikipedia.org/wiki/ECRM
ECRM
The eCRM or electronic customer relationship management, a term coined by Oscar Gomes, encompasses all standard CRM functions carried out through the net environment, i.e., intranet, extranet and Internet. Electronic CRM concerns all forms of managing relationships with customers through the use of information technology (IT). eCRM processes include data collection, data aggregation, and customer interaction. Compared to traditional CRM, the integrated information used in eCRM supports intraorganizational collaboration and makes communication with customers more efficient. From RM to CRM The concept of relationship marketing (RM) was established by marketing professor Leonard Berry in 1983. He considered it to consist of attracting, maintaining and enhancing customer relationships within organizations. In the years that followed, companies were engaging more and more in a meaningful dialogue with individual customers. In doing so, new organizational forms as well as technologies were used, eventually resulting in what we know as customer relationship management. The main difference between CRM and e-CRM is that the former does not explicitly rely on technology, whereas the latter uses information technology (IT) in implementing RM strategies. The essence of CRM The exact meaning of CRM is still the subject of heavy discussion. However, the overall goal can be seen as effectively managing differentiated relationships with all customers and communicating with them on an individual basis. The underlying thought is that companies realize they can increase profits by acknowledging that different groups of customers vary widely in their behavior, desires, and responsiveness to marketing. Loyal customers not only give companies sustained revenue but also act as advocates who attract new customers. To reinforce customer loyalty and create additional sources of customers, firms use CRM to maintain relationships in the two general categories B2B (business-to-business) and B2C (business-to-customer or business-to-consumer). Because the needs and behaviors of B2B and B2C customers differ, CRM should be implemented from the respective viewpoint. Differences from CRM Major differences between CRM and eCRM: Customer contacts CRM – Contact with customers is made through the retail store, phone, and fax. eCRM – All of the traditional methods are used in addition to Internet, email, wireless, and PDA technologies. System interface CRM – Implements the use of ERP systems; the emphasis is on the back-end. eCRM – Geared more toward the front end, which interacts with the back-end through use of ERP systems, data warehouses, and data marts. System overhead (client computers) CRM – The client must download various applications to view the web-enabled applications. They would have to be rewritten for different platforms. eCRM – Does not have these requirements because the client uses the browser. Customization and personalization of information CRM – Views differ based on the audience, and personalized views are not available. Individual personalization requires program changes. eCRM – Personalized individual views based on purchase history and preferences. The individual has the ability to customize the view. System focus CRM – System (created for internal use) designed based on job function and products. Web applications designed for a single department or business unit. eCRM – System (created for external use) designed based on customer needs. Web application designed for enterprise-wide use. 
System maintenance and modification CRM – More time is involved in implementation, and maintenance is more expensive, because the system exists at different locations and on various servers. eCRM – Reduction in time and cost. Implementation and maintenance can take place at one location and on one server. eCRM As the Internet is becoming more and more important in business life, many companies consider it an opportunity to reduce customer-service costs, tighten customer relationships and, most importantly, further personalize marketing messages and enable mass customization. eCRM is being adopted by companies because it increases customer loyalty and customer retention by improving customer satisfaction, one of the objectives of eCRM. E-loyalty results in long-term profits for online retailers because they incur lower costs of recruiting new customers, plus they have an increase in customer retention. Together with the creation of sales force automation (SFA), where electronic methods were used to gather data and analyze customer information, the rise of the Internet can be seen as the foundation of what we know as eCRM today. (Nenad Jukic et al., 2003) The eCRM process can be described as a three-step life cycle: Data collection: gathering customer preference information, both actively (e.g. answers to questions) and passively (e.g. browsing records), via the website, email and questionnaires. Data aggregation: filtering and analysing the data against the firm's specific needs in order to serve its customers. Customer interaction: providing the appropriate feedback to customers according to their needs. eCRM can be defined as activities to manage customer relationships by using the Internet, web browsers or other electronic touch points. The challenge hereby is to offer communication and information on the right topic, in the right amount, and at the right time that fits the customer's specific needs. Strategy components When enterprises integrate their customer information, there are three eCRM strategy components: Operational: Because information is shared, business processes should put the customer's needs first and be implemented seamlessly. This avoids bothering customers repeatedly and avoids redundant processes. Analytical: Analysis helps the company maintain a long-term relationship with customers. Collaborative: Due to improved communication technology, different departments in the company (intraorganizational) or business partners (interorganizational) can work together more efficiently by sharing information. (Nenad Jukic et al., 2003) Implementing and integrating Non-electronic solution Several CRM software packages exist that can help companies in deploying CRM activities. Besides choosing one of these packages, companies can also choose to design and build their own solutions. In order to implement CRM in an effective way, one needs to consider the following factors: Create a customer-focused culture in the organization. Adopt customer-based managers to assess satisfaction. Develop an end-to-end process to serve customers. Recommend questions to be asked to help a customer solve a problem. Track all aspects of selling to customers, as well as prospects. Furthermore, CRM solutions are more effective once they are integrated with the other information systems used by the company. An example is a transaction processing system (TPS) that processes data in real time, which can then be sent to the sales and finance departments in order to recalculate inventory and financial position quickly and accurately. 
Once this information is transferred back to the CRM software and services, it could prevent customers from placing an order in the belief that an item is in stock when it is not. Cloud solution Today, more and more enterprise CRM systems move to cloud computing solutions, "up from 8 percent of the CRM market in 2005 to 20 percent of the market in 2008, according to Gartner". By moving the management system into the cloud, companies can pay per use to manage, maintain, and upgrade the system, and connect with their customers in a streamlined way through the cloud. In a cloud-based CRM system, transactions can be recorded in the CRM database immediately. Some enterprise cloud CRM systems are web-based, so customers do not need to install an additional interface, and their activities with businesses can be updated in real time. People may also communicate on mobile devices to get efficient service. Furthermore, customer and case experience and interaction feedback are another source of information that CRM collaboration and integration can bring into the corporate organization to improve its services. There are many cloud CRM services for enterprises to use, and here are some hints for choosing the right CRM system: Assess your company's needs: some enterprise CRM systems are more fully featured than others. Take advantage of free trials: compare and become familiar with each of the options. Do the math: estimate the customer strategy against the company budget. Consider mobile options: some systems, like Salesforce.com, can be combined with other mobile device applications. Ask about security: consider whether the cloud CRM solution provides as much protection as your own system. Make sure the sales team is on board: as the front line of the enterprise, the launched CRM system should be a help to sales. Know your exit strategy: understand the exit mechanism to keep flexibility. vCRM Channels through which companies can communicate with their customers are growing by the day, and as a result, gaining customers' time and attention has become a major challenge. One of the reasons eCRM is so popular nowadays is that digital channels can create unique and positive experiences – not just transactions – for customers. An extreme, but increasingly popular, example of the creation of experiences in order to establish customer service is the use of virtual worlds, such as Second Life. Through this so-called vCRM, companies are able to create synergies between virtual and physical channels and reach a very wide consumer base. However, given the newness of the technology, most companies are still struggling to identify effective entries in virtual worlds. Its highly interactive character, which allows companies to respond directly to any customer's requests or problems, is another feature of eCRM that helps companies establish and sustain long-term customer relationships. Furthermore, information technology has helped companies to differentiate even further between customers and address a personal message or service. Some examples of tools used in eCRM: Personalized web pages where customers are recognized and their preferences are shown. Customized products or services. CRM programs should be directed towards customer value that competitors cannot match. However, in a world where almost every company is connected to the Internet, eCRM has become a requirement for survival, not just a competitive advantage. 
Different levels In defining the scope of eCRM, three different levels can be distinguished: Foundational services: This includes the minimum necessary services such as web site effectiveness and responsiveness as well as order fulfillment. Customer-centered services: These services include order tracking, product configuration and customization as well as security/trust. Value-added services: These are extra services such as online auctions and online training and education. Self-services are becoming increasingly important in CRM activities. The rise of the Internet and eCRM has boosted the options for self-service activities. A critical success factor is the integration of such activities into traditional channels. An example was Ford's plan to sell cars directly to customers via its web site, which provoked an outcry among its dealer network. CRM activities are mainly of two different types. Reactive service is where the customer has a problem and contacts the company. Proactive service is where the manager has decided not to wait for the customer to contact the firm, but to contact the customer proactively in order to establish a dialogue and solve problems. Steps to eCRM Success Many factors play a part in ensuring that the implementation of any level of eCRM is successful. One obvious way it could be measured is by the ability of the system to add value to the existing business. There are four suggested implementation steps that affect the viability of a project like this: Developing customer-centric strategies Redesigning workflow management systems Re-engineering work processes Supporting with the right technologies Mobile CRM One subset of Electronic CRM is Mobile CRM (mCRM). This is defined as "services that aim at nurturing customer relationships, acquiring or maintaining customers, support marketing, sales or services processes, and use wireless networks as the medium of delivery to the customers". However, since communications is the central aspect of customer relations activities, many opt for the following definition of mCRM: "communication, either one-way or interactive, which is related to sales, marketing and customer service activities conducted through mobile medium for the purpose of building and maintaining customer relationships between a company and its customer(s)". eCRM allows customers to access company services from more and more places, since Internet access points are increasing by the day. mCRM, however, takes this one step further and allows customers or managers to access the systems, for instance from a mobile phone or PDA with internet access, resulting in high flexibility. Since mCRM is not able to provide a complete range of customer relationship activities, it should be integrated into the complete CRM system. There are three main reasons that mobile CRM is becoming so popular. The first is that the devices consumers use are improving in multiple ways that allow for this advancement. Displays are larger and clearer and access times on networks are improving overall. Secondly, the users are also becoming more sophisticated. The technology is nothing new to them, so it is easy to adapt to. Lastly, the software being developed for these applications has become worthwhile and useful to end users. There are four basic steps that a company should follow to implement a mobile CRM system. By following these and also keeping the IT department, the end users and management in agreement, the outcome can be beneficial for all. 
Step 1 – Needs analysis phase: This is the point to take your time and understand all the technical needs and desires of each of the users and stakeholders. It also has to be kept in mind that the mobile CRM system must be able to grow and change with the business. Step 2 – Mobile design phase: This is the next critical phase, in which all the technical concerns that need to be addressed are identified. A few main things to consider are screen size, device storage and security. Step 3 – Mobile application testing phase: This step is mostly to ensure that the users and stakeholders all approve of the new system. Step 4 – Rollout phase: This is when the new system is implemented, but also when training on the final product is done with all users. Advantages of mobile CRM The mobile channel creates a more personal, direct connection with customers. It is continuously active and allows necessary individuals to take action quickly using the information. Typically it is an opt-in only channel, which allows for high-quality responsiveness. Overall it supports loyalty between the customer and company, which improves and strengthens relationships. Failures Designing, creating and implementing IT projects has always been risky, not only because of the amount of money that is involved, but also because of the high chances of failure. However, a positive trend can be seen, indicating that CRM failures dropped from a failure rate of 80% in 1998 to about 40% in 2003. Some of the major issues relating to CRM failure are the following: Difficulty in measuring and valuing intangible benefits. Failure to identify and focus on specific business problems. Lack of active senior management sponsorship. Poor user acceptance. Trying to automate a poorly defined process. Failure rates in CRM from 2001 to 2009: 2001 – 50% failure rate according to the Gartner group 2002 – 70% failure rate according to Butler group 2003 – 69.3% according to Selling Power, CSO Forum 2004 – 18% according to AMR Research group 2005 – 31% according to AMR Research 2006 – 29% according to AMR Research 2007 – 56% according to Economist Intelligence Unit 2009 – 47% according to Forrester Research Differing measurement criteria and methods of the research groups make it difficult to compare these rates. Most of these rates were based on customer responses to questions on the success of CRM implementations. Privacy The effective and efficient employment of CRM activities cannot go without remarks on safety and privacy. CRM systems depend on databases in which all kinds of customer data are stored. In general, the following rule applies: the more data, the better the service companies can deliver to individual customers. Some known examples of these problems are the conducting of credit-card transactions online, or the phenomenon known as 'cookies' used on the Internet in order to track someone's information and behavior. The design and the quality of the website are two very important aspects that influence the level of trust customers experience and their willingness or reluctance to conduct a transaction or leave personal information. Privacy policies can be ineffective in relaying to customers how much of their information is being used. In a recent study by the University of Pennsylvania and the University of California, it was revealed that over half of the respondents have an incorrect understanding of how their information is being used. 
They believe that, if a company has a privacy policy, it will not share the customer's information with third-party companies without the customer's express consent. Therefore, if marketers want to use consumer information for advertising purposes, they must clearly illustrate the ways in which they will use the customer's information and present the benefits of this in order to acquire the customer's consent. Privacy concerns are being addressed more and more. Legislation is being proposed that regulates the use of personal data. Also, Internet policy officials are calling for more performance measures of privacy policies. Statistics on privacy: 38% of retailers don't talk about privacy in their sign-up or welcome email About 50% of major online retailers discuss privacy concerns during the email subscription process As the use of the Internet, electronic CRM solutions, and even the existence of e-business are rising, so are the efforts to further develop the systems being used and to increase their safety for customers, in order to further reap the benefits of their use. See also Customer relationship management Comparison of CRM systems Customer lifecycle management B2B B2C Cloud computing Enterprise resource planning Notes Further reading Romano, Nicholas C. and Fjermestad, Jerry L. (2009) Preface to the focus theme on eCRM. Electronic Markets 19(2-3) 69-70. Yujong Hwang (2009) The impact of uncertainty avoidance, social norms and innovativeness on trust and ease of use in electronic customer relationship management. Electronic Markets 19(2-3) 89-98. Pierre Hadaya and Luc Cassivi (2009) Collaborative e-product development and product innovation in a demand-driven network: the moderating role of eCRM. Electronic Markets 19(2-3) 71-87. Customer relationship management software
31062
https://en.wikipedia.org/wiki/Telnet
Telnet
Telnet is an application protocol used on the Internet or local area network to provide a bidirectional interactive text-oriented communication facility using a virtual terminal connection. User data is interspersed in-band with Telnet control information in an 8-bit byte oriented data connection over the Transmission Control Protocol (TCP). Telnet was developed in 1969 beginning with , extended in , and standardized as Internet Engineering Task Force (IETF) Internet Standard STD 8, one of the first Internet standards. The name stands for "teletype network". Historically, Telnet provided access to a command-line interface on a remote host. However, because of serious security concerns when using Telnet over an open network such as the Internet, its use for this purpose has waned significantly in favor of SSH. The term telnet is also used to refer to the software that implements the client part of the protocol. Telnet client applications are available for virtually all computer platforms. Telnet is also used as a verb. To telnet means to establish a connection using the Telnet protocol, either with a command line client or with a graphical interface. For example, a common directive might be: "To change your password, telnet into the server, log in and run the passwd command." In most cases, a user would be telnetting into a Unix-like server system or a network device (such as a router). History and standards Telnet is a client-server protocol, based on a reliable connection-oriented transport. Typically, this protocol is used to establish a connection to Transmission Control Protocol (TCP) port number 23, where a Telnet server application (telnetd) is listening. Telnet, however, predates TCP/IP and was originally run over Network Control Program (NCP) protocols. Even though Telnet was an ad hoc protocol with no official definition until March 5, 1973, the name actually referred to "Teletype Over Network Protocol", as RFC 206 (NIC 7176) on Telnet makes the connection clear. Essentially, it used an 8-bit channel to exchange 7-bit ASCII data. Any byte with the high bit set was a special Telnet character. On March 5, 1973, a Telnet protocol standard was defined at UCLA with the publication of two NIC documents: Telnet Protocol Specification, NIC 15372, and Telnet Option Specifications, NIC 15373. Many extensions were made for Telnet because of its negotiable options protocol architecture. Some of these extensions have been adopted as Internet standards, IETF documents STD 27 through STD 32. Some extensions have been widely implemented and others are proposed standards on the IETF standards track (see below). Telnet is best understood in the context of a user with a simple terminal using the local Telnet program (known as the client program) to run a logon session on a remote computer where the user's communications needs are handled by a Telnet server program. Security When Telnet was initially developed in 1969, most users of networked computers were in the computer departments of academic institutions, or at large private and government research facilities. In this environment, security was not nearly as much a concern as it became after the bandwidth explosion of the 1990s. The rise in the number of people with access to the Internet, and by extension the number of people attempting to hack other people's servers, made encrypted alternatives necessary. 
Experts in computer security, such as SANS Institute, recommend that the use of Telnet for remote logins should be discontinued under all normal circumstances, for the following reasons: Telnet, by default, does not encrypt any data sent over the connection (including passwords), and so it is often feasible to eavesdrop on the communications and use the password later for malicious purposes; anybody who has access to a router, switch, hub or gateway located on the network between the two hosts where Telnet is being used can intercept the packets passing by and obtain login, password and whatever else is typed with a packet analyzer. Most implementations of Telnet have no authentication that would ensure communication is carried out between the two desired hosts and not intercepted in the middle. Several vulnerabilities have been discovered over the years in commonly used Telnet daemons. These security-related shortcomings have seen the usage of the Telnet protocol drop rapidly, especially on the public Internet, in favor of the Secure Shell (SSH) protocol, first released in 1995. SSH has practically replaced Telnet, and the older protocol is used these days only in rare cases to access decades-old legacy equipment that does not support more modern protocols. SSH provides much of the functionality of telnet, with the addition of strong encryption to prevent sensitive data such as passwords from being intercepted, and public key authentication, to ensure that the remote computer is actually who it claims to be. As has happened with other early Internet protocols, extensions to the Telnet protocol provide Transport Layer Security (TLS) security and Simple Authentication and Security Layer (SASL) authentication that address the above concerns. However, most Telnet implementations do not support these extensions; and there has been relatively little interest in implementing these as SSH is adequate for most purposes. It is of note that there are a large number of industrial and scientific devices which have only Telnet available as a communication option. Some are built with only a standard RS-232 port and use a serial server hardware appliance to provide the translation between the TCP/Telnet data and the RS-232 serial data. In such cases, SSH is not an option unless the interface appliance can be configured for SSH (or is replaced with one supporting SSH). Telnet is still used by hobbyists, especially among amateur radio operators. The Winlink protocol supports packet radio via a Telnet connection. Telnet 5250 IBM 5250 or 3270 workstation emulation is supported via custom telnet clients, TN5250/TN3270, and IBM i systems. Clients and servers designed to pass IBM 5250 data streams over Telnet generally do support SSL encryption, as SSH does not include 5250 emulation. Under IBM i (also known as OS/400), port 992 is the default port for secured telnet. Telnet data All data octets except 0xff are transmitted over Telnet as is. (0xff, or 255 in decimal, is the IAC byte (Interpret As Command) which signals that the next byte is a telnet command. The command to insert 0xff into the stream is 0xff, so 0xff must be escaped by doubling it when sending data over the telnet protocol.) Telnet client applications can establish an interactive TCP session to a port other than the Telnet server port. Connections to such ports do not use IAC and all octets are sent to the server without interpretation. 
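The doubling rule for the IAC byte described above is simple enough to sketch in a few lines. The following Python fragment is illustrative only; the function names are invented for the example, and real Telnet implementations perform this processing, along with option negotiation, internally.

# Minimal sketch of Telnet IAC (0xff) escaping when sending raw data.
# Any 0xff byte in the payload must be doubled so the receiver does not
# interpret it as the start of a Telnet command.

IAC = 0xFF  # "Interpret As Command" byte defined by the Telnet protocol

def escape_payload(data: bytes) -> bytes:
    """Double every IAC byte so it is treated as literal data."""
    return data.replace(bytes([IAC]), bytes([IAC, IAC]))

def unescape_payload(data: bytes) -> bytes:
    """Collapse doubled IAC bytes back into a single literal 0xff."""
    return data.replace(bytes([IAC, IAC]), bytes([IAC]))

if __name__ == "__main__":
    payload = b"\x01\xff\x02"           # contains one literal 0xff
    wire = escape_payload(payload)      # b"\x01\xff\xff\x02" on the wire
    assert unescape_payload(wire) == payload

Raw sessions to ports other than 23, as noted above, skip this processing entirely and pass all octets through uninterpreted.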
For example, a command line telnet client could make an HTTP request to a web server on TCP port 80 as follows: $ telnet www.example.com 80 GET /path/to/file.html HTTP/1.1 Host: www.example.com Connection: close There are other TCP terminal clients, such as netcat or socat on UNIX and PuTTY on Windows, which handle such requirements. Nevertheless, Telnet may still be used in debugging network services such as SMTP, IRC, HTTP, FTP or POP3, to issue commands to a server and examine the responses. Another difference between Telnet and other TCP terminal clients is that Telnet is not 8-bit clean by default. 8-bit mode may be negotiated, but octets with the high bit set may be garbled until this mode is requested, as 7-bit is the default mode. The 8-bit mode (so named binary option) is intended to transmit binary data, not ASCII characters. The standard suggests the interpretation of codes 0000–0176 as ASCII, but does not offer any meaning for high-bit-set data octets. There was an attempt to introduce a switchable character encoding support like HTTP has, but nothing is known about its actual software support. Related RFCs Internet Standards , Telnet Protocol Specification , Telnet Option Specifications , Telnet Binary Transmission , Telnet Echo Option , Telnet Suppress Go Ahead Option , Telnet Status Option , Telnet Timing Mark Option , Telnet Extended Options: List Option Proposed Standards , Telnet End of Record Option , Telnet Window Size Option , Telnet Terminal Speed Option , Telnet Terminal-Type Option , Telnet X Display Location Option , Requirements for Internet Hosts - Application and Support , Telnet Linemode Option , Telnet Remote Flow Control Option , Telnet Environment Option , Telnet Authentication Option , Telnet Authentication: Kerberos Version 5 , TELNET Authentication Using DSA , Telnet Authentication: SRP , Telnet Data Encryption Option , The telnet URI Scheme Informational/experimental , The Q Method of Implementing TELNET Option Negotiation , Telnet Environment Option Interoperability Issues Other RFCs , Telnet 3270 Regime Option , 5250 Telnet Interface , Telnet Com Port Control Option , IBM's iSeries Telnet Enhancements Telnet clients PuTTY and plink command line are a free, open-source SSH, Telnet, rlogin, and raw TCP client for Windows, Linux, and Unix. AbsoluteTelnet is a telnet client for Windows. It also supports SSH and SFTP, RUMBA (Terminal Emulator) Line Mode Browser, a command line web browser NCSA Telnet TeraTerm SecureCRT from Van Dyke Software ZOC Terminal SyncTERM BBS terminal program supporting Telnet, SSHv2, RLogin, Serial, Windows, *nix, and Mac OS X platforms, X/Y/ZMODEM and various BBS terminal emulations Rtelnet is a SOCKS client version of Telnet, providing similar functionality of telnet to those hosts which are behind firewall and NAT. Inetutils includes a telnet client and server and is installed by default on many Linux distributions. telnet.exe command line utility included in default installation of many versions of Microsoft Windows. 
See also List of terminal emulators Banner grabbing Virtual terminal Reverse telnet HyTelnet Kermit SSH References External links Telnet Options — the official list of assigned option numbers at iana.org Telnet Interactions Described as a Sequence Diagram Telnet configuration Telnet protocol description, with NVT reference Microsoft TechNet:Telnet commands TELNET: The Mother of All (Application) Protocols Troubleshoot Telnet Errors in Windows Operating System Contains a list of telnet addresses and list of telnet clients Application layer protocols History of the Internet Internet Protocol based network software Internet protocols Internet Standards Remote administration software Unix network-related software URI schemes
6901456
https://en.wikipedia.org/wiki/Gpsd
Gpsd
gpsd is a computer software program that collects data from a Global Positioning System (GPS) receiver and provides the data via an Internet Protocol (IP) network to potentially multiple client applications in a server-client application architecture. Gpsd may be run as a daemon to operate transparently as a background task of the server. The network interface provides a standardized data format for multiple concurrent client applications, such as Kismet or GPS navigation software. Gpsd is commonly used on Unix-like operating systems. It is distributed as free software under the 3-clause BSD license. Design gpsd provides a TCP/IP service by binding to port 2947 by default. It communicates via that socket by accepting commands, and returning results. These commands use a JSON-based syntax and provide JSON responses. Multiple clients can access the service concurrently. The application supports many types of GPS receivers with connections via serial ports, USB, and Bluetooth. Starting in 2009, gpsd also supports AIS receivers. gpsd supports interfacing with the Network Time Protocol (NTP) server ntpd via shared memory to enable setting the host platform's time via the GPS clock. Authors gpsd was originally written by Remco Treffkorn with Derrick Brashear, then maintained by Russell Nelson. It is now maintained by Eric S. Raymond. References External links Global Positioning System Free software programmed in C Free software programmed in Python Software using the BSD license
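As an illustration of the socket interface described under Design above, the following minimal Python sketch connects to a gpsd instance on the default port and prints the first position fix. It assumes a running gpsd on localhost and uses the JSON-based ?WATCH command; field names such as lat and lon follow gpsd's TPV report class and may vary with receiver and gpsd version.

# Minimal sketch of a gpsd client (assumes gpsd on localhost:2947).
import json
import socket

with socket.create_connection(("localhost", 2947)) as sock:
    sock.sendall(b'?WATCH={"enable":true,"json":true}\n')  # request JSON reports
    stream = sock.makefile("r")
    for line in stream:
        report = json.loads(line)
        # TPV ("time-position-velocity") reports carry the actual fix
        if report.get("class") == "TPV" and "lat" in report:
            print(report["lat"], report["lon"])
            break

Because the daemon multiplexes the receiver, the same report stream can be consumed by several such clients at once, which is the point of gpsd's server-client design.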
2110105
https://en.wikipedia.org/wiki/MONIAC
MONIAC
The MONIAC (Monetary National Income Analogue Computer), also known as the Phillips Hydraulic Computer and the Financephalograph, was created in 1949 by the New Zealand economist Bill Phillips to model the national economic processes of the United Kingdom, while Phillips was a student at the London School of Economics (LSE). The MONIAC was an analogue computer which used fluidic logic to model the workings of an economy. The MONIAC name may have been suggested by an association of money and ENIAC, an early electronic digital computer. Description The MONIAC was approximately 2 m high, 1.2 m wide and almost 1 m deep, and consisted of a series of transparent plastic tanks and pipes which were fastened to a wooden board. Each tank represented some aspect of the UK national economy and the flow of money around the economy was illustrated by coloured water. At the top of the board was a large tank called the treasury. Water (representing money) flowed from the treasury to other tanks representing the various ways in which a country could spend its money. For example, there were tanks for health and education. To increase spending on health care, a tap could be opened to drain water from the treasury to the tank which represented health spending. Water then ran further down the model to other tanks, representing other interactions in the economy. Water could be pumped back to the treasury from some of the tanks to represent taxation. Changes in tax rates were modeled by increasing or decreasing pumping speeds. Savings reduce the funds available to consumers, and investment income increases those funds. The MONIAC showed this by draining water (savings) from the expenditure stream and by injecting water (investment income) into that stream. When the savings flow exceeded the investment flow, the level of water in the savings and investment tank (the surplus-balances tank) would rise to reflect the accumulated balance. When the investment flow exceeded the savings flow for any length of time, the surplus-balances tank would run dry. Imports and exports were represented by water draining from the model and by additional water being poured into the model. The actual flow of the water was automatically controlled through a series of floats, counterweights, electrodes, and cords. When the water reached a certain level in a tank, pumps and drains would be activated. To their surprise, Phillips and his associate Walter Newlyn found that the MONIAC could be calibrated to an accuracy of 2%. The flow of water between the tanks was determined by economic principles and the settings for various parameters. Different economic parameters, such as tax rates and investment rates, could be entered by setting the valves which controlled the flow of water about the computer. Users could experiment with different settings and note the effect on the model. The MONIAC's ability to model the subtle interaction of a number of variables made it a powerful tool for its time. When a set of parameters resulted in a viable economy, the model would stabilise and the results could be read from scales. The output from the computer could also be sent to a rudimentary plotter. The MONIAC had been designed to be used as a teaching aid but was discovered also to be an effective economic simulator. At the time that the MONIAC was created, electronic digital computers that could run complex economic simulations were unavailable. In 1949, the few computers in existence were restricted to government and military use. 
Neither did they have adequate visual display facilities, so they were unable to illustrate the operation of complex models. Observing the MONIAC in operation made it much easier for students to understand the interrelated processes of a national economy. The range of organisations that acquired a MONIAC showed that it was used in both capacities. Phillips scrounged a variety of materials to create his prototype computer, including bits and pieces from war surplus such as parts from old Lancaster bombers. The first MONIAC was created in his landlady's garage in Croydon at a cost of £400. Phillips first demonstrated the MONIAC to a number of leading economists at the LSE in 1949. It was very well received and Phillips was soon offered a teaching position at the LSE. Current locations It is thought that twelve to fourteen machines were built. The prototype was given to the Economics Department at the University of Leeds, where it is currently on exhibition in the reception of the university's Business School. Copies went to three other British universities. Other computers went to the Harvard Business School and Roosevelt College in the United States and Melbourne University in Australia. The Ford Motor Company and the Central Bank of Guatemala are believed to have bought MONIACs. A MONIAC owned by Istanbul University is located in the Faculty of Economics and can be inspected by interested parties. A MONIAC from the LSE was given to the Science Museum in London and, after conservation, was placed on public display in the museum's mathematics galleries. A MONIAC owned by the LSE was donated to the New Zealand Institute of Economic Research in Wellington, New Zealand. This machine formed part of the New Zealand Exhibition at the Venice Biennale in 2003. The MONIAC was set to model the New Zealand economy. In 2007 this machine was restored and placed on permanent display in the Reserve Bank of New Zealand Museum. A working MONIAC (or Phillips Machine, as it is known in the UK) can be found at the Faculty of Economics at Cambridge University in the United Kingdom. This machine was restored by Allan McRobie of the Cambridge University Engineering Department, who holds an annual demonstration for students. A replica of the MONIAC at the central bank of Guatemala was created for a 2005-6 exhibition entitled "Tropical Economies" at the Wattis Institute of the California College of the Arts in San Francisco. The MONIAC at the University of Melbourne, Australia, is on permanent display in the lobby of the Giblin Eunson Library (Ground Floor, Business and Economics Building, 111 Barry St, Carlton, Melbourne). The faculty has extended an invitation to anyone interested in restoring the MONIAC to functional capacity. Erasmus University Rotterdam (EUR) has owned a MONIAC since 1953. It was a gift from the City of Rotterdam for EUR's 40th anniversary. It is located in the THEIL building. Clausthal University of Technology also holds one, in its faculty of economic sciences. Popular culture The Terry Pratchett novel Making Money contains a similar device as a major plot point. However, after the device is fully perfected, it magically becomes directly coupled to the economy it was intended to simulate, with the result that the machine cannot then be adjusted without causing a change in the actual economy (in parodic resemblance to Goodhart's law). See also Analogue computer Hydraulic macroeconomics Phillips curve Water integrator References Documentary "The League of Gentlemen". 
Third Episode of Pandora's Box, a documentary produced by Adam Curtis External links BBC Radio Four programme 'Water on the brain'. NZIER's Moniac Machine Article includes picture of NZIER Moniac Inc. article: When Money Flowed Like Water Wetware article: Money Flows: Bill Phillips' Financephalograph enginuity article A great disappearing act: the electronic analogue computer Chris Bissell, The Open University, Milton Keynes, UK. Presented at IEEE Conference on the History of Electronics, Bletchley Park, UK, 28–30 June 2004. Moniac on pages 6 and 7. Accessed February 2007 Catalogue of the AWH Phillips papers at the Archives Division of the London School of Economics. Video of the Phillips Machine in operation Allan McRobie demonstrates the Phillips Machine at Cambridge University and performs calculations. (A lecture given in 2010). Contains detailed diagrams of the Machine workings The Phillips Machine Article includes links to videos of the machine in operation. LSE Photo of Phillips with the machine Bill Phillips Lecture by Alan Bollard, 16 July 2008 Philips Machine Simulator Phillips Economic Model on display in the Science Museum, London Istanbul University Moniac Meeting 1940s computers Analog computers Computer-related introductions in 1949 Early British computers Economics models Mechanical computers
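The feedback structure described in the Description section above – spending draining the treasury, taxation pumping money back, savings and investment adjusting the circulating flow – can be imitated in a few lines of code. The sketch below is purely illustrative: all rates are invented for the example, and it does not attempt to reproduce the calibration or behaviour of the actual machine.

# Toy discrete-time imitation of the MONIAC's hydraulic feedback loop.
# All rates are invented; the real machine was an analogue hydraulic model
# calibrated to around 2% accuracy.
treasury = 100.0   # "water" held by the treasury tank
savings = 20.0     # surplus-balances tank

spend_rate = 0.30  # share of the treasury released as public spending per step
tax_rate = 0.25    # share of circulating expenditure pumped back as taxation
save_rate = 0.10   # share of expenditure drained into savings
invest_rate = 0.08 # share of savings injected back as investment income

for step in range(10):
    spending = spend_rate * treasury      # water drained from the treasury
    investment = invest_rate * savings    # water injected from savings
    expenditure = spending + investment   # circulating flow
    saved = save_rate * expenditure       # water drained into savings
    taxes = tax_rate * expenditure        # water pumped back to the treasury

    treasury += taxes - spending
    savings += saved - investment
    print(f"step {step}: treasury={treasury:.1f} savings={savings:.1f}")

With some parameter choices the levels settle toward a steady state, which is the kind of stabilising behaviour the machine made visible to students.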
31800185
https://en.wikipedia.org/wiki/EgoNet
EgoNet
EgoNet (Egocentric Network Study Software) is a program for the collection and analysis of egocentric social network data. It helps the user to design studies, collect data about respondents (egos) and the people they know (alters), and analyse the resulting personal networks, providing general global network measures and data matrices that can be used for further analysis in other software. An ego network consists of a focal person (the ego), the people that person is connected to (the alters), and the ties among those alters; EgoNet is dedicated to collecting information about such networks and presenting it in a way useful to the researcher. EgoNet is written in Java, so the computer on which it runs must have the JRE installed. EgoNet is open-source software, licensed under the GPL. Its creator is Professor Christopher McCarty of the University of Florida, United States. Features The program allows the user to create questionnaires, collect data, and produce comprehensive measures and matrices of data that can be used for subsequent analysis by other software. Its main benefits are the generation of questionnaires for relational data, the calculation of general measures relevant to the analysis of social networks, and the production of graphs. Components EgoNet is composed of the following modules: EgoNetW, which allows the creation of questionnaire formats for carrying out studies; and EgoNetClientW, used for data entry once the relevant questions and the structure of the questionnaires have been defined. See also Graphviz GraphStream graph-tool JUNG NetworkX Tulip References External links Social networking services Data analysis software Free software programmed in Java (programming language) Free science software
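Egocentric network measures of the kind EgoNet reports can also be computed with general-purpose libraries such as NetworkX, listed under See also above. The sketch below is illustrative only and is not part of EgoNet itself: it builds a small invented network, extracts the ego network of one node, and computes a few basic measures.

# Illustrative only: simple egocentric network measures with NetworkX.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("ego", "anna"), ("ego", "ben"), ("ego", "carla"),
    ("anna", "ben"),    # a tie between two alters
    ("carla", "dave"),  # dave lies outside the ego network (distance 2)
])

ego_net = nx.ego_graph(G, "ego")        # ego plus its alters and the ties among them
alters = ego_net.number_of_nodes() - 1  # network size, excluding the ego
density = nx.density(ego_net)           # how interconnected the ego network is

print(f"alters: {alters}, ties: {ego_net.number_of_edges()}, density: {density:.2f}")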
39834535
https://en.wikipedia.org/wiki/Appcelerator
Appcelerator
Appcelerator is a privately held mobile technology company based in San Jose, California. Its main products are Titanium, an open-source software development kit for cross-platform mobile development, and the Appcelerator Platform. Founded in 2006, Appcelerator serves industries including retail, financial services, healthcare, and government. As of 2014, it had raised more than $90 million in venture capital financing. History Jeff Haynie and Nolan Wright met at Vocalocity, an Atlanta-based voice over IP company that Haynie had co-founded. After Haynie sold Vocalocity in 2006, the pair founded Web 2.0 application development company Hakano. In 2007, Hakano, renamed Appcelerator, began creating an open-source platform for developing rich Internet applications (RIAs). Marc Fleury, the founder of JBoss, joined the company as an advisor. In 2008, Appcelerator relocated to Mountain View, California, and later released a preview of its Appcelerator Titanium product, which drew comment as a possible open-source competitor to Adobe AIR. Appcelerator began to focus on mobile apps in 2009. In June, it released a public beta of Titanium, which added support for Android and iOS app development to its existing web and desktop application features. Titanium 1.0 was released in March 2010. Appcelerator increased its employee count fivefold between October 2010 and 2011. The company's 2011 revenue totaled $3.4 million, a 374 percent increase from 2008. Between 2011 and 2013, Appcelerator announced several acquisitions, including: Aptana, an integrated development environment (IDE) company; Particle Code, an HTML5 mobile gaming development platform; Cocoafish, a backend-as-a-service provider; Nodeable, a big data analytics company; and Singly, an API management company, in August 2013. Appcelerator moved to its San Jose headquarters in 2015. In January 2016, Appcelerator was acquired by Axway, a company that helps enterprises handle data flows. Products Axway Appcelerator Dashboard offers real-time analytics of the lifecycle and success of apps built on the Axway Appcelerator Mobile Solution or directly via the native SDK. Axway Appcelerator Studio is an open, extensible development environment for building, testing and publishing native apps across mobile devices and OSs including iOS and Android. Axway API Builder is an opinionated framework for rapidly building APIs, with a scalable cloud service for running them. It allows developers to connect, model, transform and optimize data for both native and web app clients. API Builder and API Runtime are the backbones of the Axway Appcelerator Platform MBaaS. Axway Mobile Analytics is a mobile analytics offering that collects and presents information in real time about an application's user acquisition, engagement, and usage. Titanium Appcelerator Titanium is an open-source framework that allows the creation of native, hybrid, or mobile web apps across platforms including iOS, Android, and Windows Phone from a single JavaScript codebase. As of February 2013, 10 percent of all smartphones worldwide ran Titanium-built apps. As of August in the same year, Titanium had amassed nearly 500,000 developer registrations. Alloy Alloy is an Apache-licensed model–view–controller app framework built on top of Titanium that provides a simple model for separating the app user interface, business logic, and data models. Apps built with Appcelerator products are written in JavaScript. 
Though initially developed as a Web language, JavaScript is increasingly popular for mobility due to its ability to meet the speed, scale, and user experience requirements that mobile development demands. According to Forrester Research, JavaScript adoption is setting the stage for the "biggest shift in enterprise application development" in more than a decade. Funding In December 2008, Appcelerator closed a $4.1 million first venture round led by Storm Ventures and Larry Augustin. Later, in October 2010, the company announced a partnership with PayPal and that it had raised $9 million in Series B funding from investors including Sierra Ventures and eBay. Appcelerator raised $15 million in Series C funding led by Mayfield Fund, Red Hat, and Translink Capital in November 2011, and a further $12.1 million in a round led by EDBI, the venture fund of the Singaporean government's Economic Development Board, in July 2013. On August 25, 2014, Appcelerator announced $22 million in Series D funding led by Rembrandt Venture Partners. Total funding for the mobile engagement platform to date is more than $90 million. Marketing awards 2012 The Wall Street Journal: Technology Innovation Award in Software 2012 The Wall Street Journal: The Next Big Thing 2012 Red Hat Innovation Award Winner: Extensive Partner Ecosystem 2012 Momentum Index: 100 Open Source Companies 2012 Edison Awards Winner 2012 Silicon Valley Business Journal's Best Places to Work in the Bay Area See also Appcelerator Titanium Mobile application development JavaScript Node.js Mobile Backend as a service (MBaaS) Mobile Enterprise Application Platform (MEAP) References External links Official Website Software companies based in the San Francisco Bay Area Privately held companies based in California Development software companies Mobile software programming tools JavaScript libraries Mobile software development Android (operating system) development software BlackBerry development software Software companies of the United States
7357562
https://en.wikipedia.org/wiki/Software%20engine
Software engine
A software engine is a core component of a complex software system. The word "engine" is a metaphor drawn from a car's engine. Thus a software engine is a complex subsystem. There is no formal guideline for what should be called an engine, but the term has become entrenched in the software industry. Notable examples are database engine, graphics engine, physics engine, search engine, plotting engine, and game engine. Moreover, a web browser actually has two components referred to as engines: the browser engine and the JavaScript engine. Classically an engine is something packaged as a library, such as a ".sa", ".so", or ".dll" file, that provides functionality to the software that loads or embeds it. Engines may produce graphics, such as the Python matplotlib or the Objective-C Core Plot. But engines do not in and of themselves generally have standalone user interfaces or a "main" entry point; they are not applications. Engines may be used to produce higher-level services that are applications, and the application developers or the management may choose to call the service an "engine". As in all definitions, context is critical. In the context of the packaging of software components, "engine" means one thing. In the context of advertising an online service, "engine" may mean something entirely different. In the arena of "core software development", an engine is a software module that might be included in other software by means of a package manager such as NuGet for C#, Pipenv for Python, and Swift Package Manager for the Swift language. One seeming outlier is a search engine, such as Google Search, because it is a stand-alone service provided to end users. However, for the search provider, the engine is part of a distributed computing system that can encompass many data centers throughout the world. The word "engine" is evolving along with the evolution of computing as it expands into the arena of services offered via the Internet. It is important to note that there is a difference between Google the end-user application and Google the search engine. For an end user, search is done via a user interface, generally a browser, which talks to the "engine". This is but one way of interacting with the engine. Others include a wide range of Google APIs, which are more akin to the classic notion of an engine (where an engine module presents its functionality via an API only). There is an overlapping evolution, a service/application style known as microservices. Prior to the Google online search service, there had been multiple search engines that were indeed packaged as software modules. Long before Google, there were online dialup services that used third-party search engines, such as Congressional Quarterly's Washington Alert II. Before that there were many desktop products that included third-party search engines, especially CD-ROM based encyclopedias from Grolier, Compton's, Bertelsmann, and many others. Mac OS 9 for a long time used a third-party search library (Personal Library Software's CPL). Most of the early search engine companies, such as Personal Library Software and their CPL product, are long gone. One of the earliest Web search engines, perhaps the first, was WebCrawler. It was based on the CPL search engine library from Personal Library Software. The CPL engine is long gone, as it was withdrawn from the market when AOL acquired Personal Library Software, and apparently only exists as archival pages in the Internet Archive Wayback Machine. 
For a software developer, probably the most useful notion of "engine" is that of a module you can use in your own code, a module that provides significant functionality in a focused domain. One might call the C standard library an "engine", but it does not really have a focus other than to provide a broad range of low-level services. Still, it might be called a "foundational services" engine. On the other hand, Gensim more clearly classifies as an engine; it is a high-level package offering a range of high-level tools for topic modeling, largely based on derivations of the vector space model of information retrieval originally developed by Gerard Salton. Software engineering
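As a concrete illustration of embedding such an engine as a library, the following minimal Python sketch drives Gensim's topic-modelling API from a few lines of application code. It is illustrative only: the toy corpus is invented, and a real application would do far more preprocessing than simple lowercasing and splitting.

# Minimal sketch: Gensim used as an embedded "engine" for topic modelling.
from gensim import corpora, models

documents = [
    "water flows through pipes and tanks",
    "tanks and pipes carry water",
    "the economy depends on taxation and spending",
    "government spending and taxation shape the economy",
]
texts = [doc.lower().split() for doc in documents]

dictionary = corpora.Dictionary(texts)                   # map tokens to integer ids
corpus = [dictionary.doc2bow(text) for text in texts]    # bag-of-words vectors

# Train a tiny LDA model; the application, not the engine, decides how to use it.
lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary, random_state=0)
for topic_id, words in lda.print_topics(num_words=3):
    print(topic_id, words)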
1341689
https://en.wikipedia.org/wiki/Stonekeep
Stonekeep
Stonekeep is a role-playing video game developed and released by Interplay Productions for the PC in 1995. It is a first-person dungeon crawler game with pre-rendered environments, digitized characters and live-action cinematic sequences. Repeatedly delayed, the game that was supposed to be finished in nine months took five years to make. Gameplay Stonekeep is a first-person role-playing game in the style of Eye of the Beholder and Dungeon Master. The game is set in a series of underground labyrinths, filled with monsters, treasures and traps. The player uses the keyboard for movement and for typing notes in the journal, and the mouse to interact with objects and characters. The mouse pointer is usually a target indicator for aiming attacks and weapons wherever it is clicked. When the mouse pointer is moved onto particular objects, it changes to another icon to indicate a different action. For example, the mouse pointer changes to an eye when the player can examine things (often signs), or a spread-out hand when the player can pick up items. Other tasks performed with the mouse pointer include opening and closing chests, opening and closing panels, pulling levers and switches, pressing buttons, drinking water, and giving items. The protagonist Drake has two starting possessions: the magic scroll and the magic mirror. The magic scroll allows the player to pick up an infinite number of items. Items of the same type can be combined up to a maximum quantity of 99; other items can be combined, such as a quiver, which can hold 99 arrows. The magic mirror allows the player to equip Drake and other characters with weapons, armor, and accessories, and to consume items to affect their status, such as healing potions or bad-smelling Throg food. Drake can also read scrolls used on him. Although Drake can wield any weapon, other characters like Farli and Karzak can only wield hammers, axes, and shields. Certain weapons like polearms and heavy swords require Drake to have two free hands to wield. Some armor can only be worn by certain characters. For instance, only dwarves can wear dwarven platemail, and only Drake can wear knight armor. Exceptional characters like Sparkle and Wahooka cannot be outfitted, but can still consume items. The third possession is the journal, which the player must procure. The journal is divided into six sections. The first section records the statistics of Drake, shows the status of his currently equipped weapons, and describes the characteristics of his partners. Drake's statistics are strength, agility, and health, and his weaponry skills include polearm, sword, magick, missiles, and others. The second section of the journal records any clues and hints the player may come across. The third section is used for writing notes. The fourth section records items each time the player picks up a new type. The fifth section records runes each time the player comes across a new one. Unlike the items section, the runes do not have their respective names recorded. The sixth section of the journal records the level maps that the player journeys through. Spots on the map can be clicked and notes referring to them can be written. Stonekeep features an elaborate "magick" system where four types of runes are inscribed onto a spellcaster (wand): Mannish, Fae, Throggish, and Meta. The first three runes are used for offensive, defensive and special interaction purposes. The Meta runes enhance the effectiveness of the base runes—for example, double power splits a single firebolt into two. 
To use a spellcaster, it must contain adequate mana, and runes must be inscribed onto its shaft. To inscribe runes on the spellcaster, the player needs to equip the spellcaster on either one of Drake's hands, open the journal to the Runes section, take out the spellcaster, and copy the runes onto the spell slots of the spellcaster. The runecaster can then be taken out at any time; the spell to be cast is selected by highlighting its spell slot and is launched at the indicated target point. Using the magic mirror, some spells can be aimed at the characters, especially healing and quickness spells. Plot Stonekeep's mythology revolves around a variety of gods associated with planets of the solar system. In order, they are Helion (Mercury), Aquila (Venus), Thera (Earth), Azrael (Mars), Marif (Jupiter), Afri (Saturn), Saffrini (Uranus), Yoth-Soggoth (Neptune) the Master of Magick, and Kor-Soggoth (Pluto) the Brother to Magick. These gods were captured and imprisoned in nine orbs by the dark god Khull-Khuum 1000 years before the events of the game, during a cataclysm referred to as "The Devastation". Stonekeep is centered on a hero, Drake. Ten years before the events of the game, Drake's home, the castle of Stonekeep, was destroyed by the insane god Khull-Khuum, the Shadowking. Drake, at this time just a boy, was saved from the castle by a mysterious figure. Returning to the ruins of Stonekeep, Drake is visited by the goddess Thera, who sends his spirit out of his body into the ruins themselves to explore, find the mystical orbs containing the other gods, and reclaim the land. Along the way, Drake makes many friends, including Farli, Karzak, and Dombur the dwarves; the great dragon Vermatrix; the elf Enigma; and the mysterious Wahooka, the King of goblins. Together, they embark on a quest to rid the world of Khull-Khuum and his consort the Ice Queen. Development Budget and technology The earliest development of Stonekeep dates back to October 1988, when Brian Fargo and Todd Camasta discussed it under the simple title "Dungeon Game". Producer Peter Oliphant and lead programmer Michael Quarles joined the company in 1990 and 1991, respectively. The game's development was planned for a minimum period of nine months and a minimum budget of $50K. However, because the initial stages of the game looked good, it exceeded nine months, lasting a total of five years. Stonekeep's final cost was $5 million; its production crew had grown to 200 members by the time of the game's release. The intro sequence was the most expensive part of the production, costing nearly half a million dollars to produce, which was ten times more than the initial budget for the entire project. The initial story line was written by Oliphant, who also designed and programmed the graphics and artificial intelligence engine for the game. The project started out being called Brian's Dungeon (named after Brian Fargo, the president of Interplay Entertainment at the time). Fargo came up with the final name, Stonekeep. The production took much longer than expected because of the rapid advancement of personal computer hardware at the time; specifically, PC CPUs advancing from 80386, to 80486, to Pentiums in the years the game was being developed. Oliphant, who originally designed the game and was lead programmer, left the project as it passed its fourth year in development. 
He felt his continued presence was resulting in the constant addition of feature creep and changes (he was a contractor, and had initially only signed up for a nine-month project). After he left, the design became finalized and the product was shipped one year later. Quarles, who was an Interplay employee, stayed as the game's producer and saw it through to the end. The initial specification for the game included that it could not require a hard drive or a mouse, run on an 80286 CPU, use 640K, and run off floppy disks. At the project's end, the game had been upgraded to requiring a mouse, a hard drive, a 386 CPU, and ran off CD-ROM. As a result, the engine had to be extensively modified throughout the production. About three years into the project, Oliphant suggested to Fargo that the product be delivered on CD-ROM. Fargo rejected this idea at the time, citing the failure of previous Interplay CD-ROM projects that had gone this route. Oliphant suggested this after Fargo requested him to drop his percentage of royalties by half due to the high cost of production and goods to create the product, as it was at that time to be shipped on eight floppy disks. The cost of one CD was about the cost of one floppy disk, and the possibilities for eight floppy disks having problems is much greater than a single CD, so the solution seemed obvious to Oliphant. And, in fact, six months later Fargo changed his mind and made the same decision. Graphics and audio The 3D rendering was accomplished by using the Strata Vision application to create the room layouts, monsters, and objects. The initial motions of the monsters in the game were captured by using a blue screen outside with the sunlight. This resulted in uneven lighting from take to take, so eventually all that work was scrapped. Later, a professional studio with controlled lighting was used. The earliest film footage was taken with a standard film camera and Macintosh computer for editing. This technology proved to be inadequate. After two years of failed filming, the team turned their attention to Hollywood. Aided by the production company Dia Quest and new digitizing technology, successful filming was finally implemented into the project. The entire five months of successful filming was soon met with several setbacks including toning, lighting, and digitization problems. When the team finally obtained the Betacam technology, the development was back on track. According to Oliphant, when the project was taken over by Quarles, two questionable decisions were made. The game was always designed to be grid-based, where the player moved from grid to grid (in contrast to today's full freedom of motion 3D environments). Oliphant wanted the movement from center of grid to center of grid, but Quarles changed this to edge of grid to edge of grid. This resulted in the problem that turning within a grid moved the player to the other side of the grid. Much of the long production was a result of correcting this lack of symmetry. The other questionable decision was to not include Oliphant in the production of the motion graphics (Oliphant had an extensive Hollywood background before becoming a game developer). One consequence was that the original combat graphics had been captured from the waist up only, as Quarles had reasoned one must be close to a monster to fight it. Peter Oliphant, upon being delivered these graphics and seeing them for the first time, pointed out that the player could back away during a fight, which would result in seeing their legs. 
The legs therefore had to be drawn in by hand frame-by-frame to fix this, until these graphics were scrapped for a professional green-screen treatment used later on. The original skeleton in the game was an actual skeleton being worn by one of the artists, and was filmed against a green screen. Because of this, there were no images or animations of the skeleton walking away from the player during game play. A few months before the game's release the skeleton was replaced with the 3D model which was used on the packaging. Due to the complexity of the graphics, during play the computer would have to constantly load graphics from the CD. This prohibited the use of the CD for music, so the developers used chip music for the soundtrack. The game features the voice of Arthur Burghardt—well known as the character Destro in the 1985 G.I. Joe cartoon series—in the role of Khull-Khuum.Stonekeep was originally released for the PC DOS and Windows 95 in 1995, packaged in an elaborate gravestone-style illustrated box, and came with a white hardback novella Thera Awakening, coauthored by Steve Jackson and David L. Pulver (all rights of the novel went to Interplay). The CD-ROM also included a file called "muffins.txt" which contained a recipe for "Tim Cain's Chocolate Chip Pumpkin Muffins". Years later, Stonekeep was later made available for purchase through GOG.com's digital distribution system for Windows XP and Windows Vista. Reception Sales In preparation for Stonekeeps launch, Interplay shipped 175,000 units to retailers. This was the company's biggest shipment ever, according to Greg Miller of the Los Angeles Times. Interplay dedicated $1.5 million to the game's marketing budget, also the largest for any of its games by that point. In response to public reception of Stonekeeps pre-release demo, the company "stepped up the initial release forecast to 200,000 units". In the days before Stonekeeps release, Next Generation reported, "With store orders already topping 90,000, Interplay says it's set to become the fastest selling game in the company's 12-year history." Upon its release, the game placed fourth on PC Data's monthly computer game sales chart for November 1995, but was absent on the following month's chart. According to Interplay, its global sales surpassed 300,000 copies by June 1998. Author Erik Bethke later described Stonekeeps commercial performance as "weak", which he blamed on its five-year development cycle. Critical reviews A Next Generation critic deemed it "well worth the wait. The graphics are stunning, the music is eerie, and the interface is a pleasure to use." He made particular note of the automap, strong puzzle element, and original visual designs for the monsters. Maximum praised the game's "atmospheric 3D rendered world" and sound effects, but criticized the lack of challenging puzzles, low amount of gore, and "sluggish" combat. The editors of Computer Games Strategy Plus named Stonekeep the best computer role-playing game of 1995. Petra Schlunk of Computer Gaming World called Stonekeep "successful on many levels; both hard-core and newcomers to role-playing should enjoy this." She wrote that "a lot of thought and heart went into the game's design and production", and she singled out its writing, characters and storyline for praise. The game was later nominated for Computer Gaming Worlds 1995 Role-Playing Game of the Year award. 
The editors praised it as "a milestone in computer role-playing games", with "beautiful SGI rendered characters and fascinating story development", but noted that it was held back by bugs. It won the magazine's readers' choice award in its genre. PC Gamer US's Mike Wolf called it "a decent offering for roleplaying novices [... but] far from the magnum opus you'd expect". A review by Bernard Yee of GameSpot did not offer similar praise, concluding that "Stonekeep is a dated first-person RPG that suffers from a poor interface, little depth, and few frills." In Fusion, Arnie Katz called Stonekeep "a good antidote to the idiosyncratic complexities that pass for depth" among role-playing games, and praised its interface and audiovisual presentation. He summarized it as "an exciting quest that doesn't concede much to rival RPG releases." Paul Presley of PC Zone called Stonekeep "a case of gloss but no substance", which failed to offer "something over and above that which Eye of the Beholder was delivering all that time ago." Andy Butcher reviewed Stonekeep for Arcane magazine, rating it a 7 out of 10 overall. Butcher commented that "Although the 'plot' as such is somewhat weak, and the game quickly falls into an 'explore the level, kill the monsters, solve the puzzles' affair, Stonekeep is engaging and surprisingly addictive." In 1996, editors of Computer Gaming World ranked it as the tenth top vaporware title in computer game history (due 1991, delivered 1996), stating that "after seeing the same basic demo for years, the game finally shipped, as an anti-climax." Stonekeep was also ranked at number six on GameSpot's top ten vaporware hall of shame. In 2009, GamesRadar also included Stonekeep among the games "with untapped franchise potential" due to the cancellation of Stonekeep 2: Godmaker. Legacy The Oath of Stonekeep, a novel set in the world of Stonekeep, was written by Troy Denning and published in 1999 by Berkley Boulevard Books. Interplay's Black Isle Studios worked on a sequel, Stonekeep 2: Godmaker, for roughly five years, before eventually cancelling it in 2001 in order to work on Icewind Dale II and Baldur's Gate III: The Black Hound. The game and its novelization would remain the only entries in the series until the 2010 announcement of Stonekeep: Bones of the Ancestors, a game developed for Interplay by Alpine Studios. It is not a sequel to Stonekeep, but rather an all-new game and a standalone entry in the franchise. Bones of the Ancestors was released in 2012 as a downloadable title on WiiWare. References External links Stonekeep Stonekeep on Steam Stonekeep on Gog.com 1995 video games DOS games Fantasy novels Fantasy video games First-person party-based dungeon crawler video games Games commercially released with DOSBox Interplay Entertainment games Novels based on video games Role-playing video games Single-player video games Video games developed in the United States Windows games Video games about witchcraft
7466236
https://en.wikipedia.org/wiki/DSLink
DSLink
The DSLink is a 1st generation storage device used to run Nintendo DS homebrew. It allows the running of Nintendo DS games and programs created by unofficial developers. It also allows the running of Nintendo DS game ROMs. Unlike most similar devices at the time, it uses Slot 1 (the DS card slot) instead of Slot 2 (as most Game Boy Advance flash cartridges do), marking it as one of the first entries in the market to do this and allowing for the use of other GBA slot devices such as the Rumble Pak, the Nintendo DS Browser RAM expansion cart, etc. Considerations As with all flash carts, there are many considerations that a user will need to be aware of before purchasing. 'Time to play' speed concerns This device has a fairly long "time to play" time, meaning the time from "power on" until the user is engaged in a selected program running on it is considerably longer than on similar devices. Customizable interface The product can be "skinned" with user-created artwork and sound effects; no editor of any kind is provided for this task. Flash media pricing and availability TransFlash (MicroSD) memory is a fairly recent technology and hence costs more than comparably sized SD memory such as MiniSD. FATLib support Many homebrew developers use a library called FATLib to create Nintendo DS programs that read/write to the FAT file system used for the SD flash memory. Currently, this library is not supported on the DSLink, which causes many programs that use it not to function properly. Known issues There are several known issues with this product, all of which can presumably be solved with firmware/software upgrades. Non-working images There are many titles that function incorrectly or not at all with the DSLink. New Super Mario Bros. (mini-games do not run) The Chronicles of Narnia: The Lion, the Witch and the Wardrobe (save files from other products do not appear to convert to DSLink format) Animal Crossing: Wild World (slowdown during game) Ultimate Spider-Man (cannot get past intro scene - product website has a saver file fix) Working homebrew software Requires further investigation. Software used with the hardware The product comes with a Windows-based program, known as a 'patcher', that will patch game images and other software to make them compatible with the DSLink. The software also generates and copies over support files the DSLink needs to generate the menus, saver files, etc. It will also convert saver files from other similar devices and make them compatible with the DSLink. Legality of backup devices Overview The legality of such products has been challenged and laws vary from country to country on their ownership and usage. In the United States, the Digital Millennium Copyright Act would arguably prevent the legal use of this device. Philosophy of legitimate uses Whether legal or not, legitimate uses of these devices can easily be argued. In addition to allowing the freedom to run 'unsigned' third-party code (such as media players, PDA software, etc.), the most prolific use of these devices is the ability for legitimate owners of software to "multi-boot" several programs stored on the card at any given time without the need to carry easily lost, SD-card-sized Nintendo DS game cards. With substantial investments in software, users should have the convenience of such products, taking copies on the go while leaving the original product in a safe location. 
Specifications 1st generation Slot 1 storage device Uses TransFlash (aka MicroSD) - No capacity limit mentioned 4M built in saver memory (no battery is used) Supports GBA game linking NoPassMe function, FlashMe V7x required! Requires a "FlashMe'd" Nintendo DS (classic or Lite model) Supports Moonshell (media player) External links PHWiki:GBA Flash Cards DSLink Website - The official product website Nintendo DS accessories
26262509
https://en.wikipedia.org/wiki/PKWare
PKWare
PKWARE, Inc. is an enterprise data protection software company that provides discovery, classification, masking and encryption solutions, along with data compression software, used by thousands of organizations in financial services, manufacturing, military, healthcare and government. These solutions are used by enterprises that need to comply with data protection regulations such as GDPR, CCPA, HIPAA, PCI DSS, TISAX, ITAR, CDPA, LGPD and other emerging laws. The company is headquartered in Milwaukee, Wisconsin with additional offices in the US, UK and India. PKWARE was founded in 1986 by Phil Katz, co-inventor of the ZIP standard. Katz's ZIP innovations were a rallying point for early online bulletin board system and shareware communities. More recently, PKWARE has focused on enterprise data protection, developing products that integrate data discovery, encryption, and encryption key management. As of 13 May 2020, Thompson Street Capital Partners has acquired PKWARE Inc. History Compression software (1986–2000) PKWARE was founded in 1986 by Phil Katz, a software developer who had begun distributing a new file compression utility, called PKARC, as shareware. PKARC represented a radical improvement over existing compression software (including the ARC utility, on which it was based) and rapidly gained popularity among individuals and corporations. Following a legal settlement with Systems Enhancement Associates Inc., the owners of ARC, Katz stopped distributing PKARC. He released his own compression program, which he called PKZIP, in 1989. PKZIP was the first program to use the new ZIP file format, which Katz developed in conjunction with Gary Conway and subsequently released into the public domain. PKWARE grew rapidly in its early years, fueled by enthusiasm from the bulletin board and shareware communities, along with steady business from large corporations, who were eager to minimize the demands on their limited computing resources. The ZIP format proved so popular that it became the de facto standard for data compression and remains in use throughout the world after more than 30 years. Purchase and expansion (2001–2008) After Katz died in 2000, his family sold the company to a new management team led by George Haddix and backed by investment-banking firm Grace Matthews. Two years later, the company acquired Ascent Solutions, a large-platform software firm based in Dayton, Ohio. SecureZIP, a program that combined PKZIP's data compression with enhanced encryption functionality, was released in 2004. In the following years, PKWARE continued to add support for large and small platform operating systems and introduced new features for both PKZIP and SecureZIP. Shift toward data protection (2009–2015) A new ownership group including company management, Novacap Technologies, and Maranon Capital acquired PKWARE in 2009. The company's new CEO, V. Miller Newton, steered the company toward an increased focus on its encryption products, in response to growing concerns about data security among PKWARE's customers in industries such as healthcare and government. In 2012, PKWARE released Viivo, a cloud storage encryption product to help customers secure data stored on Dropbox and other cloud storage services. Viivo received attention for having been developed outside of traditional methods in an effort toward "disruptive innovation" in the emerging cloud security market. 
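The ZIP format that Katz and Conway placed in the public domain is now supported directly by the standard libraries of most programming languages. As a minimal, illustrative sketch (using Python's built-in zipfile module rather than any PKWARE product), the following creates and inspects a small DEFLATE-compressed archive; the file name and contents are invented for the example, and the "PK" check at the end reflects the fact that ZIP headers begin with Phil Katz's initials.

```python
import zipfile

# Create a small archive using DEFLATE, the compression method popularized by PKZIP
# and still the default in most ZIP tooling.
with zipfile.ZipFile("example.zip", "w", compression=zipfile.ZIP_DEFLATED) as zf:
    zf.writestr("readme.txt", "Sample text stored in a ZIP archive. " * 40)

# Read the archive back and report how well each member compressed.
with zipfile.ZipFile("example.zip") as zf:
    for info in zf.infolist():
        ratio = info.compress_size / info.file_size
        print(f"{info.filename}: {info.file_size} -> {info.compress_size} bytes "
              f"({ratio:.0%} of original)")

# Every ZIP local file header starts with the ASCII bytes "PK".
with open("example.zip", "rb") as f:
    assert f.read(2) == b"PK"
```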
Developing and purchasing expanded capabilities (2016–present) PKWARE released Smartcrypt, a data protection platform combining encryption, data discovery, and encryption key management, in 2016. In 2018, PKWARE added Data Classification with Data Redaction to support PCI DSS compliance added one year later in 2019. PKWARE was then purchased by Thompson Street Capital Partners in May 2020. After acquiring Dataguise in November 2020, the companies were merged under the single PKWARE name. Products from both legacy companies were renamed according to the new PK branding structure and the combined company received new branding to support the changes. These expanded capabilities enable PKWARE to offer data protection for structured data, unstructured data and semi-structured data. Acquisitions On 13 May 2020, Thompson Street Capital Partners acquired PKWARE for an undisclosed sum. Under Thompson Street, PKWARE acquired Dataguise on 10 November 2020 for their sensitive information detection technologies ("data discovery"). Products In addition to its data compression and encryption products, PKWARE continues to maintain the ZIP file format standard in the public domain. The company publishes an Application Note on the ZIP file format, providing developers a general description and technical details of the ZIP file storage specification. This Application Note ensures continued interoperability of the ZIP file format for all users. Patents Phil Katz was granted a patent in September, 1991, for his efficient search functions used in the PKZIP compression process. In 2001 and 2005, PKWARE was awarded patents for patching technology used within PKZIP products. In 2005, PKWARE was granted a patent for methods used to manage .ZIP files within the Windows file manager and Outlook. In total PKWARE holds four patents, has over fourteen pending patents and, as of May 2020, is referenced in over two hundred patents. Awards 2002 PKWARE was awarded PC Magazine's Editors' Choice for data compression software. 2016 PKWARE Smartcrypt was awarded Security Products' GOVIES Government Security Awards for Encryption 2020 PKWARE was awarded the gold 2020 Cybersecurity Excellence Award for Data Redaction and the bronze 2020 Award for Data Redaction 2021 PKWARE was awarded the gold 2021 Cybersecurity Excellence Award for Database Security, the silver Award for Compliance Solution and the Silver Award for Data-Centric Security PKWARE was awarded the American Business Awards Gold Stevie® for Governance, Risk & Compliance Solution, and the American Business Awards Gold Stevie® for International Data Protection Solution See also Phil Katz PKLite External links References Companies based in Milwaukee Companies established in 1986 Privately held companies based in Wisconsin Software companies based in Wisconsin 1986 establishments in Wisconsin Software companies of the United States
6603087
https://en.wikipedia.org/wiki/Employee%20scheduling%20software
Employee scheduling software
Employee scheduling software automates the process of creating and maintaining a schedule. Automating the scheduling of employees increases productivity and allows organizations with hourly workforces to re-allocate resources to non-scheduling activities. Such software will usually track vacation time, sick time, and compensation time, and alert when there are conflicts. As scheduling data is accumulated over time, it may be extracted for payroll or to analyze past activity. Although employee scheduling software may or may not make optimization decisions, it does manage and coordinate the tasks. Today's employee scheduling software often includes mobile applications. Mobile scheduling has further increased scheduling productivity and eliminated inefficient scheduling steps. It may also include functionality such as applicant tracking and on-boarding, time and attendance, and automatic limits on overtime. Such functionality can help organizations with issues like employee retention, compliance with labor laws, and other workforce management challenges. Purpose The theoretical underpinning of the employee scheduling problem can be represented as the nurse scheduling problem, which is NP-hard. The theoretical complexity of the problem is a significant factor in the development of various software solutions. This is because systems must take into account many different forms of schedules that could be worked and allocate employees to the correct schedule. Ultimately, the goal of scheduling optimization is to minimize costs, but it also often requires a reciprocal approach from management instead of complete reliance on software. Transitioning to employee scheduling software Prior to employee scheduling software, companies would use physical media to track employee hours and work schedules. These gave rise to data storage forms that, by the 1980s, were compatible with computer programs and software. These forms, however, never actually scheduled the employees; they only kept track of each employee's work week, hours, and prior work schedules. This gave way to the idea of employee scheduling software: an all-inclusive system that would store and track employee work history while also scheduling the employee's work week. Punch cards The earliest form of automated employee scheduling and managing of employee hours was the punch card. The idea originated with Basile Bouchon, who in 1725 developed the control of a loom by holes punched in paper tape. Herman Hollerith improved the design. IBM manufactured and marketed a variety of unit record machines for creating, sorting, and tabulating punched cards, even after expanding into electronic computers in the late 1950s. IBM developed punched card technology into a powerful tool for business data-processing and produced an extensive line of general purpose unit record machines. Magnetic tape During the 1960s, the punched card was gradually replaced as the primary means for data storage by magnetic tape, as better, more capable computers became available. Mohawk Data Sciences introduced a magnetic tape encoder in 1965, a system marketed as a keypunch replacement that was somewhat successful, but punched cards were still commonly used for data entry and programming until the mid-1980s, when the combination of lower-cost magnetic disk storage and affordable interactive terminals on less expensive minicomputers made punched cards obsolete for this role as well. However, their influence lives on through many standard conventions and file formats. 
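The Purpose section above frames rostering as an NP-hard assignment problem: cover every shift with an available employee while respecting limits and keeping cost down. The sketch below is a deliberately simplified, hypothetical illustration of that constraint checking in Python: a greedy heuristic with invented data, not the method used by any particular scheduling product; real systems typically rely on integer programming or constraint solvers.

```python
# Hypothetical inputs: shifts needing coverage, plus employees with an hourly cost,
# an availability set, and a cap on how many shifts they may work this week.
shifts = [("Mon", "AM"), ("Mon", "PM"), ("Tue", "AM"), ("Tue", "PM")]
employees = {
    "Avery": {"cost": 18, "max_shifts": 2,
              "available": {("Mon", "AM"), ("Tue", "AM"), ("Tue", "PM")}},
    "Blake": {"cost": 15, "max_shifts": 3,
              "available": {("Mon", "AM"), ("Mon", "PM"), ("Tue", "PM")}},
    "Casey": {"cost": 22, "max_shifts": 4, "available": set(shifts)},
}

def build_roster(shifts, employees):
    """Greedy heuristic: fill each shift with the cheapest available employee."""
    assigned = {name: 0 for name in employees}
    roster = {}
    for shift in shifts:
        candidates = [
            (data["cost"], name)
            for name, data in employees.items()
            if shift in data["available"] and assigned[name] < data["max_shifts"]
        ]
        if not candidates:
            roster[shift] = None  # uncovered shift: the kind of conflict the software flags
            continue
        cost, name = min(candidates)
        roster[shift] = name
        assigned[name] += 1
    return roster

for shift, name in build_roster(shifts, employees).items():
    print(shift, "->", name or "UNCOVERED")
```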
Auto-scheduling and intelligent rostering In the 2010s, the wide adoption of mobile devices and the rise of 3G, 4G, and 5G networks worldwide has made it possible to approach the task of scheduling differently. In the last decade, many software solutions have sprung up to make the lives of business owners and managers easier and less burdensome. The first wave of solutions helped small business owners to schedule, manage, and communicate with their employees in a more streamlined way. The newer way of solutions go a step further, leveraging machine learning and are being built on even newer cloud technologies. The need for automation and intelligent rostering in workforce management will continue to grow as society's heads into a gig economy. Complexity Algorithms are used within the employee scheduling software in order to determine not only who is working, but also the specific jobs and tasks required of the workers. The system still must be monitored, and any further issues with assigning of specifics is done manually. Within the context of roster problems and models, there are three main factors to work out the differences: the integration of days off scheduling with line of work construction and task assignment, roster construction, and demand type. These complexities thusly require that each and every workplace must optimize employee scheduling software based on their own unique set of rules, issues and needs. Additionally, it is difficult to determine optimal solution that minimize costs, meet employee preferences, distribute shifts equitably among employees and satisfy all the workplace constraints. In many organizations, the people involved in developing rosters need decision support tools to help provide the right employees at the right time and the right cost while achieving a high level of employee satisfaction. Due to constant change within work environments, new models and algorithms must be created in order to allow for flexibility as needs and demands arise. For example, when a large number of new employees are hired, as in the total workforce is increased, the scheduling software likely will need to be updated in order to allow for such a change. Features Although employee scheduling software won't necessarily improve business practices by itself, it does automate typically tedious business administration. It can also have positive effects on aspects of the business indirectly, including employee engagement, employee retention, and lowered labor costs. By providing management with large amounts of data, this software can assist management in making decisions and automatically create a work schedule that fits as many constraints as possible. Also, the software may be a part of an ERP package or other human resource management system. Features vary depending on software vendor, but some typical features include: Gantt chart or calendar view of the schedule Approve employee requests for time off Reduce unproductive workforce due to over scheduling Use weather forecasts to predict staffing needs Days off scheduling Allow employees to swap shifts. Templates to roll out shift plans over medium term Interface to payroll and/or management accounting software Ability to easily identify unassigned shifts. Ability to create reports for invoicing and payroll. Manage the task of automation and data collection. 
Workplace analysis Mobile application integration Interface agents Future trends As the modern workplace becomes more complex, it is likely that rostering will need to be more flexible to cater to more individualistic preferences. Artificial intelligence also looks to play a bigger role in scheduling software, requiring less oversight by management to correct issues. See also Appointment scheduling software Automated planning and scheduling Field service management Gantt chart Meeting scheduling tool Schedule (workplace) Time tracking software Timesheet Workforce management Applicant Tracking System Rostering References Administrative software Business software Project management software Automated planning and scheduling
22182826
https://en.wikipedia.org/wiki/Heteropalpia
Heteropalpia
Heteropalpia is a genus of moths in the family Erebidae. The genus was erected by Emilio Berio in 1939. Species Heteropalpia acrosticta Püngeler, 1904 Heteropalpia cortytoides Berio, 1939 Heteropalpia makabana Hacker & Fibiger, 2006 Heteropalpia profesta Christoph, 1887 Heteropalpia rosacea Rebel, 1907 Heteropalpia vetusta Walker, 1865 Heteropalpia wiltshirei Hacker & Ebert, 2002 References Hacker, H. & Fibiger, M. (2006). "Updated list of Micronoctuidae, Noctuidae (s.l.), and Hyblaeidae species of Yemen, collected during three expeditions in 1996, 1998 and 2000, with comments and descriptions of species". Esperiana Buchreihe zur Entomologie. 12: 75-166. Ophiusini Moth genera
18787443
https://en.wikipedia.org/wiki/Acamas%20%28son%20of%20Antenor%29
Acamas (son of Antenor)
In Greek mythology, Acamas or Akamas (; Ancient Greek: , folk etymology: 'unwearying'), was the son of Trojan elder Antenor and Theano, was a participant in the Trojan War, and fought on the side of the Trojans. Family Acamas was the brother of Crino, Agenor, Antheus, Archelochus, Coön, Demoleon, Eurymachus, Glaucus, Helicaon, Iphidamas, Laodamas, Laodocus, Medon, Polybus, and Thersilochus. Mythology Trojan War With his brother Archelochus and his cousin Aeneas, Acamas was lieutenant of the Dardanian contingent to assist King Priam. Along with Aeneas and Archelochus he led one of the five divisions attacking the Argive wall in the battle for the ships. Homer's Iliad, Book 2, describes the troops of the Dardanians and its leaders: "The Dardanians were led by brave Aeneas, whom Aphrodite bore to Anchises, when she, goddess though she was, had lain with him upon the mountain slopes of Ida. He was not alone, for with him were the two sons of Antenor, Arkhilokhos and Akamas, both skilled in all the arts of war." While in Book 14, Acamas avenged the death of his brother, who had been killed by Ajax, by slaying Promachus the Boeotian. "But he knew well who it was, and the Trojans were greatly vexed with grief [akhos]. Akamas then bestrode his brother's body and wounded Promakhos the Boeotian with his spear, for he was trying to drag his brother's body away. Akamas vaunted loudly over him saying, "Argive archers, braggarts that you are, toil [ponos] and suffering shall not be for us only, but some of you too shall fall here as well as ourselves. See how Promakhos now sleeps, vanquished by my spear; payment for my brother's blood has not long delayed; a man, therefore, may well be thankful if he leaves a kinsman in his house behind him to avenge his fall." Death Two sources tackles the versions of the myth regarding Acamas' death. He was killed possibly by Meriones of Crete, half-brother of King Idomeneus in book 16 of the Iliad, but the Acamas killed there was not specifically identified as a son of Antenor. Quintus of Smyrna describes him as having been killed by the Greek hero Philoctetes. Homer's account "Meriones gave chase to Akamas on foot and caught him up just as he was about to mount his chariot; he drove a spear through his right shoulder so that he fell headlong from the car, and his eyes were closed in darkness." Quintus' account Now Poeas' son [i.e. Philoctetes] the while slew Deioneus and Acamas, Antenor's warrior son: Yea, a great host of strong men laid he low..' Notes References Apollodorus, The Library with an English Translation by Sir James George Frazer, F.B.A., F.R.S. in 2 Volumes, Cambridge, MA, Harvard University Press; London, William Heinemann Ltd. 1921. ISBN 0-674-99135-4. Online version at the Perseus Digital Library. Greek text available from the same website. Dictys Cretensis, from The Trojan War. The Chronicles of Dictys of Crete and Dares the Phrygian translated by Richard McIlwaine Frazer, Jr. (1931-). Indiana University Press. 1966. Online version at the Topos Text Project. Graves, Robert, The Greek Myths, Harmondsworth, London, England, Penguin Books, 1960. Graves, Robert, The Greek Myths: The Complete and Definitive Edition. Penguin Books Limited. 2017. Homer, The Iliad with an English Translation by A.T. Murray, Ph.D. in two volumes. Cambridge, MA., Harvard University Press; London, William Heinemann, Ltd. 1924. . Online version at the Perseus Digital Library. Homer, Homeri Opera in five volumes. Oxford, Oxford University Press. 1920. . 
Greek text available at the Perseus Digital Library. Pausanias, Description of Greece with an English Translation by W.H.S. Jones, Litt.D., and H.A. Ormerod, M.A., in 4 Volumes. Cambridge, MA, Harvard University Press; London, William Heinemann Ltd. 1918. . Online version at the Perseus Digital Library Pausanias, Graeciae Descriptio. 3 vols. Leipzig, Teubner. 1903. Greek text available at the Perseus Digital Library. Publius Vergilius Maro, Aeneid. Theodore C. Williams. trans. Boston. Houghton Mifflin Co. 1910. Online version at the Perseus Digital Library. Publius Vergilius Maro, Bucolics, Aeneid, and Georgics. J. B. Greenough. Boston. Ginn & Co. 1900. Latin text available at the Perseus Digital Library. Quintus Smyrnaeus, The Fall of Troy translated by Way. A. S. Loeb Classical Library Volume 19. London: William Heinemann, 1913. Online version at theio.com Tzetzes, John, Allegories of the Iliad translated by Goldwyn, Adam J. and Kokkini, Dimitra. Dumbarton Oaks Medieval Library, Harvard University Press, 2015. Trojans cs:Akamás#Akamás - Trójan ja:アカマース#アンテーノールの子 People of the Trojan War Characters in Greek mythology
26464924
https://en.wikipedia.org/wiki/Cobalt%20%28CAD%20program%29
Cobalt (CAD program)
Cobalt is a parametric-based computer-aided design (CAD) and 3D modeling program that runs on both Macintosh and Microsoft Windows operating systems. The program combines the direct-modeling way to create and edit objects (exemplified by programs such as SpaceClaim) and the highly structured, history-driven parametric way exemplified by programs like Pro/ENGINEER. A product of Ashlar-Vellum, Cobalt is Wireframe-based and history-driven with associativity and 2D equation-driven parametrics and constraints. It offers surfacing tools, mold design tools, detailing, and engineering features. Cobalt includes a library of 149,000 mechanical parts. Cobalt's interface, which the company named the "Vellum interface" after its eponymous flagship product, was designed in 1988 by Dr. Martin Newell (who created the Utah teapot in 1975 and went on to work at Xerox PARC, where the WIMP paradigm for graphical user interfaces was invented) and Dan Fitzpatrick. The central feature of the Vellum interface is its "Drafting Assistant," which facilitates the creation and alignment of the new geometry. Cobalt has received praise for its free-form surfaces on solid modeled objects. Design The distinguishing characteristics of Cobalt are its ease of use and the quick learning curve for new users. Cobalt inherited its 2D and 3D wire frame features from "Vellum." However, with Cobalt, wire frame geometry—which does not have to be planar—can be subsequently revolved or extruded relative to any plane or along a curved path to create 3D solids. Cobalt also allows 3D objects to be created directly using 3D tools while still retaining the designer's ability to edit those objects via history-driven parametrics and later to add further constraints. Both types of solids—extruded 2D wire frame and directly created 3D solids—can be seamlessly mixed in the same drawing. Whereas most history-based parametric solid modelers require the designer to rigorously follow a logical progression while creating models and tend to require that the designer think ahead about the planned order of transmutations of the solid model, Cobalt has a more freeform, less structured way of solid modeling that the developer refers to as "Organic Workflow". Cobalt's less structured modeling environment coupled with an integral ray-tracing capability makes it suitable for brainstorming and product development. The program's history-driven modeling and equation-driven parametrics and constraints permit designers to edit the dimensions and locations of key features in models without the need for major redesign—much like changing the value of a single cell in a complex spreadsheet. Drafting Assistant Ashlar-Vellum's patented, -year-old "Drafting Assistant" is the central component of Ashlar's "Vellum interface". The Drafting Assistant tracks the position of the designer's cursor and looks for nearby geometry. It then automatically displays information alongside the cursor regarding nearby geometric features to which the designer can snap. The designer can create new geometry at those snap points, or create construction lines to serve as guides. The Drafting Assistant is sensitive to the following geometric attributes: Centers Endpoints Intersections Midpoints Perpendicularity Quadrants Tangents Vertexes Drafting Assistant remembers the last snaps with a weighted algorithm to intuit the designer's intentions; thus, it is easy to snap to intersections in empty 3D space. 
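Conceptually, the snapping behaviour described above reduces to ranking candidate points near the cursor by distance, with a bias toward the more meaningful snap types. The short Python sketch below illustrates that idea only; it is not Ashlar-Vellum's implementation, and the tolerance, weighting, and data layout are assumptions made for the example.

```python
import math

# Conceptual sketch of cursor snapping: candidates are (x, y, kind) tuples that a
# modeller might derive from nearby geometry (endpoints, midpoints, centers, ...).
def nearest_snap(cursor, candidates, tolerance=8.0, preferred=("endpoint", "intersection")):
    """Return the snap candidate closest to the cursor within `tolerance` units,
    breaking near-ties in favour of preferred snap kinds."""
    best = None
    for x, y, kind in candidates:
        dist = math.hypot(x - cursor[0], y - cursor[1])
        if dist > tolerance:
            continue
        # Small bias so that, for example, an endpoint wins over a nearby midpoint.
        score = dist - (1.0 if kind in preferred else 0.0)
        if best is None or score < best[0]:
            best = (score, (x, y, kind))
    return best[1] if best else None

candidates = [(100.0, 50.0, "endpoint"), (102.0, 51.0, "midpoint"), (250.0, 80.0, "center")]
print(nearest_snap((101.0, 50.5), candidates))  # -> (100.0, 50.0, 'endpoint')
```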
In the animation at right, the designer first snaps to the X-, Y-, and Z-axis coordinates at the midpoint of the top edge and then snaps to the same spot on the leading edge, which has different X- and Z-axis coordinates. He moves his cursor to a point in 3D space where there are no geometric attributes to snap to. Although there may be 3D surfaces underneath the cursor, Drafting Assistant intuits the designer's intent and offers an intersection point comprising the Y- and Z-axis coordinates of the first edge and the X-axis coordinate of the nearest edge. At this location, the designer adds a circle freehand and then specifies a diameter of 200 millimeters by typing it into the box at bottom right. Last, the designer uses the "Remove profile from solid" tool to cut through the block. Here again, Drafting Assistant enables prompt definition of the depth of the cut by snapping to the back quadrant of the intersecting hole. The Drafting Assistant also provides a "Message line" at the top. This displays instructions appropriate for the selected tool, prompts the designer with what he should do next with any given tool, and reminds the designer of optional modes for those tools. Cobalt's parametrics and history tracking work permit the designer to edit later the diameter and location of either circle—both of which have dependencies (holes in the block)—and the model updates accordingly. Tool sets Cobalt features the following tool sets: Animation tools Cobalt features several modes for making animation, notably "Static" (where the sun and shadows move in a stationary scene), "Walk-through," and "Fly-by". Cobalt is also capable of six different levels of photorealistic rendering, from "Raytrace Preview Render [Shadows Off]" through "Auto Full Render [Shadows On, Antialias]". Choosing less realistic modes for trial animations allows very quick rendering—even those with several hundred frames—because Cobalt fully exploits multi-core microprocessors during rendering. The click-to-play animation (upper right) shows two industrial pushbutton switches surrounded by a virtual "photo studio" in a Cobalt model. The mirrored hemisphere enables the reader to see the back wall, floor, and ceiling lights, which all contribute to the nature of the light reflecting off the switches. Face-on images of these switches were used in the development of a touchscreen-based human–machine interface (HMI) for use in industrial manufacturing settings. To create fly-by animations, Cobalt prompts the designer to specify a path (a line or curve) for the "camera eye" to follow as well as a point at which the camera should point, and then renders the animation. A designer can specify such attributes as the angle for the camera's field of view and can turn on settings such as perspective, which gives rendered images a vanishing point. Whether the designer is rendering a single image or a multi-frame animation, Cobalt offers broad control of lighting, including the ability to illuminate images with sunlight wherein the date, time of day, latitude, and longitude are all user-adjustable to obtain accurate shadows. Surfacing Cobalt includes freeform Class-A NURBS surface modeling for creating complex, aesthetic, or technical shapes. The self-running animation (lower right) demonstrates two capabilities of Cobalt: 1) how a limited number of control points govern complex NURBS surface geometry, and 2) demonstrates a fly-by animation produced by Cobalt whereby the "camera eye path" was attached to a 360-degree circle. 
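The Surfacing description notes that a small number of control points governs an entire NURBS shape. A minimal, self-contained illustration of that principle is the evaluation of a single NURBS curve with the Cox–de Boor recursion, sketched below in Python; the control points, weights, and knot vector are invented for the example, and this is not Cobalt's surface engine.

```python
def basis(i, p, u, knots):
    """Cox–de Boor recursion for the i-th B-spline basis function of degree p."""
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = 0.0
    denom = knots[i + p] - knots[i]
    if denom > 0:
        left = (u - knots[i]) / denom * basis(i, p - 1, u, knots)
    right = 0.0
    denom = knots[i + p + 1] - knots[i + 1]
    if denom > 0:
        right = (knots[i + p + 1] - u) / denom * basis(i + 1, p - 1, u, knots)
    return left + right

def nurbs_point(u, degree, knots, ctrl, weights):
    """Evaluate one point on a NURBS curve as a weighted rational combination
    of its control points."""
    num_x = num_y = den = 0.0
    for i, (pt, w) in enumerate(zip(ctrl, weights)):
        b = basis(i, degree, u, knots) * w
        num_x += b * pt[0]
        num_y += b * pt[1]
        den += b
    return (num_x / den, num_y / den)

# Four control points and a clamped knot vector: the whole cubic curve is governed
# by just these few values (all illustrative). The half-open convention in basis()
# means u must stay strictly below the final knot, hence 0.999 instead of 1.0.
ctrl = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)]
weights = [1.0, 0.8, 0.8, 1.0]
knots = [0, 0, 0, 0, 1, 1, 1, 1]
for u in (0.0, 0.25, 0.5, 0.75, 0.999):
    print(u, nurbs_point(u, 3, knots, ctrl, weights))
```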
2D/3D Wireframe Drafting PDF Presentation CAM connections Cobalt exports topologically correct ACIS, Parasolids, and STEP files for finite element analysis (FEA) meshing. Photo-realistic rendering Often used for concept development, wireframe models can be done in both 2D or 3D as necessary. Shapes can be drawn precisely or pushed and pulled as the designer chooses. Solid modelling Cobalt exports topologically correct ACIS, Parasolids, and STEP files for tool-path and Gcode generation using external computer-aided manufacturing (CAM) software. Alternatively IGES and DXF files can be used to send surface or profile data to external CAM software. Product family Cobalt is the high-end member of a four-member family of products. The other three Ashlar-Vellum offerings are "Graphite", "Argon", and "Xenon": Graphite essentially inherited the feature-set of Ashlar's flagship product, Vellum. It offers 2D and 3D wireframe drafting and equation-driven parametrics. Argon is the most affordable, offering 3D solid modeling (but not history-based), ray tracing, and animation. Xenon is a less capable cousin of Cobalt, offering all of the 2D and 3D solid modeling functions of Cobalt as well as ray-trace rendering and animation. However, Xenon lacks Cobalt's geometric and equation-driven parametrics, "Associative Assembly Tools" and the mechanical parts library, nor does it support dimensioning using geometric dimensioning and tolerancing (GD&T). See also 3D computer graphics software 3D computer graphics 3D modeling Comparison of 3D computer graphics software Freeform surface modelling Geometric dimensioning and tolerancing List of computer-aided design editors Comparison of CAD editors for CAE References External links 3D graphics software Computer-aided design software Proprietary software Articles containing video clips
52937927
https://en.wikipedia.org/wiki/Cincinnati%20Slammers
Cincinnati Slammers
The Cincinnati Slammers, originally the Ohio Mixers, were a professional basketball team based in Lima, Ohio from 1982 to 1984 and Cincinnati, Ohio from 1984 to 1987. They were members of the Continental Basketball Association (CBA). The team was admitted into the CBA as an expansion franchise in 1982. Team owner Tom Sawyer served as the Mixers' head coach during their two seasons. Jerry Robinson underwrote the re-location of the franchise to Cincinnati before the 1984–85 season. Sawyer stayed on as head coach of the newly re-branded Cincinnati Slammers, but resigned during their first season, at which point assistant coach Tom Thacker took over the position. Herb Brown was hired as head coach before the 1985–86 season and led the team until it went defunct following the 1986–87 season. History Lima (1982–84) The Continental Basketball Association (CBA) admitted an expansion franchise from Lima, Ohio on May 28, 1982, just before the CBA franchise fee increased from $100,000 to $125,000. They were designated to the Central Division of the CBA and were branded as the Ohio Mixers. The Mixers played their first game on December 3, 1982. Ohio center Rich Kelley was the first player in franchise history to be signed to a National Basketball Association (NBA) contract when he signed a 10-day deal with the Denver Nuggets on December 28, 1982. Kelley went on to play the rest of the season in the NBA, eventually joining the Utah Jazz after Denver traded him for Danny Schayes and cash considerations. On December 31, 1982, Mixers guard Dwight Anderson was signed to a 10-day contract with Denver, but the deal was not extended, so he returned to Lima on January 9, 1983. Phil Jackson, who was later inducted into the Naismith Memorial Basketball Hall of Fame as a head coach, made his professional head coaching debut against the Mixers on January 30, 1983, after the Albany Patroons fired Dean Meminger and Jackson was hired to take his place. On February 9, 1983, Ohio guard Kevin Figaro was named to the '83 CBA All-Star First Team. Ohio finished the 1982–83 CBA season with a win-loss record of 17–27. During the off-season in 1983, the Mixers traded power forward DeWayne Scales to the Detroit Spirits in exchange for center Cyrus Mann. It was reported in the Lexington Herald-Leader that the Mixers had a cooperative working agreement to develop players for the Atlanta Hawks and San Antonio Spurs of the NBA, essentially acting as their farm team. The 1983–84 Mixers featured NBA players Wes Matthews and Billy Ray Bates. Matthews was called up to the NBA twice that season, first with the Atlanta Hawks and finally with the Philadelphia 76ers. Bates was attempting an NBA comeback, which on top of joining the Mixers included playing for the Crispa Redmanizers of the Philippine Basketball Association. Although Bates never made it back onto an NBA roster, he did play professional basketball until 1988. At the end of the season, their record was 23–21, which was not good enough to make the CBA post-season. During their two seasons in Lima, the Mixers played their home games at Lima Senior High School, which had a capacity of 3,800. Cincinnati (1984–87) During the off-season before the 1984–85 season, the CBA approved the re-location of the Mixers from Lima, Ohio to Cincinnati. The Sawyer family of Lima, who owned the Ohio Mixers, had their re-location costs underwritten by Jerry Robinson, the president of the Cincinnati Gardens, where the newly branded Cincinnati Slammers would play. 
It was the first professional basketball team in the city since the Cincinnati Royals re-located to Kansas City, Missouri. The first player Cincinnati signed was former University of Dayton swingman Roosevelt Chapman, who inked a contract on October 16, 1984. When asked by United Press International how it felt to be close to his alma mater, Chapman responded, "It feels good [...] I'll be close to home and there will be a lot of [NBA] scouts here watching us." The Slammers recorded their first win of the season against the Louisville Catbirds, by a score of 111–90. Cincinnati center Dewayne Scales recorded a game-high 29 points and 13 rebounds, followed by Slammers player Darrell Gadsden, who scored 26 points. Head coach Tom Sawyer resigned his position in early January 1985. Tom Thacker, who had been Cincinnati's assistant coach, was given the head coaching position following Sawyer's resignation. Cincinnati finished their first season with the worst record in the league (17–31), although based on the league's point system they were second to last (135 points). In June 1985, the Slammers hired Linda Reed as their general manager. That marked the first time a woman had been hired as general manager of a professional basketball team. Reed offered Herb Brown the Slammers' head coaching position for the 1985–86 season. The season before, Brown had coached the Puerto Rico Coquis, where he received a $500 fine for an altercation with a CBA referee. Tom Thacker, who had been the team's head coach since Tom Sawyer resigned in January 1985, stayed with Cincinnati as an assistant coach to Brown. Slammers head coach Herb Brown was named CBA Coach of the Month for January 1986. Cincinnati player Victor Fleming was selected to the 1986 CBA All-Star Team. The Slammers finished the 1985–86 season with the best record in the Western Division (33–15). They also finished first in their division in points, which the CBA used to determine postseason seeding. During the first round of the 1986 CBA Playoffs, the Slammers faced the Kansas City Sizzlers. Cincinnati swept Kansas City four games to none. The Slammers went on to the 1986 CBA Western Division Semifinals, where they played the Evansville Thunder. The Thunder managed to win one game in that series, but the Slammers were victorious in four games, advancing to the 1986 Western Division Finals. The La Crosse Catbirds advanced to the 1986 CBA Finals over Cincinnati after winning four games in the series to the Slammers' two. In spite of their success during the 1985–86 season, Slammers part-owner Jerry Robinson announced he was selling his interest in the Cincinnati CBA franchise. According to Robinson, the Slammers had lost $500,000 during their two seasons in Cincinnati. He also stated that the average attendance for home games was 940 spectators. During the playoffs, the Slammers could only muster an average of 1,500 spectators. Their small crowds did not deter the team from signing a contract with their home venue, Cincinnati Gardens, for the 1986–87 season. During a game on February 13, 1987, Cincinnati player Bill Martin knocked Charleston Gunners center Peter Verhoeven unconscious during a fight in the third quarter. Martin was suspended for three games. Hiatus and re-location to Cedar Rapids (1987–88) Team owner Jerry Gordon, who purchased Jerry Robinson's interest in the Slammers, denied reports that the franchise was looking to relocate to Fort Wayne, Indiana, following the 1986–87 season. 
Gordon did say that there was still the possibility the Slammers could be re-located, just not to Fort Wayne. Several days later, Gordon backtracked on his previous statement admitting that the Slammers were looking to relocate to Fort Wayne. Cincinnati had the second lowest attendance during the 1986–87 season, averaging 705 spectators per game. Going into the 1987–88 season the CBA shifted their focus away from big markets (like Cincinnati) to smaller ones. Slammers owner Jerry Gordon was given a year to find a small market buyer who could re-locate before the 1988–89 season. Gordon looked at Canton, Ohio as a possible new home for the Slammers, but he found little interest from potential buyers and city officials. Krause Gentle, owner of the convenience store chain Kum & Go, approached Slammers owner Jerry Gordon about buying the franchise and re-locating it to Cedar Rapids, Iowa. The deal was approved by the CBA and the team was re-branded as the Cedar Rapids Silver Bullets before the 1988–89 season. Season-by-season standings All-time roster Maurice Adams Richard Adams Norm Anchrum Dwight Anderson Ken Austin Marvin Barnes Billy Ray Bates Norris Bell Tom Bethea Lewis Brown Johnny Brown Tony Brown David Burns Albert Butts John Campbell Butch Carter Roosevelt Chapman Leroy Combs Mark Dorris Jerry Eaves Dan Federmann Kevin Figaro Scott Fisher Victor Fleming Alvin Frederick Darrell Gadsden Lionel Garrett Mike Green Dino Gregory Lamar Heard Lawrence Held Carl Henry Anthony Hicks Johnny High Doug Jemison Jeff Jenkins Jim Johnstone Ozell Jones Mike Kanieski Daryl Lloyd Nigel Lloyd Bill Martin Wes Matthews Jim McCaffrey John McCullough Hank McDowell Bob Miller Brian O'Connor John Pinone John Schweitz Jay Shakir Wayne Smith Lloyd Terry Joel Thompson Sedric Toney Steve Trumbo Horace Wyatt John Wiley Kevin Williams Tony Wilson Brad Wright Sources References External links McKay, Robert (February 1986). "Hearts on their sleeves, no names on their jerseys". Cincinnati Magazine. pp. 72-75 "CINCINNATI SLAMMERS CBA 1985-1987" by Cincinnati Sports History via Flickr Basketball teams in Cincinnati Lima, Ohio 1982 establishments in Ohio Continental Basketball Association teams Basketball teams established in 1982 1987 disestablishments in Ohio Sports clubs disestablished in 1987
24075
https://en.wikipedia.org/wiki/Peripheral%20Component%20Interconnect
Peripheral Component Interconnect
Peripheral Component Interconnect (PCI) is a local computer bus for attaching hardware devices in a computer and is part of the PCI Local Bus standard. The PCI bus supports the functions found on a processor bus but in a standardized format that is independent of any given processor's native bus. Devices connected to the PCI bus appear to a bus master to be connected directly to its own bus and are assigned addresses in the processor's address space. It is a parallel bus, synchronous to a single bus clock. Attached devices can take either the form of an integrated circuit fitted onto the motherboard (called a planar device in the PCI specification) or an expansion card that fits into a slot. The PCI Local Bus was first implemented in IBM PC compatibles, where it displaced the combination of several slow Industry Standard Architecture (ISA) slots and one fast VESA Local Bus (VLB) slot as the bus configuration. It has subsequently been adopted for other computer types. Typical PCI cards used in PCs include: network cards, sound cards, modems, extra ports such as Universal Serial Bus (USB) or serial, TV tuner cards and hard disk drive host adapters. PCI video cards replaced ISA and VLB cards until rising bandwidth needs outgrew the abilities of PCI. The preferred interface for video cards then became Accelerated Graphics Port (AGP), a superset of PCI, before giving way to PCI Express. The first version of PCI found in retail desktop computers was a 32-bit bus using a 33 MHz bus clock and 5 V signalling, although the PCI 1.0 standard provided for a 64-bit variant as well. These have one locating notch in the card. Version 2.0 of the PCI standard introduced 3.3 V slots, physically distinguished by a flipped physical connector to prevent accidental insertion of 5 V cards. Universal cards, which can operate on either voltage, have two notches. Version 2.1 of the PCI standard introduced optional 66 MHz operation. A server-oriented variant of PCI, PCI Extended (PCI-X) operated at frequencies up to 133 MHz for PCI-X 1.0 and up to 533 MHz for PCI-X 2.0. An internal connector for laptop cards, called Mini PCI, was introduced in version 2.2 of the PCI specification. The PCI bus was also adopted for an external laptop connector standard the CardBus. The first PCI specification was developed by Intel, but subsequent development of the standard became the responsibility of the PCI Special Interest Group (PCI-SIG). PCI and PCI-X sometimes are referred to as either Parallel PCI or Conventional PCI to distinguish them technologically from their more recent successor PCI Express, which adopted a serial, lane-based architecture. PCI's heyday in the desktop computer market was approximately 1995 to 2005. PCI and PCI-X have become obsolete for most purposes; however in 2020 they are still common on modern desktops for the purposes of backward compatibility and the low relative cost to produce. Another common modern application of parallel PCI is in industrial PCs, where many specialized expansion cards, used here, never transitioned to PCI Express, just as with some ISA cards. Many kinds of devices formerly available on PCI expansion cards are now commonly integrated onto motherboards or available in USB and PCI Express versions. History Work on PCI began at the Intel Architecture Labs (IAL, also Architecture Development Lab) . 
A team of primarily IAL engineers defined the architecture and developed a proof of concept chipset and platform (Saturn) partnering with teams in the company's desktop PC systems and core logic product organizations. PCI was immediately put to use in servers, replacing Micro Channel architecture (MCA) and Extended Industry Standard Architecture (EISA) as the server expansion bus of choice. In mainstream PCs, PCI was slower to replace VLB, and did not gain significant market penetration until late 1994 in second-generation Pentium PCs. By 1996, VLB was all but extinct, and manufacturers had adopted PCI even for Intel 80486 (486) computers. EISA continued to be used alongside PCI through 2000. Apple Computer adopted PCI for professional Power Macintosh computers (replacing NuBus) in mid-1995, and the consumer Performa product line (replacing LC Processor Direct Slot (PDS)) in mid-1996. Outside the server market, the 64-bit version of plain PCI remained rare in practice though, although it was used for example by all (post-iMac) G3 and G4 Power Macintosh computers. Later revisions of PCI added new features and performance improvements, including a 66 MHz 3.3 V standard and 133 MHz PCI-X, and the adaptation of PCI signaling to other form factors. Both PCI-X 1.0b and PCI-X 2.0 are backward compatible with some PCI standards. These revisions were used on server hardware but consumer PC hardware remained nearly all 32-bit, 33 MHz and 5 volt. The PCI-SIG introduced the serial PCI Express in . Since then, motherboard manufacturers have included progressively fewer PCI slots in favor of the new standard. Many new motherboards do not provide PCI slots at all, as of late 2013. Auto configuration PCI provides separate memory and memory-mapped I/O port address spaces for the x86 processor family, 64 and 32 bits, respectively. Addresses in these address spaces are assigned by software. A third address space, called the PCI Configuration Space, which uses a fixed addressing scheme, allows software to determine the amount of memory and I/O address space needed by each device. Each device can request up to six areas of memory space or input/output (I/O) port space via its configuration space registers. In a typical system, the firmware (or operating system) queries all PCI buses at startup time (via PCI Configuration Space) to find out what devices are present and what system resources (memory space, I/O space, interrupt lines, etc.) each needs. It then allocates the resources and tells each device what its allocation is. The PCI configuration space also contains a small amount of device type information, which helps an operating system choose device drivers for it, or at least to have a dialogue with a user about the system configuration. Devices may have an on-board read-only memory (ROM) containing executable code for x86 or PA-RISC processors, an Open Firmware driver, or an Option ROM. These are typically needed for devices used during system startup, before device drivers are loaded by the operating system. In addition, there are PCI Latency Timers that are a mechanism for PCI Bus-Mastering devices to share the PCI bus fairly. "Fair" in this case means that devices will not use such a large portion of the available PCI bus bandwidth that other devices are not able to get needed work done. Note, this does not apply to PCI Express. Interrupts Devices are required to follow a protocol so that the interrupt lines can be shared. 
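As an illustration of the enumeration process described above, the following C sketch reads the vendor and device IDs of the devices on bus 0. It is not part of the PCI specification discussed here: it assumes the legacy x86 "Configuration Mechanism #1" I/O ports at 0xCF8/0xCFC and Linux's <sys/io.h> port-I/O helpers, and in a real system this work is done by the firmware or operating system rather than by application code.

#include <stdio.h>
#include <stdint.h>
#include <sys/io.h>   /* Linux x86 port I/O: iopl(), outl(), inl() */

#define PCI_CONFIG_ADDRESS 0xCF8
#define PCI_CONFIG_DATA    0xCFC

/* Build a Configuration Mechanism #1 address: enable bit, bus, device,
   function, and a dword-aligned register offset. */
static uint32_t pci_config_addr(unsigned bus, unsigned dev, unsigned func, unsigned reg)
{
    return 0x80000000u | (bus << 16) | (dev << 11) | (func << 8) | (reg & 0xFCu);
}

static uint32_t pci_config_read32(unsigned bus, unsigned dev, unsigned func, unsigned reg)
{
    outl(pci_config_addr(bus, dev, func, reg), PCI_CONFIG_ADDRESS);
    return inl(PCI_CONFIG_DATA);
}

int main(void)
{
    if (iopl(3) != 0) { perror("iopl"); return 1; }  /* needs root privileges */

    /* Scan bus 0, function 0 only.  A vendor ID of 0xFFFF (all ones) is the
       customary "nothing responded" value mentioned later in this article. */
    for (unsigned dev = 0; dev < 32; dev++) {
        uint32_t id = pci_config_read32(0, dev, 0, 0x00);
        if ((id & 0xFFFFu) != 0xFFFFu)
            printf("bus 0, device %2u: vendor %04x, device %04x\n",
                   dev, (unsigned)(id & 0xFFFFu), (unsigned)(id >> 16));
    }
    return 0;
}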
The PCI bus includes four interrupt lines, all of which are available to each device. However, they are not wired in parallel as are the other PCI bus lines. The positions of the interrupt lines rotate between slots, so what appears to one device as the INTA# line is INTB# to the next and INTC# to the one after that. Single-function devices use their INTA# for interrupt signaling, so the device load is spread fairly evenly across the four available interrupt lines. This alleviates a common problem with sharing interrupts. The mapping of PCI interrupt lines onto system interrupt lines, through the PCI host bridge, is implementation-dependent. Platform-specific Basic Input/Output System (BIOS) code is meant to know this, and set the "interrupt line" field in each device's configuration space indicating which IRQ it is connected to. PCI interrupt lines are level-triggered. This was chosen over edge-triggering to gain an advantage when servicing a shared interrupt line, and for robustness: edge triggered interrupts are easy to miss. Later revisions of the PCI specification add support for message-signaled interrupts. In this system, a device signals its need for service by performing a memory write, rather than by asserting a dedicated line. This alleviates the problem of scarcity of interrupt lines. Even if interrupt vectors are still shared, it does not suffer the sharing problems of level-triggered interrupts. It also resolves the routing problem, because the memory write is not unpredictably modified between device and host. Finally, because the message signaling is in-band, it resolves some synchronization problems that can occur with posted writes and out-of-band interrupt lines. PCI Express does not have physical interrupt lines at all. It uses message-signaled interrupts exclusively. Conventional hardware specifications These specifications represent the most common version of PCI used in normal PCs: clock with synchronous transfers Peak transfer rate of 133 MB/s (133 megabytes per second) for 32-bit bus width (33.33 MHz × 32 bits ÷ 8 bits/byte = 133 MB/s) 32-bit bus width 32- or 64-bit memory address space (4 GiB or 16 EiB) 32-bit I/O port space 256-byte (per device) configuration space 5-volt signaling Reflected-wave switching The PCI specification also provides options for 3.3 V signaling, 64-bit bus width, and 66 MHz clocking, but these are not commonly encountered outside of PCI-X support on server motherboards. The PCI bus arbiter performs bus arbitration among multiple masters on the PCI bus. Any number of bus masters can reside on the PCI bus, as well as requests for the bus. One pair of request and grant signals is dedicated to each bus master. Card voltage and keying Typical PCI cards have either one or two key notches, depending on their signaling voltage. Cards requiring 3.3 volts have a notch 56.21 mm from the card backplate; those requiring 5 volts have a notch 104.47 mm from the backplate. This allows cards to be fitted only into slots with a voltage they support. "Universal cards" accepting either voltage have both key notches. Connector pinout The PCI connector is defined as having 62 contacts on each side of the edge connector, but two or four of them are replaced by key notches, so a card has 60 or 58 contacts on each side. 
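The peak-rate figure quoted under Conventional hardware specifications above is simple arithmetic; the following minimal C sketch applies the same formula to the bus widths and clock rates mentioned in this article (the 66 MHz and 64-bit figures are simply the formula evaluated, not quoted specification values).

#include <stdio.h>

/* Peak transfer rate in MB/s = clock (MHz) x bus width (bits) / 8 bits per byte. */
static double pci_peak_mb_per_s(double clock_mhz, int width_bits)
{
    return clock_mhz * width_bits / 8.0;
}

int main(void)
{
    printf("33.33 MHz, 32-bit: %.1f MB/s\n", pci_peak_mb_per_s(33.33, 32)); /* ~133 */
    printf("66.66 MHz, 32-bit: %.1f MB/s\n", pci_peak_mb_per_s(66.66, 32)); /* ~266 */
    printf("66.66 MHz, 64-bit: %.1f MB/s\n", pci_peak_mb_per_s(66.66, 64)); /* ~533 */
    return 0;
}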
Side A refers to the 'solder side' and side B refers to the 'component side': if the card is held with the connector pointing down, a view of side A will have the backplate on the right, whereas a view of side B will have the backplate on the left. The pinout of B and A sides are as follows, looking down into the motherboard connector (pins A1 and B1 are closest to backplate). 64-bit PCI extends this by an additional 32 contacts on each side which provide AD[63:32], C/BE[7:4]#, the PAR64 parity signal, and a number of power and ground pins. Most lines are connected to each slot in parallel. The exceptions are: Each slot has its own REQ# output to, and GNT# input from the motherboard arbiter. Each slot has its own IDSEL line, usually connected to a specific AD line. TDO is daisy-chained to the following slot's TDI. Cards without JTAG support must connect TDI to TDO so as not to break the chain. PRSNT1# and PRSNT2# for each slot have their own pull-up resistors on the motherboard. The motherboard may (but does not have to) sense these pins to determine the presence of PCI cards and their power requirements. REQ64# and ACK64# are individually pulled up on 32-bit only slots. The interrupt lines INTA# through INTD# are connected to all slots in different orders. (INTA# on one slot is INTB# on the next and INTC# on the one after that.) Notes: IOPWR is +3.3 V or +5 V, depending on the backplane. The slots also have a ridge in one of two places which prevents insertion of cards that do not have the corresponding key notch, indicating support for that voltage standard. Universal cards have both key notches and use IOPWR to determine their I/O signal levels. The PCI SIG strongly encourages 3.3 V PCI signaling, requiring support for it since standard revision 2.3, but most PC motherboards use the 5 V variant. Thus, while many currently available PCI cards support both, and have two key notches to indicate that, there are still a large number of 5 V-only cards on the market. The M66EN pin is an additional ground on 5 V PCI buses found in most PC motherboards. Cards and motherboards that do not support 66 MHz operation also ground this pin. If all participants support 66 MHz operation, a pull-up resistor on the motherboard raises this signal high and 66 MHz operation is enabled. The pin is still connected to ground via coupling capacitors on each card to preserve its AC shielding function. The PCIXCAP pin is an additional ground on PCI buses and cards. If all cards and the motherboard support the PCI-X protocol, a pull-up resistor on the motherboard raises this signal high and PCI-X operation is enabled. The pin is still connected to ground via coupling capacitors on each card to preserve its AC shielding function. At least one of PRSNT1# and PRSNT2# must be grounded by the card. The combination chosen indicates the total power requirements of the card (25 W, 15 W, or 7.5 W). SBO# and SDONE are signals from a cache controller to the current target. They are not initiator outputs, but are colored that way because they are target inputs. PME# () Power management event (optional) which is supported in PCI and higher. It is a , open drain, active low signal. PCI cards may use this signal to send and receive PME via the PCI socket directly, which eliminates the need for a special Wake-on-LAN cable. 
Mixing of 32-bit and 64-bit PCI cards in different width slots Most 32-bit PCI cards will function properly in 64-bit PCI-X slots, but the bus clock rate will be limited to the clock frequency of the slowest card, an inherent limitation of PCI's shared bus topology. For example, when a PCI 2.3, 66-MHz peripheral is installed into a PCI-X bus capable of 133 MHz, the entire bus backplane will be limited to 66 MHz. To get around this limitation, many motherboards have two or more PCI/PCI-X buses, with one bus intended for use with high-speed PCI-X peripherals, and the other bus intended for general-purpose peripherals. Many 64-bit PCI-X cards are designed to work in 32-bit mode if inserted in shorter 32-bit connectors, with some loss of performance. An example of this is the Adaptec 29160 64-bit SCSI interface card. However, some 64-bit PCI-X cards do not work in standard 32-bit PCI slots. Installing a 64-bit PCI-X card in a 32-bit slot will leave the 64-bit portion of the card edge connector not connected and overhanging. This requires that there be no motherboard components positioned so as to mechanically obstruct the overhanging portion of the card edge connector. Physical dimensions PCI brackets heights: Standard: 120.02 mm; Low Profile: 79.20 mm. PCI Card lengths (Standard Bracket & 3.3 V): Short Card: 169.52 mm; Long Card: 313.78 mm. PCI Card lengths (Low Profile Bracket & 3.3 V): MD1: 121.79 mm; MD2: 169.52 mm; MD3: 243.18 mm. Mini PCI Mini PCI was added to PCI version 2.2 for use in laptops; it uses a 32-bit, 33 MHz bus with powered connections (3.3 V only; 5 V is limited to 100 mA) and support for bus mastering and DMA. The standard size for Mini PCI cards is approximately a quarter of their full-sized counterparts. There is no access to the card from outside the case, unlike desktop PCI cards with brackets carrying connectors. This limits the kinds of functions a Mini PCI card can perform. Many Mini PCI devices were developed such as Wi-Fi, Fast Ethernet, Bluetooth, modems (often Winmodems), sound cards, cryptographic accelerators, SCSI, IDE–ATA, SATA controllers and combination cards. Mini PCI cards can be used with regular PCI-equipped hardware, using Mini PCI-to-PCI converters. Mini PCI has been superseded by the much narrower PCI Express Mini Card Technical details of Mini PCI Mini PCI cards have a 2 W maximum power consumption, which limits the functionality that can be implemented in this form factor. They also are required to support the CLKRUN# PCI signal used to start and stop the PCI clock for power management purposes. There are three card form factors: Type I, Type II, and Type III cards. The card connector used for each type include: Type I and II use a 100-pin stacking connector, while Type III uses a 124-pin edge connector, i.e. the connector for Types I and II differs from that for Type III, where the connector is on the edge of a card, like with a SO-DIMM. The additional 24 pins provide the extra signals required to route I/O back through the system connector (audio, AC-Link, LAN, phone-line interface). Type II cards have RJ11 and RJ45 mounted connectors. These cards must be located at the edge of the computer or docking station so that the RJ11 and RJ45 ports can be mounted for external access. Mini PCI is distinct from 144-pin Micro PCI. PCI bus transactions PCI bus traffic consists of a series of PCI bus transactions. Each transaction consists of an address phase followed by one or more data phases. 
The direction of the data phases may be from initiator to target (write transaction) or vice versa (read transaction), but all of the data phases must be in the same direction. Either party may pause or halt the data phases at any point. (One common example is a low-performance PCI device that does not support burst transactions, and always halts a transaction after the first data phase.) Any PCI device may initiate a transaction. First, it must request permission from a PCI bus arbiter on the motherboard. The arbiter grants permission to one of the requesting devices. The initiator begins the address phase by broadcasting a 32-bit address plus a 4-bit command code, then waits for a target to respond. All other devices examine this address and one of them responds a few cycles later. 64-bit addressing is done using a two-stage address phase. The initiator broadcasts the low 32 address bits, accompanied by a special "dual address cycle" command code. Devices which do not support 64-bit addressing can simply not respond to that command code. The next cycle, the initiator transmits the high 32 address bits, plus the real command code. The transaction operates identically from that point on. To ensure compatibility with 32-bit PCI devices, it is forbidden to use a dual address cycle if not necessary, i.e. if the high-order address bits are all zero. While the PCI bus transfers 32 bits per data phase, the initiator transmits 4 active-low byte enable signals indicating which 8-bit bytes are to be considered significant. In particular, a write must affect only the enabled bytes in the target PCI device. They are of little importance for memory reads, but I/O reads might have side effects. The PCI standard explicitly allows a data phase with no bytes enabled, which must behave as a no-op. PCI address spaces PCI has three address spaces: memory, I/O address, and configuration. Memory addresses are 32 bits (optionally 64 bits) in size, support caching and can be burst transactions. I/O addresses are for compatibility with the Intel x86 architecture's I/O port address space. Although the PCI bus specification allows burst transactions in any address space, most devices only support it for memory addresses and not I/O. Finally, PCI configuration space provides access to 256 bytes of special configuration registers per PCI device. Each PCI slot gets its own configuration space address range. The registers are used to configure devices memory and I/O address ranges they should respond to from transaction initiators. When a computer is first turned on, all PCI devices respond only to their configuration space accesses. The computer's BIOS scans for devices and assigns Memory and I/O address ranges to them. If an address is not claimed by any device, the transaction initiator's address phase will time out causing the initiator to abort the operation. In case of reads, it is customary to supply all-ones for the read data value (0xFFFFFFFF) in this case. PCI devices therefore generally attempt to avoid using the all-ones value in important status registers, so that such an error can be easily detected by software. PCI command codes There are 16 possible 4-bit command codes, and 12 of them are assigned. With the exception of the unique dual address cycle, the least significant bit of the command code indicates whether the following data phases are a read (data sent from target to initiator) or a write (data sent from an initiator to target). 
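The byte-enable and dual-address-cycle rules described above can be restated as a short C sketch; this is a model of the rules as stated here, not code taken from any PCI implementation.

#include <stdint.h>
#include <stdbool.h>

/* Active-low C/BE[3:0]# for an access of len bytes (1..4) starting at byte
   offset addr % 4 within the current 32-bit word; a cleared bit enables a byte. */
static uint8_t pci_byte_enables(uint32_t addr, unsigned len)
{
    unsigned first = addr & 3;
    uint8_t enabled = 0;                      /* active-high mask of bytes touched */
    for (unsigned i = 0; i < len && first + i < 4; i++)
        enabled |= (uint8_t)(1u << (first + i));
    return (uint8_t)(~enabled & 0xFu);        /* invert: byte enables are active low */
}

/* A dual address cycle is required, and only permitted, when the upper
   32 address bits are non-zero. */
static bool pci_needs_dual_address_cycle(uint64_t addr)
{
    return (addr >> 32) != 0;
}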
PCI targets must examine the command code as well as the address and not respond to address phases which specify an unsupported command code. The commands that refer to cache lines depend on the PCI configuration space cache line size register being set up properly; they may not be used until that has been done. 0000 Interrupt Acknowledge This is a special form of read cycle implicitly addressed to the interrupt controller, which returns an interrupt vector. The 32-bit address field is ignored. One possible implementation is to generate an interrupt acknowledge cycle on an ISA bus using a PCI/ISA bus bridge. This command is for IBM PC compatibility; if there is no Intel 8259 style interrupt controller on the PCI bus, this cycle need never be used. 0001 Special Cycle This cycle is a special broadcast write of system events that PCI card may be interested in. The address field of a special cycle is ignored, but it is followed by a data phase containing a payload message. The currently defined messages announce that the processor is stopping for some reason (e.g. to save power). No device ever responds to this cycle; it is always terminated with a master abort after leaving the data on the bus for at least 4 cycles. 0010 I/O Read This performs a read from I/O space. All 32 bits of the read address are provided, so that a device may (for compatibility reasons) implement less than 4 bytes worth of I/O registers. If the byte enables request data not within the address range supported by the PCI device (e.g. a 4-byte read from a device which only supports 2 bytes of I/O address space), it must be terminated with a target abort. Multiple data cycles are permitted, using linear (simple incrementing) burst ordering. The PCI standard is discouraging the use of I/O space in new devices, preferring that as much as possible be done through main memory mapping. 0011 I/O Write This performs a write to I/O space. 010x Reserved A PCI device must not respond to an address cycle with these command codes. 0110 Memory Read This performs a read cycle from memory space. Because the smallest memory space a PCI device is permitted to implement is 16 bytes, the two least significant bits of the address are not needed during the address phase; equivalent information will arrive during the data phases in the form of byte select signals. They instead specify the order in which burst data must be returned. If a device does not support the requested order, it must provide the first word and then disconnect. If a memory space is marked as "prefetchable", then the target device must ignore the byte select signals on a memory read and always return 32 valid bits. 0111 Memory Write This operates similarly to a memory read. The byte select signals are more important in a write, as unselected bytes must not be written to memory. Generally, PCI writes are faster than PCI reads, because a device may buffer the incoming write data and release the bus faster. For a read, it must delay the data phase until the data has been fetched. 100x Reserved A PCI device must not respond to an address cycle with these command codes. 1010 Configuration Read This is similar to an I/O read, but reads from PCI configuration space. A device must respond only if the low 11 bits of the address specify a function and register that it implements, and if the special IDSEL signal is asserted. It must ignore the high 21 bits. Burst reads (using linear incrementing) are permitted in PCI configuration space. 
Unlike I/O space, standard PCI configuration registers are defined so that reads never disturb the state of the device. It is possible for a device to have configuration space registers beyond the standard 64 bytes which have read side effects, but this is rare. Configuration space accesses often have a few cycles of delay to allow the IDSEL lines to stabilize, which makes them slower than other forms of access. Also, a configuration space access requires a multi-step operation rather than a single machine instruction. Thus, it is best to avoid them during routine operation of a PCI device. 1011 Configuration Write This operates analogously to a configuration read. 1100 Memory Read Multiple This command is identical to a generic memory read, but includes the hint that a long read burst will continue beyond the end of the current cache line, and the target should internally prefetch a large amount of data. A target is always permitted to consider this a synonym for a generic memory read. 1101 Dual Address Cycle When accessing a memory address that requires more than 32 bits to represent, the address phase begins with this command and the low 32 bits of the address, followed by a second cycle with the actual command and the high 32 bits of the address. PCI targets that do not support 64-bit addressing may simply treat this as another reserved command code and not respond to it. This command code may only be used with a non-zero high-order address word; it is forbidden to use this cycle if not necessary. 1110 Memory Read Line This command is identical to a generic memory read, but includes the hint that the read will continue to the end of the cache line. A target is always permitted to consider this a synonym for a generic memory read. 1111 Memory Write and Invalidate This command is identical to a generic memory write, but comes with the guarantee that one or more whole cache lines will be written, with all byte selects enabled. This is an optimization for write-back caches snooping the bus. Normally, a write-back cache holding dirty data must interrupt the write operation long enough to write its own dirty data first. If the write is performed using this command, the data to be written back is guaranteed to be irrelevant, and may simply be invalidated in the write-back cache. This optimization only affects the snooping cache, and makes no difference to the target, which may treat this as a synonym for the memory write command. PCI bus latency Soon after promulgation of the PCI specification, it was discovered that lengthy transactions by some devices, due to slow acknowledgments, long data bursts, or some combination, could cause buffer underrun or overrun in other devices. Recommendations on the timing of individual phases in Revision 2.0 were made mandatory in revision 2.1: A target must be able to complete the initial data phase (assert TRDY# and/or STOP#) within 16 cycles of the start of a transaction. An initiator must complete each data phase (assert IRDY#) within 8 cycles. Additionally, as of revision 2.1, all initiators capable of bursting more than two data phases must implement a programmable latency timer. The timer starts counting clock cycles when a transaction starts (initiator asserts FRAME#). If the timer has expired and the arbiter has removed GNT#, then the initiator must terminate the transaction at the next legal opportunity. This is usually the next data phase, but Memory Write and Invalidate transactions must continue to the end of the cache line. 
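For reference, the command codes listed above can be collected into a C enumeration; the identifier names below are illustrative, and only the numeric values and their meanings come from the list above.

/* PCI address-phase command codes (the C/BE[3:0]# lines during the address phase).
   Codes 0100, 0101, 1000 and 1001 are reserved. */
enum pci_command {
    PCI_CMD_INTERRUPT_ACK            = 0x0,
    PCI_CMD_SPECIAL_CYCLE            = 0x1,
    PCI_CMD_IO_READ                  = 0x2,
    PCI_CMD_IO_WRITE                 = 0x3,
    PCI_CMD_MEMORY_READ              = 0x6,
    PCI_CMD_MEMORY_WRITE             = 0x7,
    PCI_CMD_CONFIG_READ              = 0xA,
    PCI_CMD_CONFIG_WRITE             = 0xB,
    PCI_CMD_MEMORY_READ_MULTIPLE     = 0xC,
    PCI_CMD_DUAL_ADDRESS_CYCLE       = 0xD,
    PCI_CMD_MEMORY_READ_LINE         = 0xE,
    PCI_CMD_MEMORY_WRITE_INVALIDATE  = 0xF,
};

/* Except for the dual address cycle, the least significant bit distinguishes
   writes (1) from reads (0). */
static inline int pci_cmd_is_write(enum pci_command cmd)
{
    return cmd != PCI_CMD_DUAL_ADDRESS_CYCLE && (cmd & 1);
}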
Delayed transactions Devices unable to meet those timing restrictions must use a combination of posted writes (for memory writes) and delayed transactions (for other writes and all reads). In a delayed transaction, the target records the transaction (including the write data) internally and aborts (asserts STOP# rather than TRDY#) the first data phase. The initiator must retry exactly the same transaction later. In the interim, the target internally performs the transaction, and waits for the retried transaction. When the retried transaction is seen, the buffered result is delivered. A device may be the target of other transactions while completing one delayed transaction; it must remember the transaction type, address, byte selects and (if a write) data value, and only complete the correct transaction. If the target has a limit on the number of delayed transactions that it can record internally (simple targets may impose a limit of 1), it will force those transactions to retry without recording them. They will be dealt with when the current delayed transaction is completed. If two initiators attempt the same transaction, a delayed transaction begun by one may have its result delivered to the other; this is harmless. A target abandons a delayed transaction when a retry succeeds in delivering the buffered result, the bus is reset, or when 215=32768 clock cycles (approximately 1 ms) elapse without seeing a retry. The latter should never happen in normal operation, but it prevents a deadlock of the whole bus if one initiator is reset or malfunctions. PCI bus bridges The PCI standard permits multiple independent PCI buses to be connected by bus bridges that will forward operations on one bus to another when required. Although PCI tends not to use many bus bridges, PCI Express systems use many PCI-to-PCI bridge usually called PCI Express Root Port; each PCI Express slot appears to be a separate bus, connected by a bridge to the others. The PCI host bridge (usually northbridge in x86 platforms) interconnect between CPU, main memory and PCI bus. Posted writes Generally, when a bus bridge sees a transaction on one bus that must be forwarded to the other, the original transaction must wait until the forwarded transaction completes before a result is ready. One notable exception occurs in the case of memory writes. Here, the bridge may record the write data internally (if it has room) and signal completion of the write before the forwarded write has completed. Or, indeed, before it has begun. Such "sent but not yet arrived" writes are referred to as "posted writes", by analogy with a postal mail message. Although they offer great opportunity for performance gains, the rules governing what is permissible are somewhat intricate. Combining, merging, and collapsing The PCI standard permits bus bridges to convert multiple bus transactions into one larger transaction under certain situations. This can improve the efficiency of the PCI bus. Combining Write transactions to consecutive addresses may be combined into a longer burst write, as long as the order of the accesses in the burst is the same as the order of the original writes. It is permissible to insert extra data phases with all byte enables turned off if the writes are almost consecutive. Merging Multiple writes to disjoint portions of the same word may be merged into a single write with multiple byte enables asserted. 
In this case, writes that were presented to the bus bridge in a particular order are merged so they occur at the same time when forwarded. Collapsing Multiple writes to the same byte or bytes may not be combined, for example, by performing only the second write and skipping the first write that was overwritten. This is because the PCI specification permits writes to have side effects. PCI bus signals PCI bus transactions are controlled by five main control signals, two driven by the initiator of a transaction (FRAME# and IRDY#), and three driven by the target (DEVSEL#, TRDY#, and STOP#). There are two additional arbitration signals (REQ# and GNT#) which are used to obtain permission to initiate a transaction. All are active-low, meaning that the active or asserted state is a low voltage. Pull-up resistors on the motherboard ensure they will remain high (inactive or deasserted) if not driven by any device, but the PCI bus does not depend on the resistors to change the signal level; all devices drive the signals high for one cycle before ceasing to drive the signals. Signal timing All PCI bus signals are sampled on the rising edge of the clock. Signals nominally change on the falling edge of the clock, giving each PCI device approximately one half a clock cycle to decide how to respond to the signals it observed on the rising edge, and one half a clock cycle to transmit its response to the other device. The PCI bus requires that every time the device driving a PCI bus signal changes, one turnaround cycle must elapse between the time the one device stops driving the signal and the other device starts. Without this, there might be a period when both devices were driving the signal, which would interfere with bus operation. The combination of this turnaround cycle and the requirement to drive a control line high for one cycle before ceasing to drive it means that each of the main control lines must be high for a minimum of two cycles when changing owners. The PCI bus protocol is designed so this is rarely a limitation; only in a few special cases (notably fast back-to-back transactions) is it necessary to insert additional delay to meet this requirement. Arbitration Any device on a PCI bus that is capable of acting as a bus master may initiate a transaction with any other device. To ensure that only one transaction is initiated at a time, each master must first wait for a bus grant signal, GNT#, from an arbiter located on the motherboard. Each device has a separate request line REQ# that requests the bus, but the arbiter may "park" the bus grant signal at any device if there are no current requests. The arbiter may remove GNT# at any time. A device which loses GNT# may complete its current transaction, but may not start one (by asserting FRAME#) unless it observes GNT# asserted the cycle before it begins. The arbiter may also provide GNT# at any time, including during another master's transaction. During a transaction, either FRAME# or IRDY# or both are asserted; when both are deasserted, the bus is idle. A device may initiate a transaction at any time that GNT# is asserted and the bus is idle. Address phase A PCI bus transaction begins with an address phase. The initiator, seeing that it has GNT# and the bus is idle, drives the target address onto the AD[31:0] lines, the associated command (e.g. memory read, or I/O write) on the C/BE[3:0]# lines, and pulls FRAME# low. Each other device examines the address and command and decides whether to respond as the target by asserting DEVSEL#. 
A device must respond by asserting DEVSEL# within 3 cycles. Devices which promise to respond within 1 or 2 cycles are said to have "fast DEVSEL" or "medium DEVSEL", respectively. (Actually, the time to respond is 2.5 cycles, since PCI devices must transmit all signals half a cycle early so that they can be received three cycles later.) Note that a device must latch the address on the first cycle; the initiator is required to remove the address and command from the bus on the following cycle, even before receiving a DEVSEL# response. The additional time is available only for interpreting the address and command after it is captured. On the fifth cycle of the address phase (or earlier if all other devices have medium DEVSEL or faster), a catch-all "subtractive decoding" is allowed for some address ranges. This is commonly used by an ISA bus bridge for addresses within its range (24 bits for memory and 16 bits for I/O). On the sixth cycle, if there has been no response, the initiator may abort the transaction by deasserting FRAME#. This is known as master abort termination and it is customary for PCI bus bridges to return all-ones data (0xFFFFFFFF) in this case. PCI devices therefore are generally designed to avoid using the all-ones value in important status registers, so that such an error can be easily detected by software. Address phase timing [ASCII timing diagram, not reproducible here: clocks 0-5 of an address phase, showing CLK, GNT#, FRAME#, AD[31:0], C/BE[3:0]# and DEVSEL#; GNT# is irrelevant after the cycle has started, the address is valid for only one cycle, the command is followed by the first data phase's byte enables, and DEVSEL# may be asserted at the fast, medium, slow or subtractive decode points.] On the rising edge of clock 0, the initiator observes FRAME# and IRDY# both high, and GNT# low, so it drives the address, command, and asserts FRAME# in time for the rising edge of clock 1. Targets latch the address and begin decoding it. They may respond with DEVSEL# in time for clock 2 (fast DEVSEL), 3 (medium) or 4 (slow). Subtractive decode devices, seeing no other response by clock 4, may respond on clock 5. If the master does not see a response by clock 5, it will terminate the transaction and remove FRAME# on clock 6. TRDY# and STOP# are deasserted (high) during the address phase. The initiator may assert IRDY# as soon as it is ready to transfer data, which could theoretically be as soon as clock 2. Dual-cycle address To allow 64-bit addressing, a master will present the address over two consecutive cycles. First, it sends the low-order address bits with a special "dual-cycle address" command on the C/BE[3:0]#. On the following cycle, it sends the high-order address bits and the actual command. Dual-address cycles are forbidden if the high-order address bits are zero, so devices which do not support 64-bit addressing can simply not respond to dual cycle commands. [ASCII timing diagram, not reproducible here: clocks 0-6 of a dual address cycle, showing the low then high address words on AD[31:0], the DAC command followed by the actual command on C/BE[3:0]#, and the fast, medium and slow DEVSEL# response points.] Configuration access Addresses for PCI configuration space access are decoded specially. For these, the low-order address lines specify the offset of the desired PCI configuration register, and the high-order address lines are ignored.
Instead, an additional address signal, the IDSEL input, must be high before a device may assert DEVSEL#. Each slot connects a different high-order address line to the IDSEL pin, and is selected using one-hot encoding on the upper address lines. Data phases After the address phase (specifically, beginning with the cycle that DEVSEL# goes low) comes a burst of one or more data phases. In all cases, the initiator drives active-low byte select signals on the C/BE[3:0]# lines, but the data on the AD[31:0] may be driven by the initiator (in case of writes) or target (in case of reads). During data phases, the C/BE[3:0]# lines are interpreted as active-low byte enables. In case of a write, the asserted signals indicate which of the four bytes on the AD bus are to be written to the addressed location. In the case of a read, they indicate which bytes the initiator is interested in. For reads, it is always legal to ignore the byte enable signals and simply return all 32 bits; cacheable memory resources are required to always return 32 valid bits. The byte enables are mainly useful for I/O space accesses where reads have side effects. A data phase with all four C/BE# lines deasserted is explicitly permitted by the PCI standard, and must have no effect on the target other than to advance the address in the burst access in progress. The data phase continues until both parties are ready to complete the transfer and continue to the next data phase. The initiator asserts IRDY# (initiator ready) when it no longer needs to wait, while the target asserts TRDY# (target ready). Whichever side is providing the data must drive it on the AD bus before asserting its ready signal. Once one of the participants asserts its ready signal, it may not become un-ready or otherwise alter its control signals until the end of the data phase. The data recipient must latch the AD bus each cycle until it sees both IRDY# and TRDY# asserted, which marks the end of the current data phase and indicates that the just-latched data is the word to be transferred. To maintain full burst speed, the data sender then has half a clock cycle after seeing both IRDY# and TRDY# asserted to drive the next word onto the AD bus. [ASCII timing diagram, not reproducible here: clocks 0-9 of a burst, showing AD[31:0] (separate write and read cases), C/BE[3:0]#, IRDY#, TRDY#, DEVSEL# and FRAME#, with vertical lines marking the clock edges on which data is transferred.] This continues the address cycle illustrated above, assuming a single address cycle with medium DEVSEL, so the target responds in time for clock 3. However, at that time, neither side is ready to transfer data. For clock 4, the initiator is ready, but the target is not. On clock 5, both are ready, and a data transfer takes place (as indicated by the vertical lines). For clock 6, the target is ready to transfer, but the initiator is not. On clock 7, the initiator becomes ready, and data is transferred. For clocks 8 and 9, both sides remain ready to transfer data, and data is transferred at the maximum possible rate (32 bits per clock cycle). In case of a read, clock 2 is reserved for turning around the AD bus, so the target is not permitted to drive data on the bus even if it is capable of fast DEVSEL.
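A toy model of the IRDY#/TRDY# handshake walked through above: a data phase completes only on a clock edge where both ready signals are sampled asserted. The C sketch below simply encodes clock edges 3 to 9 of that example; it is purely illustrative.

#include <stdio.h>
#include <stdbool.h>

int main(void)
{
    /* Clock edges 3..9 of the walkthrough above; true = signal asserted (low). */
    struct { int clk; bool irdy, trdy; } edge[] = {
        {3, false, false}, {4, true, false}, {5, true, true}, {6, false, true},
        {7, true,  true }, {8, true, true }, {9, true, true},
    };
    int words = 0;
    for (size_t i = 0; i < sizeof edge / sizeof edge[0]; i++) {
        if (edge[i].irdy && edge[i].trdy)
            printf("clock %d: data phase %d completes\n", edge[i].clk, ++words);
        else
            printf("clock %d: wait state\n", edge[i].clk);
    }
    printf("%d words transferred\n", words);  /* 4 words, on clocks 5, 7, 8 and 9 */
    return 0;
}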
Fast DEVSEL# on reads A target that supports fast DEVSEL could in theory begin responding to a read the cycle after the address is presented. This cycle is, however, reserved for AD bus turnaround. Thus, a target may not drive the AD bus (and thus may not assert TRDY#) on the second cycle of a transaction. Note that most targets will not be this fast and will not need any special logic to enforce this condition. Ending transactions Either side may request that a burst end after the current data phase. Simple PCI devices that do not support multi-word bursts will always request this immediately. Even devices that do support bursts will have some limit on the maximum length they can support, such as the end of their addressable memory. Initiator burst termination The initiator can mark any data phase as the final one in a transaction by deasserting FRAME# at the same time as it asserts IRDY#. The cycle after the target asserts TRDY#, the final data transfer is complete, both sides deassert their respective RDY# signals, and the bus is idle again. The master may not deassert FRAME# before asserting IRDY#, nor may it deassert FRAME# while waiting, with IRDY# asserted, for the target to assert TRDY#. The only minor exception is a master abort termination, when no target responds with DEVSEL#. Obviously, it is pointless to wait for TRDY# in such a case. However, even in this case, the master must assert IRDY# for at least one cycle after deasserting FRAME#. (Commonly, a master will assert IRDY# before receiving DEVSEL#, so it must simply hold IRDY# asserted for one cycle longer.) This is to ensure that bus turnaround timing rules are obeyed on the FRAME# line. Target burst termination The target requests the initiator end a burst by asserting STOP#. The initiator will then end the transaction by deasserting FRAME# at the next legal opportunity; if it wishes to transfer more data, it will continue in a separate transaction. There are several ways for the target to do this: Disconnect with data If the target asserts STOP# and TRDY# at the same time, this indicates that the target wishes this to be the last data phase. For example, a target that does not support burst transfers will always do this to force single-word PCI transactions. This is the most efficient way for a target to end a burst. Disconnect without data If the target asserts STOP# without asserting TRDY#, this indicates that the target wishes to stop without transferring data. STOP# is considered equivalent to TRDY# for the purpose of ending a data phase, but no data is transferred. Retry A Disconnect without data before transferring any data is a retry, and unlike other PCI transactions, PCI initiators are required to pause slightly before continuing the operation. See the PCI specification for details. Target abort Normally, a target holds DEVSEL# asserted through the last data phase. However, if a target deasserts DEVSEL# before disconnecting without data (asserting STOP#), this indicates a target abort, which is a fatal error condition. The initiator may not retry, and typically treats it as a bus error. Note that a target may not deassert DEVSEL# while waiting with TRDY# or STOP# low; it must do this at the beginning of a data phase. There will always be at least one more cycle after a target-initiated disconnection, to allow the master to deassert FRAME#. 
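The target-initiated terminations just described differ only in the combination of STOP#, TRDY# and DEVSEL#, which can be summarised in a small C sketch; the signal arguments model the asserted (electrically low) state as true, and this is a restatement of the rules above rather than implementation code.

#include <stdbool.h>

enum pci_target_termination {
    PCI_TGT_CONTINUE,                 /* STOP# not asserted: burst may continue     */
    PCI_TGT_DISCONNECT_WITH_DATA,     /* STOP# and TRDY# asserted together          */
    PCI_TGT_DISCONNECT_WITHOUT_DATA,  /* STOP# without TRDY#, after some data moved */
    PCI_TGT_RETRY,                    /* STOP# without TRDY#, before any data moved */
    PCI_TGT_TARGET_ABORT,             /* STOP# with DEVSEL# deasserted: fatal error */
};

static enum pci_target_termination
pci_classify_target_termination(bool stop, bool trdy, bool devsel, bool data_transferred)
{
    if (!stop)
        return PCI_TGT_CONTINUE;
    if (!devsel)
        return PCI_TGT_TARGET_ABORT;
    if (trdy)
        return PCI_TGT_DISCONNECT_WITH_DATA;
    return data_transferred ? PCI_TGT_DISCONNECT_WITHOUT_DATA : PCI_TGT_RETRY;
}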
There are two sub-cases, which take the same amount of time, but one requires an additional data phase: Disconnect-A If the initiator observes STOP# before asserting its own IRDY#, then it can end the burst by deasserting FRAME# at the same time as it asserts IRDY#, ending the burst after the current data phase. Disconnect-B If the initiator has already asserted IRDY# (without deasserting FRAME#) by the time it observes the target's STOP#, it is committed to an additional data phase. The target must wait through an additional data phase, holding STOP# asserted without TRDY#, before the transaction can end. If the initiator ends the burst at the same time as the target requests disconnection, there is no additional bus cycle. Burst addressing For memory space accesses, the words in a burst may be accessed in several orders. The unnecessary low-order address bits AD[1:0] are used to convey the initiator's requested order. A target which does not support a particular order must terminate the burst after the first word. Some of these orders depend on the cache line size, which is configurable on all PCI devices. If the starting offset within the cache line is zero, all of these modes reduce to the same order. Cache line toggle and cache line wrap modes are two forms of critical-word-first cache line fetching. Toggle mode XORs the supplied address with an incrementing counter. This is the native order for Intel 486 and Pentium processors. It has the advantage that it is not necessary to know the cache line size to implement it. PCI version 2.1 obsoleted toggle mode and added the cache line wrap mode, where fetching proceeds linearly, wrapping around at the end of each cache line. When one cache line is completely fetched, fetching jumps to the starting offset in the next cache line. Note that most PCI devices only support a limited range of typical cache line sizes; if the cache line size is programmed to an unexpected value, they force single-word access. PCI also supports burst access to I/O and configuration space, but only linear mode is supported. (This is rarely used, and may be buggy in some devices; they may not support it, but not properly force single-word access either.) Transaction examples This is the highest-possible speed four-word write burst, terminated by the master: [ASCII timing diagram, not reproducible here: clock edges 0-7 of a four-word write burst, showing AD[31:0], C/BE[3:0]#, IRDY#, TRDY#, DEVSEL# and FRAME#, with ^^^ marking idle signals pulled high and vertical lines marking the four data transfers.] On clock edge 1, the initiator starts a transaction by driving an address, command, and asserting FRAME#. The other signals are idle (indicated by ^^^ in the diagram), pulled high by the motherboard's pull-up resistors. That might be their turnaround cycle. On cycle 2, the target asserts both DEVSEL# and TRDY#. As the initiator is also ready, a data transfer occurs. This repeats for three more cycles, but before the last one (clock edge 5), the master deasserts FRAME#, indicating that this is the end. On clock edge 6, the AD bus and FRAME# are undriven (turnaround cycle) and the other control lines are driven high for 1 cycle. On clock edge 7, another initiator can start a different transaction. This is also the turnaround cycle for the other control lines.
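Returning to the burst orderings described under Burst addressing above, the following C sketch generates the word addresses of a burst in cache-line-wrap order and in the obsolete toggle order. The text above does not spell out how toggle mode behaves once a burst crosses a cache line, so the line-advance step in burst_toggle is an assumption made for illustration.

#include <stdint.h>
#include <stdio.h>

/* Word address of the n-th data phase of a burst starting at the word-aligned
   byte address 'start', with a cache line of 'line_words' 32-bit words. */

/* Cache line wrap: proceed linearly, wrap at the end of the line, then resume
   at the same starting offset in the next cache line. */
static uint32_t burst_wrap(uint32_t start, unsigned n, unsigned line_words)
{
    uint32_t line_bytes = 4 * line_words;
    uint32_t line_base  = (start & ~(line_bytes - 1)) + (n / line_words) * line_bytes;
    uint32_t word       = ((start / 4) + n) % line_words;
    return line_base + 4 * word;
}

/* Cache line toggle (obsolete): XOR an incrementing counter into the word index.
   Advancing to the next line after line_words phases is an assumption here. */
static uint32_t burst_toggle(uint32_t start, unsigned n, unsigned line_words)
{
    uint32_t line_bytes = 4 * line_words;
    uint32_t line_base  = (start & ~(line_bytes - 1)) + (n / line_words) * line_bytes;
    uint32_t word       = ((start / 4) ^ (n % line_words)) & (line_words - 1);
    return line_base + 4 * word;
}

int main(void)
{
    /* 8-word burst starting at byte 0x08, with 16-byte (4-word) cache lines. */
    for (unsigned n = 0; n < 8; n++)
        printf("phase %u: wrap %#04x  toggle %#04x\n", n,
               (unsigned)burst_wrap(0x08, n, 4), (unsigned)burst_toggle(0x08, n, 4));
    return 0;
}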
The equivalent read burst takes one more cycle, because the target must wait 1 cycle for the AD bus to turn around before it may assert TRDY#: [ASCII timing diagram, not reproducible here: clock edges 0-8 of the equivalent read burst, with an extra AD bus turnaround cycle before the target asserts TRDY#.] A high-speed burst terminated by the target will have an extra cycle at the end: [ASCII timing diagram, not reproducible here: clock edges 0-8 of a burst disconnected by the target, showing STOP# asserted alongside TRDY# on clock edge 6 and a final data phase during which no data is transferred.] On clock edge 6, the target indicates that it wants to stop (with data), but the initiator is already holding IRDY# low, so there is a fifth data phase (clock edge 7), during which no data is transferred. Parity The PCI bus detects parity errors, but does not attempt to correct them by retrying operations; it is purely a failure indication. Due to this, there is no need to detect the parity error before it has happened, and the PCI bus actually detects it a few cycles later. During a data phase, whichever device is driving the AD[31:0] lines computes even parity over them and the C/BE[3:0]# lines, and sends that out the PAR line one cycle later. All access rules and turnaround cycles for the AD bus apply to the PAR line, just one cycle later. The device listening on the AD bus checks the received parity and asserts the PERR# (parity error) line one cycle after that. This generally generates a processor interrupt, and the processor can search the PCI bus for the device which detected the error. The PERR# line is only used during data phases, once a target has been selected. If a parity error is detected during an address phase (or the data phase of a Special Cycle), the devices which observe it assert the SERR# (System error) line. Even when some bytes are masked by the C/BE# lines and not in use, they must still have some defined value, and this value must be used to compute the parity. Fast back-to-back transactions Due to the need for a turnaround cycle between different devices driving PCI bus signals, in general it is necessary to have an idle cycle between PCI bus transactions. However, in some circumstances it is permitted to skip this idle cycle, going directly from the final cycle of one transfer (IRDY# asserted, FRAME# deasserted) to the first cycle of the next (FRAME# asserted, IRDY# deasserted). An initiator may only perform back-to-back transactions when: they are by the same initiator (or there would be no time to turn around the C/BE# and FRAME# lines), the first transaction was a write (so there is no need to turn around the AD bus), and the initiator still has permission (from its GNT# input) to use the PCI bus. Additional timing constraints may come from the need to turn around the target control lines, particularly DEVSEL#. The target deasserts DEVSEL#, driving it high, in the cycle following the final data phase, which in the case of back-to-back transactions is the first cycle of the address phase.
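Returning to the parity rule described above: the PAR line is driven so that the total number of 1s across AD[31:0], C/BE[3:0]# and PAR itself is even, which makes PAR the XOR of the 36 lines. A minimal sketch:

#include <stdint.h>

/* PAR is driven so that the number of 1s across AD[31:0], C/BE[3:0]# and PAR
   itself is even, i.e. PAR is the XOR of the 36 lines as driven on the bus. */
static unsigned pci_par(uint32_t ad, uint8_t cbe)
{
    uint64_t lines = ((uint64_t)(cbe & 0xFu) << 32) | ad;
    unsigned ones = 0;
    while (lines) {
        ones += (unsigned)(lines & 1);
        lines >>= 1;
    }
    return ones & 1;  /* 1 when the 36 lines contain an odd number of 1s */
}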
The second cycle of the address phase is then reserved for DEVSEL# turnaround, so if the target is different from the prior one, it must not assert DEVSEL# until the third cycle (medium DEVSEL speed). One case where this problem cannot arise is if the initiator knows somehow (presumably because the addresses share sufficient high-order bits) that the second transfer is addressed to the same target as the prior one. In that case, it may perform back-to-back transactions. All PCI targets must support this. It is also possible for the target to keep track of the requirements. If it never does fast DEVSEL, they are met trivially. If it does, it must wait until medium DEVSEL time unless: the current transaction was preceded by an idle cycle (is not back-to-back), or the prior transaction was to the same target, or the current transaction began with a double address cycle. Targets which have this ability indicate it by a special bit in a PCI configuration register, and if all targets on a bus have it, all initiators may use back-to-back transfers freely. A subtractive decoding bus bridge must know to expect this extra delay in the event of back-to-back cycles, in order to advertise back-to-back support. 64-bit PCI Starting from revision 2.1, the PCI specification includes optional 64-bit support. This is provided via an extended connector which provides the 64-bit bus extensions AD[63:32], C/BE[7:4]#, and PAR64, and a number of additional power and ground pins. The 64-bit PCI connector can be distinguished from a 32-bit connector by the additional 64-bit segment. Memory transactions between 64-bit devices may use all 64 bits to double the data transfer rate. Non-memory transactions (including configuration and I/O space accesses) may not use the 64-bit extension. During a 64-bit burst, burst addressing works just as in a 32-bit transfer, but the address is incremented twice per data phase. The starting address must be 64-bit aligned; i.e. AD2 must be 0. The data corresponding to the intervening addresses (with AD2 = 1) is carried on the upper half of the AD bus. To initiate a 64-bit transaction, the initiator drives the starting address on the AD bus and asserts REQ64# at the same time as FRAME#. If the selected target can support a 64-bit transfer for this transaction, it replies by asserting ACK64# at the same time as DEVSEL#. Note that a target may decide on a per-transaction basis whether to allow a 64-bit transfer. If REQ64# is asserted during the address phase, the initiator also drives the high 32 bits of the address and a copy of the bus command on the high half of the bus. If the address requires 64 bits, a dual address cycle is still required, but the high half of the bus carries the upper half of the address and the final command code during both address phase cycles; this allows a 64-bit target to see the entire address and begin responding earlier. If the initiator sees DEVSEL# asserted without ACK64#, it performs 32-bit data phases. The data which would have been transferred on the upper half of the bus during the first data phase is instead transferred during the second data phase. Typically, the initiator drives all 64 bits of data before seeing DEVSEL#. If ACK64# is missing, it may cease driving the upper half of the data bus. The REQ64# and ACK64# lines are held asserted for the entire transaction save the last data phase, and deasserted at the same time as FRAME# and DEVSEL#, respectively.
The PAR64 line operates just like the PAR line, but provides even parity over AD[63:32] and C/BE[7:4]#. It is only valid for address phases if REQ64# is asserted. PAR64 is only valid for data phases if both REQ64# and ACK64# are asserted. Cache snooping (obsolete) PCI originally included optional support for write-back cache coherence. This required support by cacheable memory targets, which would listen to two pins from the cache on the bus, SDONE (snoop done) and SBO# (snoop backoff). Because this was rarely implemented in practice, it was deleted from revision 2.2 of the PCI specification, and the pins re-used for SMBus access in revision 2.3. The cache would watch all memory accesses, without asserting DEVSEL#. If it noticed an access that might be cached, it would drive SDONE low (snoop not done). A coherence-supporting target would avoid completing a data phase (asserting TRDY#) until it observed SDONE high. In the case of a write to data that was clean in the cache, the cache would only have to invalidate its copy, and would assert SDONE as soon as this was established. However, if the cache contained dirty data, the cache would have to write it back before the access could proceed. so it would assert SBO# when raising SDONE. This would signal the active target to assert STOP# rather than TRDY#, causing the initiator to disconnect and retry the operation later. In the meantime, the cache would arbitrate for the bus and write its data back to memory. Targets supporting cache coherency are also required to terminate bursts before they cross cache lines. Development tools When developing and/or troubleshooting the PCI bus, examination of hardware signals can be very important. Logic analyzers and bus analyzers are tools which collect, analyze, and decode signals for users to view in useful ways. See also PCI Configuration Space CompactPCI, PCI-X, PCI Express PCI-SIG, PCI Special Interest Group PICMG, PCI Industrial Computer Manufacturers Group Eurocard (printed circuit board) References Further reading Official technical specifications ($1000 for non-members or $50 for members. PCI-SIG membership is $3000 per year.) ($1000 for non-members or $50 for members. PCI-SIG membership is $3000 per year.) Books 250 pages. 832 pages. 752 pages. 1140 pages. 162 pages. External links Official , PCI Special Interest Group (PCI-SIG) Technical Details Introduction to PCI protocol , electrofriends.com PCI bus pin-out and signals, pinouts.ru PCI card dimensions, interfacebus.com Lists of Vendors, Devices, IDs PCI Vendor and Device Lists, pcidatabase.com PCI ID Repository, a project to collect all known IDs Tips Brief overview of PCI power requirements and compatibility with a nice diagram Good diagrams and text on how to recognize the difference between 5 volt and 3.3 volt slots Linux Linux with miniPCI cards Decoding PCI data and lspci output on Linux hosts Development Tools Active PCI Bus Extender, dinigroup.com FPGA Cores PCI Interface Core, Lattice Semiconductor . . Peripheral Component Interconnect Motherboard expansion slot Macintosh internals IBM PC compatibles Wikipedia articles with ASCII art Computer-related introductions in 1993
55932375
https://en.wikipedia.org/wiki/Kinaxis
Kinaxis
Kinaxis is a supply chain management and sales and operations planning software company based in the Kanata district of Ottawa, Ontario, Canada. It is listed on the Toronto Stock Exchange and is an S&P/TSX Composite Component. The company was founded in 1984 by Duncan Klett and two others as Cadence Computer Corporation and went public in June 2014. It has 500 employees. Business Kinaxis provides supply-chain-management software on a subscription basis, primarily to large, multinational companies. Customers include Ford, Cisco, Qualcomm, and Avaya. They also provide related professional services to their customers. Contracts typically run for two to five years. Their main product is called RapidResponse. As of 2017, approximately 77% of revenue came from subscriptions, with the remainder from professional services. Kinaxis also allows other companies, including Deloitte and Bain & Company, to install Kinaxis software for a percentage of the subscription revenues. Kinaxis runs two data centers in South Korea. It has approximately 100 customers and about 5% of an estimated $4 billion market for software related to supply chain planning. As of 2016, 85% of revenue was from US customers, 4% from Canadian customers, 8% from Asian customers, and the rest from European customers. Competitors in the supply chain management software industry include SAP SE and JDA Software. In 2017, a significant customer in Asia stopped paying, leading to a 3% reduction in revenue for the company. History Kinaxis was founded in 1984 as Cadence Computer Corporation, to do supply-chain analysis using custom mainframe computers, by three former Mitel engineers. The name was later changed to Carp Systems International (after the nearby Carp River), then Enterprise Planning Systems. In the mid-1990s, it changed its name to Webplan, and shifted from making hardware to providing software. Recent history In 2000, it led a venture round that raised $33 million. In 2005, it renamed itself Kinaxis, and started focusing on selling software by subscription, as opposed to collecting a one-time fee. In June 2014, it held an IPO on the Toronto Stock Exchange, raising a total of $100 million. Since then, its market capitalization has increased to $1.7 billion, as of August 2017.
13620488
https://en.wikipedia.org/wiki/Alan%20Hacker
Alan Hacker
Alan Ray Hacker (30 September 1938 – 16 April 2012) was an English clarinettist, conductor, and music professor. Biography He was born in Dorking, Surrey in 1938, the son of Kenneth and Sybil Hacker. After attending Dulwich College (from 1950 to 1955, under Stanley Wilson until the end of 1953), he went on to study at the Royal Academy of Music where he won the Dove Prize and the Boise Travelling Scholarship which he used to study in Paris, Bayreuth and Vienna. In 1958 he joined the London Philharmonic Orchestra. He became a professor of the Royal Academy of Music in 1960 and went on to found the Pierrot Players in 1965 along with American pianist Stephen Pruslin and Harrison Birtwistle. In 1966, a thrombosis on his spinal column caused permanent paraplegia. For the rest of his life he used a wheelchair and drove adapted cars. In 1972, the Pierrot Players renamed themselves the Fires of London, and Hacker continued to perform with them until 1976. In 1971, he founded his own group, Matrix. He was also appointed chairman of the Institute of Contemporary Arts Music section and of the British section of the International Society for Contemporary Music. He was one of those credited with reviving the basset clarinet, and in 1967, he restored the original text of Mozart's Concerto and Quintet. He played them on an instrument modelled on that for which Mozart originally wrote them, the Stadler's extended basset clarinet. Hacker also founded the Music Party in 1972, an organisation set up for the authentic performance of classical music. The later establishment of the Classical Orchestra in York was also a vehicle which promoted the performances of the classics on original instruments. Hacker also branched out into conducting opera, where he led performances of works from Monteverdi's Ulisse to Birtwistle's The Io Passion. In the 1972–1973 academic year he became the Sir Robert Mayer lecturer at Leeds University. In 1976 he was appointed lecturer in music at the University of York and went on to hold a post of senior lecturer between 1984 and 1987. Hacker was awarded the OBE for his services to music in 1988. In 1994, he was a guest on Desert Island Discs. Personal life Hacker was married three times. In 1959, he married Anna Maria Sroka, with whom he had two daughters, Katy and Sophie. His second marriage, to Karen Wynne Evans in 1976, produced a son, Alcuin. His third wife, Margaret Lee, survives him, as do his children and first two wives. Publications Scores of Mozart Concerto and Quintet – 1972 1st ed. of reconstructed Mozart Concerto – 1973 See also List of clarinetists References External links Alan Hacker, Desert Island Discs, 17 April 1994 Officers of the Order of the British Empire 1938 births British classical clarinetists People educated at Dulwich College Fellows of the Royal Academy of Music 2012 deaths Academics of the Royal Academy of Music Academics of the University of York 20th-century classical musicians
6054596
https://en.wikipedia.org/wiki/General%20Graphics%20Interface
General Graphics Interface
General Graphics Interface (GGI) was a project that aimed to develop a reliable, stable and fast computer graphics system that works everywhere. The intent was to allow any program using GGI to run on any computing platform supported by it, requiring at most a recompilation. GGI is free and open-source software, subject to the requirements of the MIT License. The GGI project, and its related projects such as KGI, are generally acknowledged to be dead. Goals The project was originally started to make switching back and forth between virtual consoles, svgalib, and the X display server subsystems on Linux more reliable. The goals were: Portability through a flexible and extensible API for applications, which avoids bloat because applications pull in only what they use. Portability across platforms and backends. Security, in the sense of requiring as few privileges as possible. The GGI framework is implemented by a set of portable user-space libraries, with an array of different backends or targets (e.g. Linux framebuffer, X11, Quartz, DirectX), of which the two most fundamental are LibGII (for input handling) and LibGGI (for graphical output). All other packages add features to these core libraries, and so depend on one or both of them. Some targets talk to other targets; these are called pseudo targets. Pseudo targets can be combined and work like a pipeline. For example, display-palemu emulates palette mode on truecolor modes, which allows users to run applications in palette mode even on machines where no palette mode would otherwise be available. display-tile splits a large virtual display into many smaller pieces, which can be spread across multiple monitors or even forwarded over a network. History Andreas Beck and Steffen Seeger founded The GGI Project in 1994 after some experimental precursors that were called "scrdrv". Development of scrdrv was motivated by the problems caused by coexisting but not very well cooperating graphics environments (mainly X and SVGAlib) under the Linux operating system at that time, which frequently led to lockups requiring a reboot. The first scrdrv design was heavily influenced by the graphics subsystem of the DJ DOS extender and some concepts from the SANE project. The basic problem that scrdrv solved was that it provided a kernel-mode driver that knew enough of the video hardware to set up modes, thus making it possible to get back into a sane state even from a messed-up or crashed graphics application. The first official version appeared in 1995. About 1996, GGI 1.0 was released under the LGPL license. GGI then consisted only of the core library, named libggi. It included input handling, a set of 2D graphics primitives and some userspace drivers for graphics boards, along with a Linux kernel patch providing the userspace interface for the drivers. The patch was known as KGI, the Kernel Graphics Interface. In 1997, GGI went into a complete re-design. Many new ideas, and a decision made by the Linux kernel community, shaped what GGI became in GGI 2.0, which was released in August 2001 under the MIT license. In 1998, there was a big flame war on the Linux kernel mailing list about getting KGI into the kernel. Linus Torvalds explained his concerns about GGI, stating "I think that X is good enough" and expressing concern regarding the overall direction of GGI. During this time, another design idea called EvStack also added to the flame war.
EvStack was an essentially complete redesign of the input and output subsystem that allowed events (thus the "Ev") to flow through a "Stack" of modules that could be configured to manipulate them. EvStack is a very powerful concept: it allows, for example, two keyboards to be attached to the same machine, one operating a text console on one graphics adapter and one operating a graphics console on the other (as was demonstrated at Linux-Kongreß '97), and it even allows different keyboard layouts on different virtual consoles or keyboards attached via the network. However, this came at the price of a huge patch to the input subsystem, which did not seem acceptable. The modern Linux input event system allows programs (e.g. Xorg) to receive keyboard events other than through the console keyboard, allowing multiseat operation. A set of talks about GGI, KGI and EvStack was given at LinuxExpo 98. For GGI 2.0, KGI was split off and became its own project, named The KGI Project. GGI 2.0 consisted of a set of libraries. During the 2.0 beta phase in late 1998, the license of the libraries was changed from the LGPL to an MIT-style license. Much work was also done on the build system to support more operating systems: the libraries worked on FreeBSD, code for OpenBSD, NetBSD and even Microsoft Windows was present, and there was some support for more hardware platforms. Input handling was moved into a library called libgii. Generic GGI code was in libgg, a sublib within libgii. The core graphics library, libggi, had a lightweight set of graphics primitives that was general enough to write any kind of graphical application, while higher-level APIs went into other libraries on top of libggi. These were called GGI extensions. libggi supported a set of targets, most of them Linux-specific: fbdev, X, aa, vcsa and terminfo, plus some pseudo targets such as tile, multi, palemu and trueemu. The GGI extensions provided higher-level APIs: libggiwmh provided functionality for windowed-only targets (at that time this was only X), and libggimisc provided basic features such as VGA splitline. GGI 2.0.2 was released in December 2002. The most user-visible change was the X backend, which had been redesigned from scratch. Another noticeable change was the greatly improved documentation. Last but not least, the release cycle changed: from this release on, there was a development tree and a stable tree. The stable tree was open for bugfixes only, while the development tree, following the BSD scheme, was named -current. In November 2004, the last bugfix release from the GGI 2.0.x stable tree was published in favour of a new GGI 2.1.x stable tree. GGI 2.1.x runs on many operating systems: GNU Hurd, Linux, *BSD, System V, Mac OS X and Microsoft Windows. Support for more hardware platforms was added; NetBSD even created a binary package for NetBSD/VAX. A new GGI library on top of libgii, called libgiigic, was added; it allows user actions to be combined with events at run time. GGI 2.2 was released in December 2005. Target auto-detection was reworked and was no longer Linux-centric. GGI replaced its own integer datatypes with ANSI C99 types for more portability. A target for Quartz was added, so Mac OS X users no longer depended on X11, though they could still use the X11 backend. The most user-visible change, however, was the support for statically linked-in targets. The latest release is GGI 2.2.2, a bugfix release in the GGI 2.2.x stable series, published in January 2007. Status as of 2006 The GGI Project was moving onward to the GGI 3.0 release. libgii had been re-designed.
The input handling had been replaced with a reactor event model, which is more flexible than using select() on a file descriptor. This also simplified the input drivers in general, particularly those that do not use file descriptors, such as input-quartz. libgg had been moved out into a separate library. libggi had merged some targets into one sublib: multi with tile, and monotext with palemu. libggi also gained a new VNC target, which allowed any application to be run as a VNC server. In GGI 3.0, the extension mechanism had been re-designed from scratch to simplify interactions between the extensions and the core libs. This required a small API change. See also Direct Rendering Infrastructure DirectFB Graphical Kernel System Linux framebuffer SVGALib References The GGI Project Frequently Asked Questions List Linux Weekly News - February 26, 1998, section: Kernel GGI Project Unhappy On Linux Christopher Browne's Web Pages: The X Window System, 15. GGI - General Graphical Interface Peter Amstutz: An Overview of the GGI Project 1998 Linux-GGI Project LinuxJournal article by Steffen Seeger and Andreas Beck 1996 External links GGI Project webpage Application programming interfaces C (programming language) libraries Cross-platform software Free computer libraries Graphics software Linux APIs Video game development software
36891093
https://en.wikipedia.org/wiki/Google%20Safe%20Browsing
Google Safe Browsing
Google Safe Browsing is a service from Google that helps protect devices by showing warnings to users when they attempt to navigate to dangerous sites or download dangerous files. Safe Browsing also notifies webmasters when their websites are compromised by malicious actors and helps them diagnose and resolve the problem so that their visitors stay safer. Safe Browsing protections work across Google products and power safer browsing experiences across the Internet. It lists URLs for web resources that contain malware or phishing content. Browsers like Google Chrome, Safari, Firefox, Vivaldi, Brave and GNOME Web use these lists from the Google Safe Browsing service to check pages against potential threats. Google also provides a public API for the service. Google also provides information to Internet service providers by sending e-mail alerts to autonomous system operators regarding threats hosted on their networks. According to Google, as of September 2017, over 4 billion Internet devices are protected by this service. Alternatives are offered by both Tencent and Yandex. Clients protected Web browsers: Google Chrome, Safari, Firefox, Vivaldi, Brave and GNOME Web. Android: Google Play Protect, Verify Apps API Google Search Google AdSense Gmail Instagram Privacy Google maintains the Safe Browsing Lookup API, which has a privacy drawback: "The URLs to be looked up are not hashed so the server knows which URLs the API users have looked up". The Safe Browsing Update API, on the other hand, compares 32-bit hash prefixes of the URL to preserve privacy. The Chrome, Firefox and Safari browsers use the latter. Safe Browsing also stores a mandatory preferences cookie on the computer. Google Safe Browsing "conducts client-side checks. If a website looks suspicious, it sends a subset of likely phishing and social engineering terms found on the page to Google to obtain additional information available from Google's servers on whether the website should be considered malicious". Logs, "including an IP address and one or more cookies", are kept for two weeks. They are "tied to the other Safe Browsing requests made from the same device." Criticism Websites not containing malware have been blacklisted by Google Safe Browsing due to the presence of infected ads. Requesting removal from the blacklist requires the webmaster to create a Google Webmaster Tools account and wait several days for removal to come into effect. There have also been concerns that Google Safe Browsing may be used for censorious purposes in the future; however, this has not occurred. See also Anti-phishing software StopBadware Response policy zone Censorship References External links Safe Browsing Homepage Transparency Report: Safe Browsing Safe Browsing Computer network security
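As a rough illustration of the hash-prefix mechanism described in the Privacy section, the sketch below computes a 4-byte (32-bit) SHA-256 prefix for a URL. It is a simplified toy, not the actual protocol: the real Update API canonicalizes URLs and hashes several host/path expressions, which is omitted here, and the helper name is an assumption made for the example.

    import hashlib

    def hash_prefix(url: str, prefix_bytes: int = 4) -> bytes:
        # Truncated SHA-256 digest of an (already canonicalized) URL.
        # Only this short prefix is compared against the local list, so the
        # full URL is not revealed; a full-hash lookup happens only on a match.
        digest = hashlib.sha256(url.encode("utf-8")).digest()
        return digest[:prefix_bytes]

    print(hash_prefix("example.com/").hex())  # prints an 8-character hex prefix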
40131213
https://en.wikipedia.org/wiki/SU2%20code
SU2 code
SU2 is a suite of open-source software tools written in C++ for the numerical solution of partial differential equations (PDEs) and for performing PDE-constrained optimization. The primary applications are computational fluid dynamics and aerodynamic shape optimization, but it has been extended to treat more general equations such as electrodynamics and chemically reacting flows. SU2 supports continuous and discrete adjoint methods for calculating the sensitivities/gradients of a scalar field. Developers SU2 is being developed by individuals and organized teams around the world. The SU2 Lead Developers are: Dr. Francisco Palacios and Dr. Thomas D. Economon. The most active groups developing SU2 are: Prof. Juan J. Alonso's group at Stanford University. Prof. Piero Colonna's group at Delft University of Technology. Prof. Nicolas R. Gauger's group at Kaiserslautern University of Technology. Prof. Alberto Guardone's group at Polytechnic University of Milan. Prof. Rafael Palacios' group at Imperial College London. Capabilities The SU2 tool suite includes: High-fidelity analysis and adjoint-based design using unstructured mesh technology. Compressible and incompressible Euler, Navier-Stokes, and RANS solvers. Additional PDE solvers for electrodynamics, linear elasticity, heat equation, wave equation and thermochemical non-equilibrium. Convergence acceleration (multi-grid, preconditioning, etc.). Sensitivity information via the continuous adjoint methodology. Adaptive, goal-oriented mesh refinement and deformation. Modularized C++ object-oriented design. Parallelization with MPI. Python scripts for automation. FEATool Multiphysics features built-in GUI and CLI interfaces for SU2. Release history License SU2 is free and open source software, released under the GNU General Public License version 3 (SU2 v1.0 and v2.0) and GNU Lesser General Public License version 2.1 (SU2 v2.0.7 and later versions). Alternative software Free and open-source software Advanced Simulation Library (AGPL) CLAWPACK Code Saturne (GPL) FreeFem++ Gerris Flow Solver (GPL) OpenFOAM OpenFVM Palabos Flow Solver Proprietary software ADINA CFD ANSYS CFX ANSYS Fluent Azore FEATool Multiphysics Pumplinx STAR-CCM+ COMSOL Multiphysics KIVA (software) RELAP5-3D PowerFlow FOAMpro SimScale Cradle SC/Tetra Cradle scSTREAM Cradle Heat Designer References External links Official resources SU2 home page SU2 Github repository Community resources SU2 Forum at CFD Online SU2 wiki page at CFD Online Other resources SU2 version 2.0 announcement Tecplot Co-founder review of SU2 Stanford News story about SU2 initial release FEATool Multiphysics GUI and CFD solver interface for SU2 Computational fluid dynamics Free science software Free computer-aided design software Scientific simulation software 2012 software
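The capabilities list above mentions MPI parallelization and Python scripts for automation. As a hedged illustration (not taken from the SU2 sources), the following sketch shows one way a run could be orchestrated from Python by invoking the SU2_CFD executable on a configuration file under MPI; the case file name and process count are placeholder assumptions.

    import subprocess

    def run_su2(config_file: str, nprocs: int = 4) -> None:
        # Launch an SU2 flow solution under MPI and wait for it to finish.
        # config_file is a plain-text SU2 configuration file (assumed to exist).
        cmd = ["mpirun", "-n", str(nprocs), "SU2_CFD", config_file]
        subprocess.run(cmd, check=True)

    if __name__ == "__main__":
        run_su2("inviscid_wedge.cfg", nprocs=4)  # hypothetical case file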
13032011
https://en.wikipedia.org/wiki/Legal%20issues%20with%20BitTorrent
Legal issues with BitTorrent
The use of the BitTorrent protocol for sharing of copyrighted content generated a variety of novel legal issues. While the technology and related platforms are legal in many jurisdictions, law enforcement and prosecutorial agencies are attempting to address this avenue of copyright infringement. Notably, the use of BitTorrent in connection with copyrighted material may make the issuers of the BitTorrent file, link or metadata liable as an infringing party under some copyright laws. Similarly, the use of BitTorrent to procure illegal materials could potentially create liability for end users as an accomplice. BitTorrent files can be seen conceptually as a hyperlink. However, it can also be a very specific instruction for how to obtain content on the internet. BitTorrent may transmit or include illegal or copyrighted content. Court decisions in various jurisdictions have deemed some BitTorrent files illegal. Complicating the legal analysis are jurisdictional issues that are common when nation states attempt to regulate any activity. BitTorrent files and links can be accessed in different geographic locations and legal jurisdictions. Thus, it is possible to host a BitTorrent file in geographic jurisdictions where it is legal and others where it is illegal. A single link, file or data or download action may be actionable in some places, but not in others. This analysis applies to other sharing technologies and platforms. Jurisdictional variations Legal regimes vary from country to country. BitTorrent metafiles do not store copyrighted data and are ordinarily unobjectionable. Some accused parties argued that BitTorrent trackers are legal even if sharing the copyrighted data in question was a copyright violation. Despite these arguments, there has been tremendous legal pressure, usually on behalf of the MPAA, RIAA and similar organizations around the world, to shut down BitTorrent trackers. Finland: Finreactor In December 2004, Finnish police raided Finreactor, a major BitTorrent site. Seven system administrators and four others were ordered to pay hundreds of thousands of euros in damages. The defendants appealed the case all the way to the Supreme Court of Finland, but failed to overturn the verdict. Two other defendants were acquitted because they were underage at the time, but were held liable for legal fees and compensation for illegal distribution ranging up to 60,000 euros. The court set their fine at 10% of the retail price of products distributed. Hong Kong: individual actions On 24 October 2005, BitTorrent user Chan Nai-ming (陳乃明), using the handle 古惑天皇 (The Master of Cunning, although the magistrate referred to him as Big Crook) was convicted of violating copyright by uploading Daredevil, Red Planet and Miss Congeniality to a newsgroup (Chapter 528 of Hong Kong law). The magistrate remarked that Chan's act significantly damaged the interest of copyright holders. He was released on bail for HK$5,000, awaiting a sentencing hearing, though the magistrate himself admitted the difficulty of determining how he should be sentenced due to the lack of precedent. On 7 November 2005 he was sentenced to jail for three months, but was immediately granted bail pending an appeal. The appeal was dismissed by the Court of First Instance on 12 December 2006 and Chan was immediately jailed. On 3 January 2007, he was released pending appeal to the Court of Final Appeal on 9 May 2007. 
In 2008 and 2009, an unidentified woman and man were arrested for illegally uploading files with BitTorrent in September 2008 and April 2009, respectively. Singapore: Odex actions against users Anime distributor Odex actively took down and sent legal threats against individual BitTorrent users in Singapore beginning in 2007. These Internet users allegedly downloaded fansubbed anime via BitTorrent. Court orders required ISPs to reveal subscribers' personal information. This led to cease-and-desist letters from Odex to users that led to out-of-court settlements for at least S$3,000 (US$2,000) per person. One person who received such a letter was 9 years old. These actions were considered controversial by the local anime community and attracted criticism, as they were seen by fans as heavy-handed. Slovenia: Suprnova In December 2004, Suprnova.org, a popular early BitTorrent site, closed purportedly due to the pressure felt by Andre Preston, aka Sloncek, the site's founder and administrator. In December 2005, Sloncek revealed that the Suprnova computer servers had been confiscated by Slovenian authorities. Sweden: Pirate Bay The Pirate Bay torrent website, formed by a Swedish anti-copyright group, is notorious for the "legal threats" section of its website in which letters and replies on the subject of alleged copyright infringements are publicly displayed. On 31 May 2006, their servers in Sweden were raided by Swedish police on allegations by the MPAA of copyright infringement. The site was back online in less than 72 hours, and returned to Sweden, accompanied by public and media backlash against the government's actions. Steal This Film, was made to cover these incidents. On 17 April 2009, as a result of the trial following the raid, the site's four co-founders were sentenced to one year of jail time each and to collectively pay 30 million SEK in damages. All the defendants appealed the decision, although two later served their sentences. In 2012, to minimize legal exposure and save computer resources, The Pirate Bay entirely switched to providing plaintext magnet links instead of traditional torrent files. As the most popular and well-known facilitator of copyright infringement, The Pirate Bay continues to shift between different hosting facilities and domain registrars in the face of legal prosecution and shutdown threats. Telenor was recently forced to ban the DNS of TPB (although other cloud based clones still are available). United States: 2003–present Soon after the closure of Suprnova, civil and criminal legal actions in the United States began to increase. MPAA cease and desist messages In 2003, the Motion Picture Association of America began to send cease and desist messages to BitTorrent sites, leading to the shutdown of Torrentse and Sharelive in July of 2003. LokiTorrent In 2005, Edward Webber (known as "lowkee"), webmaster of LokiTorrent, was ordered by a U.S. court to pay a fine and supply the MPAA with server logs (including the IP addresses of visitors). Webber began a fundraising campaign to pay legal fees for actions brought by the MPAA. Webber raised approximately US$45,000 through a PayPal-based donation system. Following the agreement, the MPAA changed the LokiTorrent website to display a message intended to discourage filesharers from downloading illegal content. EliteTorrents On 25 May 2005, the popular BitTorrent website EliteTorrents.org was shut down by the United States Federal Bureau of Investigation and Immigration and Customs Enforcement. 
Ten search warrants relating to members of the website were executed. Six site administrators pleaded guilty to conspiracy to commit criminal copyright infringement and criminal copyright infringement of a pre-commercial release work. Punishments included jail time, house arrest and fines. Jail sentences were issued to some defendants for violations of criminal law, namely the Family Entertainment and Copyright Act. Newnova In June 2006, the popular website Newnova.org, a replica of Suprnova, was closed. TorrentSpy On 29 May 2007, a U.S. federal judge ordered TorrentSpy to begin monitoring its users' activities and to submit logs to the MPAA. TorrentSpy ultimately removed access for US visitors rather than operate in an "uncertain legal environment." In the face of destruction-of-evidence charges and a $111 million legal judgement, TorrentSpy voluntarily shut down and filed for bankruptcy in 2008, although appeals continued through 2009. isoHunt On 21 December 2009 a federal district court found the founder of isoHunt guilty of inducing copyright infringement. The ruling was upheld on appeal in Columbia Pictures Industries, Inc. v. Fung in March 2013 and the site finally shut down in October 2013. Copyright holder actions Copyright owners have undertaken a variety of tactics and strategies to try to curtail BitTorrent transmittal of their intellectual property. In 2005 HBO began "poisoning" torrents of its show Rome by providing bad chunks of data to clients. In 2007 HBO sent cease and desist letters to the Internet Service Providers of BitTorrent users. Many users reported receiving letters from their ISPs that threatened to cut off their internet service if the alleged infringement continued. HBO, unlike the RIAA, had not been reported to have filed suit over file sharing as of April 2007. Beginning in early 2010, the US Copyright Group, acting on behalf of several independent movie makers, obtained the IP addresses of BitTorrent users illegally downloading specific movies. The group then sued these users in order to obtain subpoenas forcing ISPs to reveal the users' true identities. The group then sent out settlement offers in the $1,000–$3,000 range. About 16,200 lawsuits were filed between March and September 2010. In 2011, United States courts began determining the legality of suits brought against hundreds or thousands of BitTorrent users. Nearly simultaneously, a suit against 5,000 IP addresses was dismissed. A smaller suit, Pacific Century International, Ltd. v. Does against 100 ISPs, was also dismissed. In October 2011, John Wiley and Sons brought suit against 27 New York "John Does" for illegally copying books from the For Dummies series. According to TorrentFreak, Wiley is thus "the first book publisher to take this kind of action". Settlements On 23 November 2005, the Motion Picture Association of America and Bram Cohen, the CEO of BitTorrent Inc., signed a deal to remove links to illegal content on the official BitTorrent website. Other notable search engines also voluntarily self-censored licensed content from their results, or became "content distribution"-only search engines. Mininova announced that it would only allow freely licensed content (especially free content distributed by its author under a Creative Commons license) to be indexed after November 2009, resulting in the immediate removal of a majority of Mininova's listings. Infringement's sales impact Some commentators have suggested that copyright violation through BitTorrent need not mean a loss of sales.
In addition, a Game of Thrones director, HBO's programming president and Time Warner CEO Jeff Bewkes have spoken about the positive effects of file sharing. Bewkes further commented that he did not consider the unauthorized distribution to result in the loss of HBO subscriptions; rather: "Our experience is, it all leads to more penetration, more paying subs and more health for HBO." Game of Thrones is the most infringed TV show, and "the show's first season was the best-selling TV DVD of 2012". Patent infringement In June 2011, Tranz-Send Broadcasting Network filed a U.S. District Court lawsuit against BitTorrent Inc. for infringing a patent applied for in April 1999. See also BitTorrent File sharing Legal aspects of file sharing Torrent poisoning Virtual private network Copyleft Opposition to copyright Kopimi References External links BitTorrent Copyright infringement Computer law
1602099
https://en.wikipedia.org/wiki/NSA%20Suite%20B%20Cryptography
NSA Suite B Cryptography
NSA Suite B Cryptography was a set of cryptographic algorithms promulgated by the National Security Agency as part of its Cryptographic Modernization Program. It was to serve as an interoperable cryptographic base for both unclassified information and most classified information. Suite B was announced on 16 February 2005. A corresponding set of unpublished algorithms, Suite A, is "used in applications where Suite B may not be appropriate. Both Suite A and Suite B can be used to protect foreign releasable information, US-Only information, and Sensitive Compartmented Information (SCI)." In 2018, NSA replaced Suite B with the Commercial National Security Algorithm Suite (CNSA). Suite B's components were: Advanced Encryption Standard (AES) with key sizes of 128 and 256 bits. For traffic flow, AES should be used with either the Counter Mode (CTR) for low bandwidth traffic or the Galois/Counter Mode (GCM) mode of operation for high bandwidth traffic (see Block cipher modes of operation) symmetric encryption Elliptic Curve Digital Signature Algorithm (ECDSA) digital signatures Elliptic Curve Diffie–Hellman (ECDH) key agreement Secure Hash Algorithm 2 (SHA-256 and SHA-384) message digest General information NIST, Recommendation for Pair-Wise Key Establishment Schemes Using Discrete Logarithm Cryptography, Special Publication 800-56A Suite B Cryptography Standards , Suite B Certificate and Certificate Revocation List (CRL) Profile , Suite B Cryptographic Suites for Secure Shell (SSH) , Suite B Cryptographic Suites for IPsec , Suite B Profile for Transport Layer Security (TLS) These RFC have been downgraded to historic references per . History In December 2006, NSA submitted an Internet Draft on implementing Suite B as part of IPsec. This draft had been accepted for publication by IETF as RFC 4869, later made obsolete by RFC 6379. Certicom Corporation of Ontario, Canada, which was purchased by BlackBerry Limited in 2009, holds some elliptic curve patents, which have been licensed by NSA for United States government use. These include patents on ECMQV, but ECMQV has been dropped from Suite B. AES and SHA had been previously released and have no patent restrictions. See also RFC 6090. As of October 2012, CNSSP-15 stated that the 256-bit elliptic curve (specified in FIPS 186-2), SHA-256, and AES with 128-bit keys are sufficient for protecting classified information up to the Secret level, while the 384-bit elliptic curve (specified in FIPS 186-2), SHA-384, and AES with 256-bit keys are necessary for the protection of Top Secret information. However, as of August 2015, NSA indicated that only the Top Secret algorithm strengths should be used to protect all levels of classified information. In 2018 NSA withdrew Suite B in favor of the CNSA. Quantum resistant suite In August 2015, NSA announced that it is planning to transition "in the not too distant future" to a new cipher suite that is resistant to quantum attacks. "Unfortunately, the growth of elliptic curve use has bumped up against the fact of continued progress in the research on quantum computing, necessitating a re-evaluation of our cryptographic strategy." NSA advised: "For those partners and vendors that have not yet made the transition to Suite B algorithms, we recommend not making a significant expenditure to do so at this point but instead to prepare for the upcoming quantum resistant algorithm transition." New standards are estimated to be published around 2024. 
Algorithm implementation Using an algorithm suitable to encrypt information is not necessarily sufficient to properly protect information. If the algorithm is not executed within a secure device the encryption keys are vulnerable to disclosure. For this reason, the US federal government requires not only the use of NIST-validated encryption algorithms, but also that they be executed in a validated Hardware Security Module (HSM) that provides physical protection of the keys and, depending on the validation level, countermeasures against electronic attacks such as differential power analysis and other side-channel attacks. For example, using AES-256 within an FIPS 140-2 validated module is sufficient to encrypt only US Government sensitive, unclassified data. This same notion applies to the other algorithms. Commercial National Security Algorithm Suite The Suite B algorithms have been replaced by Commercial National Security Algorithm (CNSA) Suite algorithms: Advanced Encryption Standard (AES), per FIPS 197, using 256 bit keys to protect up to TOP SECRET Elliptic Curve Diffie-Hellman (ECDH) Key Exchange, per FIPS SP 800-56A, using Curve P-384 to protect up to TOP SECRET. Elliptic Curve Digital Signature Algorithm (ECDSA), per FIPS 186-4 Secure Hash Algorithm (SHA), per FIPS 180-4, using SHA-384 to protect up to TOP SECRET. Diffie-Hellman (DH) Key Exchange, per RFC 3526, minimum 3072-bit modulus to protect up to TOP SECRET RSA for key establishment (NIST SP 800-56B rev 1) and digital signatures (FIPS 186-4), minimum 3072-bit modulus to protect up to TOP SECRET See also NSA cryptography References Cryptography standards National Security Agency cryptography Standards of the United States
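To make the algorithm lists above more tangible, the following is a minimal sketch, not an accredited or approved implementation, showing one way the named primitives (ECDH over curve P-384, SHA-384, and AES-256 in GCM mode) can be combined using the third-party Python "cryptography" package; the key-derivation step, nonce handling and message are illustrative assumptions, and peer authentication and key management are omitted entirely.

    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    # Each side generates an ephemeral key pair on curve P-384.
    alice_priv = ec.generate_private_key(ec.SECP384R1())
    bob_priv = ec.generate_private_key(ec.SECP384R1())

    # ECDH key agreement: Alice combines her private key with Bob's public key.
    shared = alice_priv.exchange(ec.ECDH(), bob_priv.public_key())

    # Derive a 256-bit AES key from the shared secret using HKDF with SHA-384.
    aes_key = HKDF(algorithm=hashes.SHA384(), length=32, salt=None,
                   info=b"illustrative suite-b style session").derive(shared)

    # Authenticated encryption with AES-256-GCM.
    nonce = os.urandom(12)
    ciphertext = AESGCM(aes_key).encrypt(nonce, b"example plaintext", None)

    # The receiver, holding the same derived key, decrypts and verifies the tag.
    assert AESGCM(aes_key).decrypt(nonce, ciphertext, None) == b"example plaintext"

Note that Suite B and CNSA specify key establishment per NIST SP 800-56A rather than HKDF; the derivation step here is only a stand-in to keep the example self-contained.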
12360824
https://en.wikipedia.org/wiki/Eurobalise
Eurobalise
A Eurobalise is a specific variant of a balise, a transponder placed between the rails of a railway. These balises constitute an integral part of the European Train Control System, where they serve as "beacons" giving the exact location of a train as well as transmitting signalling information in a digital telegram to the train. Overview A balise typically needs no power source. In response to radio frequency energy broadcast by a Balise Transmission Module (BTM) mounted under a passing train, the balise transmits information to the train ('Uplink'). The provisions for Eurobalises to receive information from the train ('Downlink') have been removed from the specification. The transmission rate is sufficient to transmit at least 3 copies of a 'telegram' to be received by a train passing at any speed up to 500 km/h. Eurobalises are typically placed in pairs on two sleepers in the centre of the track. For ETCS they are typically spaced 3 metres apart. Because the balises are numbered, the train knows whether it is travelling in the nominal (1→2) or reverse (2→1) direction. Single balises exist only when linked to a previous balise group or when their function is reduced to providing only the exact position. There may be up to 8 balises in a balise group. Balises are differentiated as being either a 'Fixed Data Balise', transmitting the same data to every train, or a 'Transparent Data Balise', which transmits variable data and is also called a 'Switchable' or 'Controllable Balise'. (Note that the word 'fixed' refers to the information transmitted by the balise, not to its physical location. All balises are immobile.) Fixed Data Balise A 'Fixed Data Balise', or 'fixed balise' for short, is programmed to transmit the same data to every train. Information transmitted by a fixed balise typically includes: the location of the balise; the geometry of the line, such as curves and gradients; and any speed restrictions. The programming is performed using a wireless programming device. Thus a fixed balise can notify a train of its exact location and the distance to the next signal, and can warn of any speed restrictions. Transparent Data Balise A 'Transparent Data Balise', or 'controllable balise' for short, is connected to a Lineside Electronics Unit (LEU), which transmits dynamic data to the train, such as signal indications. Balises forming part of an ETCS Level 1 signalling system employ this capability. The LEU integrates with the conventional (national) signal system either by connecting to the lineside railway signal or to the signalling control tower. Euroloop A balise transmits telegrams at a specific site. To allow continuous transmission, the telegrams may be sent along a leaky feeder cable up to 1000 metres long. The Euroloop cable is always connected to a balise at its end, which serves as the End of Loop Marker (EOLM). The telegram structure is the same as for the balise it is connected to. Originally the Euroloop used the same frequency as the Eurobalises, but that was changed for specification 2.0.1 in September 2004. Euroloops have been used in Switzerland, which completed the change in July 2010. Modulation The downlink uses amplitude modulation on the 27.095 MHz frequency. This frequency is used to power the passive balises (it is the intermediate channel 11A in CB radio). The uplink uses frequency-shift keying with 3.951 MHz for a logical '0' and 4.516 MHz for a logical '1'. The data rate of 564.48 kbit/s is enough to transmit 3 copies of a telegram to a train passing at 500 km/h.
The Euroloop frequency was moved to a centre of 13.54750 MHz (exactly half of the Eurobalise power frequency). In a practical setup the BTM requires 65 Watt to power the Eurobalises and to receive the telegrams with the BTM mounted above top of rail on a bogie. Encoding Each pair of balises usually consists of a switchable balise and a fixed balise. A balise transmits a 'telegram' of either 1023 bits (93*11) or 341 bits (31*11) in the channel encoding with 11 bit per symbol. The user data block is cut into 10-bit user symbols before the scrambling and shaping operation - the effective payload of signalling information is 830 bit (83*10) for the long telegram and 210 bit (21*10) for the short telegram. The final telegram consists of shaped data (913 bit or 231 bit) containing the payload (830 or 210 bit) control bits (Cb, 3 bit) scrambling bits (Sb, 12 bit) extra shaping bits (Esb, 10 bit) checksum (CheckBits, 85 bit) The telegram is broadcast in a cyclic manner as the train passes over the balise. To avoid transmission errors the payload is scrambled (avoiding burst errors), substituted with a symbol code of different Hamming distance, and a checksum is added for validity checks. Since the checksum is computed after the symbol substitution the telegram contains extra shaping bits to allow the resulting checksum bits to be filled up in a way that only valid symbols of the chosen channel code are in the telegram where each symbol has 11 bits. The payload data consists of a header followed by multiple packets defined in the ERTMS protocols. Typical packets are: Packet 5 - Linking Packet 12 - Movement Authority Packet 21 - Gradient Profile Packet 27 - International Static Speed Profile Packet 255 - End of information Many applications include optional packets like Packet 3 - National Values, Packet 41 - Level Transition Order, and Packet 136 - Infill Location Reference. If the telegram maximum of 830 bits is reached then more packets can be sent in the following balises of the same balise group - with up to 8 balises in a balise group the maximum ERTMS message per balise group can encompass 8 * 830 = 6640 bits (note that every telegram must contain a header and the trailer packet 255). A fixed balise transmits a stable message which typically can include the linking information, gradient profile, and speed profile. It may also contain track information such as route suitability data for different train types and axle load restrictions. Almost all packet types contain a parameter flagging whether its information is relevant for the "nominal" or "reverse" direction (or both). If a train sees balise 1 before balise 2 then it passes over the group in the nominal direction. Consequently, some packets may be dropped by the application software of the receiver if they are not designated for the relevant direction. The ERTMS header block of 50 bits contains the ETCS version, the current number and total count of balises within a balise group (up to 8 balises), a flag whether it is a copy (up to 4 copies) that increases chances for the receiver to see the telegram of the balise in a group, a serial number flagging whether the message has changed lately, a 10-bit country identifier along with the 14-bit balise group identifier allowing for a unique ID of every balise group. The linking information informs about the distance to the next balise group (one linking packet per direction) and the required train reaction if the next balise group is missed (e.g. train stop). 
The movement authority packet defines a maximum speed that may be used for a given maximum distance and maximum time - setting the maximum speed to zero will force the train to stop. The gradient profile may have a variable length based on the contained pairs of section length (scalar and number in the metric system) and section gradient (uphill/downhill flag and a number in %). Similarly the international static speed profile is given in a variable count of section parts with each part denoting the section length (number in meters - the scale is only given once at the start of the packet for all sections), the maximum speed (number * 5 km/h - allowed numbers are 0-120 i.e. some spare values are left over) and a flag if the speed restriction applies to the front or rear end of the train (possibly allowing for a delay). The trailer packet only contains its packet id with no parameters where 255 equals the state of all bits set in the 8-bit packet id field (11111111). Manufacture The history of ETCS has seen the formation of UNISIG (Union of Signalling Industry) in 1998 to promote the development of the system. The founding members were Alstom, Ansaldo, Bombardier, Invensys, Siemens and Thales. The group has ensured that Eurobalises may be made by several different companies; while the balises may vary in the details, they are manufactured to meet the same standards. The principal manufacturers of Eurobalises belong to a group of seven firms (Alstom, Ansaldo STS, Bombardier, Invensys, Siemens, Sigma-Digitek, Thales) within the UNIFE federation of railway suppliers. This group cooperated in developing the specifications for Eurobalises. Specifications for Eurobalises are governed by the European Railway Agency. Usage Eurobalises are not only used in the ETCS/ERTMS train protection system. There are alternative implementations that pick up on the telegram structure to encode only some packet types and adding additional specific information. ETCS trains may decode the telegrams possibly translating them like any other Class-B signalling information. It is also possible that a balise transmits telegrams for different systems allowing for a transitional phase from one variant to another as typically used when switching from a national train protection system to ETCS. The following automatic train protection systems are based on Eurobalises: ETCS – the European-wide train protection system Chinese Train Control System versions CTCS-2 and CTCS-3, used on high speed rail lines in China – a variant of the earlier Swiss Integra-Signum train protection system – a variant of the earlier Swiss ZUB 121 train protection system SCMT – an Italian train protection system TBL1+ – a train protection system used in Belgium GNT – the system to control tilting trains in Germany ZBS – a new rapid transit control system for the S-Bahn Berlin Eurobalises have also been used in Germany to transmit tilting instructions for curves to tilting trains while keeping the traditional train protection system. The original GNT (Geschwindigkeitsüberwachung Neigetechnik) from Siemens had used specific coupling coils in 1992 (ZUB 122) and it was switched to Eurobalises in 2005 (ZUB 262). The additional telegram packet types for tilting trains have been added to the Baseline 3 series of ETCS. History The direct predecessor of Eurobalises are the balises of the Ebicab train protection system. The Ebicab system was developed in Sweden (and Norway) by LMEricson and SRT. 
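The packet arithmetic described in the Encoding section (830- or 210-bit payloads built from 10-bit user symbols, up to 8 balises per group, static speed profile values coded as multiples of 5 km/h, and a nominal/reverse direction derived from the balise numbering) can be illustrated with a small sketch. This is only a toy model of the quantities named in the text, not an implementation of the SUBSET-036 bit layout, and the function names are invented for the example.

    # Toy model of a few numeric rules from the Encoding section above.
    LONG_PAYLOAD_BITS = 83 * 10    # 830 usable bits in a long telegram
    SHORT_PAYLOAD_BITS = 21 * 10   # 210 usable bits in a short telegram
    MAX_BALISES_PER_GROUP = 8

    def max_group_payload_bits(balises: int = MAX_BALISES_PER_GROUP) -> int:
        # Upper bound on user bits a balise group can carry (8 * 830 = 6640).
        return min(balises, MAX_BALISES_PER_GROUP) * LONG_PAYLOAD_BITS

    def decode_speed(code: int) -> int:
        # Static speed profile speeds are coded as code * 5 km/h, code 0..120.
        if not 0 <= code <= 120:
            raise ValueError("speed code out of range")
        return code * 5

    def travel_direction(first_seen: int, second_seen: int) -> str:
        # Seeing balise 1 before balise 2 means nominal direction, else reverse.
        return "nominal" if first_seen < second_seen else "reverse"

    print(max_group_payload_bits())   # 6640
    print(decode_speed(60))           # 300
    print(travel_direction(1, 2))     # nominal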
The Ebicab system was developed after a crash in Norway in 1975 (Tretten). Trial runs started in 1979, and in Norway the first line fully equipped with the system was operational in 1983. The adaptation of the Ebicab system in France is the KVB system. It had been developed after a crash in 1985 and it was deployed in the early 1990s on French lines. The name for the beacons: "balise" was however in use in the Ebicab system in the late 1970s. About the same time the idea came up to develop a common train protection system for Europe leading to the 91/440/EEC as of 29 July 1991. Since 1993 the organizational framework was in place to publish TSI standards. This allowed for the first drafts of the new technology and since 1996 the elements were tested by six railway operators which had joined the ERTMS user group. The Ebicab technology did already use the 27 MHz carrier frequency as well as putting the beacons in the center of the track. With Ebicab a single balise transmission had only 12 bit but it allowed for 2 to 5 balises in a balise group providing 24 to 80 bit of signalling information. Most of the patents on that encoding are held by GEC Alsthom. It was then up to ABB to extend the telegram size from 12 bit in EBICAB 700 to 180 bit in EBICAB 900 (after encoding 255 bit) as used in the Mediterranean Corridor in Spain. In that time Ansaldo adopted the balise type for the digital evolution of the Italian SCMT also becoming a second supplier for the balise type to other railways. These balise types were later collectively named KER balises from their usage in KVB, Ebicab and RSDD (Ripetizione Segnali Discontinua Digitale). Another source for the technology comes from the Siemens ZUB 100 family where they used coupling coils at the side of the tracks to augment the existing train protection system with additional signalling. The first ZUB 111 beacon did just allow for 21 states (using 2 out of 7 frequencies). The successor ZUB 122 switched to a digital telegram modulated on an 850 kHz carrier. The latter was used first in the for Switzerland since 1992 and for Denmark since 1992. The telegram types of these systems are compatible with the ORE A46 specification for the German LZB telegrams (about 83 bits). Siemens published a report showing the advantages of the balise technology for railway operations in 1992 and in the fall of 1995 they delivered prototypes of Siemens type S21 Eurobalise. ABB, Alsthom and Ansaldo did also cooperate in the development and the S21 balise along with other Eurobalise prototypes were tested from July to October 1996 at the Velim railway test circuit and the Austrian railways test lab (Forschungs- und Prüfzentrum Arsenal). The Eurobalise FFFIS (Form Fit Function Interface Specification) was introduced to the ERMTS range of specifications as SUBSET-036. Its foreword describes the specification to be based on the results of EUROSIG consortium (ACEC Transport, Adtranz Signal, Alcatel SEL, GE C Alsthom Transport, Ansaldo Trasporti, CSEE Transport, SASIB Railway, Siemens, and Westinghouse Signal) that got financial support from the European Commission. The EUROSIG formed after the initial Eurobalise/Euroloop Project 92/94 leading into the actual ERTMS/EUROSIG Project 95/98 supported by the parallel EMSET Project 96/00 (testing the Eurocab specification). When the EUROSIG project had ended the ETCS was still not ready for real world application. 
So 1998 saw the formation of UNISIG (Union of Signalling Industry), including Alstom, Ansaldo, Siemens, Bombardier, Invensys and Thales which were to take over the finalisation of the standard. The first baseline specification has been tested by six railways since 1999 as part of the European Rail Traffic Management System The railway companies defined some extended requirements that were added to ETCS including telegram packet types for RBC-Handover and track profile information - the resulting Class 1 Version 2.0.0 specification of ETCS was then published in April 2000. References External links . Description of various telegram / packet types that Eurobalises can send. (1.12MB) Train protection systems
2161221
https://en.wikipedia.org/wiki/Extreme%20Warfare
Extreme Warfare
Extreme Warfare is a series of professional wrestling management text simulators created by British programmer Adam Ryland for the PC since 1995. The latest in the series is Total Extreme Wrestling 2020, which was released on May 15, 2020. Extreme Warfare Revenge 4.0 was released in 2002 on computer text simulator. Games in the series Classic Extreme Warfare Adam Ryland originally developed Extreme Warfare as a collectible card game with a wrestling theme. Due to complexity and set up time it was decided a computer format would be more suitable. The first Extreme Warfare on the PC (now called Extreme Warfare 1) was programmed in 1995 in QBasic. This game was a simple simulator, where one could decide what matches were to take place and who was going to win them but also involved some simple financial elements, such as the wages of wrestlers. Due to limitations in QBasic, Ryland moved the series over to Turbo Pascal where further incarnations of the game were created, including: Extreme Warfare 2, Extreme Warfare 2000, Extreme Warfare 2001, Extreme Warfare 2002, Extreme Warfare 5000, Extreme Warfare 6000, Extreme Warfare 7500, and Extreme Warfare 9000. Each version of the game was an upgrade of the previous and continually built on the ideas of booking matches and running the business side of a professional wrestling promotion. After release of EW 9000, a game called Promotion Wars was released by fellow British programmer Adam Jennings, taking some inspiration from both Extreme Warfare 9000 and Championship Manager. After the game's release, some of Extreme Warfare's fan base shifted their interest over to this game when released in October 2000. Extreme Warfare Deluxe On April 1, 2001, Extreme Warfare Deluxe (EWD) was released. It was the first game in a while to be built by scratch instead of an upgrade of the previous games. EWD expanded on the previous games in terms of the actual game world. The game world was expanded in that everyone in the database can now be hired by any promotion, unlike previous games in which WWF superstars can only be hired by the WWF, with the same applying for WCW and ECW. This helped to bring more competition between promotions, which now had their own artificial intelligence. Also included in EWD was the match report screen which featured stats about the match quality, crowd reaction and worker effort of the match along with an overall rating. This setup would end up being the basis of all match report screens in later games in the series up to and including TEW 2004. Initially, Ryland stated that Deluxe was going to be the final game of the series but shortly afterwards, he changed his mind and began work on a new Extreme Warfare game. With the limitations of Turbo Pascal now pushing the game to the limit, Ryland decided in October 2001 to start work on a brand new game in the EW series. Extreme Warfare Revenge Extreme Warfare Revenge (EWR) was released on June 15, 2002. Now programmed in Visual Basic, the series now took a Windows style interface. One of the most significant changes this game took to the series was the fact that everything on a wrestling event is under the control of the user. In previous games in the series, angles, finishes and (in EWD) interviews were randomly created. This also coincided with the new feud system that was to count the matches, angles and interview victories between the workers involved. 
The match reports also took a slight change, featuring reviews of the matches from such Internet columnists as Scott Keith instead of a straight play-by-play style. However, the report style would revert to its old style in TEW 2004. Another major feature that changed the way the game was played was the way the game world was represented. Unlike the previous games in which it was mostly focused on the major promotions such as the WWF, WCW, and ECW the promotion size feature meant many promotions in North America could now be included from the global sized promotions like WWE to the cult sized promotions like ROH to a mere backyard federation. From June 2002 to July 2003, the game has had some significant upgrades and new versions of the game were released. Some of these changes included changes to the TV timeslot system where the more further away from a prime time slot a televised event is shown, the fewer segments the user gets to book with. The Internet feature was also increased to include a website based on the independent promotions, a website based on backstage gossip and a website for your promotion. Relationships between workers were added to help bring in backstage politics where people are more willing put over their friends and less with their enemies. Eventually workers could also be in multiple tag teams with a statistic for experience which increases with each match fought together. Gimmicks were then added for wrestlers to use which would affect the overness of a worker over how strong that gimmick was. More changes were made to adapt to the independent promotions. This included multiple open contracts for workers, enabling them to work in up to three promotions and the ability of workers to go on Japanese tours, affecting the booking of cards. The optional ability of viewing a wrestler's picture was also added later in the game's production. Due to the size of the game, Ryland felt that in order to include new features and upgrades a completely new game would have to be programmed from scratch. With this task taking quite a lot of his time, Ryland decided to turn his hobby into a commercial venture, signing a contract with simulator game company .400 Software Studios to produce a new commercial game. Total Extreme Warfare 2004 Total Extreme Warfare 2004 (TEW 2004) was released on March 31, 2004 under .400 Software Studios. The game was distributed by downloading on the Internet after purchase, (using ELicense). A full working trial was also available for download which originally would expire after a single day but was replaced by a trial that makes the user able to play one game month unlimited times. Along with a new professional layout, the game had more features. While the previous games only focused on the wrestling scene of North America (Japan was featured in later versions of EWR but not playable), TEW 2004 expanded the world to include such areas as Japan, Mexico, the United Kingdom and Australia. With this, each worker's overness was now expanded from EWR's single value to a series of values depending on areas in the world. The AI was changed in that now the user could now see what matches other promotions have booked, other promotions' financial details and what deals they have made. More contract clauses such as medical coverage and travel expenditure being included, contracts deal decisions were now made over time rather than immediate. 
Inspired by some fans playing against each other using WWE brands by sending files to each other through the Internet, a multi-player feature was added to make users play against each other with different promotions. Booking was also improved in that not only could the user edit the card more easily, the booking was now time-based, meaning such anomalies as booking 11-hour-long Iron Man matches on a two-hour shows would no longer be possible. The game was also more customisable than before with new editing modes as Create-A-Match and Create-A-Gimmick. Due to the problem of copyright issues by going commercial, the series turned from using stats of the real wrestling world to a fictitious wrestling world called the CornellVerse. This world is named after the character of Tommy Cornell, one of the most influential people and best wrestlers in the CornellVerse, based on a character Ryland had created a few years earlier while participating in e-federations. On June 14, 2004, the game was renamed Total Extreme Wrestling 2004 to help distinguish the new TEW series from the earlier EWR series. Due to undisclosed reasons, Ryland moved from .400 Software Studios to another simulator game company, Grey Dog Software. His first game created there however was not another Extreme Warfare game, instead the first Wrestling Spirit game. Due to .400 Software Studio's closure on January 1, 2006, the game was taken off the market permanently. There are currently no plans to make this game freeware or shareware. Total Extreme Wrestling 2005 The sequel to TEW 2004, Total Extreme Wrestling 2005 (TEW 2005) was released on October 6, 2005 under Grey Dog Software. A demo was also released in advance on September 29, 2005, allowing the user to play one game month just like previous demos. TEW 2005 included some more new features. Advance booking was one example which helped to promote upcoming big events. Televised shows also improved, bringing both competition to the shows with non-wrestling shows along with multiple television deals around the world for one show. The pay-per-view feature was now very similar to television in that there's now a list of pay-per-view providers which the user must make a deal with to get their pay-per-view provided. A momentum meter was also added to the wrestlers to bring in more realism in that if they give great matches, cut good interviews and participate in angles, it will increase and thus gain more overness. This helped to prevent the user from booking the same over people all the time and expect good ratings. The booking also improved in that the match purpose feature from EWR has returned and enhanced. The user must now talk to road agents about how the match has to be set up, including ways of putting people over, burying a worker and the way an actual match needs to be performed. TEW 2005 also made more features customizable with its new editable statistics for angles, storylines, locations and injuries. Its angle editor consisted of many different types such as interviews to beatdowns to celebrations and uses up to six people to participate in various roles. The storyline editor takes these angles and places them in an order the booker will need to comply to. The storyline editor was created by Phil Parent, using Georges Polti's book The Thirty-Six Dramatic Situations as an inspiration. Also included was the "grades" feature. 
Instead of having an exact view of each wrestler's stats and how they change, a more realistic grade feature was added to make the user rely on instinct for crucial decisions. TEW 2005 became freeware on July 1, 2009. Total Extreme Wrestling 2007 Total Extreme Wrestling 2007 (TEW 2007) was officially released on December 29, 2006, with a number of new features. Whereas both TEW 2004 and TEW 2005 were written from scratch, TEW 2007 was built on top of TEW 2005's source code. There were many new features, such as the ability to customise merchandise and a large number of new contract types (short-term, etc.). Total Extreme Wrestling 2008 A new installment of the series, Total Extreme Wrestling 2008 (TEW 2008), was announced on the Grey Dog Software website on January 1, 2008. The game is largely based on TEW 2007, but Ryland made more than 100 changes and additions, and it allows players to import and convert their TEW 2007 databases. The demo was released on June 1, 2008, and the full game on June 7, 2008. Total Extreme Wrestling 2010 In late 2009, it was announced that Total Extreme Wrestling 2010 (TEW 2010) would be released in early 2010. New features announced included a revamp of backstage morale and several changes to improve the interface and to reduce the time it takes to navigate through the game and book a show. On January 20, 2010, Adam Ryland released the demo for Total Extreme Wrestling 2010; the official release followed on January 25, 2010. Total Extreme Wrestling 2013 At the end of July 2012, it was announced that Total Extreme Wrestling 2013 (TEW 2013) would be released in December 2012. New features announced included an Autobooker, Fog of War, Tribute Shows, Shoot Interviews, Legacies, and several other changes intended to make the game more realistic and to open up more options in the database. Total Extreme Wrestling 2013 was released on December 16, 2012, with the demo version available on December 9. Total Extreme Wrestling 2016 On January 8, 2016, it was announced that Total Extreme Wrestling 2016 (TEW 2016) was in development. The developer's journal announced that the game would feature elements adding more realism, including backstage cliques. On April 1, 2016, the last day of developer's journal updates, TEW 2016 was announced to have a demo release date of April 25, with a full release on May 2. While in previous versions of the demo the player could only play through January of the game's titular year, it was announced that TEW 2016's demo would allow players to play through both January and February of any year. 
The developer's journal, beginning on the day of the announcement, was split into multiple phases, the first phase being announcements of new and returning features which had already been added beginning December 10, 2018 and ending July 12, 2019, while the second phase, which began July 29, 2019 and is currently ongoing, is a "live" journal discussing features currently being worked on and the current level of completion of the default game world, "The Cornellverse". Some of the highly-requested features added include playing as a company's developmental territory and giving the player more control over house shows. Another feature introduced was that of attributes, replacing the old "personality" and "lifestyle" systems. In an interview with My Games Lounge, Ryland said that "it was nice to do a spring clean of the code" as re-writing the code is what allowed him to add so many new features. On March 28, 2020 the release date of TEW 2020 was announced as April 23 for the trial version, and April 30 for the full retail version. On April 25, the game's release date was pushed back to May 14. References Further reading List of critics' reviews for Total Extreme Wrestling 2004 External links Grey Dog Software Video game franchises DOS games Windows games Professional wrestling games Fantasy wrestling Sports management video games Video games developed in the United Kingdom
7220
https://en.wikipedia.org/wiki/Common%20Gateway%20Interface
Common Gateway Interface
In computing, Common Gateway Interface (CGI) is an interface specification that enables web servers to execute an external program, typically to process user requests. Such programs are often written in a scripting language and are commonly referred to as CGI scripts, but they may include compiled programs. A typical use case occurs when a Web user submits a Web form on a web page that uses CGI. The form's data is sent to the Web server within an HTTP request with a URL denoting a CGI script. The Web server then launches the CGI script in a new computer process, passing the form data to it. The output of the CGI script, usually in the form of HTML, is returned by the script to the Web server, and the server relays it back to the browser as its response to the browser's request. Developed in the early 1990s, CGI was the earliest common method available that allowed a Web page to be interactive. Although still in use, CGI is relatively inefficient compared to newer technologies and has largely been replaced by them. History In 1993 the National Center for Supercomputing Applications (NCSA) team wrote the specification for calling command line executables on the www-talk mailing list. The other Web server developers adopted it, and it has been a standard for Web servers ever since. A work group chaired by Ken Coar started in November 1997 to get the NCSA definition of CGI more formally defined. This work resulted in RFC 3875, which specified CGI Version 1.1. Specifically mentioned in the RFC are the following contributors: Rob McCool (author of the NCSA HTTPd Web Server) John Franks (author of the GN Web Server) Ari Luotonen (the developer of the CERN httpd Web Server) Tony Sanders (author of the Plexus Web Server) George Phillips (Web server maintainer at the University of British Columbia) Historically CGI programs were often written using the C language. RFC 3875 "The Common Gateway Interface (CGI)" partially defines CGI using C, in saying that environment variables "are accessed by the C library routine getenv() or variable environ". The name CGI comes from the early days of the Web, where Web masters wanted to connect legacy information systems such as databases to their Web servers. The CGI program was executed by the server that provided a common "gateway" between the Web server and the legacy information system. Purpose of the CGI specification Each Web server runs HTTP server software, which responds to requests from web browsers. Generally, the HTTP server has a directory (folder), which is designated as a document collection – files that can be sent to Web browsers connected to this server. For example, if the Web server has the domain name example.com, and its document collection is stored at /usr/local/apache/htdocs/ in the local file system, then the Web server will respond to a request for http://example.com/index.html by sending to the browser the (pre-written) file /usr/local/apache/htdocs/index.html. For pages constructed on the fly, the server software may defer requests to separate programs and relay the results to the requesting client (usually, a Web browser that displays the page to the end user). In the early days of the Web, such programs were usually small and written in a scripting language; hence, they were known as scripts. Such programs usually require some additional information to be specified with the request. 
For instance, if Wikipedia were implemented as a script, one thing the script would need to know is whether the user is logged in and, if logged in, under which name. The content at the top of a Wikipedia page depends on this information. HTTP provides ways for browsers to pass such information to the Web server, e.g. as part of the URL. The server software must then pass this information through to the script somehow. Conversely, upon returning, the script must provide all the information required by HTTP for a response to the request: the HTTP status of the request, the document content (if available), the document type (e.g. HTML, PDF, or plain text), et cetera. Initially, different server software would use different ways to exchange this information with scripts. As a result, it wasn't possible to write scripts that would work unmodified for different server software, even though the information being exchanged was the same. Therefore, it was decided to specify a way for exchanging this information: CGI (the Common Gateway Interface, as it defines a common way for server software to interface with scripts). Webpage generating programs invoked by server software that operate according to the CGI specification are known as CGI scripts. This specification was quickly adopted and is still supported by all well-known server software, such as Apache, IIS, and (with an extension) node.js-based servers. An early use of CGI scripts was to process forms. In the beginning of HTML, HTML forms typically had an "action" attribute and a button designated as the "submit" button. When the submit button is pushed the URI specified in the "action" attribute would be sent to the server with the data from the form sent as a query string. If the "action" specifies a CGI script then the CGI script would be executed and it then produces an HTML page. Using CGI scripts A Web server allows its owner to configure which URLs shall be handled by which CGI scripts. This is usually done by marking a new directory within the document collection as containing CGI scripts – its name is often cgi-bin. For example, /usr/local/apache/htdocs/cgi-bin could be designated as a CGI directory on the Web server. When a Web browser requests a URL that points to a file within the CGI directory (e.g., http://example.com/cgi-bin/printenv.pl/with/additional/path?and=a&query=string), then, instead of simply sending that file (/usr/local/apache/htdocs/cgi-bin/printenv.pl) to the Web browser, the HTTP server runs the specified script and passes the output of the script to the Web browser. That is, anything that the script sends to standard output is passed to the Web client instead of being shown on-screen in a terminal window. As remarked above, the CGI specification defines how additional information passed with the request is passed to the script. For instance, if a slash and additional directory name(s) are appended to the URL immediately after the name of the script (in this example, /with/additional/path), then that path is stored in the PATH_INFO environment variable before the script is called. If parameters are sent to the script via an HTTP GET request (a question mark appended to the URL, followed by param=value pairs; in the example, ?and=a&query=string), then those parameters are stored in the QUERY_STRING environment variable before the script is called. If parameters are sent to the script via an HTTP POST request, they are passed to the script's standard input. 
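To make the conventions just described concrete, the following is a minimal, illustrative sketch (not taken from the CGI specification or from any particular server's documentation) of a Python 3 script that reads the request method, the extra path, the query string and, for POST requests, the request body, using only the standard library:

#!/usr/bin/env python3
# Illustrative sketch: reading CGI request data directly from the
# environment variables and standard input described above.
import os
import sys
from urllib.parse import parse_qs

method = os.environ.get("REQUEST_METHOD", "GET")
extra_path = os.environ.get("PATH_INFO", "")            # e.g. /with/additional/path
params = parse_qs(os.environ.get("QUERY_STRING", ""))   # e.g. {'and': ['a'], 'query': ['string']}

body = ""
if method == "POST":
    length = int(os.environ.get("CONTENT_LENGTH") or 0)
    body = sys.stdin.read(length)                       # form data arrives on standard input

# A CGI response is a header block, a blank line, then the document itself.
print("Content-Type: text/plain")
print()
print("Method:", method)
print("Extra path:", extra_path)
print("Query parameters:", params)
print("Body:", body)

Run under any CGI-capable server, such a script would show, for the example URL above, the extra path /with/additional/path and the parsed query parameters; the Perl and Python examples that follow in this article work on the same principle.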
The script can then read these environment variables or data from standard input and adapt to the Web browser's request. Example The following Perl program shows all the environment variables passed by the Web server:

#!/usr/bin/env perl

=head1 DESCRIPTION

printenv — a CGI program that just prints its environment

=cut

print "Content-Type: text/plain\n\n";

for my $var ( sort keys %ENV ) {
    printf "%s=\"%s\"\n", $var, $ENV{$var};
}

If a Web browser issues a request for the environment variables at http://example.com/cgi-bin/printenv.pl/foo/bar?var1=value1&var2=with%20percent%20encoding, a 64-bit Windows 7 Web server running cygwin returns a listing of these environment variables. Some, but not all, of these variables are defined by the CGI standard. Some, such as PATH_INFO, QUERY_STRING, and the ones starting with HTTP_, pass information along from the HTTP request. From the environment, it can be seen that the Web browser is Firefox running on a Windows 7 PC, the Web server is Apache running on a system that emulates Unix, and the CGI script is named cgi-bin/printenv.pl. The program could then generate any content, write that to standard output, and the Web server will transmit it to the browser. The following are environment variables passed to CGI programs:

Server specific variables:
SERVER_SOFTWARE: name/version of HTTP server.
SERVER_NAME: host name of the server, may be dot-decimal IP address.
GATEWAY_INTERFACE: CGI/version.

Request specific variables:
SERVER_PROTOCOL: HTTP/version.
SERVER_PORT: TCP port (decimal).
REQUEST_METHOD: name of HTTP method (see above).
PATH_INFO: path suffix, if appended to URL after program name and a slash.
PATH_TRANSLATED: corresponding full path as supposed by server, if PATH_INFO is present.
SCRIPT_NAME: relative path to the program, like /cgi-bin/script.cgi.
QUERY_STRING: the part of the URL after the ? character. The query string may be composed of name=value pairs separated with ampersands (such as var1=val1&var2=val2...) when used to submit form data transferred via the GET method as defined by HTML application/x-www-form-urlencoded.
REMOTE_HOST: host name of the client, unset if server did not perform such lookup.
REMOTE_ADDR: IP address of the client (dot-decimal).
AUTH_TYPE: identification type, if applicable.
REMOTE_USER: used for certain AUTH_TYPEs.
REMOTE_IDENT: see ident, only if server performed such lookup.
CONTENT_TYPE: Internet media type of input data if PUT or POST method are used, as provided via HTTP header.
CONTENT_LENGTH: similarly, size of input data (decimal, in octets) if provided via HTTP header.

Variables passed by the user agent (HTTP_ACCEPT, HTTP_ACCEPT_LANGUAGE, HTTP_USER_AGENT, HTTP_COOKIE and possibly others) contain values of the corresponding HTTP headers and therefore have the same sense. The program returns the result to the Web server in the form of standard output, beginning with a header and a blank line. The header is encoded in the same way as an HTTP header and must include the MIME type of the document returned. The headers, supplemented by the Web server, are generally forwarded with the response back to the user. Here is a simple CGI program written in Python 3, along with the HTML that invokes it, handling a simple addition problem. 
add.html:

<!DOCTYPE html>
<html>
  <body>
    <form action="add.cgi" method="POST">
      <fieldset>
        <legend>Enter two numbers to add</legend>
        <label>First Number: <input type="number" name="num1"></label><br/>
        <label>Second Number: <input type="number" name="num2"></label><br/>
      </fieldset>
      <button>Add</button>
    </form>
  </body>
</html>

add.cgi:

#!/usr/bin/env python3

import cgi, cgitb
cgitb.enable()

input_data = cgi.FieldStorage()

print('Content-Type: text/html')  # HTML is following
print('')                         # Leave a blank line
print('<h1>Addition Results</h1>')
try:
    num1 = int(input_data["num1"].value)
    num2 = int(input_data["num2"].value)
except:
    print('<output>Sorry, the script cannot turn your inputs into numbers (integers).</output>')
    raise SystemExit(1)
print('<output>{0} + {1} = {2}</output>'.format(num1, num2, num1 + num2))

This Python 3 CGI program gets the inputs from the HTML and adds the two numbers together. Deployment A Web server that supports CGI can be configured to interpret a URL that it serves as a reference to a CGI script. A common convention is to have a cgi-bin/ directory at the base of the directory tree and treat all executable files within this directory (and no other, for security) as CGI scripts. Another popular convention is to use filename extensions; for instance, if CGI scripts are consistently given the extension .cgi, the Web server can be configured to interpret all such files as CGI scripts. While convenient, and required by many prepackaged scripts, it opens the server to attack if a remote user can upload executable code with the proper extension. In the case of HTTP PUT or POSTs, the user-submitted data are provided to the program via the standard input. The Web server creates a subset of the environment variables passed to it and adds details pertinent to the HTTP environment. Uses CGI is often used to process input information from the user and produce the appropriate output. An example of a CGI program is one implementing a wiki. If the user agent requests the name of an entry, the Web server executes the CGI program. The CGI program retrieves the source of that entry's page (if one exists), transforms it into HTML, and prints the result. The Web server receives the output from the CGI program and transmits it to the user agent. Then if the user agent clicks the "Edit page" button, the CGI program populates an HTML textarea or other editing control with the page's contents. Finally if the user agent clicks the "Publish page" button, the CGI program transforms the updated HTML into the source of that entry's page and saves it. Security CGI programs run, by default, in the security context of the Web server. When first introduced a number of example scripts were provided with the reference distributions of the NCSA, Apache and CERN Web servers to show how shell scripts or C programs could be coded to make use of the new CGI. One such example script was a CGI program called PHF that implemented a simple phone book. In common with a number of other scripts at the time, this script made use of a function: escape_shell_cmd(). The function was supposed to sanitize its argument, which came from user input and then pass the input to the Unix shell, to be run in the security context of the Web server. The script did not correctly sanitize all input and allowed new lines to be passed to the shell, which effectively allowed multiple commands to be run. The results of these commands were then displayed on the Web server. 
If the security context of the Web server allowed it, malicious commands could be executed by attackers. This was the first widespread example of a new type of Web based attack, where unsanitized data from Web users could lead to execution of code on a Web server. Because the example code was installed by default, attacks were widespread and led to a number of security advisories in early 1996. Alternatives For each incoming HTTP request, a Web server creates a new CGI process for handling it and destroys the CGI process after the HTTP request has been handled. Creating and destroying a process can consume much more CPU and memory than the actual work of generating the output of the process, especially when the CGI program still needs to be interpreted by a virtual machine. For a high number of HTTP requests, the resulting workload can quickly overwhelm the Web server. The overhead involved in CGI process creation and destruction can be reduced by the following techniques: CGI programs precompiled to machine code, e.g. precompiled from C or C++ programs, rather than CGI programs interpreted by a virtual machine, e.g. Perl, PHP or Python programs. Web server extensions such as Apache modules (e.g. mod_perl, mod_php, mod_python), NSAPI plugins, and ISAPI plugins which allow long-running application processes handling more than one request and hosted within the Web server. Web 2.0 allows to transfer data from the client to the server without using HTML forms and without the user noticing. FastCGI, SCGI, and AJP which allow long-running application processes handling more than one request hosted externally to the Web server. Each application process listens on a socket; the Web server handles an HTTP request and sends it via another protocol (FastCGI, SCGI or AJP) to the socket only for dynamic content, while static content are usually handled directly by the Web server. This approach needs less application processes so consumes less memory than the Web server extension approach. And unlike converting an application program to a Web server extension, FastCGI, SCGI, and AJP application programs remain independent of the Web server. Jakarta EE runs Jakarta Servlet applications in a Web container to serve dynamic content and optionally static content which replaces the overhead of creating and destroying processes with the much lower overhead of creating and destroying threads. It also exposes the programmer to the library that comes with Java SE on which the version of Jakarta EE in use is based. The optimal configuration for any Web application depends on application-specific details, amount of traffic, and complexity of the transaction; these trade-offs need to be analyzed to determine the best implementation for a given task and time budget. Web frameworks offer an alternative to using CGI scripts to interact with user agents. See also CGI.pm DOS Gateway Interface (DGI) FastCGI Perl Web Server Gateway Interface Rack (web server interface) Server Side Includes Web Server Gateway Interface References External links GNU cgicc, a C++ class library for writing CGI applications CGI, a standard Perl module for CGI request parsing and HTML response generation CGI Programming 101: Learn CGI Today!, a CGI tutorial The Invention of CGI Servers (computing) Web 1.0 Web technology Network protocols Articles with example Python (programming language) code
62256066
https://en.wikipedia.org/wiki/Streamlabs
Streamlabs
Streamlabs (formerly TwitchAlerts) is a California-based software company founded in 2014. The company primarily distributes livestreaming software. Streamlabs was acquired by Logitech in 2019. Overview Streamlabs was founded in 2014 as TwitchAlerts, an Open Broadcaster Software-based program that allowed live streamers to add visual alerts to the screen triggered by viewer interaction such as new followers, subscribers, and donations. TwitchAlerts was later renamed Streamlabs in 2019, and the company renamed its livestreaming software from Streamlabs OBS to Streamlabs in late 2021; both re-brands were due to having no affiliation with their namesakes. Streamlabs also produces CrossClip, a video converter; Melon, a podcast streaming service; Oslo, a video editing tool; and Willow, a website builder. History Streamlabs was founded in 2014 as TwitchAlerts, but changed the name due to having no official affiliation with Twitch. Logitech purchased the company for $89 million on September 26, 2019. Criticism On November 16, 2021, Streamlabs released 'Streamlabs Studio', cloud capture software for the Xbox One, Xbox Series S, and Xbox Series X. After the release, the streaming service Lightstream accused Streamlabs of plagiarising its promotional materials, down to every marketing word and layout, comparing it to plagiarized homework. Later that same day, the OBS Studio team tweeted that Streamlabs used the name "OBS" for its products, giving the false appearance of being in partnership with them, despite OBS Studio having already denied Streamlabs permission to use the name when asked. The main software in question, Streamlabs OBS, had been considered a "hostile fork" (a fork made without the permission or consultation of the main project) of OBS by members of the libre software community prior to this controversy. OBS Studio's tweet resulted in Twitch streamers including Pokimane and Hasan Piker threatening a boycott of the product if changes were not made. Other companies, such as Elgato and 1UpCoin, have also spoken up on Twitter about Streamlabs copying their products. The company subsequently promised to remove "OBS" from the name of its product. Products Streamlabs Desktop (formerly Streamlabs OBS) is free and open-source streaming software that is based on a fork of OBS and employs Electron for its user interface. Streamlabs distributes its users' content over platforms such as Twitch, YouTube Live, and Facebook Live. Crossclip is a video converter website that allows users to convert, edit and share live streaming content across multiple platforms. Crossclip formats video in the correct dimensions for TikTok, Instagram, and YouTube Shorts. Willow is a link-in-bio tool that aims to help users increase revenue and make their links more discoverable. It includes a tipping feature and allows users to tip directly on the page. Melon is a browser-based live streaming studio. Users can broadcast their live streams to Twitch, YouTube, Facebook, LinkedIn, or a custom RTMP destination. Oslo is a tool for video review and collaboration. Users can upload and share projects in the cloud, and Oslo's project management and annotation tools provide ways for teams to receive and review feedback, as well as upload videos directly to YouTube. Streamlabs Charity is a free fundraising platform that assists charities in raising funds and connecting with streamers. 
Excluding standard processing fees, the platform takes no cut from donations, allowing everything to go to charity. References External links 2019 mergers and acquisitions C++ software Cross-platform free software Free and open-source software Free software programmed in C++ Livestreaming software Logitech products Screencasting software Software that uses FFmpeg Software that uses Qt Streaming software Technology companies established in 2014 Video recording software Windows software
22897286
https://en.wikipedia.org/wiki/F9%20Financial%20Reporting
F9 Financial Reporting
F9 is a financial reporting software application that dynamically links general ledger data to Microsoft Excel through the use of financial cell-based formulas, wizards, and analysis tools to create spreadsheet reports that can be calculated, filtered, and drilled upon. The F9 software is developed, marketed, and support by an organization also called F9, a division of Infor Global Solutions (Canada) Ltd. which is headquartered in Vancouver, British Columbia. History F9 - The Financial Reporter was originally developed by Synex Systems Corporation, a subsidiary of Synex International (Symbol SXI, TSX). First announced in 1988 as Acclink for Accpac as a Lotus 1-2-3 Add-in for DOS and released under F9 name later in 1989. Subsequently F9 was developed for the Microsoft Excel Spreadsheet Platform. F9 was developed to allow a non-technical user, typically an accountant, to create a dynamic, customized general ledger financial report using a spreadsheet that is 'hot-linked' to an accounting system's general ledger. Initially, the user interface used the same syntax as Accpac for specifying the reporting period. This was soon replaced by a simpler to understand and more flexible generic natural language interface that used a temporal trinary (three part) phrase parsing syntax composed of a modifier, a period specifier, and a temporal index. For example: "starting balance last quarter" is broken down to 'starting balance' (modifier) + 'last' (temporal index) + 'quarter' (period). The temporal index can be relative or absolute and the modifier can determine if the value returned is differential or cumulative. The first F9 addin was a significant software effort in that it used a coding trick to break the small memory model limit 1-2-3 imposed on addins and allowed F9 to be run as a compact memory model program. This allowed F9 to be written in C (using a Microsoft C DOS compiler) rather than assembler allowing easier changes and debugging. An F9 addin was developed for Excel in 1989 and with the lack of a 1-2-3 version that supported Windows and problems with the Lotus Programming Language (LPL) the Excel version of F9 soon far outsold the 1-2-3 version. On or about the year 2002 F9 was renamed 'F9 - Financial Intelligence.' In 2002, Synex Systems was acquired by privately owned Lasata Software of Perth, Australia. In 2005, Lasata was acquired by UK based Systems Union. In 2007, Systems Union was acquired by privately held Infor Global Solutions, a U.S. company that specializes in enterprise software. What was Synex Systems Corporation now operates as an independent business unit (IBU) within Infor Global Solutions called F9. As of 2012 F9 was used by over 30,000 financial accounting professionals in more than 20 countries worldwide and was named one of Accounting Today's "Top 100 Software Products" for 2001. Features Financial Spreadsheet Formulas Use of natural language (English) accounting period specifiers Dynamic Report Filters Report Wizards Table and Pivot Table Reporting Drilldown to Transactions Budget Reporting and Budget Write Back Consolidations Report Analysis Web reporting and dashboards Over 150 different accounting/ERP systems are supported. Competition The major competitor of F9 is Microsoft's FRx. However starting in 2011 FRx is now in a state of phased discontinuation. and Microsoft’s official FRx replacement is Management Reporter. Other reporting systems competing with F9 are Alchemex, Bizinsight, Renovofyi and SmartView by Solution 7. 
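As an aside to the period-specifier syntax described in the History section above, the three-part phrase structure (modifier + temporal index + period) can be sketched in a few lines of code. The vocabulary and function below are purely hypothetical illustrations and are not taken from the actual F9 product, whose real grammar is considerably richer:

# Hypothetical sketch of the three-part period-specifier parsing described above.
# The word lists are invented for illustration only.
MODIFIERS = {"starting balance", "ending balance", "net change"}
TEMPORAL_INDEXES = {"this", "last", "next"}
PERIODS = {"month", "quarter", "year"}

def parse_period_phrase(phrase):
    words = phrase.lower().split()
    modifier = " ".join(words[:-2])   # e.g. "starting balance"
    index = words[-2]                 # e.g. "last"
    period = words[-1]                # e.g. "quarter"
    if modifier not in MODIFIERS or index not in TEMPORAL_INDEXES or period not in PERIODS:
        raise ValueError("unrecognised period phrase: " + phrase)
    return {"modifier": modifier, "index": index, "period": period}

print(parse_period_phrase("starting balance last quarter"))
# {'modifier': 'starting balance', 'index': 'last', 'period': 'quarter'}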
See also List of companies of Canada List of ERP software packages CYMA (software) References External links F9 official site Infor Global Solutions K2 Enterprise Review of F9 CYMA Accounting Software Reporting software
900144
https://en.wikipedia.org/wiki/Advanced%20Weather%20Interactive%20Processing%20System
Advanced Weather Interactive Processing System
The Advanced Weather Interactive Processing System (AWIPS) is a technologically advanced processing, display, and telecommunications system that is the cornerstone of the United States National Weather Service's (NWS) operations. AWIPS is a complex network of systems that ingests and integrates meteorological, hydrological, satellite, and radar data, and also processes and distributes the data to 135 Weather Forecast Offices (WFOs) and River Forecast Centers (RFCs) nationwide. Weather forecasters utilize the capabilities of AWIPS to make increasingly accurate weather, water, and climate predictions, and to dispense rapid, highly reliable warnings and advisories. The AWIPS system architectural design is driven by expandability, flexibility, availability, and portability. The system is easily expandable to allow for the introduction of new functionality and the augmentation of network and processing capabilities. AWIPS is designed so that software and data can be migrated to new platforms as technology evolves. History AWIPS replaced the Automation of Field Operations and Services (AFOS) system which had become obsolete and was very difficult to maintain. AWIPS was originally developed and maintained by PRC, Inc (later acquired by Northrop Grumman Information Technology) with installation completed in 1998. Since 2005, Raytheon has been NWS’ partner for the operations, maintenance and evolution of AWIPS, providing the integrated mission services required to sustain and enhance system performance. It is a five-year contract with five one-year award terms for a potential maximum 10-year contract. Teaming with Raytheon are Keane Federal Systems, Globecomm Systems Inc., GTSI Corp., ENSCO, Reston Consulting Group, Fairfield Technologies, Centuria Corporation, and Earth Resources Technology. Together they provide software operations and maintenance, software development, hardware maintenance and logistics, commercial off-the-shelf software maintenance, satellite communications, and network monitoring and control. Evolution As the architect of the AWIPS evolution, Raytheon designed, developed, and released the system's next-generation software known as AWIPS II. AWIPS II, which features a new service oriented architecture (SOA) began roll-out in late 2011. This new system simplified code and consequently strengthened system performance while reducing the maintenance burden. All of this is achieved while retaining a system look and feel that makes the AWIPS evolution appear similar to the user. The AWIPS program office is working in conjunction with the National Centers for Environmental Prediction (NCEP) to incorporate the NAWIPS baseline software used by the NCEP centers, National Hurricane Center (NHC) / Aviation Weather Center (AWC) / Storm Prediction Center (SPC) as well as the Weather Prediction Center (formerly Hydrometeorological Prediction Center) and Ocean Prediction Center (WPC and OPC) into the AWIPSII baseline. The commissioning of a new AWIPS site, the first since the Huntsville Weather Forecast Office in 2002, will also be part of this effort at the Space Weather Prediction Center (SWPC) in Boulder, CO. See also WarnGen, the component of AWIPS used to issue watches, warnings, and advisories. First Warning, a computer-based alerting system used to disseminate watches, warnings and advisories via broadcast television. References Ferris, Nancy (2000). "Advanced Weather System", Government Executive. National Weather Service Graphic software in meteorology Computer workstations
19025129
https://en.wikipedia.org/wiki/Government%20College%20of%20Engineering%20and%20Leather%20Technology
Government College of Engineering and Leather Technology
{{Infobox college | name = Government College of Engineering and Leather Technology | native_name = প্রকৌশল ও চামড়া প্রযুক্তির সরকারি মহাবিদ্যালয় | image = Government_College_of_Engineering_and_Leather_Technology.jpg | established = 1919 | founder = Prof. Rai Bahadur B.M. Das (Prof. B.M. Das) | type = Government Engineering College | academic_affiliation = Maulana Abul Kalam Azad University of Technology (formerly known as WBUT) | free_label = Approvals | free = AICTE | principal = Sanjoy Chakroborty (Officer-in-charge) | location = Block – LB 11, Sector-III, Salt Lake, West Bengal, Kolkata-700106 | faculty = 50 | undergrad = 650 | postgrad = 50 | campus = Urban | website = | former_names = Calcutta Research Tannery Bengal Tanning Institute College of Leather Technology }} The Government College of Engineering and Leather Technology, often referred to as GCELT, is an institute offering engineering courses at the undergraduate levels in Computer Science & Engineering, Information Technology and Leather Technology, diploma in shoes and leather goods making and M.tech in Leather technology. The college is affiliated to the Maulana Abul Kalam Azad University of Technology (formerly known as West Bengal University of Technology) and is approved by AICTE. History The Government College of Engineering and Leather Technology (GCELT) was established in year 1919 on being recommended by the Munitions Board after the First World War to use indigenous resources of hides, skins and tanning materials for producing leather goods and the development of leather industry in India. It was known as Calcutta Research Tannery and was renamed to Bengal Tanning Institute in 1926. The institute became affiliated to Calcutta University and introduced a Certificate Course in tanning. In 1955, the B.Sc. (Tech) course in Leather Technology was introduced. The name of the institute was changed in 1958 to College of Leather Technology'''. Calcutta Research Tannery as well as Bengal Tanning Institute was situated at the campus of Canal South Road, Beleghata (presently that place is the campus of RCCIIT college). In the year 1994, the campus was shifted from there to the present address of Salt Lake City. In 1999–2000, apart from the B.Tech in Leather Technology that it was offering, the college started offering B.Tech. in Information Technology and in 2000–2001 B.Tech. in Computer Science & Technology. Present status GCELT is officially under the jurisdiction of the Department of Higher Education and Directorate of Technical education of the Government of West Bengal. Courses offered The institute offers B.Tech in Computer Science and Engineering, Information Technology and Leather Technology. The institute also offers certificate course on 'Boot and Shoe Manufacturing'. College programs Fresher's Welcome- The event is organized to welcome the 1st year students of the college. Usually 2nd year students organize it with great enthusiasm. On that special day, there is a cultural program with arrangement for lunch for newcomers. This ends with the DJ evening and the whole event is restricted only to college students. Technical Fest – Enginerds- ENGINERDS was first organized in the year 2010 (by 2008–12 batch). Cultural Fest – Punormilon – PUNORMILON is one of the most important cultural event of GCELT with the get together of all the Ex-Students and Present-Students under the same roof. Every year a new committee is formed to organize the PUNORMILON. 
The committee contains members from all the streams, drawn from present students as well as from alumni. All the responsibilities of organizing the event rest with the committee members. The events of PUNORMILON include the inauguration, college performances, a tribute to alumni, Rabindra Sangeet, band performances and a DJ. References Technical universities and colleges in India Colleges affiliated to West Bengal University of Technology Engineering colleges in Kolkata Indian leather industry 1919 establishments in India Educational institutions established in 1919
24993687
https://en.wikipedia.org/wiki/Peter%20Millican
Peter Millican
Peter J. R. Millican (born 1 March 1958) is Gilbert Ryle Fellow and Professor of Philosophy at Hertford College, University of Oxford in the United Kingdom. His primary interests include the philosophy of David Hume, philosophy of religion, philosophy of language, epistemology, and moral philosophy. Millican is particularly well known for his work on David Hume, and from 2005 until 2010 was co-editor of the journal Hume Studies. He is also an International Correspondence Chess Grandmaster, and has a strong interest in the field of computing and its links with Philosophy. Recently he has developed a new degree programme at Oxford University, in Computer Science and Philosophy, which accepted its first students in 2012. He currently hosts the University of Oxford's Futuremakers podcast, winning a CASE Gold Award in 2019. From 2014 to 2017 he maintained EarlyModernTexts.com, a site which hosts the writings of famous Early Modern writers in a somewhat modified form to make the text simpler to understand. Education Peter Millican attended Borden Grammar School in Sittingbourne in Kent, United Kingdom. He read Mathematics and then Philosophy and Theology at Lincoln College, Oxford from 1976–1980. Staying at Lincoln College, Millican took the Philosophy B.Phil in 1982 (with a thesis in Philosophical Logic). Millican later obtained his PhD with a thesis on Hume, Induction and Probability, and also a research MSc in Computer Science, while employed at Leeds. Academic career After teaching at the University of Glasgow from 1983, Millican was appointed in 1985 to a permanent Lectureship at Leeds University, teaching both Computing and Philosophy. After 20 years at Leeds, in 2005 Millican was appointed as Gilbert Ryle Fellow in Philosophy at Hertford College, Oxford, promoted to Reader in Early Modern Philosophy in 2007, and Professor of Philosophy in 2010. In 2009, he was appointed as the first "David Hume Illumni Fellow" at University of Edinburgh, a visiting position that he occupied during 2010–11. Research Millican is best known for his research on David Hume, notably on the development of Hume's philosophy, and on the interpretation of his writings on induction and causation. In a 1995 paper, Millican gave a detailed analysis of Hume's famous argument concerning induction, aiming to reconcile its apparent sceptical thrust with Hume's clear endorsement of inductive science: the previous interpretations that he was attacking had either condemned Hume as an inconsistent sceptic, or denied the scepticism entirely. His 2002 collection included a paper refining his analysis, and arguing against recent revisionary non-sceptical interpretations (particularly those proposed by Don Garrett and David Owen)—this debate is still ongoing in his 2012 paper. The collection also emphasised the distinctive importance of Hume's work in the 1748 Enquiry, with the controversial implication that the Enquiry, rather than the Treatise, should be taken as presenting Hume's definitive perspective on the main topics that it covers. Millican has published a series of substantial papers with the aim of deciding the so-called "New Hume" debate, which has been the most prominent controversy in Hume scholarship over the last 20 years ("New Humeans" take Hume to be a believer in a form of causation that goes beyond the constraints of his famous "two definitions of cause"). 
The first of these appeared in a 2007 collection on the debate, the second in the July 2009 issue of Mind, and the third (responding to replies) in a 2010 collection on causation. The Mind paper concludes that "the New Hume interpretation is not just wrong in detail—failing in the many ways documented above—but fundamentally misrepresents the basis, core, point and spirit of Hume's philosophy of causation". A reviewer of the third paper judges that "Millican convincingly argues that none of his opponents' attempts to [answer his criticisms] are plausible. I am not alone in thinking the New Hume debate has run its course; as Millican says at the end of his essay, 'it is time to call it a day' (p. 158)." Much of Millican's other research, while not itself historical, has focused on Humean topics such as induction, probability, and philosophy of religion, but also on philosophy of language. His most significant non-Humean papers are on the logic of definite descriptions (1990), the morality of abortion (1992), and Anselm's Ontological Argument (2004).". Philosophy and computing As an educator, Millican's most distinctive contributions have been on the interface between Computing and Philosophy, devoting most of his career at Leeds to developing the teaching of Computer Science and programming to students in the Humanities. In 2012 he championed a new degree in Computer Science and Philosophy at Oxford University (see Degrees of the University of Oxford). To encourage students in the Humanities to get involved in Computing, Millican has developed a number of user-friendly software teaching systems. Barack Obama autobiography In 2008 and 2009, some Republican commentators advanced claims that used Millican's software to claim Barack Obama's autobiography, Dreams from My Father was written or ghost-written by Bill Ayers. Millican insists the claim is false. In a series of articles in American Thinker and WorldNetDaily, author Jack Cashill claimed that his own analysis of the book showed Ayers' writing style, and backed this up citing analyses by American researchers using Millican's Signature software. In late October 2008, shortly before the Presidential election, Republican Congressman Chris Cannon and his brother-in-law attempted to hire Millican to prove Ayers' authorship using computer analysis. Millican refused after they would not assure him in advance that his results would be published regardless of the outcome. After some analysis Millican later criticised the claim, saying variously that he had "found no evidence for Cashill's ghostwriting hypothesis", that it was "unlikely" and that he felt "totally confident that it is false". Chess career Millican played chess over-the-board in his youth, and captained Oxford University to victory in the National Chess Club Championship in 1983. He later turned to correspondence chess, becoming British Champion in 1990. This brought him the British Master title, and he then became an International Master in 1993 by winning his Semi-final group in the 19th World Correspondence Championship. With an international rating of 2610 (ranked 31 in the world), Millican was invited to play in the NPSF-50 "super tournament" (the first-ever Category 15 tournament, with an average rating over 2600). By coming fifth—after Ulf Andersson, Gert Jan Timmerman, Joop van Oosterom, and Hans-Marcus Elwert, Millican qualified in 1997 as an International Correspondence Chess Grandmaster. He analysed the Double Muzio chess opening in detail, asserting equality. 
Main publications "Content, Thoughts, and Definite Descriptions", Proceedings of the Aristotelian Society, Supplementary Volume 64 (1990), pp. 167–203. "The Complex Problem of Abortion", in Philosophical Ethics in Reproductive Medicine (co-edited by Millican with D. Bromham, M. Dalton, and J. Jackson, Springer Verlag: 1992), pp. 161–88. "Hume's Argument Concerning Induction: Structure and Interpretation", in David Hume: Critical Assessments, edited by Stanley Tweyman (Routledge, 1995), vol. 2 pp. 91–144 978-0-415-02012-1. The Legacy of Alan Turing, volume 1 (Machines and Thought ) and volume 2 (Connectionism, Concepts, and Folk Psychology ), (both co-edited by Millican with Andy Clark, Oxford University Press: 1996). Reading Hume on Human Understanding: Essays on the First Enquiry (Oxford, Oxford University Press: 2002) . "The One Fatal Flaw in Anselm's Argument", Mind 113 (2004), pp. 437–76. Hume's Enquiry Concerning Human Understanding (Oxford: Oxford University Press: 2007) . "Humes Old and New: Four Fashionable Falsehoods, and One Unfashionable Truth", Proceedings of the Aristotelian Society, Supplementary Volume 81 (2007), pp. 163–99. "Against the New Hume", in The New Hume Debate, revised edition, edited by Rupert Read and Kenneth Richman (Routledge: 2007), pp. 211–52 . "Hume, Causal Realism, and Causal Science", Mind 118 (2009), pp. 647–712. "Hume, Causal Realism, and Free Will", in Causation and Modern Philosophy, edited by Keith Allen and Tom Stoneham (Routledge: 2010), pp. 123–65 . "Twenty Questions about Hume's 'Of Miracles'" in Philosophy and Religion, edited by Anthony O'Hear (Cambridge University Press: 2011), pp. 151–92 . "Hume's 'Scepticism' about Induction" in The Continuum Companion to Hume, edited by Alan Bailey and Dan O'Brien (Continuum: 2012), pp. 57–103. "Hume" in Ethics: The Key Thinkers, edited by Tom Angier (Bloomsbury: 2012), pp. 105–31 . References External links Personal website Staff homepage at Hertford College, Oxford Website on his work on David Hume Website on his work on Philosophy and Computing 1958 births Living people People educated at Borden Grammar School Alumni of Lincoln College, Oxford Academics of the University of Leeds Academics of the University of Glasgow Academics of the University of Edinburgh Fellows of Hertford College, Oxford 20th-century British philosophers 21st-century British philosophers Analytic philosophers Epistemologists Philosophers of language Correspondence chess grandmasters Skeptics
632323
https://en.wikipedia.org/wiki/CueCat
CueCat
The CueCat, styled :CueCat with a leading colon, is a cat-shaped handheld barcode reader that was given away free to Internet users starting in 2000 by the now-defunct Digital Convergence Corporation. The CueCat was named CUE for the unique bar code which the device scanned and CAT as a play on "Keystroke Automation Technology" and it enabled a user to open a link to an Internet URL by scanning a barcode — called a "cue" by Digital Convergence — appearing in an article or catalog or on some other printed matter. In this way, a user could be directed to a web page containing related information without having to enter a URL. The company asserted that the ability of the device to direct users to a specific URL, rather than a domain name, was valuable. In addition, television broadcasters could use an audio tone in programs or commercials that, if a TV was connected to a computer via an audio cable, acted as a web address shortcut. By year-end 2001, codes were no longer available for the device and scanning with the device no longer yielded results. However, third-party software can decode the lightweight encryption in the device, allowing it to be used as a general-purpose wand-type barcode reader. The CueCat can read several common barcode types, in addition to the proprietary CUE barcodes promoted by Digital Convergence. Marketing The CueCat patents are held by Jeffry Jovan Philyaw, who changed his name to Jovan Hutton Pulitzer after the failure of CueCat. Belo Corporation, parent company of the Dallas Morning News and owner of many TV stations, invested US$37.5 million in Digital Convergence, RadioShack $30 million, Young & Rubicam $28 million and Coca-Cola $10 million. Other investors included General Electric, and E. W. Scripps Company. The total amount invested was $185 million. Each CueCat cost RadioShack about $6.50 to manufacture. Starting in late 2000 and continuing for about a year, advertisements, special web editions and editorial content containing CueCat barcodes appeared in many U.S. periodicals, including Parade magazine, Forbes magazine and Wired magazine. The Dallas Morning News and other Belo-owned newspapers added the barcodes next to major articles and regular features like stocks and weather. Commercial publications such as Adweek, Brandweek and Mediaweek also employed the technology. The CueCat bar codes also appeared in select Verizon Yellow Pages, providing advertisers with a link to additional information. For a time, RadioShack included these barcodes in its product catalogs and distributed CueCat devices through its retail chain to customers at no charge. Forbes magazine mailed out the first 830,000 CueCats as gifts to their subscribers since Forbes was starting to use CRQ (See Our Q Codes) in their magazine. Wired magazine mailed over 500,000 of the free devices as gifts to their subscribers. Each publisher branded the CueCat they sent to their mailing list. Marketing partners Organizations that used :CRQ and :CueCat: Magazines Adweek Brandweek Mediaweek MC Magazine Forbes Wired Parade Catalogs RadioShack Newspapers The Dallas Morning News Milwaukee Journal Sentinel The Providence Journal The Press-Enterprise Broadcast stations WNBC New York KNBC Los Angeles WMAQ Chicago WCAU Philadelphia WFAA Dallas WRC Washington, DC WXYZ Detroit KHOU Houston KING & KONG Seattle WFTS Tampa WEWS Cleveland WTVJ Miami KTVK & KASW Phoenix KMOV St. 
Louis KGW Portland WMAR Baltimore KNSD San Diego WVIT Hartford WCNC Charlotte WNCN Raleigh KSHB Kansas City WCPO Cincinnati WTMJ Milwaukee WCMH Columbus KENS San Antonio WVTM Birmingham WWL New Orleans WVEC Norfolk WPTV West Palm Beach WHAS Louisville WJAR Providence KTNV Las Vegas KMPH Fresno KOTV Tulsa KVUE Austin KMSB & KTTU Tucson KPTM Omaha KREM & KSKN Spokane KTVB Boise CNBC MSNBC User experience Installation of software and hardware, configuration, and registration took around an hour. Registration required the user's name, age, and e-mail address, and demanded completion of a lengthy survey with invasive questions about shopping habits, hobbies, and educational level. Then one could scan bar codes on groceries, bar codes on books, and custom bar codes in ads in magazines, newspapers, Verizon Yellow Pages, and RadioShack catalogs. The CRQ software, with a permanent advertisement-displaying taskbar, used the code and the unique serial number from the device to return a URL which directed the user's browser to the sponsored website. It could log the web surfing habits associated with the users' real names and e-mail addresses. Reception In The Wall Street Journal, Walter Mossberg criticized CueCat: "In order to scan in codes from magazines and newspapers, you have to be reading them in front of your PC. That's unnatural and ridiculous." Mossberg wrote that the device "fails miserably. Using it is just unnatural." He concluded that the CueCat "isn't worth installing and using, even though it's available free of charge". Joel Spolsky, a computer technology reviewer, also criticized the device as "not solving a problem" and characterized the venture as a "feeble business idea". The CueCat is widely described as a commercial failure. It was ranked twentieth in "The 25 Worst Tech Products of All Time" by PC World magazine in 2006. The CueCat's critics said the device was ultimately of little use. Joe Salkowski of the Chicago Tribune wrote, "You have to wonder about a business plan based on the notion that people want to interact with a soda can", while Debbie Barham of the Evening Standard quipped that the CueCat "fails to solve a problem which never existed". In December 2009, the popular gadget blog Gizmodo voted the CueCat the #1 worst invention of the decade of the "2000s". In 2010, Time magazine included it on a list of "The 50 worst Inventions", adding that people didn't accept "the idea of reading their magazines next to a wired cat-shaped scanner". The CueCat device was controversial, initially because of privacy concerns over its collection of aggregate user data. Each CueCat has a unique serial number, and users suspected that Digital Convergence could compile a database of all barcodes scanned by a given user and connect it to the user's name and address. For this reason, and because the demographic market targeted by Digital Convergence was unusually tech-savvy, numerous websites arose detailing instructions for "declawing" the CueCat — blocking or encrypting the data it sent to Digital Convergence. Digital Convergence registered the domain "digitaldemographics.com", giving additional credence to privacy concerns about the use of data. Security breach According to Internet technologist Matt Curtin, founder of Interhack, each scan delivered the product code, the user's ID and the scanner's ID back to Digital:Convergence. The data format was proprietary, and was scrambled so the barcode data could not be read as plain text. 
However, the barcode itself is closely related to Code 128, and the scanner was also capable of reading EAN/UPC and other symbologies, such as Priority Mail, UPC-A, UPC-E, EAN-13, EAN-8, 2-of-5 interleaved, CODABAR, CODE39, CODE128, and ISBN. Because of the weak obfuscation of the data, meant only to protect the company under DMCA guidelines (like the DVD protection Content Scramble System), the software for decoding the CueCat's output quickly appeared on the Internet, followed by a plethora of unofficial applications. The CueCat connected to computers, in the same way as a keystroke logger, as a pass-through, between the keyboard PS/2 jack and the motherboard PS/2 port (due to USB-PS/2 compatibility, USB-PS/2 adapters may be additionally used). CRQ ("see our cue"), the desktop software, intercepted the data from both the keyboard and the CueCat, before passing it on to the operating system. Versions for both Windows 32-bit or Mac OS 9 were included. Users of this software were required to register with their ZIP code, gender, and email address. This registration process enabled the device to deliver relevant content to a single or multiple users in a household. Privacy groups warned that it could be used to track readers' online behavior because each unit has a unique identifier. Belo officials said they would not track individual CueCat users but would gather anonymous information grouped by age, gender and ZIP code. In September 2000, security watchdog website Securitywatch.com notified Digital Convergence of a security vulnerability on the Digital Convergence website that exposed private information about CueCat users. Digital Convergence immediately shut down that part of their website, and their investigation concluded that approximately 140,000 CueCat users who had registered their CueCat were exposed to a breach that revealed their name, email address, age range, gender and zip code. This was not a breach of the main user database itself, but a flat text file used only for reporting purposes that was generated by ColdFusion code that was saved on a publicly available portion of the Digital Convergence web server. This failure was given a multi-citation Octopus TV "Failure Award" regarding brands that failed to take off and were hacked. Aftermath Digital Convergence responded to this security breach by sending an email to those affected by the incident claiming that it was correcting this problem and would be offering them a $10 gift certificate to RadioShack, an investor in Digital Convergence. The company's response to these hacks was to assert that users did not own the devices and had no right to modify or reverse engineer them. Threats of legal action against the hackers swiftly brought on more controversy and criticism. The company changed the licensing agreement several times, adding explicit restrictions, apparently in response to hacker activity. Hackers argued that the changes did not apply retroactively to devices that had been purchased under older versions of the license, and that the thousands of users who received unsolicited CueCats in the mail had neither agreed to nor were legally bound by the license. 
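As a generic illustration of why the kind of lightweight scrambling described above offers no real protection — this is deliberately not the actual CueCat encoding, only an assumed stand-in — a single fixed XOR pass both obfuscates and de-obfuscates a string, so anyone who recovers the key (or simply guesses it) can reverse the process:

# Generic sketch: a fixed-key XOR "scramble" is its own inverse, which is
# why such obfuscation is trivially reversible once the key is known.
def xor_obfuscate(text, key=67):
    return "".join(chr(ord(c) ^ key) for c in text)

scrambled = xor_obfuscate("000000001234567890111.")
print(scrambled)                  # unreadable at a glance
print(xor_obfuscate(scrambled))   # applying the same function again restores the original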
No lawsuit was ever brought against the "hackers". The legal threats were not intended to pursue individual users or the hacker community, but to establish a "reasonable assertion" of the company's rights, so that no corporation could develop integrated software within an operating system or browser that could take over the device, circumvent the CRQ watchdog software, and thereby undermine the revenue model that Digital Convergence employed.
In May 2001, Digital Convergence fired most of its 225-person workforce. In September 2001, Belo Corporation, a CueCat investor and owner of newspapers and TV stations that had sent at least 200,000 free CueCats to its readers, wrote off its $37.5 million investment and stopped using CueCat technology in its newspapers, notably The Press-Enterprise, The Dallas Morning News, and The Providence Journal. In total, investors in CueCat lost $185 million. Technology journalist Scott Rosenberg called the CueCat a "Rube Goldberg contraption", a "massive flop" and a "fiasco".

Awards
In 2001, Computerworld named CueCat as a Laureate in the Media Arts & Entertainment category. In 2001, the Software and Information Industry Association named Digital:Convergence Corp.'s :CRQ technology as Best Reference Tool.

Surplus liquidation
In June 2005, a liquidator offered two million CueCats for sale at $0.30 each (in quantities of 500,000 or more). Once available for free, the device can now be found on sale at eBay for prices ranging from $5 to as much as $100.

Open source
Hobbyists have reverse-engineered the firmware, the software, and the customer database.

See also
Mobile tagging
QR code
i-Opener

References

External links
Scan to Connect Patent Portfolio
Dissecting the CueCat
CueCat post mortem

Computing input devices Computer-related introductions in 2000
2652533
https://en.wikipedia.org/wiki/GP2X
GP2X
The GP2X is a Linux-based handheld video game console and portable media player developed by South Korean company GamePark Holdings. It was released on November 10, 2005, in South Korea only.
The GP2X is designed for homebrew developers as well as commercial developers. It is commonly used to run emulators for game consoles such as Neo Geo, Mega Drive/Genesis, Master System, Game Gear, Amstrad CPC, Commodore 64, NES, TurboGrafx-16, and MAME.

Overview
The GP2X was designed to play music and videos, view photos, and play games. It had an open architecture (Linux based), allowing anybody to develop and run software. Also, there was the possibility for additional features (such as support for new media formats) to be added in the future due to the upgradeable firmware. A popular use of the GP2X was to run emulators, which allows one to use software from a video game of another system on the GP2X.

History
Shortly after the release of the GP32 in 2001, its maker Game Park began to design their next handheld. A disagreement within the company about the general direction of this system prompted many of the staff to leave and create their own company, GamePark Holdings, to produce a 2D-based handheld system which they saw as the sequel to the GP32.
GamePark Holdings spoke to previous GP32 distributors and developers to determine the specifications for the new machine and how it should be promoted. Meetings were held in Seoul, Korea, where the final design of the new console was agreed upon.
The first name of this console was the GPX2. However, it couldn't be used as a final name due to a possible trademark violation with the name of a Japanese printer, the GPX. A contest for a new name was announced on August 3, 2005. Around 1500 names were submitted in total. The winner of the competition was Matt Bakse who chose the title GP2X. For this he was awarded a free GP2X console, although delivery of his prize was rather delayed.
The GP2X has seen several minor hardware updates, most notably the changes from the First Edition to Normal Edition and the Normal Edition to the MK2. Also, a new version called the "F200" was released October 30, 2007 and features a touchscreen, among other changes.
As of October 16, 2006, the GP2X had sold 30,000 units. On August 31, 2008, the CEO of Gamepark Holdings told German GP2X distributor Michael Mrozek (aka. EvilDragon) that 60,000 GP2X units had been sold.
On 26 August 2008, GamePark Holdings announced the successor to the GP2X, the "Wiz".
As of September 1, 2008 a version of the GP2X is still being sold in Korea by Vocamaster that is geared toward Koreans who wish to learn English. In fact, according to the official GP2X distributor for the UK, Craig Rothwell, most GP2X units sold to date have been sold through Vocamaster as English-learning tools.

Hardware
Specifications
Chipset: MagicEyes MMSP2 MP2520F System-on-a-chip
CPU: 200 MHz ARM920T host processor, 200 MHz ARM940T programmable coprocessor
NAND Flash ROM: 64 MB
RAM: SDRAM 64 MB
Operating System: Linux-based OS
Storage: SD Card (latest firmware supports SDHC)
Connection to PC: USB 2.0 High Speed
USB Host: USB 1.1
Power: 2 × AA battery or via AC adapter
Display: 320×240 3.5 inch, 65,536 colors TFT LCD
TV output
Physical size: 143.6 mm wide, 82.9 mm high, 27 mm (excl. joystick approx.) / 34 mm deep
Weight: 161 g (without battery)

The ARM940T was used by GPH's implementation of Linux to control video processing. Using the 940T core in Linux for other tasks apart from video processing is difficult but possible.
Accessing the hardware directly makes it easier to use both CPUs. The F-200 version of the GP2X hardware replaced the joystick with a directional pad and adds a touchscreen. Expandability The GP2X had an expansion "EXT" port on the base of the unit into which a range of special cables (for USB host, TV-out etc.) or break out box could be plugged, allowing four USB devices to be connected to and used with the GP2X directly. The only thing limiting what can be used through this interface is the availability of drivers. The connector used to expand the GP2X is hard to come by on its own but it is used with a few other devices. The Samsung e810/e730 and LG U8110/20/30/36/38 mobile telephone data cables, along with the official GP2X TV-Out adapter are suitable connectors. This connector isn't proprietary; the specifications of this connector are fully open, encouraging home cable construction. TV output The GP2X also supported TV-out with a special cable that plugs into the EXT port. This allows videos that are normally scaled down to fit the GP2X's screen to be played at native resolution on a TV. It also lets software be displayed on the higher resolution TV rather than the screen. Not all software supports this natively, but 3rd party software exists that enables TV-out functionality in all applications. This is done by launching a background process. Power The GP2X requires 2 AA-sized batteries if not running from an external power supply. Due to the high current drain, standard alkaline batteries will not function for very long in the GP2X; NiMH or lithium batteries are recommended. Battery life varies depending on the type of activity being performed and can last anywhere from 10 minutes (using alkaline batteries) to over 6 hours (using high-capacity NiMH batteries). When listening to music, power can be conserved by turning off the backlight and display. The GP2X has a socket for an external power supply. It must be rated 3.3V DC at 1A with a standard center-grounded (negative center) connector. The power supply should be regulated, as voltage spikes can permanently damage the unit. Storage The GP2X's primary storage device is the Secure Digital card, which can be placed into a socket at the top of the unit. Older firmware only supported SD cards up to 4 GB in capacity. SD cards must be formatted as either FAT16, FAT32 (32 is more reliable), or ext2. The GP2X also has 64 MB of internal flash memory storage, of which 32MB can be used for user data. From firmware release version 4.0 the GP2X F200 is capable of addressing the new SDHC standard and thus now works with SDHC cards up to 64GB in size. Overclocking The two ARM processors in the GP2X can be overclocked beyond their rated speed in software. The maximum speed one can reach through overclocking varies from system to system, with about 1 in 50 reaching over 300 MHz and others barely reaching 240 MHz (many systems can be overclocked beyond 240 MHz with no problems. The highest they are advertised to overclock to is 266 MHz.) Multimedia support Video Video formats: DivX 3/4/5, Xvid (MPEG-4) Audio formats: MP3 and Vorbis Container files: AVI and OGM (WMA and MPG via additional software) Maximum Resolution: 720*480 (scaled to 320x240 screen resolution using built in scaling chip) Captions: SMI, SRT Battery Life: 3.5 hours average, longer times possible with high capacity batteries and with use of the power saving modes within. 
Audio Audio Formats: MP3, Vorbis (more with alternative players) Channels: Stereo Frequency Range: 20 Hz - 20 kHz Power output: 100 mW Sample Resolution/Rate: 16bit/8–48 kHz Equalizer: includes "Normal", "Classic", "Rock", "Jazz", "Pop" presets Battery Life: ~6 hours (information given by manufacturer) with 2 x 2500mAh AA batteries. Software Because the tools required for development on the GP2X are freely available, there is a wealth of software available for the GP2X, much of which is free. Types of software available includes emulators, games, PDA applications and multimedia players. Built-in software The GP2X has several pieces of software built directly into the firmware. There is a version of MPlayer which is used to play music and video, an image viewer, an e-book reader (which can display the contents of standard text documents on-screen) and a utility to adjust the LCD update frequency to eliminate any flickering. Other applications available (though not accessible directly through the menu) were a Samba server, for transferring files to the machine using the default Windows network file sharing protocol; an HTTP server, for providing web pages; an FTP server, a different way of transferring files; and telnet access allowing for direct command line access from outside the machine. These servers operate over the included USB networking functionality, allowing one to connect the GP2X to a wider network through a PC. The new GP2X-F200 supports none of these network programs. Version 3.0.0 of the firmware comes with 5 games pre-installed in the NAND memory. The games are Payback (demo), Noiz2sa, Flobopuyo, SuperTux, and Vektar (freeware version). This firmware is currently shipped with new GP2Xs. Emulators There are many emulators available for the GP2X which allow you to run software from other systems on the GP2X. Many emulators will run most software perfectly and at the intended speed, but some others may have various issues (often to do with speed or sound). Popular emulators include GnGeo which emulates the Neo Geo; GNUboy2x, Game Boy and Game Boy Color emulators; MAME, an emulator of various arcade machines; DrMD, which emulates the Master System, Game Gear and Mega Drive/Genesis; SquidgeSNES and PocketSNES, which emulate Super NES games; and Picodrive, which emulates Mega Drive and Sega CD games; psx4all which emulates PlayStation games. Stella, an emulator for the Atari 2600 has also been ported to the GP2X Games Since the GP2X has a much smaller following than other handheld consoles, such as the Sony PSP or the Nintendo DS, there are very few commercial games available for it. Vektar, Payback, Quartz², retrovirus RTS, Wind and Water: Puzzle Battles and Blazar have been released as commercial games for the GP2X, and the games Odonata and Elsewhere were released in October 2006 for Korean distribution only. However, there are many ports of games from other platforms, mostly Linux, to the GP2X. Popular ports include SuperTux and Frozen Bubble as well as the Duke Nukem 3D, Quake, and Doom engines (which can run the original games if the user owns a copy with the correct data files). There are also hundreds of original freeware games such as Tilematch and Beat2X, made by GP2X programmers in their spare time. Multimedia players There are several unofficial multimedia players available for the GP2X, intended to support more formats than the built-in music and video players can handle. 
One such program is a port of FFPlay that allows you to play several RealMedia and Windows Media formats. Since the release of the MPlayer source code, several unofficial builds have been released for various purposes. One of these adds support for playing music in the AAC format. Music Creation Tools The GP2X natively runs the free homebrew application Little Game Park Tracker, a music tracker program which was created by chip musician M-.-n specifically for the GP2X. Little Game Park Tracker, also known as LGPT or Little Piggy Tracker, allows for sample-based music production with a myriad of sample tweaking abilities. LGPT borrows the interface of the popular Game Boy music tracker Little Sound DJ. It has since been ported to the PSP, Dingoo, Windows, OS X, and other platforms. PDA Applications Two popular PDA desktop environments have been ported to the GP2X: Qtopia and GPE. Both contain a range of programs such as a web browser, word processor, etc. and can be controlled with either the GP2X controls or a USB mouse and keyboard connected through a USB cable attached to the EXT port. Open source development SDKs (software development kits) are freely and easily available for the GP2X allowing anybody with the required skills to write an application or game. Most SDKs are based around a gcc cross-compiler toolchain and SDL. SDL is available for many systems, allowing for cross-compatibility of code with other platforms such as Microsoft Windows and GNU/Linux. A port of the Allegro game programming library is also available for the GP2X, as are ports of the Fenix and BennuGD game toolkits. Other libraries under development include Minimal Library SDK, which allows for direct hardware access inside the GP2X Linux environment, and sdk2x a set of libraries and a program which allows you to leave Linux completely for total control of all the hardware with no operating system to interfere. Currently in development is gpu940, a soft 3D renderer that can do many rendering types, including true perspective texture mapping/lighting. It utilizes the ARM940T CPU of the GP2X, and allows for the GP2X to run basic OpenGL functions. In January 2007, the renderer's OpenGL functions allowed for the 3D roleplaying game Egoboo to be ported to the GP2X at a playable speed, and a month later updated with increased speed and added lighting effects. GP2X executables GP2X executable files have one of two 3 letters file extensions. For games, the extension is used. These are listed in the Games section of the menu. Utilities have the extension , and appear in the Utilities section of the menu; in firmware 3.0.0 they appear along with the games. DRM controversy There was debate before launch over the implied inclusion of DRM in the GP2X. However, since release, the GP2X platform was shown to be clear of any form of DRM. See also Comparison of handheld game consoles GP32 - Predecessor device GP2X Wiz - Successor device GP2X Caanoo - Successor device Pandora (console), another open source handheld device List of other Linux-based, handheld gaming devices References External links Developer and User Wiki Site GP2X Software Archive http://www.console-spot.com/2006/02/21/gp2x-review/ Archive of older versions of emulators for GP2X and GP32 Seventh-generation video game consoles ARM-based video game consoles Linux-based devices Game Park Regionless game consoles Handheld game consoles
17886246
https://en.wikipedia.org/wiki/SPSS%20Inc.
SPSS Inc.
SPSS Inc. was a software house headquartered in Chicago and incorporated in Delaware, most noted for its proprietary statistical software of the same name, SPSS. The company was started in 1968 when Norman Nie, Dale Bent, and Hadlai "Tex" Hull developed and started selling the SPSS software. The company was incorporated in 1975, and Nie served as CEO from 1975 until 1992. Jack Noonan served as CEO from 1992 until the 2009 acquisition of SPSS Inc. by IBM. In 2008, SPSS Inc. had over 250,000 customers.
In addition to the software which shares its name, SPSS Inc. sold a wide range of software for market research, survey research and statistical analysis. These included AMOS (before 2003 a SmallWaters Corp. product) for structural equation modeling, SamplePower for statistical power analysis, AnswerTree (decision tree software) used for market segmentation, SPSS Text Analysis for Surveys to code open-ended responses, Quantum for cross-tabulation, SPSS Modeler (previously known as Clementine or PASW Modeler) for data mining and mrInterview for CATI and online surveys.
SPSS had forged partnerships with several other high-profile companies such as Oracle Corporation and participated in a number of US government programs. In 2004, the company faced legal action over allegations that it had artificially inflated its stock price.

IBM acquisition
On July 28, 2009, IBM announced it was acquiring SPSS Inc. in an all-cash deal. In January 2010, the company became "SPSS: An IBM Company". The transfer of business to IBM was completed by October 1, 2010, by which date "SPSS: An IBM Company" had ceased to exist. IBM SPSS is now fully integrated into the IBM Corporation, and is one of two brands under IBM Software Group's Business Analytics Portfolio.

References

External links
IBM SPSS Homepage
Stock Market Commentary
Wall Street Journal article about IBM acquisition of SPSS, Inc.
History of SPSS Inc. from 1968 to 2003.

Software companies based in Illinois Companies based in Chicago IBM acquisitions Former IBM subsidiaries 2009 mergers and acquisitions Software companies established in 1968 1968 establishments in Illinois Software companies of the United States
305224
https://en.wikipedia.org/wiki/Quality%20assurance
Quality assurance
Quality assurance (QA) is a way of preventing mistakes and defects in manufactured products and avoiding problems when delivering products or services to customers; which ISO 9000 defines as "part of quality management focused on providing confidence that quality requirements will be fulfilled". This defect prevention in quality assurance differs subtly from defect detection and rejection in quality control and has been referred to as a shift left since it focuses on quality earlier in the process (i.e., to the left of a linear process diagram reading left to right). The terms "quality assurance" and "quality control" are often used interchangeably to refer to ways of ensuring the quality of a service or product. For instance, the term "assurance" is often used as follows: Implementation of inspection and structured testing as a measure of quality assurance in a television set software project at Philips Semiconductors is described. The term "control", however, is used to describe the fifth phase of the define, measure, analyze, improve, control (DMAIC) model. DMAIC is a data-driven quality strategy used to improve processes. Quality assurance comprises administrative and procedural activities implemented in a quality system so that requirements and goals for a product, service or activity will be fulfilled. It is the systematic measurement, comparison with a standard, monitoring of processes and an associated feedback loop that confers error prevention. This can be contrasted with quality control, which is focused on process output. Quality assurance includes two principles: "fit for purpose" (the product should be suitable for the intended purpose); and "right first time" (mistakes should be eliminated). QA includes management of the quality of raw materials, assemblies, products and components, services related to production, and management, production and inspection processes. The two principles also manifest before the background of developing (engineering) a novel technical product: The task of engineering is to make it work once, while the task of quality assurance is to make it work all the time. Historically, defining what suitable product or service quality means has been a more difficult process, determined in many ways, from the subjective user-based approach that contains "the different weights that individuals normally attach to quality characteristics," to the value-based approach which finds consumers linking quality to price and making overall conclusions of quality based on such a relationship. History Initial efforts to control the quality of production During the Middle Ages, guilds adopted responsibility for the quality of goods and services offered by their members, setting and maintaining certain standards for guild membership. Royal governments purchasing material were interested in quality control as customers. For this reason, King John of England appointed William de Wrotham to report about the construction and repair of ships. Centuries later, Samuel Pepys, Secretary to the British Admiralty, appointed multiple such overseers to standardize sea rations and naval training. Prior to the extensive division of labor and mechanization resulting from the Industrial Revolution, it was possible for workers to control the quality of their own products. 
The Industrial Revolution led to a system in which large groups of people performing a specialized type of work were grouped together under the supervision of a foreman who was appointed to control the quality of work manufactured. Wartime production During the time of the First World War, manufacturing processes typically became more complex, with larger numbers of workers being supervised. This period saw the widespread introduction of mass production and piece work, which created problems as workmen could now earn more money by the production of extra products, which in turn occasionally led to poor quality workmanship being passed on to the assembly lines. Pioneers such as Frederick Winslow Taylor and Henry Ford recognized the limitations of the methods being used in mass production at the time and the subsequent varying quality of output. Taylor, utilizing the concept of scientific management, helped separate production tasks into many simple steps (the assembly line) and limited quality control to a few specific individuals, limiting complexity. Ford emphasized standardization of design and component standards to ensure a standard product was produced, while quality was the responsibility of machine inspectors, "placed in each department to cover all operations ... at frequent intervals, so that no faulty operation shall proceed for any great length of time." Out of this also came statistical process control (SPC), which was pioneered by Walter A. Shewhart at Bell Laboratories in the early 1920s. Shewhart developed the control chart in 1924 and the concept of a state of statistical control. Statistical control is equivalent to the concept of exchangeability developed by logician William Ernest Johnson, also in 1924, in his book Logic, Part III: The Logical Foundations of Science. Along with a team at AT&T that included Harold Dodge and Harry Romig, he worked to put sampling inspection on a rational statistical basis as well. Shewhart consulted with Colonel Leslie E. Simon in the application of control charts to munitions manufacture at the Army's Picatinny Arsenal in 1934. That successful application helped convince Army Ordnance to engage AT&T's George Edwards to consult on the use of statistical quality control among its divisions and contractors at the outbreak of World War II. Postwar After World War II, many countries' manufacturing capabilities that had been destroyed during the war were rebuilt. General Douglas MacArthur oversaw the rebuilding of Japan. He involved two key people in the development of modern quality concepts: W. Edwards Deming and Joseph Juran. They and others promoted the collaborative concepts of quality to Japanese business and technical groups, and these groups used these concepts in the redevelopment of the Japanese economy. Although there were many people trying to lead United States industries toward a more comprehensive approach to quality, the US continued to apply the Quality Control (QC) concepts of inspection and sampling to remove defective products from production lines, essentially unaware of or ignoring advances in QA for decades. Approaches Failure testing It is valuable to failure test or stress test a complete consumer product. In mechanical terms this is the operation of a product until it fails, often under stresses such as increasing vibration, temperature, and humidity. This may expose many unanticipated weaknesses in the product, and the data is used to drive engineering and manufacturing process improvements. 
Often quite simple changes can dramatically improve product service, such as changing to mold-resistant paint or adding lock-washer placement to the training for new assembly personnel.

Statistical control
Statistical control is based on analyses of objective and subjective data. Many organizations use statistical process control as a tool in any quality improvement effort to track quality data. Product quality data is statistically charted to distinguish between common cause variation and special cause variation (a brief worked example of computing control limits appears below). Walter Shewhart of Bell Telephone Laboratories recognized that when a product is made, data can be taken from scrutinized areas of a sample lot of the part and statistical variances are then analyzed and charted. Control can then be implemented on the part in the form of rework or scrap, or control can be implemented on the process that made the part, ideally eliminating the defect before more parts can be made like it.

Total quality management
The quality of products is dependent upon that of the participating constituents, some of which are sustainable and effectively controlled while others are not. The process(es) which are managed with QA pertain to total quality management. If the specification does not reflect the true quality requirements, the product's quality cannot be guaranteed. For instance, the parameters for a pressure vessel should cover not only the material and dimensions but operating, environmental, safety, reliability and maintainability requirements.

Models and standards
ISO 17025 is an international standard that specifies the general requirements for the competence to carry out tests and/or calibrations. There are 15 management requirements and 10 technical requirements. These requirements outline what a laboratory must do to become accredited. Management system refers to the organization's structure for managing its processes or activities that transform inputs of resources into a product or service which meets the organization's objectives, such as satisfying the customer's quality requirements, complying with regulations, or meeting environmental objectives. WHO has developed several tools and offers training courses for quality assurance in public health laboratories.
The Capability Maturity Model Integration (CMMI) model is widely used to implement Process and Product Quality Assurance (PPQA) in an organization. The CMMI maturity levels can be divided into 5 steps, which a company can achieve by performing specific activities within the organization.

Company quality
During the 1980s, the concept of "company quality" with the focus on management and people came to the fore in the U.S. It was considered that, if all departments approached quality with an open mind, success was possible if management led the quality improvement process.
The company-wide quality approach places an emphasis on four aspects (enshrined in standards such as ISO 9001):
Elements such as controls, job management, adequate processes, performance and integrity criteria, and identification of records
Competence, such as knowledge, skills, experience and qualifications
Soft elements, such as personnel integrity, confidence, organizational culture, motivation, team spirit and quality relationships
Infrastructure (as it enhances or limits functionality)
The quality of the outputs is at risk if any of these aspects is deficient.
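To make the statistical control charting described above concrete, the following sketch computes Shewhart-style limits for an individuals chart. The measurement values are invented purely for illustration, and the 3-sigma limits with the d2 = 1.128 constant follow the standard individuals/moving-range method; nothing here is specific to any organization mentioned in this article.

# A minimal sketch of Shewhart-style control limits for an "individuals"
# chart, using invented measurements. Points outside the limits suggest
# special-cause variation; points inside reflect common-cause variation.

measurements = [10.2, 9.8, 10.1, 10.4, 9.9, 10.0, 10.3, 9.7, 10.1, 12.6]

n = len(measurements)
mean = sum(measurements) / n

# Average moving range between consecutive points.
moving_ranges = [abs(b - a) for a, b in zip(measurements, measurements[1:])]
mr_bar = sum(moving_ranges) / len(moving_ranges)

# For an individuals chart, sigma is estimated as MR-bar / d2,
# where d2 = 1.128 for a moving range of two observations.
sigma_hat = mr_bar / 1.128

ucl = mean + 3 * sigma_hat  # upper control limit
lcl = mean - 3 * sigma_hat  # lower control limit

print(f"centre line = {mean:.2f}, UCL = {ucl:.2f}, LCL = {lcl:.2f}")
for i, x in enumerate(measurements, start=1):
    flag = "special-cause signal" if (x > ucl or x < lcl) else "in control"
    print(f"sample {i:2d}: {x:5.1f}  {flag}")

A point outside the computed limits (such as the last value in this example) is the kind of special-cause signal that would trigger investigation of the process rather than rework of the single part.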
The importance of actually measuring Quality Culture throughout the organization is illustrated by a survey that was done by Forbes Insights in partnership with the American Society for Quality. 75% of senior or C-suite titles believed that their organization exhibits “a comprehensive, group-wide culture of quality.” But agreement with that response dropped to less than half among those with quality job titles. In other words, the further from the C-suite, the less favorable the view of the culture of quality. A survey of more than 60 multinational companies found that those companies whose employees rated as having a low quality culture had increased costs of $67 million/year for every 5000 employees compared to those rated as having a high quality culture. QA is not limited to manufacturing, and can be applied to any business or non-business activity, including: design, consulting, banking, insurance, computer software development, retailing, investment, transportation, education, and translation. It comprises a quality improvement process, which is generic in the sense that it can be applied to any of these activities and it establishes a quality culture, which supports the achievement of quality. This in turn is supported by quality management practices which can include a number of business systems and which are usually specific to the activities of the business unit concerned. In manufacturing and construction activities, these business practices can be equated to the models for quality assurance defined by the International Standards contained in the ISO 9000 series and the specified specifications for quality systems. In the system of Company Quality, the work being carried out was shop floor inspection which did not reveal the major quality problems. This led to quality assurance or total quality control, which has come into being recently. In practice Medical industry QA is very important in the medical field because it helps to identify the standards of medical equipment and services. Hospitals and laboratories make use of external agencies in order to ensure standards for equipment such as X-ray machines, Diagnostic Radiology and AERB. QA is particularly applicable throughout the development and introduction of new medicines and medical devices. The Research Quality Association (RQA) supports and promotes the quality of research in life sciences, through its members and regulatory bodies. Aerospace industry The term product assurance (PA) is often used instead of quality assurance and is, alongside project management and engineering, one of the three primary project functions. Quality assurance is seen as one part of product assurance. Due to the sometimes catastrophic consequences a single failure can have for human lives, the environment, a device, or a mission, product assurance plays a particularly important role here. It has organizational, budgetary and product developmental independence meaning that it reports to highest management only, has its own budget, and does not expend labor to help build a product. Product assurance stands on an equal footing with project management but embraces the customer's point of view. Software development Software quality assurance refers to monitoring the software engineering processes and methods used to ensure quality. Various methods or frameworks are employed for this, such as ensuring conformance to one or more standards, e.g. ISO 25010 (which supersede ISO/IEC 9126) or process models such as CMMI, or SPICE. 
In addition, enterprise quality management software is used to correct issues such as supply chain disaggregation and to ensure regulatory compliance; these are vital for medical device manufacturers.

Using contractors or consultants
Consultants and contractors are sometimes employed when introducing new quality practices and methods, particularly where the relevant skills, expertise and resources are not available within the organization. Consultants and contractors will often employ methods and tools such as Quality Management Systems (QMS), auditing and procedural documentation writing, CMMI, Six Sigma, Measurement Systems Analysis (MSA), Quality Function Deployment (QFD), Failure Mode and Effects Analysis (FMEA), and Advanced Product Quality Planning (APQP).

See also
Best practice
Data quality
Data integrity
Farm assurance
GxP, a general term for Good Practice quality guidelines and regulations
Mission assurance
Production assurance
Program assurance
QA/QC
Quality engineering
Quality management
Quality management system
Ringtest, part of a quality assurance program in which identical samples are analyzed by different laboratories
Software testing
Verification and validation

References

Further reading
Journals
Quality Progress, American Society for Quality
Quality Assurance in Education, Emerald Publishing Group
Accreditation and Quality Assurance
Food Quality and Preference, an official journal of the Sensometric Society and the official journal of the European Sensory Science Society
Asigurarea Calitatii, Romanian Society for Quality Assurance (SRAC)
Books
Majcen N., Taylor P. (Editors): Practical examples on traceability, measurement uncertainty and validation in chemistry, Vol 1, 2010.
Pyzdek, T., "Quality Engineering Handbook", 2003.
Godfrey, A. B., "Juran's Quality Handbook", 1999.
Marselis, R. & Roodenrijs, E., "the PointZERO vision", 2012.
da Silva, R.B., Bulska, E., Godlewska-Zylkiewicz, B., Hedrich, M., Majcen, N., Magnusson, B., Marincic, S., Papadakis, I., Patriarca, M., Vassileva, E., Taylor, P., Analytical measurement: measurement uncertainty and statistics, 2012.
28185891
https://en.wikipedia.org/wiki/Xilinx%20ISE
Xilinx ISE
Xilinx ISE (Integrated Synthesis Environment) is a discontinued software tool from Xilinx for synthesis and analysis of HDL designs, which primarily targets development of embedded firmware for Xilinx FPGA and CPLD integrated circuit (IC) product families. It was succeeded by Xilinx Vivado. Use of the last released edition from October 2013 continues for in-system programming of legacy hardware designs containing older FPGAs and CPLDs otherwise orphaned by the replacement design tool, Vivado Design Suite. ISE enables the developer to synthesize ("compile") their designs, perform timing analysis, examine RTL diagrams, simulate a design's reaction to different stimuli, and configure the target device with the programmer. Other components shipped with the Xilinx ISE include the Embedded Development Kit (EDK), a Software Development Kit (SDK) and ChipScope Pro. The Xilinx ISE is primarily used for circuit synthesis and design, while ISIM or the ModelSim logic simulator is used for system-level testing. As commonly practiced in the commercial electronic design automation sector, Xilinx ISE is tightly-coupled to the architecture of Xilinx's own chips (the internals of which are highly proprietary) and cannot be used with FPGA products from other vendors. Given the highly proprietary nature of the Xilinx hardware product lines, it is rarely possible to use open source alternatives to tooling provided directly from Xilinx, although as of 2020, some exploratory attempts are being made. Legacy status Since 2012, Xilinx ISE has been discontinued in favor of Vivado Design Suite that serves the same roles as ISE with additional features for system on a chip development. Xilinx released the last version of ISE in October 2013 (version 14.7), and states that "ISE has moved into the sustaining phase of its product life cycle, and there are no more planned ISE releases." User Interface The primary user interface of the ISE is the Project Navigator, which includes the design hierarchy (Sources), a source code editor (Workplace), an output console (Transcript), and a processes tree (Processes). The Design hierarchy consists of design files (modules), whose dependencies are interpreted by the ISE and displayed as a tree structure. For single-chip designs there may be one main module, with other modules included by the main module, similar to the main() subroutine in C++ programs. Design constraints are specified in modules, which include pin configuration and mapping. The Processes hierarchy describes the operations that the ISE will perform on the currently active module. The hierarchy includes compilation functions, their dependency functions, and other utilities. The window also denotes issues or errors that arise with each function. The Transcript window provides status of currently running operations, and informs engineers on design issues. Such issues may be filtered to show Warnings, Errors, or both. Simulation System-level testing may be performed with ISIM or the ModelSim logic simulator, and such test programs must also be written in HDL languages. Test bench programs may include simulated input signal waveforms, or monitors which observe and verify the outputs of the device under test. 
ModelSim or ISIM may be used to perform the following types of simulations: Logical verification, to ensure the module produces expected results Behavioural verification, to verify logical and timing issues Post-place & route simulation, to verify behaviour after placement of the module within the reconfigurable logic of the FPGA Synthesis Xilinx's patented algorithms for synthesis allow designs to run up to 30% faster than competing programs, and allows greater logic density which reduces project time and costs. Also, due to the increasing complexity of FPGA fabric, including memory blocks and I/O blocks, more complex synthesis algorithms were developed that separate unrelated modules into slices, reducing post-placement errors. IP Cores are offered by Xilinx and other third-party vendors, to implement system-level functions such as digital signal processing (DSP), bus interfaces, networking protocols, image processing, embedded processors, and peripherals. Xilinx has been instrumental in shifting designs from ASIC-based implementation to FPGA-based implementation. Editions The Subscription Edition is the licensed version of Xilinx ISE, and a free trial version is available for download. The Web Edition is the free version of Xilinx ISE, that can be downloaded and used for no charge. It provides synthesis and programming for a limited number of Xilinx devices. In particular, devices with a large number of I/O pins and large gate matrices are disabled. The low-cost Spartan family of FPGAs is fully supported by this edition, as well as the family of CPLDs, meaning small developers and educational institutions have no overheads from the cost of development software. License registration is required to use the Web Edition of Xilinx ISE, which is free and can be renewed an unlimited number of times. Device Support Hardware Support ISE supports Xilinx's 7-series (except of Spartan-7) and older devices including CPLDs (XC9500 and CoolRunner). For development targeting newer Xilinx's devices (UltraScale and UltraScale+ series), the Xilinx Vivado has to be used. Operating System Support Xilinx officially supports Microsoft Windows, Red Hat Enterprise 4, 5, & 6 Workstations (32 & 64 bits) and SUSE Linux Enterprise 11 (32 & 64 bits). Certain other Linux distributions can run Xilinx ISE WebPack with some modifications or configurations, including Gentoo Linux, Arch Linux, FreeBSD and Fedora. See also Xilinx Vivado Intel Quartus Prime ModelSim References External links Xilinx - ISE webpage Xilinx - Official website Computer-aided design software Electronic design automation software Digital electronics
47943370
https://en.wikipedia.org/wiki/Oscar%20H.%20Ibarra
Oscar H. Ibarra
Oscar H. Ibarra (born September 29, 1941 in Negros Occidental, Philippines) is a Filipino-American theoretical computer scientist, prominent for work in automata theory, formal languages, design and analysis of algorithms and computational complexity theory. He was a Professor of the Department of Computer Science at the University of California-Santa Barbara until his retirement in 2011. Previously, he was on the faculties of UC Berkeley (1967-1969) and the University of Minnesota (1969-1990). He is currently a Distinguished Professor Emeritus at UCSB. Life and career Ibarra received a BS degree in Electrical Engineering from the University of the Philippines and MS and PhD degrees, also in Electrical Engineering, from the University of California, Berkeley in 1965 and 1967, respectively. Ibarra was awarded a John Simon Guggenheim Memorial Foundation Fellowship in 1984. In 1993, he was elected a Fellow of the American Association for the Advancement of Science. He is a Fellow of the Institute of Electrical and Electronics Engineers and the Association for Computing Machinery. In 2001, he received the IEEE Computer Society's Harry H. Goode Memorial Award. He was elected member of the European Academy of Sciences (EAS) in 2003. He was awarded the Blaise Pascal Medal in Computer Science from EAS in 2007, and in 2008 he was elected a Foreign Member of Academia Europaea in the Informatics Section. In 2008, he was awarded a Distinguished Visiting Fellowship from the UK Royal Academy of Engineering. In July 2015, during the 40th anniversary celebration of the journal, Theoretical Computer Science, Ibarra was named the most prolific author in its 40-year history. He was listed in the Institute for Scientific Information (ISI) database of Highly Cited Researchers in Computer Science in 2003 and in the Computer Science Bibliography DBLP. References Selected bibliography Ibarra, O. H., "A Note Concerning Nondeterministic Tape Complexities", J. ACM 19(4): 608-612 (1972). Ibarra, O. H., "On Two-way Multihead Automata", J. Comput. Syst. Sci. 7(1): 28-36 (1973). Ibarra, O. H. and Chul E. Kim, "Fast Approximation Algorithms for the Knapsack and Sum of Subset Problems", J. ACM 22(4): 463-468 (1975). Ibarra, O. H., "Reversal-Bounded Multicounter Machines and Their Decision Problems", J. ACM 25(1): 116-133 (1978). Ibarra, O. H., "Some Computational Issues in Membrane Computing", MFCS 2005: 39–5. External links Theoretical computer scientists Living people University of California, Santa Barbara faculty University of the Philippines alumni University of California, Berkeley alumni 1941 births Filipino emigrants to the United States People from Negros Occidental
49764386
https://en.wikipedia.org/wiki/Subgraph%20%28operating%20system%29
Subgraph (operating system)
Subgraph OS is a Linux distribution designed to be resistant to surveillance and interference by sophisticated adversaries over the Internet. It is based on Debian. The operating system has been mentioned by Edward Snowden as showing future potential. Subgraph OS is designed to be locked down and with features which aim to reduce the attack surface of the operating system, and increase the difficulty required to carry out certain classes of attack. This is accomplished through system hardening and a proactive, ongoing focus on security and attack resistance. Subgraph OS also places emphasis on ensuring the integrity of installed software packages through deterministic compilation. Features Some of Subgraph OS's notable features include: Linux kernel hardened with the grsecurity and PaX patchset. Linux namespaces and xpra for application containment. Mandatory file system encryption during installation, using LUKS. Resistance to cold boot attacks. Configurable firewall rules to automatically ensure that network connections for installed applications are made using the Tor anonymity network. Default settings ensure that each application's communication is transmitted via an independent circuit on the network. GNOME Shell integration for the OZ virtualization client, which runs apps inside a secure Linux container, targeting ease-of-use by everyday users. Security The security of Subgraph OS (which uses sandbox containers) has been questioned in comparison to Qubes (which uses virtualization), another security focused operating system. An attacker can trick a Subgraph user to run a malicious unsandboxed script via the OS's default Nautilus file manager or in the terminal. It is also possible to run malicious code containing .desktop files (which are used to launch applications). Malware can also bypass Subgraph OS's application firewall. Also, by design, Subgraph does not isolate the network stack like Qubes OS. See also Tails (operating system) Qubes OS References External links Debian-based distributions Operating system security Linux distributions
61828066
https://en.wikipedia.org/wiki/Kori%20Inkpen
Kori Inkpen
Kori Marie Inkpen (also published as Kori Inkpen Quinn) is a Canadian computer scientist specializing in human-computer interaction at Microsoft Research. A consistent theme of her research has been the interaction of children with computers. Inkpen is a 1992 graduate of Dalhousie University, and completed her Ph.D. in 1997 at the University of British Columbia (UBC). At UBC, she credits Maria Klawe and a project led by Klawe on educational electronic games for sparking her interest in human-computer interaction and encouraging her to continue in academic computer science. Her dissertation, Adapting the Human-Computer Interface to Support Collaborative Learning Environments, was jointly supervised by Klawe and Kellogg S. Booth. After postdoctoral research at the University of Washington, she was a faculty member at Simon Fraser University from 1998 to 2001 and at Dalhousie University from 2001 to 2007 before joining Microsoft in 2008. In 2017 the Canadian Human-Computer Communications Society gave her their CHCCS/SCDHM Achievement Award "for her many contributions to the field of human-computer interaction, especially her work on collaboration technologies". References External links A Conversation with the CHCCS 2017 Achievement Award Winner: Kori Inkpen, Microsoft Research Year of birth missing (living people) Living people Canadian women computer scientists Canadian computer scientists Human–computer interaction researchers Dalhousie University alumni University of British Columbia alumni Simon Fraser University faculty Dalhousie University faculty Microsoft Research people
266699
https://en.wikipedia.org/wiki/FileMaker
FileMaker
FileMaker is a cross-platform relational database application from Claris International, a subsidiary of Apple Inc. It integrates a database engine with a graphical user interface (GUI) and security features, allowing users to modify the database by dragging new elements into layouts, screens, or forms. It is available in desktop, server, iOS and web-delivery configurations. FileMaker Pro, the desktop app, evolved from a DOS application, originally called simply FileMaker, but was then developed primarily for the Apple Macintosh and released in April 1985. It was rebranded as FileMaker Pro in 1990. Since 1992 it has been available for Microsoft Windows and for the classic Mac OS and macOS, and can be used in a cross-platform environment. FileMaker Go, the mobile app, was released for iOS devices in July 2010. FileMaker Server allows centralized hosting of apps which can be used by clients running the desktop or mobile apps. It is also available hosted by Claris, called FileMaker Cloud. History FileMaker began as an MS-DOS-based computer program named Nutshell – developed by Nashoba Systems of Concord, Massachusetts, in the early 1980s. Nutshell was distributed by Leading Edge, an electronics marketer that had recently started selling IBM PC-compatible computers. With the introduction of the Macintosh, Nashoba combined the basic data engine with a new forms-based graphical user interface (GUI). Leading Edge was not interested in newer versions, preferring to remain a DOS-only vendor, and kept the Nutshell name. Nashoba found another distributor, Forethought Inc., and introduced the program on the Macintosh platform as FileMaker in April 1985. When Apple introduced the Macintosh Plus in 1986 the next version of FileMaker was named FileMaker Plus to reflect the new model's name. Forethought was purchased by Microsoft, which was then introducing their PowerPoint product that became part of Microsoft Office. Microsoft had introduced its own database application, Microsoft File, shortly before FileMaker, but was outsold by FileMaker and therefore Microsoft File was discontinued. Microsoft negotiated with Nashoba for the right to publish FileMaker, but Nashoba decided to self-publish the next version, FileMaker 4. Purchase by Claris Shortly thereafter, Apple Computer formed Claris, a wholly owned subsidiary, to market software. Claris purchased Nashoba to round out its software suite. By then, Leading Edge and Nutshell had faded from the marketplace because of competition from other DOS- and later Windows-based database products. FileMaker, however, continued to succeed on the Macintosh platform. Claris changed the product's name to FileMaker II to conform to its naming scheme for other products, such as MacWrite II, but the product changed little from the last Nashoba version. Several minor versions followed. In 1990, it was released as FileMaker Pro 1.0. And in September 1992, Claris released a cross-platform version for both the Mac and Windows; except for a few platform-specific functions, the program's features and user interface were the same. Up to this point FileMaker had no real relational capabilities; it was limited to automatically looking up and importing values from other files. It only had the ability to save a state—a filter and a sort, and a layout for the data. Version 3.0, released around 1995, introduced new relational and scripting features. By 1995, FileMaker Pro was the only strong-selling product in Claris's lineup. 
In 1998, Apple moved development of some of the other Claris products in-house, dropped most of the rest, and changed Claris's name to FileMaker, Inc., to concentrate on that product. In 2020, FileMaker International Inc. changed its name (back) to Claris International Inc. and announced Claris Connect workflow software. Later updates Version 4.0, introduced in 1997, added a plug-in architecture much like that of Adobe Photoshop, which enabled third-party developers to add features to FileMaker. A bundled plug-in, the Web Companion, allowed the database to act as a web server. Other plug-ins added features to the interface and enabled FileMaker to serve as an FTP client, perform external file operations, and send messages to remote FileMaker files over the Internet or an intranet. Version 5 introduced a new file format (file extension ) Version 7, released in 2004, introduced a new file format (file extension ) supporting file sizes up to 8 terabytes (an increase from the 2 gigabytes allowed in previous versions). Individual fields could hold up to 4 gigabytes of binary data (container fields) or 2 gigabytes of 2-byte Unicode text per record (up from 64 kilobytes in previous versions). FileMaker's relational model was enriched, offering multiple tables per file and a graphical relationship editor that displayed and allowed manipulation of related tables in a manner that resembled the entity-relationship diagram format. Accompanying these foundational changes, FileMaker Inc. also introduced a developer certification program. In 2005 FileMaker Inc. announced the FileMaker 8 product family, which offered the developer an expanded feature set. These included a tabbed interface, script variables, tooltips, enhanced debugging, custom menus, and the ability to copy and paste entire tables and field definitions, scripts, and script steps within and between files. Version 8.5, released in 2006, added an integrated web viewer (with the ability to view such things as shipment tracking information from FedEx and Wikipedia entries) and named layout objects. FileMaker 9, released on July 10, 2007, introduced a quick-start screen, conditional formatting, fluid layout auto-resizing, hyperlinked pointers into databases, and external SQL links. FileMaker 10 was released on January 5, 2009, before that year's Macworld Conference & Expo, and offered scripts that can be triggered by user actions and a redesigned user interface similar to that of Mac OS X Leopard (10.5) applications. FileMaker 11, released on March 9, 2010, introduced charting, which was further streamlined in FileMaker 12, released April 4, 2012. That version also added themes, more database templates (so-called starter solutions) and simplified creation of iOS databases. FileMaker Go 11 (July 20, 2010) and FileMaker Go 12 for iPhone and iPad (April 4, 2012) allow only the creation, modification, and deletion of records on these handheld devices. Design and schema changes must be made within the full FileMaker Pro application. FileMaker Go 12 offers multitasking, improved media integration, export of data to multiple formats and enhanced container fields. FileMaker 13, released after the launches of iOS 7 and OS X Mavericks (10.9), first shipped in December 2013. The client and server products were enhanced to support many mobile and web methods of data access. FileMaker Go 13, the parallel iPad–iPhone product, has now become a single client for both devices, and the Server Admin tool now runs in HTML5, no longer requiring a Java app. 
FileMaker 14 platform released on May 15, 2015. This included FileMaker Pro 14, FileMaker Pro 14 Advanced, FileMaker Server 14 and FileMaker Go 14. This was followed by version 15 in May 2016 and version 16 in May 2017; both including equivalent Pro, Pro Advanced, Server and Go versions. In late 2016, FileMaker began annually publicizing a software roadmap of future features they are working on as well as identifying features they are moving away from or may deprecate in the near future. FileMaker Inc. had always had a hard time describing what FileMaker software is because it is more than just a database; it includes the user interface, security, rapid application development tools, etc. FileMaker Inc. initiated a new marketing program at their annual developers conference in August 2018 to address its poor description categories: "Workplace Innovation Platform". FileMaker Cloud In 2016, FileMaker Cloud was introduced, including a Linux server (CentOS), which was offered exclusively through the Amazon Marketplace. In November 2019, FileMaker Cloud was reintroduced as a software as a service product offered directly from Claris for FileMaker Pro 18.0.3 using FileMaker Server Cloud 2.18 service on Amazon servers, but managed by Claris instead of through the Amazon Marketplace, and making use of the new FileMaker ID authentication. Linux and Docker In October 2020, Claris released a Linux version of FileMaker Server, first on CentOS (19.1) then on Ubuntu (19.2). A third-party hosting service, fmcloud.fm, uses this version to host FileMaker databases on a Docker architecture. Version history * (*) indicates both FileMaker Pro/FileMaker Pro Advanced (Developer Edition in v4-6) or FileMaker Server/FileMaker Server Advanced FileMaker files are compatible between Mac and Windows. File type extensions are: since FileMaker Pro 2.0 since FileMaker Pro 3.0 since FileMaker Pro 5.0 (including 5, 5.5, 6.0) since FileMaker Pro 7.0 (including 7, 8, 8.5, 9, 10, 11 and FileMaker Go 1.0) since FileMaker Pro 12 (including 12, 13, 14, 15, 16, 17, 18, 19) Self-running applications (runtime, kiosk mode) are platform-specific only. Internationalization and localization FileMaker is available in worldwide English, Simplified Chinese, Dutch, French, German, Italian, Japanese, Korean, Brazilian Portuguese, Spanish, and Swedish. There are also specific versions of FileMaker for users of Central European, Indian and Middle Eastern languages. These versions offer spellchecking, data entry, sorting and printing options for languages of the respective region. They also contain localized templates and localized instant web publishing. The Central European version FileMaker includes English, Russian, Polish, Czech and Turkish interfaces. There are customized templates for Russian, Polish, Czech, Turkish. In addition Russian, Greek, Estonian, Lithuanian, Latvian, Serbian, Bulgarian and Hungarian are supported to varying degrees. The version intended for Southeast Asian languages has only an English user interface, but supports Indic-language data entry, sorting and indexing in Hindi, Marathi, Bengali, Panjabi, Gujarati, Tamil, Telugu, Kannada and Malayalam. Similarly, the Middle Eastern version has only English and French user interfaces, but with its option to change the text direction to right-to-left, it does support Arabic and Hebrew data entry. Scripting FileMaker Pro and FileMaker Pro Advanced include scripting capabilities and a variety of built-in functions for automation of common tasks and complex calculations. 
Numerous steps are available for navigation, conditional execution of script steps, editing records, and other utilities. FileMaker Pro Advanced provides a script debugger which allows the developer to set breakpoints, monitor data values and step through script lines. FileMaker 13 introduced scripted access to more detailed container field document metadata.

Dynamic Markup Language

The FileMaker Dynamic Markup Language, or FDML, was a markup language introduced in 1998 and used in earlier versions of FileMaker. FDML is also often referred to as the Claris Dynamic Markup Language, or CDML, after Claris, the company's former name. FDML was an extension of HTML that used special tags, such as [FMP-Record][/FMP-Record], to display FileMaker data on Web pages. FileMaker officially dropped support for FDML in 2004.

SQL and ODBC support

Since version 9, FileMaker has included the ability to connect to a number of SQL databases without resorting to writing SQL, including MySQL, SQL Server, and Oracle. This requires installation of an ODBC driver for the SQL database (in many cases a separately licensed third-party driver per client). Through External SQL Sources (ESS), SQL databases can be used as data sources in FileMaker's relationship graph, allowing the developer to create new layouts based on the SQL database; create, edit, and delete SQL records via FileMaker layouts and functions; and reference SQL fields in FileMaker calculations and script steps. FileMaker is a cross-platform relational database application, and versions from FileMaker Pro 5.5 onwards also have an ODBC interface. FileMaker 12 introduced a new function, ExecuteSQL, which allows the user to perform an SQL query against the FileMaker database to retrieve data, but does not allow data modification or deletion, or schema changes. One major flaw with ODBC support is the lack of one-to-one field-type mapping from FileMaker to external industry-standard databases. Further issues are caused by the fact that FileMaker is not "strict" in its data types: a FileMaker field can be marked as "numeric" and will return this mapping to an ODBC driver, yet FileMaker allows non-numeric characters to be stored in it unless the field is specifically marked as strictly "numeric". Through a third party, Actual Technologies, FileMaker 15 and later also support ODBC connectivity to IBM i 7.3 (AS/400), IBM Db2 11.1, and PostgreSQL 9.6.12. Using the Actual adapter, these ODBC connections can also serve as ESS connections and be used as sources in the relationship graph.

Integration

FileMaker 16 added integration features: cURL and JSON functions and a REST-based FileMaker Data API. A Tableau Web Data Connector is offered to visualize FileMaker data. In FileMaker 16 the REST-based API was offered as a free trial that expired on September 27, 2018; FileMaker 17 offers a permanent REST-based Data API. Standard licensing includes 2 GB of outbound data per user per month. Container data does not count towards this limit, and inbound Data API data transfer is unlimited. FileMaker 19 for Linux and FileMaker Cloud provide an OData gateway, allowing JSON and XML (Atom) output.
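As a rough illustration of the ODBC interface described above, the Python sketch below queries a hosted FileMaker file through the pyodbc library. The DSN name, credentials, and table and field names are placeholders, and the exact setup depends on the FileMaker ODBC client driver installed on the machine:

```python
import pyodbc

# DSN, credentials, and the table/field names below are placeholders.
conn = pyodbc.connect("DSN=FileMakerDSN;UID=admin;PWD=secret")
cursor = conn.cursor()

# FileMaker tables and fields are exposed to ODBC under their FileMaker names.
cursor.execute("SELECT Name, City FROM Customers WHERE City = ?", "Berlin")
for row in cursor.fetchall():
    print(row.Name, row.City)

conn.close()
```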
See also

Bento, a simplified personal database application from FileMaker Inc. (discontinued mid-2013)

References

External links

Content management systems Collaborative software Document management systems Database engines Proprietary database management systems Desktop database application development tools IOS software MacOS database-related software Classic Mac OS software Windows database-related software Cross-platform software Mobile software
57070751
https://en.wikipedia.org/wiki/Zealot%20Campaign%20%28Malware%29
Zealot Campaign (Malware)
The Zealot Campaign is a cryptocurrency-mining malware campaign built from a series of stolen National Security Agency (NSA) exploits released by the Shadow Brokers group; it targets both Windows and Linux machines to mine cryptocurrency, specifically Monero. Discovered in December 2017, the Zealot suite includes the EternalBlue and EternalSynergy exploits and an Apache Struts Jakarta Multipart Parser exploit. The other notable exploit in the Zealot suite targets the DotNetNuke (DNN) content management system so that the attacker can install Monero-mining software. An estimated US$8,500 of Monero is thought to have been mined on a single targeted computer. The campaign was discovered and studied extensively by F5 Networks in December 2017.

How it works

With many of the Zealot exploits having been leaked from the NSA, the malware suite is widely described as having "an unusually high obfuscated payload", meaning that the attack works on multiple levels against vulnerable server systems and can cause a large amount of damage. The term "Zealot" was derived from the StarCraft series, where it names a type of warrior.

Introduction

This multi-layered attack begins with two HTTP requests used to scan for and target vulnerable systems on the network. Similar attacks in the past targeted either Windows or Linux-based systems; Zealot stands out by being prepared for both, with its version of the Apache Struts exploit alongside the DNN exploit.

Post-exploitation stage

After the operating system (OS) has been identified via JavaScript, the malware loads an OS-specific exploit chain:

Linux/macOS

If the targeted system runs Linux or macOS, the Struts payload installs a Python agent for the post-exploitation stage. After checking whether the target system has already been infected, the agent downloads cryptocurrency-mining software, often referred to as a "mule". From there, it decodes embedded, obfuscated Python code for execution. Unlike other botnet malware, Zealot's Command & Control (C&C) server expects specific User-Agent and Cookie headers, so anyone other than the malware receives a different response. Because Zealot encrypts its traffic with an RC4 cipher, most network inspection and security software could see that the malware was on the network but were unable to scan it.

Windows

If the targeted OS is Windows, the Struts payload downloads an encoded PowerShell interpreter. Once it is decoded twice, the program runs another obfuscated script, which in turn directs the device to a URL to download more files. That file, the PowerShell script "scv.ps1", is heavily obfuscated and allows the attacker to deploy mining software on the targeted device. The deployed software can also use Dynamic-link Library (DLL) mining malware, delivered with the reflective DLL injection technique so that the malware attaches to the PowerShell process itself and remains undetected.

Scanning for a firewall

Prior to moving on to the next stage, the program also checks whether a firewall is active. If so, it pipes embedded, base64-encoded Python code in an attempt to circumvent the firewall. The script also checks for the "Little Snitch" firewall application and will attempt to terminate it if it is active.

Infecting internal networks

From the post-exploitation stage, the program scans the target system for Python 2.7 or higher; if it is not found, it downloads it. It then downloads a Python module (probe.py) to propagate across the network; the module itself is heavily obfuscated, base64-encoded and zipped up to 20 times.
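As a rough, purely illustrative sketch of how an analyst might peel this kind of layered obfuscation (this is not code from the campaign; the helper name, the sample file name, and the assumption that every layer is either plain base64 or a single-member zip archive are all placeholders):

```python
import base64
import io
import zipfile

def peel_layers(blob: bytes, max_layers: int = 25) -> bytes:
    """Iteratively strip base64 and zip layers from an obfuscated payload.

    Illustrative only: real samples differ in how the layers are nested,
    so an analyst would adapt these checks to the sample at hand.
    """
    for _ in range(max_layers):
        try:
            # A base64 layer: strict decoding fails fast on anything else.
            blob = base64.b64decode(blob, validate=True)
            continue
        except Exception:
            pass
        if blob[:2] == b"PK":
            # A zip layer: assume the next stage is the first archive member.
            with zipfile.ZipFile(io.BytesIO(blob)) as zf:
                blob = zf.read(zf.namelist()[0])
            continue
        break  # neither layer applies; assume this is the inner script
    return blob

# Hypothetical usage on a captured sample file:
# inner_script = peel_layers(open("probe_sample.bin", "rb").read())
```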
The downloaded zip file may carry one of several names, all of which are derived from the StarCraft game. The files included are listed below:

Zealot.py – main script executing the EternalBlue and EternalSynergy exploits, see below.
A0.py – EternalSynergy exploit with built-in shellcode for Windows 7
A1.py – EternalBlue exploit for Windows 7, receives shellcode as an argument
A2.py – EternalBlue exploit for Windows 8, receives shellcode as an argument
M.py – SMB protocol wrapper
Raven64.exe – scans the internal network via port 445 and invokes the zealot.py files

After all these files have run successfully, the mining software is introduced.

Mining

Commonly known as the "mule" malware, this PowerShell script is named "minerd_n.ps2" within the compressed files that are downloaded and executed via the EternalSynergy exploit. The software then uses the target system's hardware to mine cryptocurrency. This mining software has reportedly earned its operators close to $8,500 from one victim, though the total amount of Monero mined remains a matter of speculation among researchers.

Exploits involved

EternalBlue

First used at scale in the WannaCry ransomware attack in 2017, this exploit was used in the Zealot campaign to deliver the mining software.

EternalSynergy

While not much is known about this exploit, it was used in cooperation with EternalBlue and other exploits in the Zealot campaign and elsewhere. Most notably, EternalSynergy has been linked to the Equifax hack, the WannaCry ransomware, and cryptocurrency-mining campaigns.

DNN

DNN (formerly DotNetNuke) is an ASP.NET-based content management system; the exploit sends a serialized object via a vulnerable DNNPersonalization cookie during the HTTP request stage. Using an "ObjectDataProvider" and an "ObjectStateFormatter", the attacker embeds another object that invokes a shell on the victim's system. This shell then delivers the same script that is delivered by the Apache Struts exploit. The DNN exploit acts as a secondary backup for the attackers should the Apache Struts exploit fail.

Apache Struts Jakarta multipart parser

Used to deliver a PowerShell script that initiates the attack, this exploit accounts for one of the two HTTP requests sent during the initial stage of infection. Among the first of the Zealot exploits to be discovered, the Jakarta Parser exploit allowed hackers to exploit a "zero-day" flaw in the software to break into the financial firm Equifax in March 2017. This was the most notable and public of the exploits, as it was used in a widely publicized case, and it continued to be used until December 2017, when the flaw was patched.

Uses

The Lazarus Group

The Lazarus Group used a spear-phishing method, commonly known as Business Email Compromise (BEC), to steal cryptocurrency from unsuspecting employees. Lazarus primarily targeted employees of cryptocurrency financial organizations; the attack was delivered via a Word document purporting to come from a legitimate-looking European company. When the document was opened, the embedded trojan would load onto the victim's computer and begin to steal credentials and install other malware. While the specific malware is still unknown, it has ties to the Zealot malware.
Equifax Data Breach (2017)

Among the several exploits involved in the March 2017 Equifax data breach, the Jakarta Parser, EternalBlue, and EternalSynergy were heavily involved in attacking the servers. Instead of being used to mine cryptocurrency, the software was used to mine the data of over 130 million Equifax customers.

References

Malware Cryptocurrencies
12690403
https://en.wikipedia.org/wiki/James%20H.%20Pomerene
James H. Pomerene
James Herbert Pomerene (June 22, 1920 – December 7, 2008) was an electrical engineer and computer pioneer. Biography Pomerene was born June 22, 1920 in Yonkers, New York. His father was Joel Pomerene and mother was Elsie Bower. He received the BS degree in electrical engineering from Northwestern University in 1942. In 1945 he married Edythe Schwenn and had three children. In 1946, he joined the Electronic Computer Project at the Institute for Advanced Study (IAS) in Princeton, New Jersey under the leadership of John von Neumann. The project built a parallel stored program computer called the IAS machine that was the prototype for a number of machines such as the MANIAC I, ORACLE, and ILLIAC series. Pomerene designed and implemented the adder portion of the arithmetic unit. Collaborating with engineers such as Bruce Gilchrist and Y.K. Wong, they invented a fast adder which incorporated a speed up technique for asynchronous adders reducing the time for additive carry-overs to propagate. This design was actually later incorporated in one commercial computer, the Philco TRANSAC S-2000, introduced in 1957, the first commercial transistorized computer. Pomerene became chief engineer on the IAS computer project from 1951 to 1956. In Summer 1956, Pomerene joined the IBM Corporation in Poughkeepsie, where he and several others started the development of various electronic computer systems such as the IBM 7030 and Harvest computers. He was appointed an IBM Fellow in 1976. He held 37 patents when he retired from IBM in 1993. Pomerene was a Life Fellow of the IEEE and a member of the National Academy of Engineering. He received the IEEE Edison Medal in 1993, and the Eckert-Mauchly Award in 2006. He died December 7, 2008 in Chappaqua, New York. Selected papers Gilchrist, B.; Pomerene, J.; Wong, S.Y., "Fast carry logic for digital computers" IRE Transactions on Electronic Computers, EC-4 (Dec.1955), pp. 133–136. Esterin, B.; Gilchrist, B.; Pomerene, J. H., "A Note on High Speed Digital Multiplication" IRE Transactions on Electronic Computers, vol. EC-5, p. 140 (1956). References Further reading Gilchrist, Bruce, "In Memoriam, James Pomerene (1920 - 2008)", New Castle Now, February 6, 2009. 1920 births 2008 deaths American electrical engineers American computer scientists Fellow Members of the IEEE Members of the United States National Academy of Engineering IEEE Edison Medal recipients IBM employees IBM Fellows People from Yonkers, New York Scientists from New York (state) Northwestern University alumni
353880
https://en.wikipedia.org/wiki/Computer%20art
Computer art
Computer art is any art in which computers play a role in production or display of the artwork. Such art can be an image, sound, animation, video, CD-ROM, DVD-ROM, video game, website, algorithm, performance or gallery installation. Many traditional disciplines are now integrating digital technologies and, as a result, the lines between traditional works of art and new media works created using computers has been blurred. For instance, an artist may combine traditional painting with algorithm art and other digital techniques. As a result, defining computer art by its end product can thus be difficult. Computer art is bound to change over time since changes in technology and software directly affect what is possible. The term "computer art" On the title page of the magazine Computers and Automation, January 1963, Edmund Berkeley published a picture by Efraim Arazi from 1962, coining for it the term "computer art." This picture inspired him to initiate the first Computer Art Contest in 1963. The annual contest was a key point in the development of computer art up to the year 1973. History The precursor of computer art dates back to 1956–1958, with the generation of what is probably the first image of a human being on a computer screen, a (George Petty-inspired) pin-up girl at a SAGE air defense installation. Desmond Paul Henry invented the Henry Drawing Machine in 1960; his work was shown at the Reid Gallery in London in 1962, after his machine-generated art won him the privilege of a one-man exhibition. By the mid-1960s, most individuals involved in the creation of computer art were in fact engineers and scientists because they had access to the only computing resources available at university scientific research labs. Many artists tentatively began to explore the emerging computing technology for use as a creative tool. In the summer of 1962, A. Michael Noll programmed a digital computer at Bell Telephone Laboratories in Murray Hill, New Jersey to generate visual patterns solely for artistic purposes. His later computer-generated patterns simulated paintings by Piet Mondrian and Bridget Riley and became classics. Noll also used the patterns to investigate aesthetic preferences in the mid-1960s. The two early exhibitions of computer art were held in 1965: Generative Computergrafik, February 1965, at the Technische Hochschule in Stuttgart, Germany, and Computer-Generated Pictures, April 1965, at the Howard Wise Gallery in New York. The Stuttgart exhibit featured work by Georg Nees; the New York exhibit featured works by Bela Julesz and A. Michael Noll and was reviewed as art by The New York Times. A third exhibition was put up in November 1965 at Galerie Wendelin Niedlich in Stuttgart, Germany, showing works by Frieder Nake and Georg Nees. Analogue computer art by Maughan Mason along with digital computer art by Noll were exhibited at the AFIPS Fall Joint Computer Conference in Las Vegas toward the end of 1965. In 1968, the Institute of Contemporary Arts (ICA) in London hosted one of the most influential early exhibitions of computer art called Cybernetic Serendipity. The exhibition, curated by Jasia Reichardt, included many of those often regarded as the first digital artists, Nam June Paik, Frieder Nake, Leslie Mezei, Georg Nees, A. Michael Noll, John Whitney, and Charles Csuri. One year later, the Computer Arts Society was founded, also in London. 
At the time of the opening of Cybernetic Serendipity, in August 1968, a symposium was held in Zagreb, Yugoslavia, under the title "Computers and visual research". It took up the European artists' movement of New Tendencies, which had led to three exhibitions (in 1961, 1963, and 1965) in Zagreb of concrete, kinetic, and constructive art as well as op art and conceptual art. New Tendencies changed its name to "Tendencies" and continued with more symposia, exhibitions, a competition, and an international journal (bit international) until 1973. Katherine Nash and Richard Williams published Computer Program for Artists: ART 1 in 1970. Xerox Corporation's Palo Alto Research Center (PARC) designed the first Graphical User Interface (GUI) in the 1970s. The first Macintosh computer was released in 1984; since then the GUI has become popular, and many graphic designers quickly accepted its capacity as a creative tool. Andy Warhol created digital art using a Commodore Amiga when the computer was publicly introduced at the Lincoln Center, New York, in July 1985. An image of Debbie Harry was captured in monochrome from a video camera and digitized into a graphics program called ProPaint. Warhol manipulated the image, adding colour by using flood fills.

Output devices

Formerly, technology restricted output and print results: early machines used pen-and-ink plotters to produce basic hard copy. In the early 1960s, the Stromberg Carlson SC-4020 microfilm printer was used at Bell Telephone Laboratories as a plotter to produce digital computer art and animation on 35-mm microfilm. Still images were drawn on the face plate of the cathode ray tube and automatically photographed. A series of still images was drawn to create a computer-animated movie, early on onto a roll of 35-mm film and later onto 16-mm film, after a 16-mm camera was added to the SC-4020 printer. In the 1970s, the dot matrix printer (which was much like a typewriter) was used to reproduce varied fonts and arbitrary graphics. The first animations were created by plotting all still frames sequentially on a stack of paper, with motion transferred to 16-mm film for projection. During the 1970s and 1980s, dot matrix printers were used to produce most visual output, while microfilm plotters were used for most early animation. The inkjet printer, invented in 1976, spread with the increasing use of personal computers and is now the cheapest and most versatile option for everyday digital color output. Raster Image Processing (RIP) is typically built into the printer or supplied as a software package for the computer; it is required to achieve the highest-quality output. Basic inkjet devices do not feature RIP; instead, they rely on graphic software to rasterize images. The laser printer, though more expensive than the inkjet, is another affordable output device available today.

Graphic software

Adobe Systems, founded in 1982, developed the PostScript language and digital fonts, making drawing, painting, and image manipulation software popular. Adobe Illustrator, a vector drawing program based on the Bézier curve introduced in 1987, and Adobe Photoshop, written by brothers Thomas and John Knoll in 1990, were developed for use on Macintosh computers and compiled for DOS/Windows platforms by 1993.

Robot painting

A robot painting is an artwork painted by a robot. Raymond Auger's Painting Machine, made in 1962, was one of the first robotic painters, as was AARON, an artificial intelligence/artist developed by Harold Cohen in the mid-1970s. Joseph Nechvatal began making large computer-robotic paintings in 1986. Artist Ken Goldberg created an 11' x 11' painting machine in 1992, and German artist Matthias Groebel also built his own robotic painting machine in the early 1990s.

Neural style transfer

Non-photorealistic rendering (using computers to automatically transform images into stylized art) has been a subject of research since the 1990s. Around 2015, neural style transfer, which uses convolutional neural networks to transfer the style of an artwork onto a photograph or other target image, became feasible. One method of style transfer involves using a framework such as VGG or ResNet to break the artwork style down into statistics about visual features; the target photograph is subsequently modified to match those statistics. Notable applications include Prisma, Facebook's Caffe2Go style transfer, MIT's Nightmare Machine, and DeepArt.
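As a minimal sketch of the "statistics about visual features" mentioned above, the following Python/PyTorch snippet computes Gram matrices of VGG-19 feature maps, the kind of style statistic popularised by Gatys et al.; the chosen layers, normalisation, and image path are common but illustrative choices rather than a fixed recipe, and a full transfer would additionally optimise the target photograph toward these statistics:

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Pre-trained VGG-19 feature extractor (requires a recent torchvision).
vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(256), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def gram_matrix(feat: torch.Tensor) -> torch.Tensor:
    """Channel-by-channel correlations of feature maps: one common style statistic."""
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def style_statistics(image_path: str, layers=(0, 5, 10, 19, 28)):
    """Collect Gram matrices from a few VGG layers for one image (path is a placeholder)."""
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    stats = []
    with torch.no_grad():
        for i, layer in enumerate(vgg):
            x = layer(x)
            if i in layers:
                stats.append(gram_matrix(x))
    return stats

# Example (placeholder path):
# style_grams = style_statistics("artwork.jpg")
```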
See also

Algorithm art
ASCII art
Digital painting
Digital art
Fractal art
Generative art
New media art
Software art
Internet art
Systems art
Video game art / Modding
Glitch art
3D printing art
Artificial intelligence art

References

Further reading

Honor Beddard and Douglas Dodds. (2009). Digital Pioneers. London: V&A Publishing.
Timothy Binkley. (1988/89). "The Computer is Not A Medium", Philosophic Exchange. Reprinted in EDB & kunstfag, Rapport Nr. 48, NAVFs EDB-Senter for Humanistisk Forskning. Translated as "L'ordinateur n'est pas un médium", Esthétique des arts médiatiques, Sainte-Foy, Québec: Presses de l'Université du Québec, 1995.
Timothy Binkley. (1997). "The Vitality of Digital Creation", The Journal of Aesthetics and Art Criticism, 55(2), Perspectives on the Arts and Technology, pp. 107–116.
Thomas Dreher: History of Computer Art.
Oliver Grau. Virtual Art: From Illusion to Immersion. MIT Press/Leonardo Books.
Charlie Gere. (2006). White Heat, Cold Logic: Early British Computer Art, co-edited with Paul Brown, Catherine Mason and Nicholas Lambert, MIT Press/Leonardo Books.
Mark Hansen. (2004). New Philosophy for New Media. Cambridge, MA: MIT Press.
Dick Higgins. (1966). Intermedia. Reprinted in Donna De Salvo (ed.), Open Systems Rethinking Art c. 1970, London: Tate Publishing, 2005.
Lieser, Wolf. (2009). Digital Art. Langenscheidt: h.f. ullmann.
Lopes, Dominic McIver. (2009). A Philosophy of Computer Art. London: Routledge.
Lev Manovich. (2002, October). "Ten Key Texts on Digital Art: 1970–2000". Leonardo, Volume 35, Number 5, pp. 567–569.
Frieder Nake. (2009, Spring). "The Semiotic Engine: Notes on the History of Algorithmic Images in Europe". Art Journal, pp. 76–89.
Perry, M.; Margoni, T. (2010). "From music tracks to Google maps: Who owns computer-generated works?", Computer Law and Security Review, Vol. 26, pp. 621–629. https://ssrn.com/abstract=1647584
Edward A. Shanken. (2009). Art and Electronic Media. London: Phaidon.
Grant D. Taylor (2014). When The Machine Made Art: The Troubled History of Computer Art. New York: Bloomsbury.

External links

Art movements Postmodern art Contemporary art Creativity techniques The arts Multimedia
3445364
https://en.wikipedia.org/wiki/PCLSRing
PCLSRing
PCLSRing (also known as Program Counter Lusering) is the term used in the ITS operating system for a consistency principle in the way one process accesses the state of another process.

Problem scenario

This scenario presents particular complications:
Process A makes a time-consuming system call. By "time-consuming", it is meant that the system needs to put Process A into a wait queue and can schedule another process for execution if one is ready-to-run. A common example is an I/O operation.
While Process A is in this wait state, Process B tries to interact with or access Process A, for example, send it a signal.
What should be the visible state of the context of Process A at the time of the access by Process B? In fact, Process A is in the middle of a system call, but ITS enforces the appearance that system calls are not visible to other processes (or even to the same process).

ITS solution: transparent restart

If the system call cannot complete before the access, then it must be restartable. This means that the context is backed up to the point of entry to the system call, while the call arguments are updated to reflect whatever portion of the operation has already been completed. For an I/O operation, this means that the buffer start address must be advanced over the data already transferred, while the length of data to be transferred must be decremented accordingly. After the Process B interaction is complete, Process A can resume execution, and the system call resumes from where it left off.

This technique mirrors in software what the PDP-10 does in hardware. Some PDP-10 instructions like BLT may not run to completion, either due to an interrupt or a page fault. In the course of processing the instruction, the PDP-10 would modify the registers containing arguments to the instruction, so that later the instruction could be run again with new arguments that would complete any remaining work to be done.

PCLSRing applies the same technique to system calls. This requires some additional complexity. For example, memory pages in User space may not be paged out during a system call in ITS. If this were allowed, then when the system call is PCLSRed and tries to update the arguments so the call can be aborted, the page containing the arguments might not be present, and the system call would have to block, preventing the PCLSR from succeeding. To prevent this, ITS doesn't allow memory pages in User space to be paged out after they're first accessed during a system call, and system calls typically start by touching pages in User space they know they will need to access.

Unix solution: restart on request

Contrast this with the approach taken in the UNIX operating system, where there is restartability, but it is not transparent. Instead, an I/O operation returns the number of bytes actually transferred (or the EINTR error if the operation was interrupted before any bytes were actually transferred), and it is up to the application to check this and manage its own resumption of the operation until all the bytes have been transferred. In the philosophy of UNIX, this was given by Richard P. Gabriel as an example of the "worse is better" principle.
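The following is a minimal Python sketch of the restart-on-request convention described above: the caller itself must account for partial transfers and interrupted calls. It is illustrative only, not code from ITS or any particular UNIX program (and recent Python versions already retry EINTR internally per PEP 475, so the explicit handler mainly mirrors the C-level convention):

```python
import os

def read_exactly(fd: int, count: int) -> bytes:
    """Read exactly `count` bytes from fd, resuming after partial reads."""
    chunks = []
    remaining = count
    while remaining > 0:
        try:
            chunk = os.read(fd, remaining)
        except InterruptedError:   # EINTR: interrupted before any data moved,
            continue               # so simply retry the call
        if not chunk:              # end of file reached early
            break
        chunks.append(chunk)       # partial transfer: advance past the bytes
        remaining -= len(chunk)    # already read, then resume the operation
    return b"".join(chunks)
```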
Asynchronous approaches

A different approach is possible. It is apparent in the above that the system call has to be synchronous—that is, the calling process has to wait for the operation to complete. This is not inevitable: in the OpenVMS operating system, all I/O and other time-consuming operations are inherently asynchronous, which means the semantics of the system call is "start the operation, and perform one or more of these notifications when it completes", after which it returns immediately to the caller. There is a standard set of available notifications (such as set an event flag, or deliver an asynchronous system trap), as well as a set of system calls for explicitly suspending the process while waiting for these, which are a) fully restartable in the ITS sense, and b) much smaller in number than the set of actual time-consuming system calls. OpenVMS provides alternative "start operation and wait for completion" synchronous versions of all time-consuming system calls. These are implemented as "perform the actual asynchronous operation" followed by "wait until the operation sets the event flag". Any access to the process context during this time will see it about to (re)enter the wait-for-event-flag call.

Notes

References

Concurrent computing
3430046
https://en.wikipedia.org/wiki/NetBoot
NetBoot
NetBoot was a technology from Apple which enabled Macs with capable firmware (i.e. New World ROM) to boot from a network, rather than a local hard disk or optical disc drive. NetBoot is a derived work from the Bootstrap Protocol (BOOTP), and is similar in concept to the Preboot Execution Environment. The technology was announced as a part of the original version of Mac OS X Server at Macworld Expo on 5 January 1999. NetBoot has continued to be a core systems management technology for Apple, and has been adapted to support modern Mac Intel machines. NetBoot, USB, and FireWire are some of the external volume options for operating system re-install. NetBoot is not supported on newer Macs with T2 security chip or Apple silicon. Process A disk image with a copy of macOS, macOS Server, Mac OS 9, or Mac OS 8 is created using System Image Utility and is stored on a server, typically macOS Server. Clients receive this image across a network using many popular protocols including: HTTPS, AFP, TFTP, NFS, and multicast Apple Software Restore (ASR). Server-side NetBoot image can boot entire machines, although NetBoot is more commonly used for operating system and software deployment, somewhat similar to Norton Ghost. Client machines first request network configuration information through DHCP, then a list of boot images and servers with BSDP and then proceed to download images with protocols mentioned above. Both Intel and PowerPC-based servers can serve images for Intel and PowerPC-based clients. NetInstall NetInstall is a similar feature of macOS Server which utilizes NetBoot and ASR to deliver installation images to network clients (typically on first boot). Like NetBoot, NetInstall images can be created using the System Image Utility. NetInstall performs a function for macOS similar to Windows Deployment Services for Microsoft clients, which depend on the Preboot Execution Environment. Legacy Mac OS 8.5 and Mac OS 9 use only BOOTP/DHCP to get IP information, followed by a TFTP transfer of the Mac OS ROM file. Next, two volumes are mounted via AppleTalk over TCP on which the client disk images reside. All in all, the Classic Mac OS uses three images; a System image which contains the operating system and may contain applications. Next a private image (or scratch disk) is mounted in an overlay over the read-only System image. Finally, an applications image is mounted. This image, however, may be empty. See also Remote Install Mac OS X References External links Analysis of the Use of the Boot Server Discovery Protocol in NetBoot Apple detailed Boot Server Discovery Protocol Documentation Network booting
27761618
https://en.wikipedia.org/wiki/Etihad%20Atheeb%20Telecom
Etihad Atheeb Telecom
Etihad Atheeb Telecommunications Co. (), trading as GO (), is the second fixed-line operator to acquire a license from the Communications and Information Technology Commission (CITC) to provide fixed services including voice and broadband services based in Saudi Arabia. Background Etihad Atheeb Telecom was established in 2008 as a Saudi joint stock company, the company's major shareholders are Atheeb Trading Company (16.1%), Batelco (15.0%), Al-Nahla Trading Company (13.7%), and Traco Trading Ltd Co. (5.8%). Other founding shareholders include Saudi Internet Ltd Co., Atheeb Ltd. Co. for Computer and Communication, and Atheeb Maintenance and Service. Board of directors Management team On June 13, 2010, Etihad Atheeb Telecom board of directors announced the appointment of Raed Kayyal as acting chief executive officer on to replace former chief executive officer Ahmed Abbas Sindi. Founding and IPO Etihad Atheeb Telecom was one of ten applicants to bid for a public fixed services license on March 10, 2007, The Communications and Information Technology Commission (CITC) approved the list of eligible applicants for fixed services licenses on April 15, 2007, which included Etihad Atheeb Telecom and two other operators. Following the license announcement, Etihad Atheeb Telecom acquired a 3.5 GHz frequency spectrum covering thirteen regional divisions across Saudi Arabia valued at over 500 million riyals (US$138 million). On February 25, 2008, the Council of Ministers approved the establishment of Etihad Atheeb Telecom as a Saudi joint stock company with an authorized capital of 1 billion riyals (US$266 million). As a condition of the approval, Etihad Atheeb Telecom would offer at least 25% of its shares in an initial public offering (IPO). In January 2009, Etihad Atheeb Telecom announced an IPO of 30% of the company's shares with a total value of 300 million riyals, at a price of 10 riyals per share. Etihad Atheeb Telecom IPO marked the first IPO in Saudi Arabia since all new offerings had been halted in August 2008 in the midst of unfavorable market conditions as a result of the global economic crisis. The announcement was made by Prince Abdul Aziz bin Ahmed bin Abdul Aziz where he stated: The IPO closed successfully on February 2, 2009, with coverage in excess to (350%), the company's IPO had been significantly oversubscribed, and analysts interpreted the oversubscription as a positive indicator of confidence in the IPO market of Saudi Arabia. Commercial launch On March 18, 2008, Etihad Atheeb Telecom announced a $165 million WiMAX 802.16e infrastructure contract with Motorola, The contract included construction of the first phase of infrastructure with a comprehensive service package including end-to-end delivery of network planning, installation, optimization and support services, Etihad Atheeb Telecom also contracted with each of ZTE and Wipro. Prince Abdul Aziz bin Ahmed bin Abdul Aziz, was elected chairman of the board of directors and the company launched its commercial trademark “GO” () on February 18, 2009, in a large ceremony held in Riyadh. Later in March 2009, Etihad Atheeb Telecom commenced the trial phase of its WiMAX network. Early registration campaign Etihad Atheeb Telecom received the official approval from the Communications and Information Technology Commission (CITC) for its license on April 5, 2009. Following the license, GO announced its “early registration campaign” in Riyadh and Jeddah, the early registration campaign continued for two months prior to the official launch. 
Commercial service launch and expansion On June 6, 2009, Etihad Atheeb Telecom held a press conference to where the company announced the commercial launch of its WiMAX Broadband internet services, covering Riyadh. Etihad Atheeb Telecom also announced a US$48 million contract with China ZTE Corporation. Under the agreement ZTE will help GO construct a high-speed WiMAX network covering five major cities in Saudi Arabia, including the eastern region. By August 2009, GO had expanded its WiMAX network coverage to cover fully Jeddah and Medina, Followed by Mecca in September. By the end of 2009 Etihad Atheeb Telecom had fully covered the western region of the kingdom of Saudi Arabia with its WiMAX network. On January, 2010, GO announced the official launch of its services in Dammam and Khobar, followed by an expansion to include Hofuf and Qatif, thus fully covering the eastern region by the end of January. In total, GO covers eleven cities with its WiMAX network. Voice service On November 15, 2009, the company announced the launch of its nomadic voice services, formally marking the end of Saudi Telecom's landline monopoly in the kingdom of Saudi Arabia. GO is the first operator to offer nomadic voice services over WiMAX in the region. Products and services GO provides broadband Internet services over WiMAX, advanced fixed line VoIP services and fax services. References Internet in Saudi Arabia Telecommunications companies of Saudi Arabia Telecommunications companies established in 2006
27938415
https://en.wikipedia.org/wiki/List%20of%20hackers
List of hackers
Here is a list of notable hackers who are known for their hacking acts. 0–9 0x80 A Mark Abene (Phiber Optik) Ryan Ackroyd (Kayla) Mustafa Al-Bassam (Tflow) Mitch Altman Jacob Appelbaum (ioerror) Julian Assange (Mendax) Trishneet Arora Andrew Auernheimer (weev) B Loyd Blankenship (The Mentor) Erik Bloodaxe Barrett Brown Max Butler C Brad Carter (RBCP, Red box Chili Pepper) Jean-Bernard Condat Sam Curry Cyber Anakin D Kim Dotcom John Draper (Captain Crunch) Sir Dystic E Alexandra Elbakyan Mohamed Elnouby Farid Essebar Nahshon Even-Chaim (Phoenix) F Ankit Fadia Bruce Fancher (Dead Lord) G Joe Grand (Kingpin) Richard Greenblatt Virgil Griffith (Romanpoet) Rop Gonggrijp Guccifer Guccifer 2.0 H Jeremy Hammond Susan Headley (Susan Thunder) Markus Hess Billy Hoffman (Acidus) George Hotz (geohot) Andrew Huang Marcus Hutchins I J The Jester (hacktivist) Jonathan James Joybubbles (Joe Engressia, Highrise Joe) K Kyle Milliken Samy Kamkar Karl Koch (hagbard) Alan Kotok Jan Krissler Patrick K. Kroupa (Lord Digital) Kris Kaspersky Tillie Kottmann L Adrian Lamo Chris Lamprecht (Minor Threat) Gordon Lyon (Fyodor) M MafiaBoy Moxie Marlinspike Morgan Marquis-Boire Gary Mckinnon (Solo) Jude Milhon (St. Jude) Kevin Mitnick (Condor) Mixter Hector Monsegur (Sabu) HD Moore Robert Tappan Morris (rtm) Dennis Moran (Coolio) Jeff Moss (Dark Tangent) Katie Moussouris Andy Müller-Maguhn MLT (Matthew Telfer) N Craig Neidorf (Knight Lightning) O Beto O'Rourke (Psychedelic Warlord) Higinio Ochoa P Justin Tanner Petersen (Agent Steal) Kevin Poulsen (Dark Dante) Q R Eric S. Raymond (ESR) Christien Rioux (DilDog) Leonard Rose (Terminus) Oxblood Ruffin Joanna Rutkowska S Peter Samson David Schrooten (Fortezza) Roman Seleznev (Track2) Alisa Shevchenko Rich Skrenta Dmitry Sklyarov Edward Snowden Richard Stallman (rms) StankDawg Matt Suiche Peter Sunde Gottfrid Svartholm (Anakata) Kristina Svechinskaya Aaron Swartz T Ehud Tenenbaum Cris Thomas (Space Rogue) John Threat Topiary Tron (Boris Floricic) Justine Tunney U V Kimberley Vanvaeck (Gigabyte) W Steve Wozniak Chris Wysopal (Weld Pond) Robert Willis X Y YTCracker Z Peiter Zatko (Mudge) See also Tech Model Railroad Club List of computer criminals List of fictional hackers List of hacker groups List of hacker conferences Hackerspace Phreaking References Hacker culture Hackers Hackers
878464
https://en.wikipedia.org/wiki/Silvaco
Silvaco
Silvaco Inc. develops and markets electronic design automation (EDA) and technology CAD (TCAD) software and semiconductor design IP (SIP). The company is headquartered in Santa Clara, California, and has a global presence with offices located in North America, Europe, and throughout Asia. Since its founding in 1984, Silvaco has grown to become a large privately held EDA company. The company has been known by at least two other names: Silvaco International, and Silvaco Data Systems. History Founded by Dr. Ivan Pesic (13 September 1951, Resnik, Montenegro — 20 October 2012, Japan) in 1984, the company is privately held and internally funded. It is headquartered in Santa Clara, California, with fourteen offices worldwide. In 2003 Silvaco acquired Simucad Inc., a privately held company providing logic simulation EDA software. Silvaco re-launched the brand by spinning out its EDA product line in 2006 under the Simucad name. As of 17 February 2010, Simucad Design Automation and Silvaco Data Systems were merged back together forming Silvaco, Inc. In 2006, Silvaco sued Intel for misappropriation of trade secrets in the case of Silvaco Data Systems v. Intel Corp., however ultimately the judgment of the Court was in favor of Intel. In 2012, David Halliday was appointed CEO after the death of the company founder Ivan Pesic. In 2015, Silvaco appointed a new CEO, David Dutton. The company also acquired Invarian, Inc., a privately held company providing power integrity analysis software, and acquired Infiniscale SA, a privately held company in France providing variability analysis software. In 2016, Silvaco added semiconductor design IP (SIP) to its portfolio with the acquisition of the privately held company IPextreme, Inc. Silvaco also entered into another new market segment with the acquisition of the privately held company edXact in France. The tools from edXact are used for analysis, reduction, and comparison of extracted parasitic netlists. In 2017, Silvaco acquired SoC Solutions, a privately held company providing semiconductor IP. In 2018, Silvaco acquired NanGate, a privately held company providing tools and services for creation, optimization, characterization, and validation of physical library IP. The company also announced a partnership with Purdue University and the Purdue Research Foundation for the commercialization of the NEMO tool suite, which is used for nanoelectronics modeling and atomistic simulation. In 2019, Silvaco appointed Dr. Babak Taheri as its new Chief Executive Officer. Prior to his appointment, he was Silvaco's Chief Technical Officer. In 2020, Silvaco acquired the assets of Coupling Wave Solutions S.A., a privately held company providing silicon substrate noise analysis software. Silvaco also acquired the memory compiler technologies and standard cell libraries of Dolphin Design. Products Silvaco delivers EDA and semiconductor TCAD software products and semiconductor design IP with support and engineering services. Worldwide customers include leading foundries, fabless semiconductor companies, OEMs, integrated semiconductor manufacturers, and universities. TCAD Products Process Simulation Victory Process – 2D/3D process simulator Device Simulation Victory Device 2D/3D device simulator Other tools Virtual Wafer Fab (VWF) – Emulation of wafer manufacturing to perform design-of-experiments and optimization. 
EDA Products The company supplies integrated EDA software in the areas of analog/mixed-signal/RF circuit simulation, custom IC CAD, interconnect modeling, and standard cell library development and characterization. SPICE modeling and analog & mixed-signal simulation Utmost IV – Device characterization and SPICE modeling SmartSpice – Analog circuit simulator SmartSpice RF – Frequency and time domain RF circuit simulator SmartView – Simulation waveform viewer Custom IC CAD Gateway – Schematic editor Expert – Layout editor Guardian – DRC/LVS/Net physical verification Hipex – Full-chip parasitic extraction Jivaro – Parasitic reduction and analysis VarMan – High-sigma variability analysis Interconnect Modeling Clever – Parasitic extractor for realistic 3D structures Library Platform Cello – Standard cell library creation, migration and optimization Viola – Standard cell library and I/O cell characterization Liberty Analyzer – Analysis and validation of timing, power, noise, and area data from characterization SIP Products The company markets a wide variety of semiconductor design IP (SIP). In May 2019, the company announced that the semiconductor design IP of Samsung Foundry (SF) is now marketed, licensed, and supported through Silvaco. SIP categories include: Interface PHYs Interface controllers Automotive controllers AMBA IP cores and subsystems Security cores Analog cores Embedded processors Analog front-ends and codecs Foundation IP Standard cell libraries Embedded memories I/Os References Electronic design automation companies Companies based in Santa Clara, California Software companies established in 1984
48226714
https://en.wikipedia.org/wiki/623d%20Air%20Control%20Squadron
623d Air Control Squadron
The 623d Air Control Squadron (623 ACS) is an operational unit of the United States Air Force assigned to the 18th Wing. The 623d is based out of Kadena Air Base, Okinawa, Japan. The 623d is tasked to provide Command & Control within a sector of the Japanese Air Defense System. The 623d conducts operations out of Japanese Air Self Defense Force facilities located at Naha Air Base, Kasuga Air Base and Iruma Air Base. The 623d operates the Southwest Sector Interface Control Cell, conducting joint and bilateral tactical datalink operations. Mission Primary mission is two-fold (1) is to provide rapidly deployable Theater Control Operations Teams (TCOT) in order to coordinate and direct US air and air defense artillery (ADA) employment within a sector of the Japan Air Defense Ground Environment (JADGE) system. When activated as a TCOT the unit is directly subordinate to 613th Air Operations Center Chief of Combat Operations. (2) Operate Japan's Joint/Bilateral Southwest Sector Interface Control Cell (SICC), to fuse joint and multinational data provided by regional C2 units, track producers, and tactical data link (TDL) participants to execute Integrated Air & Missile Defense (IAMD) and Ballistic Missile Defense (BMD) operations in support of the Defense of Japan. History World War II The 623D Air Control Squadron, traces its origins to the 305th Fighter Control Squadron (FCS), United States Army Air Forces. The 305th FCS was originally organized on 31 March 1943, then activated 1 April 1943 at Bradley Field, Connecticut, but not manned until 7 April (by temporary personnel from the 93rd Fighter Control Squadron, the unit responsible for training the 305th FCS). Permanent personnel began arriving on 10 April, and on 21 April the squadron's commander arrived. All temporary personnel were released about this same time. By late August 1943 the 305th FCS was adequately manned to start operational training. While at Bradley Field, the 305th FCS served as the operational training unit for First Air Force's I Fighter Command. The 305th FCS provided fighter control training for single-engine P-47 Thunderbolt fighter groups, which obtained their new aircraft from the Republic Aviation production plant on Long Island prior to their deployment to overseas combat theaters. The 305th FCS accomplished the controlling of aircraft and furnishing homing facilities by means of semi-mobile VHF radio control net with stations located at various sites in Connecticut. The 305th FCS then moved to Blackstone Army Airfield, Virginia on 1 September 1943, where it again served as an operational training unit, this time for Third Air Force's III Fighter Command. The 305th FCS again provided fighter control training for newly arriving P-47 Thunderbolts and P-51 Mustangs as they became available. While controlling fighters in the Blackstone area, the 305th FCS conducted training with a signal air warning company and AAA battalions. The 305th FCS was alerted for an overseas move in October, but was subsequently removed from the list. After a short stint at Blackstone Army Airfield, the 305th FCS moved to Galveston Army Airfield, Texas on 20 December 1943. Here in Galveston the 305th FCS joined the 72nd Fighter Wing. This time the 305th FCS controllers were training the 2nd Air Force pilots on how to work with fighter control. With its primary mission in Galveston, the 305th FCS sent detachments throughout the Second Air Force's area to train fighter control missions. 
Four detachments went to Nebraska, another to Louisiana, and a sixth to another location in Texas. The squadron was alerted on 24 March 1944 for an overseas move, and in April all of the outlying detachments were pulled back to Galveston Army Airfield. The 305th FCS left Galveston Army Airfield on 3 May 1944 en route to Fort Lawton, Seattle, Washington. After being moved to Fort Lawton, Washington, on 3 May 1944, the 305th FCS began preparations for embarkation to the Pacific Theater. The 305th FCS departed Fort Lawton on 25 May 1944 in transit to the Territory of Hawaii aboard the Cape Newenham. The squadron reached Stanley Army Air Field, Territory of Hawaii on 2 June 1944. There the 305th was assigned to Seventh Air Force's VII Fighter Command. A training program commenced on 16 June, with the 305th FCS being assisted initially by the 318th Fighter Control Squadron and, from 1 July, by the 302nd Fighter Control Squadron. On 15 August 1944 the squadron was relieved from assignment to the VII Fighter Command and joined the 7th Fighter Wing of Army Air Forces Pacific Ocean Area (AAFPOA). The 305th FCS was attached on 1 September 1944 to the 7th Provisional Control Group (Special), of the 7th Wing. On 5 October the squadron moved to Bellows Field and was subsequently called upon to furnish cadre for a number of new fighter control squadrons. On 20 January 1945, the 305th FCS was attached to the new 7th Fighter Wing Aircraft Warning Control Group. Detachment 1 305th FCS, was formed at Bellows Field on 15 February 1945 and on 19 March moved to a combat zone initially attached to the 318th Fighter Group. Detachment 1, 305th FCS came to Japan as part of the Tactical Air Force, Tenth Army Ryuku Islands invasion force. This force served as the joint US Army Air Forces and US Marine Corps airpower arm for Tenth Army during Operation ICEBERG. By the end of April 1945, Detachment 1 305th FCS had moved to Ie Shima to provide fighter control for the three runways and various fighter units located on the island. Detachment 2 305th FCS, was organized at Bellows Field on 5 March 1945 and moved to a combat zone by April 1945. Detachment 2 was located at Guam when discontinued on 14 July 1945. The squadron itself was moved to Fort Shafter, Territory of Hawaii, on 4 April 1945 in order to prepare for movement to a forward zone. All local operations came to a standstill late in June as the squadron prepared for its movement. On 27 June the 305th FCS was reassigned back to Seventh Air Force. Detachment 1 305th FCS earned the Ryukus campaign streamer, but the 305th Squadron itself earned only the Asiatic-Pacific Service Streamer for its World War II duty. After the initial invasion operations of Okinawa, the Headquarters 305th FCS moved from Fort Shafter, T.H., 15 July 1945, to Kadena, Okinawa. Although arriving initially to Kadena in early September 1945 the squadron as eventually headquartered at Camp Bishigawa, Okinawa by the end of September 1945. The 305th FCS established the Okinawa Air Control Center at Camp Bishigawa, call sign "Okinawa Control" with its principle radar station at Yontan Mountain Radar, call sign "Walter Control". The 305th FCS provided invasion force protection and fighter/bomber control until the formal surrender of Japan on 2 September 1945. Following the surrender of Japan the 305th FCS was reassigned to 301st Fighter Wing, Eighth Air Force and remained at Camp Bishigawa, Okinawa. 
Detachment 1 305th FCS was discontinued about January 1946 and relocated to Camp Bishigawa, when Ie Shima was closed before the end of 1945. A new Detachment 1 was located at Hedo Misake, Okinawa and a smaller radar was set up to cover a narrow blind stop caused by mountains to the north. By June 1946, this site called "Point Tare", callsign "Moonshine Radar", was operational. The Point Tare site gave the squadron two operational radar sites. Post War Re-designated the 623rd Aircraft Control and Warning (AC&W) Squadron on 2 July 1946, the squadron assumed air direction and control duties of the entire Ryuku Island chain of Japan. It remained in charge of the Okinawa Air Control Center and kept its detachment at Hedo Misake, Okinawa. On 12 October 1946, the 623rd AC&W Sq sent unit personnel to establish Detachment 2, a Direction Finding (DF) Station on Aguni Shima. By April 1947, the 623rd AC&W Sq had lost its flight control mission and was concentrating entirely upon air defense of the Ryuku Islands. Detachment 2 at Aguni Shima remained operational for only a short time, and was closed in 30 June 1947. As the United States Army Air Force transitioned to become the United States Air Force on 18 September 1947, so did the 623rd AC&W Sq, with no change in its Air Defense mission. On 1 November, the 623rd AC&W Sq was attached to the 3rd Operational Group (Provisional), subordinate to the 301st Fighter Wing. The new 529th Aircraft Control and Warning Group joined the 301st Fighter Wing officially on 15 April 1948, but without personnel or equipment. During May 1948, resources of the 623rd AC&W Sq were used to begin manning this group and the newly organized 624th Aircraft Control and Warning Squadron, which also had no personnel or equipment of its own. On 16 July 1948, the 623rd AC&W Sq had been formally assigned to the newly formed 529th Aircraft Control & Warning Group. Until about 12 August 1948, the 529th Group and the 624th AC&W Sq were little more than paper units, while the 623rd AC&W Sq was still fully manned. On 18 August 1948 the 529th AC&W Gp, with the 623rd AC&W Sq and the newly created 624th AC&W Sq was absorbed by the 51st Fighter Wing, when the 301st Fighter Wing was inactivated. Yontan Radar was designated as a Tactical Control Center during August 1948, and luckily so, on 3–4 October 1948, Okinawa was pounded by Typhoon Libby. Typhoon Libby severely damaged Detachment 1 623rd AC&W Sq, Point Tare, so much so that the early warning site was never re-opened. Detachment 1, 623rd AC&W Sq was placed in caretaker status and the men and equipment were relocated back to Camp Bishigawa. The same typhoon also badly damaged the Okinawan Air Control Center at Camp Bishigawa and Yontan Radar remained inoperable for the remainder of October 1948. Due to this damage, the 623rd AC&W Sq initiated planning for a new Air Defense Control Center at Kadena Air Base, a second Tactical Control Center and two new early warning sites. The Yontan Tactical Control Center functioned as the Air Defense Control Center during this transition. In January 1949, the Point Tare station was finally closed and the site abandoned, the land was returned to the Okinawan people. On 1 April 1949, the 529th AC&W Gp was reassigned from the 51st Fighter Wing directly to Thirteenth Air Force, taking with it the 623rd and 624th AC&W Squadrons. This reassignment was short lived though, as 13th Air Force was returned to Clark AB, Philippines in May 1949. 
On 16 May 1949, the 529th AC&W Gp with the 623rd and 624th AC&W Squadrons were reassigned directly to Twentieth Air Force, who assume the mission of the defense of the Ryukyu Islands and was reassigned to Kadena AB, Okinawa. The new Yaetake and Miyako Jima early warnings sites became operational on 15 Mar 1950 and were assigned to the 624th AC&W Sq, who reported to the 623rd AC&W Sq Air Defense Control Center at Yontan Mountain. The new Okinawa Air Defense Control Center at Stillwell Park, Kadena AB opened in June 1950, and was manned by the 623rd AC&W Sq. The 623rd AC&W Sq's Yontan Mountain site reverted to a Tactical Control Center. Korean War On 27 June 1950, the United Nations Security Council voted to assist the South Koreans in resisting the invasion of their nation by North Korea. At that time, the 22 B-29s of the 19th Bombardment Group stationed at Andersen Field on Guam were the only aircraft capable of hitting the Korean peninsula, and this unit was ordered to move to Kadena Air Base on Okinawa and begin attacks on North Korea. These raids began on 28 June 1950. In August 1950, the 307th Bombardment Group deployed from MacDill Air Force Base, Florida to Kadena AB, Okinawa. The 623rd AC&W Sq supported these bomber operations as they operated from Okinawa, en route to the Korean peninsula. By April 1951, there had been two additional early warning sites added to the 624th AC&W Sq, in order to better support defensive operations of the Ryuku Islands and bomber operations flowing northward against North Korea. The new EW sites became operational at Kume Shima and Okino-Erabu Jima, Japan, and were operated by 624th AC&W Sq. These new EW sights also reported to the 623rd AC&W Sq, Stilwell Park ADCC at Kadena AB. On 21 May 1951 the 529th Group operations section assumed operational control of the ADCC at Kadena AB, leaving the 623rd AC&WS responsible for the ADCC's administration, supply and maintenance. On 26 August 1951, the 851st AC&W Sq was activated and assumed operational responsibility for the Air Defense control Center from the 623rd AC&W Sq. This new squadron had to be manned solely from 623rd AC&WS resources. Effective 3 September 1951, the 623rd AC&W Sq delegated operation of the Yontan Radar to its newly created Detachment 1. The 623rd AC&W Sq assumed responsibility from the 624th AC&W Sq for operation of the Yaetake EW station, delegating it as Detachment 2. By the end of 1951, the 623rd AC&W Sq was awarded the Korean Service Streamer for its support to the 19th Bombardment Wing and the 307th Bombardment Group (Medium), both of which operated from Kadena AB and were participating in daily combat in Korea. The 529th AC&W Gp with the 851st AC&W Sq, completed a relocation from Camp Bishigawa to Naha AB on 01 Aug 1952. The YaeTake Station was closed on 27 April 1953 and all personnel were moved out by 2 May to allow for construction of a new radar facility in a rather limited space location. The 529th AC&W Group transferred under the control of the 6351st Air Base Wing on 1 July 1952. The fighting on the Korean peninsula ended on 27 July 1953, when an armistice was signed. While the 623rd no longer provided inbound routing for bombers to flow into the Korean Theater, it still kept a watchful eye to the north. Detachment 2 623rd AC&WS commenced operations of the Yae Take station on 13 June 1954. The 529th AC&W Group became part of the 51st Fighter-Interceptor Wing on 1 August 1954. 
Cold War Following the end of the Korean War, Detachment 1, 20th AF was created and assumed operational control of the 529th AC&W Gp on 16 August 1954. On 1 March 1955, the 313th Air Division assumed command from Twentieth Air Force. With this change, the Operations Division of the 313th Air Division assumed operational control of the air defense of the Ryukyu Islands, along with operation of the ADCC, from 20th AF Detachment 1, which was inactivated. In early 1955 a new radar site located at Yoza Dake was under construction as a replacement for the Yontan Mountain site, which was to be closed when the new station became operational. The 529th AC&W Gp was relieved of operational control of air defenses and was inactivated on 15 March 1955. The personnel of the 851st AC&W Sq, which had operated the ADCC, were assigned to the newly formed Detachment 1, 313th Air Division, and the 851st was inactivated. The 623rd AC&W and 624th AC&W Sqs were reassigned directly to the 313th Air Division at this time. Shortly after the 313th Air Division assumed responsibility for air defense, the 623rd AC&W Sq added the new Air Defense Direction Center (ADDC) at Yoza Dake Air Station, which became operational on 24 May 1956. The Yoza Dake ADDC assumed responsibility for all operations in the Southern Zone of the Ryukyu Islands, while the 623rd AC&W Sq's Yaetake site became the ADDC of the 624th AC&W Sq, with responsibility for directing Northern Zone operations. Following the activation of the 623rd AC&W Sq's new site at Yoza Dake, the Yontan GCI site was closed down, and the Yaetake station absorbed some of the men and responsibilities of the Yontan site. With the new north/south realignment, the AC&W squadrons' detachments were also reorganized on 31 July 1956. The 623rd AC&W Sq assumed operations of the 624th AC&W Sq Detachment 1, Miyako EW Station, which became the new Detachment 1, 623rd AC&W Sq, while the 624th AC&W Sq Detachment 2, Kume EW Station, became the new Detachment 2 of the 623rd AC&W Sq. On 13 August 1956 the 623rd AC&W Sq headquarters officially relocated to Yoza Dake Air Station. The north/south alignment functioned for a year before the 313th Air Division Detachment 1 decided to reorganize the defensive structure to streamline operations. The separate Northern and Southern Sectors of air defense operations were eliminated and combined into one Air Defense Identification Zone for the entirety of the Ryukyu Islands. This merger reduced the ADDC function at Yae Take to that of an alternate, with the Yoza Dake facility becoming primary for the 313th Air Division. By 8 March 1958 the 624th AC&W Sq was inactivated and the 623rd AC&W Sq was wholly reorganized. The 623rd AC&W Sq headquarters relocated from Yozadake Air Station to Naha Air Base. The 623rd AC&W Sq's Detachments 1 and 2 remained unchanged, Yozadake was reorganized as Detachment 3, and Okino-Erabu and Yae Take were assumed from the inactivated 624th AC&W Sq, becoming the 623rd AC&W Sq's Detachments 4 and 5 respectively. By 27 March 1958 the 51st Fighter-Interceptor Wing at Naha AB once again assumed operational control of the air defense of the Ryukyu Islands, this time from the 313th Air Division; the 51st FIW Combat Operations Division assumed control from the Operations Division of the 313th Air Division. In late 1958 Detachment 5 at Yae Take was virtually discontinued; its radar was dismantled and the station became a communications relay site. 
During August 1960 the Yae Take communications site was transferred to the 30th Air Defense Artillery Brigade and renamed Site 18; the USAF continued to maintain a communications site at the station. With this action, the 623rd AC&WS began a long period of relative stability with respect to its assignment, its components and its operations. From August 1960 to August 1967 the squadron's facilities underwent gradual improvement in order to better withstand the Okinawan typhoons which had plagued its radar sites since the squadron first arrived on the island. The 623rd AC&W Sq deployed to South Korea during January 1968 in support of Operation Combat Fox, the United States' response to the USS Pueblo incident. Japanese Reversion In 1969, Japan's Prime Minister Eisaku Sato and President Richard Nixon agreed to the reversion of the Ryukyu Islands to Japanese control. On 27 March 1971 the 18th Tactical Fighter Wing at Kadena AB assumed responsibility for the air defense of Okinawa preparatory to the inactivation of the 51st Fighter-Interceptor Wing. When the 51st was inactivated on 31 May 1971, the 623rd AC&WS was reassigned to the 18th Wing. The Okinawa Reversion Agreement was signed simultaneously in Washington, D.C. and Tokyo on June 17, 1971. After the announcement of this agreement, the 623rd AC&W Sq began planning to turn over the entire defensive structure (C2 structure, radars and facilities) of the Ryukyu Islands to the JASDF. Personnel of the Japan Air Self-Defense Force had, in fact, begun orientation visits to the 623rd's facilities in mid-1971 to discuss plans. More visits followed, and on 12 November 1971 a plan for the transfer of radar facilities to the JASDF was signed by USAF and JASDF authorities. A team of 35 JASDF officers visited the sites again during December 1971. On May 15, 1972 the Ryukyu Islands were returned to Japanese control and the United States Civil Administration of the Ryukyu Islands was abolished. This is when the real work of the 623rd AC&W Sq began. JASDF Air Defense Operations Teams (ADOTs) destined for each radar site arrived at Yoza Dake on 5 September 1972 for special orientation and familiarization before continuing on to their respective air stations, where they began on-the-job training with their USAF counterparts. All of the initial JASDF personnel were at their sites by 6 October 1972. The first site, Okino Erabu Air Station, was turned over to the JASDF on 31 December 1972, and 623rd AC&W Sq Detachment 4 was discontinued. Detachment 1, 623rd AC&W Sq at Miyako Jima was discontinued on 15 February 1973, when the JASDF assumed control of that air station and its defense operations. The JASDF took over Yoza Dake Air Station operations from Detachment 3, 623rd AC&W Sq on 1 March 1973, assuming responsibility for detection and identification across the entire Ryukyu chain; Detachment 3 was discontinued on 31 March 1973. Detachment 2, 623rd AC&W Sq at Kume Air Station was discontinued on 15 May 1973. The final piece of the defense structure, the 623rd AC&W Sq's ADCC at Naha Air Base, was transferred to the JASDF on 30 June 1973. Eight days later, on 8 July 1973, the 623rd Aircraft Control and Warning Squadron was inactivated. 
Reactivation The unit was reactivated on 1 April 1983 as the 623rd Tactical Control Squadron, with the mission of providing an operationally ready Forward Air Control Post (the 81st Tactical Control Flight) for worldwide employment, Ground Control Intercept (GCI) support for local training requirements, and Theater Control Operations Team (TCOT) personnel for GCI support and USAF integration within the Japanese Air Defense System (JADS). The new 623rd TCS was assigned to the 5th Tactical Air Control Group at Osan AB, South Korea. The newly formed 623rd Tactical Control Squadron was reconstituted with a main unit and three subordinate, geographically separated operating locations: OLAA, OLAB and OLAC. The main unit, located at Kadena AB, was composed of the command section, operations, the orderly room and the 81st Tactical Control Flight. The Kadena location was tasked to support the three F-15 squadrons assigned to the 18th TFW and to maintain the overall squadron training and stan/eval programs. Additionally, the subordinate 81st TCF was responsible for worldwide employment of a forward radar element. OLAA was established as an Air Defense Liaison Element (ADLE) located at Fuchu Air Station near Tokyo, Japan; this ADLE provided 24-hour US liaison between US Forces Japan and the JASDF. OLAB was also an ADLE, located at Naha AB in southern Okinawa, tasked to provide 24-hour US liaison service to the JASDF within the Southwest Defense Sector of Japan. Each of these ADLEs was composed of six to nine control technicians, one officer in charge and one NCOIC. OLAC provided a weapons control element at Misawa AB, Japan, tasked to provide three GCI teams for the two squadrons of F-16s assigned to the 432nd TFW. The unique nature of the new 623rd TCS (minus the 81st TCF) was that it owned no equipment of its own; all of the C2 systems and radars were provided by the JASDF. Many of the provided systems were the very same systems that the 623rd AC&W Sq had operated prior to reversion in 1973. The 623rd controllers made a "clean sweep" of the control awards during the 1982 William Tell Competition. The 623rd deployed to northern Japan in September 1983 to assist in controlling airspace for aircraft searching for bodies from Korean Air Lines Flight 007, shot down by the Soviet Union. That effort garnered the unit a letter from the South Korean minister of national defense for its outstanding humanitarian effort. The 623rd controllers returned to the 1984 William Tell Competition, taking the top control prize, the Lt Col William W. "Dad" Friend Trophy for top control team. The William Tell dominance by the 623rd continued at the 1986 edition of the air-to-air meet, when the 5th TACG team won the top control prize, the Lt Col William W. "Dad" Friend Trophy, marking a third trophy for the 623rd; the 623rd provided four of the six team members, led by team chief Captain Neal Kumasaka, 623rd TCS. In February 1987, the 623rd TCS was reorganized and the 81st TCF became a separate organization. The 623rd TCS was assigned directly under Fifth Air Force on 17 February 1987, while the 81st TCF was reassigned to the 5th Tactical Air Control Group and redesignated the 81st Tactical Control Squadron. The 623rd TCS was realigned under the newly activated 18th Operations Group, 18th Wing, on 1 October 1991. The unit underwent another name change and mission reorganization in April 1992. 
The 623rd TCS became the 623d Air Control Squadron with two detachments: Det 1 at Fuchu Air Base, Japan, operating an Air Defense Liaison Element, and Det 2 at Misawa Air Base, Japan, operating a single Tactical Control Operations Team. The Naha AB forward operating location Air Defense Liaison Element was merged into the main squadron location at Kadena AB. The Kadena location provided the command element and the standardization and evaluation and training sections, along with two Tactical Control Operations Teams that operated out of Yozadake Sub Base (JASDF), Okinawa. On 1 August 1994 the 623rd Air Control Squadron's two detachments became separate flights under the USAF's objective wing reorganization, the "one base, one wing, one boss" initiative. Consequently, the 623d Air Control Squadron at Kadena Air Base was re-designated the 623d Air Control Flight, Detachment 1 at Fuchu Air Base became the 624th Air Control Flight, and Detachment 2 at Misawa Air Base became the 610th Air Control Flight. The 623d ACF's mission was to provide rapid-response, combat-ready tactical control operations teams, integrated battle staff, weapons control, and command and control liaison elements in support of US and bilateral interests throughout the entire PACOM theater. The 623d ACF has participated in every biennial Exercise KEEN SWORD, designed to increase combat readiness and interoperability of U.S. forces and the Japan Self-Defense Force, since the flight's reorganization in 1994. The 623d ACF operated out of two JASDF facilities, Naha Air Base's Direction Center and Yozadake Sub Base. The 623d last controlled out of Yozadake Sub Base in 2010, prior to the JASDF shutting down the JADGE operations facility for construction of a new radar and operations system, the J/FPS-5 "Gamera". On March 14, 2011, members of Lightsword deployed to Iruma Air Base (JASDF), Honshu, Japan, to assist in the command and control of US relief efforts for Operation Tomodachi, the disaster relief effort following the 2011 Tōhoku earthquake and tsunami. The Pacific Pivot Unveiled in 2011, the "Pacific Pivot" aimed to transition US military resources away from the Middle East and towards the world's most populous and economically diverse area. This increase in military presence in the Pacific drove a review of the command and control structure of the US Air Force in Japan. The existing C2 structure had atrophied to two flights (the 623d ACF at Kadena AB and the 610th ACF at Misawa AB) and one element (the 5AF ADLE at Yokota AB) to conduct coordinated air operations during contingencies and to support bilateral exercises and daily flying operations during peacetime. During this review a gap was identified in tactical communications that could only be filled by an Interface Control Cell to fuse common tactical pictures into one Common Operational Picture. The biggest limiting factor identified was within the Southwest Sector of Japan, the sector considered the strategic hub or "keystone of the Pacific" for the US military. The 623d ACF was identified to stand up and operate the Southwest Sector Interface Control Cell, which drove an increase in funding, personnel and facilities for the 623d. Additionally, the review identified the lack of an Air Defense Artillery Fire Control Officer (ADAFCO), who is required to direct US air and air defense artillery employment and to coordinate effectively with US and JASDF surface-to-air missile systems. 
In 2016 the 623d's first full-time assigned ADAFCO, Captain Owen Sill of the 94th Army Air and Missile Defense Command, arrived. The mission increase and personnel influx required a greater support structure to adequately operate and command it. As a result, the 623d Air Control Flight was reorganized and the 623d Air Control Squadron was activated on 15 April 2016. The newly reorganized 623d Air Control Squadron was tasked with a two-fold mission: (1) to provide rapidly deployable joint Theater Control Operations Teams (TCOTs) for host nation liaison and to coordinate and direct US air and air defense artillery employment within a sector of the Japan Air Defense Ground Environment system, and (2) to operate Japan's joint/bilateral Southwest Sector Interface Control Cell (SICC), fusing joint and multinational data provided by regional C2 units, track producers, and tactical data link (TDL) participants to execute Integrated Air & Missile Defense (IAMD) and Ballistic Missile Defense (BMD) operations in support of the defense of Japan. The SICC is responsible for daily coordination and operations with 5AF, USFJ, PACAF, III MEF, 7th Fleet, 94th AAMDC and the Japan Self-Defense Forces. In January 2019, PACAF procured and delivered a new C2 system, the Theater Operationally Resilient Command and Control (TORCC). The TORCC system is a data fusion engine combining the Tactical Display Framework (TDF), Battlespace Command and Control Center (BC3), and a virtual Air Defense Systems Integrator (ADSI). This C2 system allows the 623d ACS to be fast, light, and lethal in support of a vast array of mission types, including multi-domain command and control, humanitarian assistance and disaster response, contingencies, and providing peace and stability within the USINDOPACOM AOR. The system enables deployed operations and allows tactical data link and sensor information to be ingested and fused. Awards won by unit members include the William Tell Lt Col William W. "Dad" Friend Trophy three times (1982, 1984 and 1986); PACAF's first-ever Command and Control Warrior of the Year award in 1996; PACAF Enlisted Weapons Director of the Year in 1997; PACAF Command and Control Battle Management Operator of the Year - Airman Category 2012; Headquarters Air Force Command & Control Battle Management Operator of the Year - NCO Category 2013 (SSgt William Gulley); PACAF Command and Control Battle Management Operator of the Year - Officer Category 2013 (Capt Shannon Greene); PACAF C2 Crew of the Year 2014; PACAF C2 Crew of the Year 2016; and Headquarters Air Force Command & Control Battle Management Operator of the Year - Officer Category 2016 (Capt Alison Cruise). The 623d has earned two service streamers: the World War II Asiatic-Pacific Theater and the Korean Theater. Decorations include eleven Air Force Outstanding Unit Awards and the Republic of Vietnam Gallantry Cross with Palm. The 623d ACS is currently assigned to the 18th Operations Group, Kadena AB, and supports bilateral air operations through weapons control, battle staff, and liaison functions between the Japan Air Defense Command and the Commander, 5th Air Force. Based at Kadena Air Base, the 623d remains ready to integrate USAF, joint, and bilateral combat aerospace operations and to defend United States and Japanese mutual interests in the Pacific region. The 623d ACS maintains system expertise on an indigenous primary command and control system at Naha Air Base, Kasuga Air Base and Iruma Air Base. 
The TCOT may deploy to other sites within the PACOM AOR to provide host nation liaison and command and control. Emblem Description/Blazon On a disc chequy azure and argent, in base a radar antenna or fimbriated Sable emitting six sound waves in graduating size bendwise of the last, three mullets, two and one, above the antenna and one in dexter base all of the third; all within a narrow border yellow. Attached above the disk, a blue scroll edged with a narrow yellow border and inscribed "SEMPER VIGILANTES" in yellow letters. Attached below the disk, a blue scroll edged with a narrow yellow border and inscribed "623D AIR CONTROL SQ" in yellow letters. Emblem significance Ultramarine blue and Air Force yellow are the Air Force colors. Blue alludes to the sky, the primary theater of Air Force operations. Yellow refers to the sun and the excellence required of Air Force personnel. The checkered background is representative of the plotting boards peculiar to the Aircraft Control and Warning units of the past. The radar antenna symbolizes the primary "weapon" of the unit, tracking and guiding aircraft with electronic waves. The stars represent the unit's four original detachments (Yozadake AS, Kume AS, Okino-Erabu AS and Miyako AS). Lineage 305th Fighter Control Squadron (USAAF) Constituted on 31 March 1943 Activated on 1 April 1943 623rd Aircraft Control & Warning Squadron Redesignated on 2 July 1946 Inactivated on 8 July 1973 623rd Airborne Command and Control Flight Redesignated on 23 Nov 1979 Activated on 1 Jan 1980 Inactivated on 1 Jun 1980 623rd Tactical Control Squadron Redesignated on 18 Jan 1983 Activated on 1 April 1983 623rd Air Control Squadron Redesignated on 1 April 1992 623d Air Control Flight Redesignated on 1 August 1994 623d Air Control Squadron Redesignated on 15 April 2016 Assignments I Fighter Command - 1 Apr 1943 72d Fighter Wing - 19 Dec 1943 2nd Air Force - 21 Mar 1944 7th Air Force - 2 Jun 1944 VII Fighter Command - 2 Jun 1944 7th Fighter Wing - 15 Aug 1944 7th Provisional Control Group (Special)(Attached) - 10-30 Sep 1944 576th Signal Air Warning Battalion - 1 Oct 1944 7th Fighter Wing Aircraft Control Group - 20 Jan 1945 7th Air Force - 14 Jul 1945 VII Bomber Command - 01 Dec 1945 8th Air Force - 01 Jan 1946 301st Fighter Wing - Jan 1946 3rd Operational Group (Provisional)(Attached) (301st Fighter Wing) - 01 Nov 1947-1 May 1948 529th Aircraft Control and Warning Group - 2 May 1948 313th Air Division - 15 Mar 1955 51st Fighter-Interceptor Wing (Attached) - 27 Mar 1958-17 Jul 1960 51st Fighter-Interceptor Wing - 18 Jul 1960 18th Tactical Fighter Wing - 21 May 1971 Inactivated 08 Jul 1973 Activated 01 Jan 1980 Pacific Air Forces - 01 Jan 1980 Inactivated 01 Jun 1980 Activated 01 Apr 1983 5th Tactical Air Control Group - 01 Apr 1983 5th Air Force - 17 Feb 1987 18th Operations Group - 01 Oct 1991–Present Operational Locations Bradley Field, Connecticut - 01 Mar 1943 Blackstone Army Air Field, Virginia - 01 Sep 1943 Galveston Army Air Field, Texas - 20 Dec 1943 Fort Lawton, Washington - 3 May 1944 Stanley Army Air Field, Territory of Hawaii - 02 Jun 1944 Bellows Field, Territory of Hawaii - 15 Feb 1945 Ie Shima, Japan - 19 Mar 1945 (Detachment 1) Fort Shafter, Territory of Hawaii - 04 Apr 1945 Camp Bishigawa, Japan - Nov 1945 Ie Shima Det 1 - Sep 1945 Yontan Mountain - 01 Jan 1946 - 24 May 1956 Point Tare - May 1946 - Oct 1948 Aguni Shima - Oct 1946 - Jun 1947 Stilwell Park, Kadena AB, Japan - 01 Jun 1950 Yontan Mountain (Camp Bishigawa) - 01 Jan 1946 - 24 May 1956 Yaetake - 03 
Sep 1951 - 31 Jul 1956 Yozadake Air Station - 24 May 1956 - 31 Mar 1973 Yozadake Air Station, Japan - 13 Aug 1956 Det 1 Miyako Air Station - Mar 1950 - 15 Feb 1973 Det 2 Kume Air Station - Apr 1951 - 15 May 1973 Naha AB, Japan - 20 Dec 1957 - 8 July 1973 Det 1 Miyako Air Station - 1950 - 15 Feb 1973 Det 2 Kume Air Station - Apr 1951 - 15 May 1973 Det 3 Yozadake Air Station - 08 Mar 1958 - 31 Mar 1973 Det 4 Okino Erabu Air Station - 08 Mar 1958 - 31 Dec 1972 Det 5 Yaetake - 08 Mar 1958 - 08 Aug 1960 Kadena AB, Japan - 01 Jan 1980 - 01 Jun 1980 Kadena AB, Japan - 01 Apr 1983 Yozadake Sub Base, Japan (JASDF) - 01 Apr 1994 - 2010 (Forward Operating Location) OLAA Fuchu AB, Japan (JASDF) - 01 Apr 1983 - 31 Mar 1992 OLAB Misawa AB, Japan - 01 Apr 1983 - 31 Mar 1992 Ohminato Sub Base, Japan (JASDF) - 01 Apr 1983 - 31 Mar 1992 (Forward Operating Location) OLAC Naha AB, Japan (JASDF) - 01 Apr 1983 - 31 Mar 1992 Kadena AB, Japan - 01 Apr 1992 Naha AB, Japan (JASDF) - 01 Apr 1983 – Present (Forward Operating Location) Yozadake Sub Base, Japan (JASDF) - 01 Apr 1994 - 2010 (Forward Operating Location) Det 1 Fuchu AB, Japan (JASDF) - 01 Apr 1983 - 31 Jul 1994 Det 2 Misawa Air Base, Japan - 01 Apr 1992 - 31 Jul 1994 Kadena AB, Japan - 01 Apr 1994–Present Naha AB, Japan (JASDF) - 01 Apr 1994 – Present (Forward Operating Location) Yozadake Sub Base, Japan (JASDF) - 01 Apr 1994 - 2010 (Forward Operating Location) Past commanders Capt Dangerfield (Acting) - 07 Apr 1943 Capt Walter H. Birch - 21 Apr 1943 Maj Carl L. Cook - 06 Nov 1943 Lt Paul W. Brownfield - 20 Mar 1944 Lt Chester Cohen - 21 Dec 1944 Capt Paul W. Brownfield - 23 Feb 1945 Capt Burton Kirby - Sep 1945 Maj Franklin L. Fisher - 18 Jan 1946 Capt Ivan L. Corzine - Oct 1947 Major Donald H. Higgins - 05 Nov 1947 Major Sewal Y. Austin - Feb 1948 Capt James A. Ward - Sep 1948 Capt Lewis R. Meek - Dec 1948 Capt John Welch - Aug 1949 Capt James Hislopby - Sep 1949 Major Charles F. Himes - Apr 1950 Capt Charles F. Hobart - 28 Jul 1950 Lt Col William Worden - Aug 1950 Major Harry C. Ross - 21 Aug 1951 Major George C. Schmidt - 21 Nov 1951 Major John E. Morgan - Mar 1952 Major Wendell A. Steele - Apr 1954 Capt Edward J. Jiru - 22 June 1954 Major Joseph F. Girius, Jr. - 07 Jul 1954 Major Jay Pryor - Sep 1954 Major Horace M. Jacks - 06 Oct 1954 Lt Col Harry O. Flathmann - 1 May 1955 Major Maurice M. Gouchoe - 13 Jul 1956 Lt Col James R. Greary, Jr. - 03 Jan 1957 Lt Col William A. Beard - 03 Jan 1958 Major Maurice Morrison - 14 Mar 1958 Lt Col William R. Crooks - 23 Jul 1959 Major Frank W. Dawson - 20 Dec 1960 Lt Col Edward A. Sanders - 30 Jan 1961 Lt Col Roland L. Wolfe - 11 Aug 1962 Major Martin R. Ring - 10 Feb 1963 Lt Col Ronald M. Cottrill - Dec 1964 Lt Col James A. Gerwick - 27 Jan 1967 Lt Col James A. Gerwick - 26 Jun 1968 Lt Col Clarence P. Elder - 16 Dec 1968 Major Nathan L. Walker - 11 Jun 1970 Lt Col Thomas L. Fulton, Jr. - 12 May 1971 Lt Col David L. Oakes - 26 Jul 1971 Lt Col Thomas L. Fulton, Jr. - 21 Jun 1972 Colonel Robert W. Casey - 27 Jul 1972 Lt Col Ferdinand J. Kubala - 26 May 1973 Major Dennis E. Moe - 01 Apr 1983 Major Robert F. Williams, Jr. - 16 Jul 1984 Lt Col Franklin K. Reyher, Jr. - 24 Aug 1986 Lt Col Kris Lamphere - 13 Aug 1990 Lt Col James M. Johnson III - 13 Apr 1992 Lt Col Howard Don - Jun 1994 Lt Col Kenneth G. "Doc" Eide - Jul 1995 Major Robert C. Clinton - May 1996 Capt Donna L. Denman - Jun 1997 Major Daniel L. "Duke" Whitten - Jul 1997 Major Daniel Reilly - 28 Jun 1998 Major James A. "Dill" Pickle - 23 Jun 2000 Major John M. 
"Squeak" Askew - 13 Jul 2001 Major Edward A. "Oscar" Meyer - 10 Jul 2002 Major Thomas W. Coppersmith - 23 Jun 2003 Major Jessica Baker - 6 May 2004 Lt Col Michael S. "Crank" Christie - 23 Jan 2005 Major Charles W. "Viking" Dennison - 19 Apr 2007 Major Nathaniel "Nate" Dash - Dec 2007 Major Anthony J. Owens - Aug 2008 Major Jeff C. "Walleye" Watts - May 2010 Lt Col Paul S. "Sparky" Nichols - 14 May 2012 Lt Col Daniel V. "Lucky" Biehl - 23 Jun 14 Lt Col Joel "Deuce" Doss - 23 Jun 17 Lt Col Jeffrey A. "Mac" McKiernan - 30 Apr 19 Lt Col Jeffrey T. "JT" Mitchell - 11 June 21 Notable Members Lt Col William R. Dunn - April 1958 to April 1960 - The first American ace of World War II Brigadier General Jack T. Martin - 1945 to 1946 References External links 623rd ACF transitions to 623rd ACS Air Force News Service 20 April 2016, retrieved 25 April 2016 Air Force officer receives prestigious Order of Saint Barbara Okinawa Stripes 05 Mar 16, retrieved 4 July 17 18th Operations Group Fact Sheet AFI 13-1 BCC V3 Air Defense Command and Control Operations 623rd ACF gains new commander 623rd Air Control Flight Partners with Japan air Self Defense Force Air Force News Service July 2, 2015 Kadena, JASDF Airmen Strengthen Bilateral Ties through Aviation Training Relocation Air Force News Service 15 September 2015 Small USAF Unit in Kadena Performs Big Job Small Kadena unit unique in use of Japanese assets Kadena Flight Provides Air Picture for Exercise Controlling air from ground Building on bilateral Legacy Miyako Jima 1958, 623rd Det 1 623rd AC&W Squadron Memorial Reunion Film Part 1 Part 2 Part 3 Largest Kadena Flying Exercise Successfully Completed MACS-4 enhances communication at LORE Henry J. Blais U.S. World War II veteran The Library of Congress - Veterans History Project - Bernard Gross Air control squadrons of the United States Air Force
26409901
https://en.wikipedia.org/wiki/Lincoln%20High%20School%20%28Tallahassee%2C%20Florida%29
Lincoln High School (Tallahassee, Florida)
Lincoln High School is a public high school located in Tallahassee, Leon County, Florida. It offers an accelerated Advanced Placement (AP) program. In 2010, Newsweek ranked Lincoln High School as one of the top 100 high schools in the United States. In 2021 Lincoln was only one point shy of being placed back on this list. At the state level, Lincoln has consistently ranked in the 90th percentile among all Florida high schools. Lincoln has also been ranked as one of the highest-rated high schools in Florida with regard to college readiness. The "Best High Schools" report in 2016 also revealed that Lincoln High had the top college-readiness level of all area schools. History Campus Lincoln High School was established in 1974 following the closure of the Old Lincoln High School in 1970 and relocated from its former location downtown to its current site in the southeast portion of the county. The school was constructed as a large one-building high school at 3838 Conner Blvd, an address later changed to 3838 Trojan Trail in July 1982 after students petitioned the county to change the street name to symbolize their school spirit. Lincoln's concept as a large one-building school also changed in the late 1980s, when the school had no choice but to add additional buildings to the campus. Buildings and parking lots continued to be added to the campus through the early 2000s. In 2006 Lincoln was approved for a campus-wide remodel that included all buildings, parking lots, sports fields and the school entrance, as well as the construction of a new two-level classroom building and a new cafeteria building at the front of the school; the construction and remodeling lasted from 2006 to 2008. During this time, the City of Tallahassee also rerouted the path of Trojan Trail to better accommodate traffic needs for the school. Once both projects were completed, portions of the Lincoln campus and the surrounding area were unrecognizable compared with before. Following the major construction and remodeling projects, a garden and courtyard area was added to the back of the school in 2009. The area was dedicated to long-time principal Martha Bunch, who had served Lincoln for 20 years. The garden was also a way to thank her for overseeing Lincoln during its greatest periods of expansion, in both enrollment and construction. Since the numerous expansion projects of the 1990s and 2000s, Lincoln has added only a new JROTC building, in the spring of 2015. Principals Douglas Frick was the founding principal of Lincoln High School and served in that position until 1989. He was replaced that summer by William Montford, who had previously been the principal at Amos P. Godby High School. Montford served as Lincoln's principal until his election as the Leon County School Superintendent in the fall of 1996, which required him to step down during the 1996–97 school year. Martha Bunch, an assistant principal under Montford since 1989, assumed the position of principal and served until the summer of 2008, when she was moved to a district-level position. During the 2008–2009 school year, Merry Ortega served as interim principal while also working closely under the guidance of Bunch. To many, the 2008–09 school year was known as "the year with two principals". In the summer of 2009, Swift Creek Middle School principal Allen Burch was appointed to Lincoln as both Ortega and Bunch retired from the school district. He currently serves in that capacity. 1974 - 1989: Douglas Frick 1989 - 1996: William J. 
Montford 1996 - 2008: Martha Bunch 2008 - 2009: Merry Ortega 2009 - Current: Allen Burch Enrollment During the early years of Lincoln, enrollment averaged around 1,800 students. It was in the 1990s that Lincoln saw a large uptick in enrollment, as the northeastern and eastern portions of Leon County experienced substantial housing and subdivision growth. Between 1994 and 1998 Lincoln's total enrollment grew from 2,073 to just below 2,500, making Lincoln the largest high school in the district. Enrollment then dropped to around 1,798 following the opening of Chiles High School in 1999 on the north side of the county, an effort to relieve overcrowding. The opening of Chiles was a priority for then-superintendent Bill Montford, who had recently been principal of Lincoln and had seen the impact that the large enrollment was having on the school. However, enrollment at Lincoln did not stay low for long; within a couple of years the school was again home to 2,000 students. For the past decade Lincoln's enrollment has fluctuated between 2,000 and 2,100 students each school year. By 2025 Lincoln is anticipated to see another uptick in enrollment, as several more housing developments are planned in the Lincoln school zone. The demographics for the 2015–2016 school year are as follows: Total: (2,092) 100% Black students: 37% White students: 51% Minority students: 12% Colors and mascot Prior to the opening of Lincoln in September 1974, the entering students participated in the selection of green, gold, and white as school colors, signifying youth, vigor, merit, honor and wisdom. The mascot selected was the Trojan, which signifies personal strength, loyalty, courage, and leadership. A six-foot statue of a Trojan, erected in the summer of 2007, stands in front of the main office. Athletics Lincoln High School's football program has been recognized at the national level as an elite high school program. Lincoln was named by Bleacher Report as the #4 top football factory in the nation for high school recruits. In 2012, Lincoln gained national exposure by beating St. Paul High School (Covington, Louisiana) in the Mercedes-Benz Superdome during the Allstate Sugar Bowl Prep Kickoff Classic. In 2013, Lincoln participated in a kickoff classic game on ESPN, defeating South Gwinnett High School. Lincoln has won state championships in 1999, 2001, and 2010. Lincoln was a state finalist in 1999, 2001, 2008, 2010, and 2012. Lincoln was regional champion in 1989, 1999, 2000, 2001, 2002, 2006, 2008, 2010, and 2012. Lincoln was district champion in 1989, 1990, 1996, 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011, 2012, 2013, 2014, and 2015. Lincoln was city champion of Tallahassee in 1989, 1990, 1995, 1996, 1998, 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, and 2011. Lincoln's football team won the 1999 6A state championship by defeating Miami Southridge High School at Ben Hill Griffin Stadium. In 2001, Lincoln won its second state title, this time in the 4A classification, by defeating St. Thomas Aquinas High School in Doak Campbell Stadium. In 2008, Lincoln finished as state runner-up, losing to Plant High School from Tampa. In 2010, Lincoln brought home the 4A state championship with a win over Seffner Armwood High School in the Citrus Bowl in Orlando, Florida. The 2012 Lincoln Trojans finished as state runner-up after being defeated by St. Thomas Aquinas High School in the Citrus Bowl. 
Kevin Carter, Antonio Cromartie, Zach Piller, Boo Williams, PJ Alexander, Craphonso Thorpe, Pat Watkins, Don Pumphrey, Jr., Padric Scott, Jawanza Starling, B. J. Daniels and Omari Hand have represented the Trojans in the National Football League. After the 2009 football season, the school was named a "Nike Elite" program, which includes a multi-year contract with Nike and makes the Trojans the only Nike Elite school in northern Florida, one of only five schools in the state to earn the honor, and one of 45 in the nation. The boys' soccer team won the 1996 Class 6A State Championship. They were also the 1995 State runners-up to Bloomingdale High School and the 2005 State runners-up to Auburndale High School. In 2010, they reached the Final Four. In baseball, Lincoln has had eight players selected out of high school in the Major League Baseball Amateur Draft: 1965: Thomas Lomack, Cleveland Indians 1986: Reggie Jefferson, Cincinnati Reds 2003: Clegg Snipes, Pittsburgh Pirates 2004: Michael Criswell, Colorado Rockies; Joe Bauserman, Pittsburgh Pirates 2006: Cole Figueroa and J.C. Figueroa, Toronto Blue Jays 2012: Raphael Andrades, Kansas City Royals In 2003, Lincoln finished as state runner-up in the 5A classification after losing the championship game to Lakewood Ranch High School. In 2007 the boys' lacrosse team became the first public school team in Tallahassee to defeat the Maclay School Marauders. The team won its first district title during the 2009–2010 season by defeating Leon and Chiles. They finished the year as one of the last eight teams playing in the state, eventually falling to the Bolles School in the regional final. The team is coached by Mark Williams. Lincoln's dominant wrestling team is run by head coach Mike Crowder. Coach Crowder's teams at Lincoln have combined for a 305–74 record, with 1 State Champion, 6 State Finalists, and 29 Individual State place winners. His teams have been Regional Champions 6 times, Regional Runners-up 7 times, District Champions 17 times, District Runners-up 4 times, and City Champions for 21 years. Coach Crowder helped establish the FHSAA State Dual Meet Tournament, for which his team qualified in its inaugural year in 2017. Because of these accomplishments on and off the mat, Crowder has been recognized many times by his peers. He has been Northwest Florida Coach of the Year 4 times, All-Big Bend Coach of the Year 10 times, and FACA District 3 Coach of the Year 5 times, and was a FACA Hall of Fame inductee in 2018. Academic extracurriculars Lincoln has won several Big Bend Brain Bowl championships in the past three decades. Their last championship in the Tallahassee Democrat-sponsored tournament came in 2011, with team members securing a $2,500 top prize. Prior to this title, the Lincoln Gold team won the regional title in 2004 by going undefeated in the playoffs. In 2006–2007 the Lincoln Gold team won first-place honors at the University of Florida fall tournament and the Rickards High School fall tournament. They took second place at the Chiles High School tournament and the Florida State University tournament. On March 12, 2007, the team finished in first place at the Tallahassee Democrat Bowl. Lincoln's Mu Alpha Theta club finished in second place at the State Convention, and in third place at Nationals in 2003. In 2004, Lincoln again placed second and third at the State and National Conventions, respectively, with the margin between first and second places at the State Convention being the smallest in history. 
That year, the Precalculus team became the first from Tallahassee to win a State title in any division. The 2006 Calculus Team was the first team from Lincoln ever to win a National title in its division. In recent years, the Lincoln band has earned the Florida Bandmasters Association's Otto J. Kraushaar Award, which is awarded to band programs in the state of Florida that receive Superior ratings at all marching and concert evaluations throughout the year. The Symphonic Band, Lincoln's top concert ensemble, participated in the 2005 Midwest Clinic in Chicago, Illinois. The Symphonic Band also appeared at the 2004 CBDNA/NBA Southern Division Conference, the 2002 and 2009 National Concert Band Festival, the 2001 University of Florida Concert Band Invitational, the 1999 ABA Convention, the 1997 Southeastern Band Clinic, and the 2011 University of Georgia Janfest. Lincoln High students have participated in History Fair since 1987–1988, and have advanced to the National competition for 17 straight years, earning two national third-place finishes. In 2017, Graham O'Donnell, a junior, won the national championship in the Who Wants to Be a Mathematician contest. He won $5,000 for himself and $5,000 for the Lincoln Mu Alpha Theta club. Advanced Placement programs Lincoln High School offers Advanced Placement (AP) programs to its students. The courses include: English Language and Composition English Literature and Composition Macroeconomics/US Government and Politics European History Human Geography Psychology U.S. History World History Chinese French Latin Spanish Art History Music Theory Studio Art Biology Chemistry Environmental Science Physics 1 Physics 2 Physics C: Mechanics Calculus AB and BC Statistics Notable alumni Javorius Allen - Baltimore Ravens and former University of Southern California running back P. J. Alexander - offensive lineman for Syracuse University and NFL's New Orleans Saints, Denver Broncos and Atlanta Falcons Kevin Carter - former NFL player/sports broadcaster Antonio Cromartie - New York Jets defensive back B.J. Daniels - Seattle Seahawks, former University of South Florida quarterback Kyan Douglas - TV personality Cole Figueroa - former New York Yankees infielder Reggie Jefferson - former Boston Red Sox player Zach Piller - former Tennessee Titans offensive lineman Don Pumphrey, Jr. - former Tampa Bay Buccaneer offensive lineman Fred Rouse - former Florida State Seminoles wide receiver Jawanza Starling - former Houston Texans and USC safety Craphonso Thorpe - former Indianapolis Colts wide receiver Pat Watkins - former Dallas Cowboys safety Boo Williams - former New Orleans Saints tight end Oluwatoyin Salau - activist, murdered in Tallahassee, Florida Faculty Jacquez Green - former NFL player; offensive coordinator and head track coach 2010–2012; became offensive coordinator again starting in 2014 Dean Palmer - former MLB player; baseball coach References External links Lincoln High School High schools in Leon County, Florida Public high schools in Florida 1975 establishments in Florida Educational institutions established in 1975
237495
https://en.wikipedia.org/wiki/Information%20system
Information system
An information system (IS) is a formal, sociotechnical, organizational system designed to collect, process, store, and distribute information. From a sociotechnical perspective, information systems are composed of four components: task, people, structure (or roles), and technology. Information systems can be defined as an integration of components for the collection, storage and processing of data, in which the data is used to provide information, contribute to knowledge, and deliver digital products that facilitate decision making. A computer information system is a system composed of people and computers that processes or interprets information. The term is also sometimes used to refer simply to a computer system with software installed. "Information systems" is also an academic field of study about systems with a specific reference to information and the complementary networks of computer hardware and software that people and organizations use to collect, filter, process, create and distribute data. An emphasis is placed on an information system having a definitive boundary, users, processors, storage, inputs, outputs and the aforementioned communication networks. In many organizations, the department or unit responsible for information systems and data processing is known as "information services". Any specific information system aims to support operations, management and decision-making. An information system is the information and communication technology (ICT) that an organization uses, and also the way in which people interact with this technology in support of business processes. Some authors make a clear distinction between information systems, computer systems, and business processes. Information systems typically include an ICT component but are not purely concerned with ICT, focusing instead on the end-use of information technology. Information systems are also different from business processes: information systems help to control the performance of business processes. Alter argues for the advantages of viewing an information system as a special type of work system. A work system is a system in which humans or machines perform processes and activities using resources to produce specific products or services for customers. An information system is a work system whose activities are devoted to capturing, transmitting, storing, retrieving, manipulating and displaying information. As such, information systems inter-relate with data systems on the one hand and activity systems on the other. An information system is a form of communication system in which data represent and are processed as a form of social memory. An information system can also be considered a semi-formal language which supports human decision making and action. Information systems are the primary focus of study for organizational informatics. Overview Silver et al. (1995) provided two views on IS that include software, hardware, data, people, and procedures. The Association for Computing Machinery defines "Information systems specialists [as] focus[ing] on integrating information technology solutions and business processes to meet the information needs of businesses and other enterprises." There are various types of information systems, for example: transaction processing systems, decision support systems, knowledge management systems, learning management systems, database management systems, and office information systems. 
Critical to most information systems are information technologies, which are typically designed to enable humans to perform tasks for which the human brain is not well suited, such as: handling large amounts of information, performing complex calculations, and controlling many simultaneous processes. Information technologies are a very important and malleable resource available to executives. Many companies have created a position of chief information officer (CIO) that sits on the executive board with the chief executive officer (CEO), chief financial officer (CFO), chief operating officer (COO), and chief technical officer (CTO). The CTO may also serve as CIO, and vice versa. The chief information security officer (CISO) focuses on information security management. The six components that must come together in order to produce an information system are: Hardware: The term hardware refers to machinery and equipment. In a modern information system, this category includes the computer itself and all of its support equipment. The support equipment includes input and output devices, storage devices and communications devices. In pre-computer information systems, the hardware might include ledger books and ink. Software: The term software refers to computer programs and the manuals (if any) that support them. Computer programs are machine-readable instructions that direct the circuitry within the hardware parts of the system to function in ways that produce useful information from data. Programs are generally stored on some input/output medium, often a disk or tape. The "software" for pre-computer information systems included how the hardware was prepared for use (e.g., column headings in the ledger book) and instructions for using them (the guidebook for a card catalog). Data: Data are facts that are used by systems to produce useful information. In modern information systems, data are generally stored in machine-readable form on disk or tape until the computer needs them. In pre-computer information systems, the data are generally stored in human-readable form. Procedures: Procedures are the policies that govern the operation of an information system. "Procedures are to people what software is to hardware" is a common analogy that is used to illustrate the role of procedures in a system. People: Every system needs people if it is to be useful. Often the most overlooked element of the system is the people, probably the component that most influences the success or failure of information systems. This includes "not only the users, but those who operate and service the computers, those who maintain the data, and those who support the network of computers." Internet: a combination of data and people (although this component is not necessary for an information system to function). Data is the bridge between hardware and people. This means that the data we collect is only data until we involve people. At that point, data becomes information. Types of information system The "classic" view of information systems found in textbooks in the 1980s was a pyramid of systems that reflected the hierarchy of the organization, usually with transaction processing systems at the bottom of the pyramid, followed by management information systems and decision support systems, and ending with executive information systems at the top. 
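As a rough illustration of the pyramid idea, the following minimal sketch (written in Python, with invented data and function names rather than anything drawn from a cited source) shows how raw transaction-level records at the base of the pyramid can be aggregated into summary information suitable for management-level reporting:

```python
# Illustrative sketch only: toy data and names, not a real system.
from collections import defaultdict

# Raw transaction-level records, as a transaction processing system might store them.
transactions = [
    {"date": "2024-01-03", "product": "widget", "quantity": 4, "unit_price": 9.50},
    {"date": "2024-01-03", "product": "gadget", "quantity": 1, "unit_price": 24.00},
    {"date": "2024-01-04", "product": "widget", "quantity": 2, "unit_price": 9.50},
]

def summarise_revenue_by_product(records):
    """Aggregate raw records into per-product revenue totals, a management-level view."""
    totals = defaultdict(float)
    for r in records:
        totals[r["product"]] += r["quantity"] * r["unit_price"]
    return dict(totals)

if __name__ == "__main__":
    # The summary, rather than the raw records, is what a manager would typically consume.
    print(summarise_revenue_by_product(transactions))
    # Expected output: {'widget': 57.0, 'gadget': 24.0}
```

The point of the sketch is only that the same underlying data serves different organizational levels once it is processed: the raw records belong to the transaction processing layer, while the aggregated totals are the kind of information a management information system would present.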
Although the pyramid model has remained useful since it was first formulated, a number of new technologies have been developed and new categories of information systems have emerged, some of which no longer fit easily into the original pyramid model. Some examples of such systems are: decision support system social information systems process control system management information system intelligent system enterprise systems data warehouses enterprise resource planning computing platform expert systems search engines geographic information system global information system multimedia information system office automation. A computer(-based) information system is essentially an IS using computer technology to carry out some or all of its planned tasks. The basic components of computer-based information systems are: Hardware- the devices such as the monitor, processor, printer, and keyboard, all of which work together to accept, process, and display data and information. Software- the programs that allow the hardware to process the data. Databases- collections of associated files or tables containing related data. Networks- connecting systems that allow diverse computers to distribute resources. Procedures- the commands for combining the components above to process information and produce the preferred output. The first four components (hardware, software, database, and network) make up what is known as the information technology platform. Information technology workers could then use these components to create information systems that watch over safety measures, risk and the management of data. These actions are known as information technology services. Certain information systems support parts of organizations, others support entire organizations, and still others support groups of organizations. Recall that each department or functional area within an organization has its own collection of application programs or information systems. These functional area information systems (FAIS) are supporting pillars for more general IS, namely business intelligence systems and dashboards. As the name suggests, each FAIS supports a particular function within the organization, e.g.: accounting IS, finance IS, production-operation management (POM) IS, marketing IS, and human resources IS. In finance and accounting, managers use IT systems to forecast revenues and business activity, to determine the best sources and uses of funds, and to perform audits to ensure that the organization is fundamentally sound and that all financial reports and documents are accurate. Other types of organizational information systems are FAIS, transaction processing systems, enterprise resource planning, office automation systems, management information systems, decision support systems, expert systems, executive dashboards, supply chain management systems, and electronic commerce systems. Dashboards are a special form of IS that support all managers of the organization. They provide rapid access to timely information and direct access to structured information in the form of reports. Expert systems attempt to duplicate the work of human experts by applying reasoning capabilities, knowledge, and expertise within a specific domain. Information system development Information technology departments in larger organizations tend to strongly influence the development, use, and application of information technology in the business. A series of methodologies and processes can be used to develop and use an information system. 
Many developers use a systems engineering approach, such as the system development life cycle (SDLC), to systematically develop an information system in stages. The stages of the system development life cycle are planning, system analysis and requirements, system design, development, integration and testing, implementation and operations, and maintenance. Recent research aims at enabling and measuring the ongoing, collective development of such systems within an organization by the entirety of human actors themselves. An information system can be developed in house (within the organization) or outsourced. This can be accomplished by outsourcing certain components or the entire system. A specific case is the geographical distribution of the development team (offshoring, global information system). A computer-based information system, following a definition of Langefors, is a technologically implemented medium for recording, storing, and disseminating linguistic expressions, as well as for drawing conclusions from such expressions. Geographic information systems, land information systems, and disaster information systems are examples of emerging information systems, but they can be broadly considered as spatial information systems. System development is done in stages which include: Problem recognition and specification Information gathering Requirements specification for the new system System design System construction System implementation Review and maintenance. As an academic discipline The field of study called information systems encompasses a variety of topics including systems analysis and design, computer networking, information security, database management, and decision support systems. Information management deals with the practical and theoretical problems of collecting and analyzing information in a business function area, including business productivity tools, applications programming and implementation, electronic commerce, digital media production, data mining, and decision support. Communications and networking deals with telecommunication technologies. Information systems bridges business and computer science, using the theoretical foundations of information and computation to study various business models and related algorithmic processes for building IT systems within a computer science discipline. Computer information system(s) (CIS) is a field studying computers and algorithmic processes, including their principles, their software and hardware designs, their applications, and their impact on society, whereas IS emphasizes functionality over design. Several IS scholars have debated the nature and foundations of Information Systems, which has its roots in other reference disciplines such as Computer Science, Engineering, Mathematics, Management Science, Cybernetics, and others. Information systems can also be defined as a collection of hardware, software, data, people, and procedures that work together to produce quality information. Related terms Similar to computer science, other disciplines can be seen as both related disciplines and foundation disciplines of IS. The domain of study of IS involves the study of theories and practices related to the social and technological phenomena which determine the development, use, and effects of information systems in organizations and society. But, while there may be considerable overlap of the disciplines at the boundaries, the disciplines are still differentiated by the focus, purpose, and orientation of their activities. 
In a broad scope, the term Information Systems is a scientific field of study that addresses the range of strategic, managerial, and operational activities involved in the gathering, processing, storing, distributing, and use of information and its associated technologies in society and organizations. The term information systems is also used to describe an organizational function that applies IS knowledge in industry, government agencies, and not-for-profit organizations. Information Systems often refers to the interaction between algorithmic processes and technology. This interaction can occur within or across organizational boundaries. An information system is a technology an organization uses and also the way in which the organization interacts with the technology and the way in which the technology works with the organization's business processes. Information systems are distinct from information technology (IT) in that an information system has an information technology component that interacts with the processes' components. One problem with that approach is that it prevents the IS field from being interested in non-organizational use of ICT, such as in social networking, computer gaming, mobile personal usage, etc. A different way of differentiating the IS field from its neighbours is to ask, "Which aspects of reality are most meaningful in the IS field and other fields?" This approach, based on philosophy, helps to define not just the focus, purpose, and orientation, but also the dignity, destiny, and responsibility of the field among other fields. Career pathways Information Systems workers enter a number of different careers: Information System Strategy Management Information Systems – A management information system (MIS) is an information system used for decision-making, and for the coordination, control, analysis, and visualization of information in an organization. Project Management – Project management is the practice of initiating, planning, executing, controlling, and closing the work of a team to achieve specific goals and meet specific success criteria at the specified time. Enterprise Architecture – A well-defined practice for conducting enterprise analysis, design, planning, and implementation, using a comprehensive approach at all times, for the successful development and execution of strategy. IS Development IS Organization IS Consulting IS Security IS Auditor There is a wide variety of career paths in the information systems discipline. "Workers with specialized technical knowledge and strong communications skills will have the best prospects. Workers with management skills and an understanding of business practices and principles will have excellent opportunities, as companies are increasingly looking to technology to drive their revenue." Information technology is important to the operation of contemporary businesses, and it offers many employment opportunities. The information systems field includes the people in organizations who design and build information systems, the people who use those systems, and the people responsible for managing those systems. The demand for traditional IT staff such as programmers, business analysts, systems analysts, and designers is significant. Many well-paid jobs exist in areas of information technology. At the top of the list is the chief information officer (CIO). The CIO is the executive who is in charge of the IS function. 
In most organizations, the CIO works with the chief executive officer (CEO), the chief financial officer (CFO), and other senior executives. Therefore, he or she actively participates in the organization's strategic planning process. Research Information systems research is generally interdisciplinary and concerned with the study of the effects of information systems on the behaviour of individuals, groups, and organizations. Hevner et al. (2004) categorized research in IS into two scientific paradigms: behavioural science, which aims to develop and verify theories that explain or predict human or organizational behavior, and design science, which extends the boundaries of human and organizational capabilities by creating new and innovative artifacts. Salvatore March and Gerald Smith proposed a framework for researching different aspects of information technology, including outputs of the research (research outputs) and activities to carry out this research (research activities). They identified research outputs as follows: Constructs, which are concepts that form the vocabulary of a domain. They constitute a conceptualization used to describe problems within the domain and to specify their solutions. A model, which is a set of propositions or statements expressing relationships among constructs. A method, which is a set of steps (an algorithm or guideline) used to perform a task. Methods are based on a set of underlying constructs and a representation (model) of the solution space. An instantiation, which is the realization of an artefact in its environment. They also identified research activities, including: Build an artefact to perform a specific task. Evaluate the artefact to determine if any progress has been achieved. Given an artefact whose performance has been evaluated, it is important to determine why and how the artefact worked or did not work within its environment; therefore, theorize and justify theories about IT artefacts. Although Information Systems as a discipline has been evolving for over 30 years now, the core focus or identity of IS research is still subject to debate among scholars. There are two main views around this debate: a narrow view focusing on the IT artifact as the core subject matter of IS research, and a broad view that focuses on the interplay between social and technical aspects of IT that is embedded into a dynamic evolving context. A third view calls on IS scholars to pay balanced attention to both the IT artifact and its context. Since the study of information systems is an applied field, industry practitioners expect information systems research to generate findings that are immediately applicable in practice. This is not always the case, however, as information systems researchers often explore behavioral issues in much more depth than practitioners would expect them to do. This may render information systems research results difficult to understand, and has led to criticism. Over the last ten years, the role of the Information Systems Function (ISF) has increased considerably, especially with regard to supporting enterprise strategies and operations. It has become a key factor in increasing productivity and supporting value creation. To study an information system itself, rather than its effects, information systems models are used, such as EATPUT. 
The international body of Information Systems researchers, the Association for Information Systems (AIS), and its Senior Scholars Forum Subcommittee on Journals (23 April 2007), proposed a 'basket' of journals that the AIS deems as 'excellent', and nominated: Management Information Systems Quarterly (MISQ), Information Systems Research (ISR), Journal of the Association for Information Systems (JAIS), Journal of Management Information Systems (JMIS), European Journal of Information Systems (EJIS), and Information Systems Journal (ISJ). A number of annual information systems conferences are run in various parts of the world, the majority of which are peer reviewed. The AIS directly runs the International Conference on Information Systems (ICIS) and the Americas Conference on Information Systems (AMCIS), while AIS affiliated conferences include the Pacific Asia Conference on Information Systems (PACIS), European Conference on Information Systems (ECIS), the Mediterranean Conference on Information Systems (MCIS), the International Conference on Information Resources Management (Conf-IRM) and the Wuhan International Conference on E-Business (WHICEB). AIS chapter conferences include Australasian Conference on Information Systems (ACIS), Information Systems Research Conference in Scandinavia (IRIS), Information Systems International Conference (ISICO), Conference of the Italian Chapter of AIS (itAIS), Annual Mid-Western AIS Conference (MWAIS) and Annual Conference of the Southern AIS (SAIS). EDSIG, which is the special interest group on education of the AITP, organizes the Conference on Information Systems and Computing Education and the Conference on Information Systems Applied Research which are both held annually in November. See also Related studies Information management Computer science Human–computer interaction Informatics Bioinformatics Health informatics Business informatics Cheminformatics Disaster informatics Geoinformatics Information science Web sciences Management information system (MIS) Formative context Data processing Library science Components Data architect Data modeling Data processing system Data Reference Model Database EATPUT Metadata Predictive Model Markup Language Semantic translation Three schema approach Implementation Enterprise information system Environmental Modeling Center Information processing system INFORMS References Further reading Rainer, R. Kelly and Cegielski, Casey G. (2009). "Introduction to Information Systems: Enabling and Transforming Business, 3rd Edition" Kroenke, David (2008). Using MIS – 2nd Edition. Lindsay, John (2000). Information Systems – Fundamentals and Issues. Kingston University, School of Information Systems Dostal, J. School information systems (Skolni informacni systemy). In Infotech 2007 – modern information and communication technology in education. Olomouc, EU: Votobia, 2007. s. 540 – 546. . O'Leary, Timothy and Linda. (2008). Computing Essentials Introductory 2008. McGraw-Hill on Computing2008.com Imperial College London – Information Systems Engineering degree – Information Systems Engineering Sage, S.M. "Information Systems: A brief look into history", Datamation, 63–69, Nov. 1968. – Overview of the early history of IS. 
External links Association for Information Systems (AIS) IS History website Center for Information Systems Research – Massachusetts Institute of Technology European Research Center for Information Systems Index of Information Systems Journals European eCompetence Frame Information
6122939
https://en.wikipedia.org/wiki/WTFPL
WTFPL
WTFPL is a permissive free software license, compatible with the GNU GPL. As a public-domain-like license, the WTFPL is essentially the same as dedication to the public domain. It allows redistribution and modification of the work under any terms. The title is an abbreviation of "Do What The Fuck You Want To Public License". The first version of the WTFPL, released in March 2000, was written by Banlu Kemiyatorn for his own software project. Sam Hocevar, Debian's former project leader, wrote version 2. Characteristics The WTFPL intends to be a permissive, public-domain-like license. The license is not a copyleft license. The license differs from public domain in that an author can use it even if they do not necessarily have the ability to place their work in the public domain according to their local laws. The WTFPL does not include a no-warranty disclaimer, unlike other permissive licenses, such as the MIT License. Though the WTFPL is untested in court, the official website offers a disclaimer to be used in software source code. Terms Version 2 The text of Version 2, the most current version of the license, written by Sam Hocevar: DO WHAT THE FUCK YOU WANT TO PUBLIC LICENSE Version 2, December 2004 Copyright (C) 2004 Sam Hocevar <[email protected]> Everyone is permitted to copy and distribute verbatim or modified copies of this license document, and changing it is allowed as long as the name is changed. DO WHAT THE FUCK YOU WANT TO PUBLIC LICENSE TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION 0. You just DO WHAT THE FUCK YOU WANT TO. Version 1 do What The Fuck you want to Public License Version 1.0, March 2000 Copyright (C) 2000 Banlu Kemiyatorn (]d). 136 Nives 7 Jangwattana 14 Laksi Bangkok Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed. Ok, the purpose of this license is simple and you just DO WHAT THE FUCK YOU WANT TO. Application To apply the WTFPL to a creative work, the author of the work should add a copy of the terms of the license version they wish to use alongside or near the work with which the license is applied. The name of the author named in the license should not be changed unless the name of the license is changed as well—in which case a new version is thereby created. Reception Usage The WTFPL is not in wide use among open-source software projects; according to Black Duck Software, the WTFPL is used by less than one percent of open-source projects. Examples include the OpenStreetMap Potlatch online editor, the video game Liero (version 1.36) and MediaWiki extensions. More than 4,000 Wikimedia Commons files were published under the terms of the WTFPL. Discussion The license was confirmed as a GPL-compatible free software license by the Free Software Foundation, but its use is "not recommended". In 2009, the Open Source Initiative chose not to approve the license as an open-source license. The WTFPL version 2 is an accepted Copyfree license. It is also accepted by Fedora as a free license and as GPL-compatible. Some software authors have said that the license is not very serious; forks have tried to address wording ambiguity and liability concerns. OSI founding president Eric S. Raymond interpreted the license as written satire against the restrictions of the GPL and other software licenses; WTFPL version 2 author Sam Hocevar later confirmed that the WTFPL is a parody of the GPL. Free-culture activist Nina Paley said she considered the WTFPL a free license for cultural works. 
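As a concrete illustration of the Application section above, the following is a minimal sketch of how an author might mark a Python module as WTFPL-licensed. The module and function names are invented for the example, and the header wording paraphrases the kind of no-warranty notice offered on the project website rather than quoting it; the full license text itself would normally be shipped alongside the code, for example in a COPYING file.

# hello.py -- a hypothetical module released under the WTFPL, version 2.
#
# This program is free software. It comes without any warranty, to the
# extent permitted by applicable law. You can redistribute it and/or
# modify it under the terms of the Do What The Fuck You Want To Public
# License, Version 2, as published by Sam Hocevar. See http://www.wtfpl.net/
# for more details.

__license__ = "WTFPL"


def greet(name: str = "world") -> str:
    # The license places no conditions on how this function may be reused.
    return f"hello, {name}"


if __name__ == "__main__":
    print(greet())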
See also Beerware CC0 Unlicense License-free software Public domain software Software using the WTFPL (category) Public-domain-equivalent license References External links Free and open-source software licenses Free content licenses Permissive software licenses Public copyright licenses
31923901
https://en.wikipedia.org/wiki/LIO%20%28SCSI%20target%29
LIO (SCSI target)
In computing, Linux-IO (LIO) Target is an open-source implementation of the SCSI target that has become the standard one included in the Linux kernel. Internally, LIO does not initiate sessions, but instead provides one or more Logical Unit Numbers (LUNs), waits for SCSI commands from a SCSI initiator, and performs required input/output data transfers. LIO supports common storage fabrics, including FCoE, Fibre Channel, IEEE 1394, iSCSI, iSCSI Extensions for RDMA (iSER), SCSI RDMA Protocol (SRP) and USB. It is included in most Linux distributions; native support for LIO in QEMU/KVM, libvirt, and OpenStack makes LIO also a storage option for cloud deployments. LIO is maintained by Datera, Inc., a Silicon Valley vendor of storage systems and software. On January 15, 2011, LIO SCSI target engine was merged into the Linux kernel mainline, in kernel version 2.6.38, which was released on March 14, 2011. Additional fabric modules have been merged into subsequent Linux releases. A competing generic SCSI target module for Linux is SCST. For the narrower purpose providing a Linux iSCSI target, the older IET ("iSCSI Enterprise Target") and STGT ("SCSI Target Framework") modules also enjoy industry support. Background The SCSI standard provides an extensible semantic abstraction for computer data storage devices, and as such has become a "lingua franca" for data storage systems. The SCSI T10 standards define the commands and protocols of the SCSI command processor (sent in SCSI CDBs), and the electrical and optical interfaces for various implementations. A SCSI initiator is the endpoint that initiates a SCSI session. A SCSI target is the endpoint that waits for initiator commands and executes the required I/O data transfers. The SCSI target usually exports one or more LUNs for initiators to operate on. The LIO Linux SCSI Target implements a generic SCSI target that provides remote access to most data storage device types over all prevalent storage fabrics and protocols. LIO neither directly accesses data nor does it directly communicate with applications. LIO provides a highly efficient, fabric-independent and fabric-transparent abstraction for the semantics of numerous data storage device types. Architecture LIO implements a modular and extensible architecture around a versatile and highly efficient, parallelized SCSI command processing engine. The SCSI target engine implements the semantics of a SCSI target. The LIO SCSI target engine is independent of specific fabric modules or backstore types. Thus, LIO supports mixing and matching any number of fabrics and backstores at the same time. The LIO SCSI target engine implements a comprehensive SPC-3/SPC-4 feature set with support for high-end features, including SCSI-3/SCSI-4 Persistent Reservations (PRs), SCSI-4 Asymmetric Logical Unit Assignment (ALUA), VMware vSphere APIs for Array Integration (VAAI), T10 DIF, etc. LIO is configurable via a configfs-based kernel API, and can be managed via a command-line interface and API (targetcli). SCSI target The concept of a SCSI target isn't narrowly restricted to physical devices on a SCSI bus, but instead provides a generalized model for all receivers on a logical SCSI fabric. This includes SCSI sessions across interconnects with no physical SCSI bus at all. Conceptually, the SCSI target provides a generic block storage service or server in this scenario. Backstores Backstores provide the SCSI target with generalized access to data storage devices by importing them via corresponding device drivers. 
Backstores do not need to be physical SCSI devices. The most important backstore media types are: Block: The block driver allows using raw Linux block devices as backstores for export via LIO. This includes physical devices, such as HDDs, SSDs, CDs/DVDs, RAM disks, etc., and logical devices, such as software or hardware RAID volumes or LVM volumes. File: The file driver allows using files that can reside in any Linux file system or clustered file system as backstores for export via LIO. Raw: The raw driver allows using unstructured memory as backstores for export via LIO. As a result, LIO provides a generalized model to export block storage. Fabric modules Fabric modules implement the frontend of the SCSI target by encapsulating and abstracting the properties of the various supported interconnects. The following fabric modules are available. FCoE The Fibre Channel over Ethernet (FCoE) fabric module allows the transport of Fibre Channel protocol (FCP) traffic across lossless Ethernet networks. The specification, supported by a large number of network and storage vendors, is part of the Technical Committee T11 FC-BB-5 standard. LIO supports all standard Ethernet NICs. The FCoE fabric module was contributed by Cisco and Intel, and released with Linux 3.0 on July 21, 2011. Fibre Channel Fibre Channel is a high-speed network technology primarily used for storage networking. It is standardized in the Technical Committee T11 of the InterNational Committee for Information Technology Standards (INCITS). The QLogic Fibre Channel fabric module supports 4- and 8-gigabit speeds with the following HBAs: QLogic 2400 Series (QLx246x), 4GFC QLogic 2500 Series (QLE256x), 8GFC (fully qualified) The Fibre Channel fabric module and low-level driver (LLD) were released with Linux 3.5 on July 21, 2012. With Linux 3.9, the following QLogic HBAs and CNAs are also supported: QLogic 2600 Series (QLE266x), 16GFC, SR-IOV QLogic 8300 Series (QLE834x), 16GFC/10 GbE, PCIe Gen3 SR-IOV QLogic 8100 Series (QLE81xx), 8GFC/10 GbE, PCIe Gen2 This makes LIO the first open-source target to support 16-gigabit Fibre Channel. IEEE 1394 The FireWire SBP-2 fabric module enables Linux to export local storage devices via IEEE 1394, so that other systems can mount them as an ordinary IEEE 1394 storage device. IEEE 1394 is a serial bus interface standard for high-speed communications and isochronous real-time data transfer. It was developed by Apple as "FireWire" in the late 1980s and early 1990s, and Macintosh computers have supported "FireWire target disk mode" since 1999. The FireWire SBP-2 fabric module was released with Linux 3.5 on July 21, 2012. iSCSI The Internet Small Computer System Interface (iSCSI) fabric module allows the transport of SCSI traffic across standard IP networks. By carrying SCSI sessions across IP networks, iSCSI is used to facilitate data transfers over intranets and to manage storage over long distances. iSCSI can be used to transmit data over local area networks (LANs), wide area networks (WANs), or the Internet, and can enable location-independent and location-transparent data storage and retrieval. The LIO iSCSI fabric module also implements a number of advanced iSCSI features that increase performance and resiliency, such as Multiple Connections per Session (MC/S) and Error Recovery Levels 0-2 (ERL=0,1,2). LIO supports all standard Ethernet NICs. The iSCSI fabric module was released with Linux 3.1 on October 24, 2011. 
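To make the division of labour between backstores and fabric modules more concrete, the sketch below shows roughly how a block backstore could be exported as an iSCSI LUN using rtslib-fb, the Python library that underlies the targetcli shell described later in this article. This is an illustration only, not a definitive recipe: it assumes root privileges on a kernel with the LIO target modules loaded, the device path, IQNs and portal address are invented placeholders, and the class and argument names are given from memory and may differ between rtslib versions.

# Sketch: export an existing block device over iSCSI via rtslib-fb (assumed API).
from rtslib_fb import (BlockStorageObject, FabricModule, Target, TPG,
                       LUN, NetworkPortal, NodeACL, MappedLUN)

# Backstore: wrap a raw block device (a FileIOStorageObject could use a file instead).
so = BlockStorageObject("disk0", dev="/dev/vg0/lio_lun0")

# Fabric module: the iSCSI frontend of the SCSI target engine.
iscsi = FabricModule("iscsi")

# Target, target portal group, and a LUN exporting the backstore.
target = Target(iscsi, "iqn.2003-01.org.example:storage.disk0")
tpg = TPG(target, 1)
lun = LUN(tpg, 0, so)

# Listen on all addresses on the default iSCSI port.
NetworkPortal(tpg, "0.0.0.0", 3260)

# Allow one initiator and map the LUN into its view.
acl = NodeACL(tpg, "iqn.1994-05.com.example:initiator01")
MappedLUN(acl, 0, lun)

tpg.enable = True

The targetcli shell discussed below performs essentially the same steps interactively and can persist the resulting configuration across reboots.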
iSER Networks supporting remote direct memory access (RDMA) can use the iSCSI Extensions for RDMA (iSER) fabric module to transport iSCSI traffic. iSER permits data to be transferred directly into and out of remote SCSI computer memory buffers without intermediate data copies (direct data placement or DDP) by using RDMA. RDMA is supported on InfiniBand networks, on Ethernet with data center bridging (DCB) networks via RDMA over Converged Ethernet (RoCE), and on standard Ethernet networks with iWARP enhanced TCP offload engine controllers. The iSER fabric module was developed jointly by Datera and Mellanox Technologies, and first released with Linux 3.10 on June 30, 2013. SRP The SCSI RDMA Protocol (SRP) fabric module allows the transport of SCSI traffic across RDMA (see above) networks. As of 2013, SRP was more widely used than iSER, although it is more limited, as SRP is only a peer-to-peer protocol, whereas iSCSI is fully routable. The SRP fabric module supports the following Mellanox host channel adapters (HCAs): Mellanox ConnectX-2 VPI PCIe Gen2 HCAs (x8 lanes), single/dual-port QDR 40 Gbit/s Mellanox ConnectX-3 VPI PCIe Gen3 HCAs (x8 lanes), single/dual-port FDR 56 Gbit/s Mellanox ConnectX-IB PCIe Gen3 HCAs (x16 lanes), single/dual-port FDR 56 Gbit/s The SRP fabric module was released with Linux 3.3 on March 18, 2012. In 2012, c't magazine measured almost 5000 MB/s throughput with the LIO SRP target over one Mellanox ConnectX-3 port in 56 Gbit/s FDR mode on a Sandy Bridge PCI Express 3.0 system with four Fusion-io ioDrive PCI Express flash memory cards. USB The USB Gadget fabric module enables Linux to export local storage devices via the Universal Serial Bus (USB), so that other systems can mount them as an ordinary storage device. USB was designed in the mid-1990s to standardize the connection of computer peripherals, and has also become common for data storage devices. The USB Gadget fabric module was released with Linux 3.5 on July 21, 2012. targetcli targetcli is a user-space, single-node management command-line interface (CLI) for LIO. It supports all fabric modules and is based on a modular, extensible architecture, with plug-in modules for additional fabric modules or functionality. targetcli provides a CLI that uses an underlying generic target library through a well-defined API. Thus the CLI can easily be replaced or complemented by a UI with other metaphors, such as a GUI. targetcli is implemented in Python and consists of three main modules: the underlying rtslib library and its API; the configshell, which encapsulates the fabric-specific attributes in corresponding 'spec' files; and the targetcli shell itself. Detailed instructions on how to set up LIO targets can be found on the LIO wiki. Linux distributions targetcli and LIO are included in most Linux distributions by default. See also The SCST Linux SCSI target software stack Fibre Channel Fibre Channel over Ethernet (FCoE) IEEE 1394 / Firewire InfiniBand iSCSI iSCSI Extensions for RDMA (iSER) SCSI RDMA Protocol (SRP) USB Notes References External links Datera website SCSI Linux Free software
1208979
https://en.wikipedia.org/wiki/List%20of%20Linux%20audio%20software
List of Linux audio software
The following is an incomplete list of Linux audio software. Audio players GStreamer-based Amarok is a free music player for Linux and other Unix-like operating systems. Multiple backends are supported (xine, helix and NMM). Banshee is a free audio player for Linux which uses the GStreamer multimedia platforms to play, encode, and decode Ogg Vorbis, MP3, and other formats. Banshee supports playing and importing audio CDs and playing and synchronizing music with iPods. Audioscrobbler API support. Clementine is a cross-platform, open-source, Qt based audio player, written in C++. It can play Internet radio streams; managing some media devices, playlists; supports visualizations, Audioscrobbler API. It was made as a spin-off of Amarok 1.4 and is a rougher version of said program. Exaile is a free software audio player for Unix-like operating systems that aims to be functionally similar to KDE’s Amarok. Unlike Amarok, Exaile is a Python program and uses the GTK toolkit. Guayadeque Music Player is a free and open-source audio player written in C++ using the wxWidgets toolkit. Muine is an audio player for the GNOME desktop environment. Muine is written in C# using Mono and Gtk#. The default backend is GStreamer framework but Muine can also use xine libraries. Quod Libet is a GTK based audio player, written in Python, using GStreamer or Xine as back ends. Its distinguishing features are a rigorous approach to tagging (making it especially popular with classical music fans) and a flexible approach to music library management. It supports regular expression and Boolean algebra-based searches, and is stated to perform efficiently with music libraries of tens of thousands of tracks. Rhythmbox is an audio player inspired by Apple iTunes. Songbird is a cross-platform, open-source media player and web browser. It is built using code from the Firefox web browser. The graphical user interface (GUI) is very similar to Apple iTunes, and it can sync with Apple iPods. Like Firefox, Songbird is extensible via downloadable add-ons. It's able to display lyrics retrieved from the net, and also the ones embedded through metadata (ID3v2 tag) after adding the LyricMaster plug-in. Linux official support for Songbird was discontinued in April, 2010. But in December, 2011 a group of programmers forked it openly as Nightingale. Music Player Daemon based Cantata is a Qt-based front-end for Music Player Daemon. Ario is a light GTK2 client to MPD Other aTunes is a free, cross-platform audio player for operating systems supporting the programming language Java (Unix-like: Linux, BSD, Macintosh), and Windows. aTunes can also play Internet radio streams and automatically display associated artist information, song videos, and song lyrics. Audacious is a free media player for Linux or Linux-based systems. It can be expanded via plug-ins, including support for all popular codecs. On most systems a useful set of plug-ins is installed by default, supporting MP3, Ogg Vorbis and FLAC files. Audacious' classic interface looks and feels very similar to Winamp. It is compatible with LADSPA plug-ins. cmus is a small and fast text-mode music player for Linux and many other Unix-like operating systems. DeaDBeeF (as in 0xDEADBEEF) is a modular audio player for Linux, *BSD, OpenSolaris, macOS, and other UNIX-like systems. JuK is a free software audio player for KDE, the default player since KDE 3.2. JuK supports collections of MP3, Ogg Vorbis, and FLAC audio files. 
mpg123 is a real time MPEG 1.0/2.0/2.5 audio player/decoder for layers 1, 2 and 3 (MPEG 1.0 layer 3 a.k.a. MP3 most commonly tested). Among others working with Linux, Mac OS X, FreeBSD, SunOS4.1.3, Solaris 2.5, HPUX 9.x, SGI Irix and Cygwin or plain Windows. It is free software licensed under LGPL 2.1 Music on Console (MOC) is an ncurses-based console audio player. It is designed to be powerful and easy to use, and its command structure and window layouts are similar to the Midnight Commander console file manager. It is very configurable, with Advanced Linux Sound Architecture (ALSA), Open Sound System (OSS) or JACK Audio Connection Kit (JACK) outputs, customizable color schemes, interface layouts, key bindings, and tag parsing. Sound eXchange (SoX) is a cross-platform command-line audio editor. X MultiMedia System (XMMS) is a GTK1-based multimedia player which works on many platforms, but has some features which only work under Linux. XMMS can play media files such as .ogg, MP3, MOD's, WAV and others with the use of input plug-ins. It is a free software audio player similar to Winamp that runs on many Unix-like operating systems. However, development of XMMS has been deprecated in favor of XMMS2, a new audio player built from scratch on the more modern GTK2 libraries. See also Audacious on this page as a successor to the historic XMMS. Tomahawk is a cross-platform music player built with social media and multi-source music streaming in mind. It features support for services like Spotify, Grooveshark, Dilandau, SoundCloud, 4shared, Jamendo, Last.fm, Ampache, Owncloud, Ex.fm and Subsonic. Distributions and add-ons dyne:bolic Musix GNU+Linux Planet CCRMA, an add-on for Red Hat Linux with many tools and system mods puredyne (Ubuntu based) Ubuntu Studio, an Ubuntu-based distribution geared toward multimedia AVLinux (Debian-based) Graphical programming Pure Data (Pd), graphical programming language. Audio programming languages (text-based) ChucK, an audio programming language for realtime synthesis, composition, and performance. Csound, composition, synthesis and processing. Nyquist, Lisp-based language for sound generation and analysis. Audacity supports plug-ins written in Nyquist. SuperCollider, a language like Smalltalk for real-time audio synthesis. DJ tools Mixxx is a cross-platform and DJ package supporting a wide range of file formats, MIDI/HID controllers and timecode vinyl. xwax Drum machines Hydrogen, drum machine and sequencer Recording, editing and mastering Digital audio workstations (DAWs) Ardour, a multi-track audio recorder. GPLv2+ LMMS, music composer. GPL Qtractor, a full featured multitrack digital audio workstation (DAW), with audio and MIDI sequencer. GPL REAPER, a proprietary multitrack audio, MIDI recording and mastering software. Rosegarden, MIDI sequencer. GPL Tracktion, proprietary and commercial digital audio workstation. Traverso DAW, a multi-track audio recorder. GPL Harrison Mixbus and Mixbus32C, proprietary. Bitwig Studio, proprietary and commercial digital audio workstation. Audio editors and recorders Audacity, audio editor and recorder. Ecasound, audio recorder. Jokosher, audio editor. Sound eXchange (SoX). Sweep, audio editor. WaveSurfer Sequencers MilkyTracker, an old-school tracker. MusE, MIDI-audio sequencer. Renoise, commercial modern tracker-style sequencer. Rosegarden, a music composition and editing environment based on a MIDI sequencer. Seq24, a loop based midi sequencer. Other Baudline, signal analyzer. Buzztrax, music composer. 
Gnome Wave Cleaner, denoise, dehiss and amplify. Impro-Visor, edit and play back jazz solos over chord changes and rhythm. LinuxSampler, sampler. Mp3gain, adjust MP3 playback volume without re-encoding. Sound servers aRts, the KDE 3 sound server. Phonon, the multimedia framework provided by Qt 4 and used in KDE 4. Enlightened Sound Daemon (EsounD, ESD). JACK Audio Connection Kit (JACK), real-time sound server. Network Audio System (NAS). Network-Integrated Multimedia Middleware (NMM). PulseAudio, a sound server, drop-in replacement for EsounD. PipeWire, a server for multimedia routing and pipeline processing. Synthesizers Amsynth DIN Is Noise (din), software synthesiser and musical instrument, uses the computer mouse as a bow. FluidSynth, a free and open-source software synthesizer that converts MIDI note data into an audio signal, with the graphical interface QSynth. Gnaural, binaural beat and pink noise synthesizer. LMMS, tracker, sequencer, synthesizer. Pianoteq, digital physical modeling of pianos and related instruments. PySynth, a simple software synthesizer in Python. TiMidity, plays MIDI files and converts them to PCM. WildMIDI, an alternative to TiMidity. Yoshimi, software synthesizer. ZynAddSubFX, software synthesizer. Effects processing PulseEffects, effects processing for input and output audio streams with PulseAudio. FreqTweak, real-time audio processing with spectral displays. Linux Audio Developers Simple Plug-in API (LADSPA). Disposable Soft Synth Interface (DSSI), a virtual instrument (software synthesizer) plug-in architecture. Sound eXchange (SoX), the audio Swiss Army knife. LV2, the new Linux audio standard for plug-ins. Format transcoding fre:ac OggConvert Sound eXchange (SoX) GNOME SoundConverter Radio broadcasting Airtime, an automation system for radio stations. Campcaster (discontinued), an automation system for radio stations. Icecast, free server software for streaming multimedia. OpenBroadcaster, LPFM IPTV broadcast automation tools. Radio listening Streamtuner, browse and listen to hundreds of streamed radio stations. Score and tablature editing software Frescobaldi, score writer Denemo, score editor LilyPond, score typesetter NoteEdit, score writer MuseScore, score writer TuxGuitar, a tablature editor, score writer and player oriented toward guitarists. See also Comparison of free software for audio List of music software References Linux audio software
2197220
https://en.wikipedia.org/wiki/SYSTAT%20%28software%29
SYSTAT (software)
SYSTAT is a statistics and statistical graphics software package, developed in the late 1970s by Leland Wilkinson, who was at the time an assistant professor of psychology at the University of Illinois at Chicago. Systat Software Inc. was incorporated in 1983 and grew to over 50 employees. In 1995, SYSTAT was sold to SPSS Inc., which marketed the product to a scientific audience under the SPSS Science division. By 2002, SPSS had changed its focus to business analytics and decided to sell SYSTAT to Cranes Software in Bangalore, India. Cranes formed Systat Software, Inc. to market and distribute SYSTAT in the US, and a number of other divisions for global distribution. The headquarters are in Chicago, Illinois. By 2005, SYSTAT was in its eleventh version, with a revamped codebase completely rewritten from Fortran in C++. Version 13 came out in 2009, with improvements in the user interface and several new features. See also Comparison of statistical packages PeakFit TableCurve 2D TableCurve 3D References External links SYSTAT The story of SYSTAT as told by Wilkinson C++ software Statistical software Windows-only software
67256988
https://en.wikipedia.org/wiki/Susanne%20Hambrusch
Susanne Hambrusch
Susanne Edda Hambrusch is an Austrian-American computer scientist whose research topics include data indexing for range queries, and computational thinking in computer science education. She is a professor of computer science at Purdue University. Education and career Hambrusch earned an engineering diploma from TU Wien in 1977, and completed a Ph.D. in computer science at the Pennsylvania State University in 1982. Her dissertation, The Complexity of Graph Problems on VLSI, was supervised by Janos Simon. She has been a faculty member at Purdue University since 1982, and was head of the computer science department there from 2002 to 2007 and again from 2018 to 2020. From 2010 to 2013 she took a leave from Purdue to head the Computing and Communication Foundations Division of the National Science Foundation. Recognition Purdue University gave Hambrusch their Violet Haas Award, recognizing her contributions to the advancement of women at Purdue. Hambrusch was named as a 2020 ACM Fellow, "for research and leadership contributions to computer science education". References External links Home page Year of birth missing (living people) Living people American computer scientists American women computer scientists Austrian computer scientists Austrian women computer scientists Computer science educators TU Wien alumni Pennsylvania State University alumni Purdue University faculty Fellows of the Association for Computing Machinery American women academics 21st-century American women
51145104
https://en.wikipedia.org/wiki/Educational%20organizations%20in%20Cherthala
Educational organizations in Cherthala
Educational organizations in Cherthala, India include multiple primary, secondary and tertiary institutions. College of Engineering Cherthala College of Engineering Cherthala is affiliated to APJ Abdul Kalam Technological University (KTU) and Cochin University of Science and Technology (CUSAT), The college is approved by All India Council for Technical Education (AICTE). It is one of the leading technical institutions in Kerala. It opened in 2004. CEC has established excellence in the field of science and technology. CEC offers bachelor and master engineering courses in electronics, computer science, and electrical engineering. Nair Service Society College NSS College, Cherthala, was founded by the Nair Service Society, led by the late Mannathu Padmanabhan. This institution began as a Junior College in 1964. It was upgraded in 1968. Its first PG course was started in 1995. The college has an active NCC Unit, three socially committed National Service Scheme Units and a sports and arts wing. Students won laurels in University examinations, youth festivals and athletic contests. It offers Bachelor of Arts (English, Malayalam, History and Economics), Bachelor of science (Mathematics, Physics, Chemistry, Botany and Environment & Water Management), Master of Arts (Economics), Bachelor of Commerce and Master of Science in Mathematics. The college is affiliated to Kerala University. Sree Narayana College SN College It was inaugurated in 1964 by the late Sri. R. Sankar, the founder-secretary of S.N. Trusts and then Chief Minister of Kerala. It began as a junior college and evolved into a full college, offering ten degree courses and two PG courses. The college enrolls more than 2,500 students served by more than 90 faculty. Four NSS units are active in the college. A Population Education Club and a Women's Cell operate there. The NCC wing has had its cadets participate in Republic Day Parade. College sports teams have excelled, especially in Kabaddi and Kho-Kho. Its degree coursesa are Bachelor of Arts (Malayalam, Politics, Philosophy, History and Economics), Bachelor of Science (Computer Science, Physics, Chemistry, Botany, Zoology and Geology), Master of Arts (Economics), Bachelor of Commerce and Master of Science (Botany, Zoology, Physics). The college is affiliated to Kerala University. St. Michael's college St. Michael's college enrolls 1,700 students with 62 members on the teaching staff and 47 on the non-teaching staff. Its degree courses are: Bachelor of Arts (Economics), Bachelor of Science (Physics, Chemistry and Zoology), Master of Arts (Economics) and Bachelor of Ccommerce. The college is affiliated to Kerala University. KVM College of Engineering & Information Technology KVM College of Engineering & Information Technology opened in 2001. It is approved by AICTE and is recognized by Cochin University of Science & Technology (CUSAT). SNGM Institutions - Valamangalam, Thuravoor The Sree Narayana Guru Memorial Charitable and Educational (S.N.G.M.) Trust runs M.Ed. College, B.Ed. College, Teacher Training Institute, Polytechnic For Catering Technology, Pharmacy College, Arts & Science College, Senior Higher Secondary School and K.R. Gouriamma Engineering College For Women. The campus is situated in Valamangalam South village, 4 km east of Thuravoor-NH 47 junction. The institutions are affiliated with Kerala University. St. Joseph's School of Pharmacy The school is situated near Cherthala. ITIs, ITCs and nursing colleges St. 
Joseph's ITI - Kurakanchanda, Thuravoor South S.B College of Engineering & ITC Munisif Court Junction, Cherthala Sobha ITC, K.R.Puram.P.O, Pallippuram KVM Nursing College - KVM Hospital Cherthala Sacred Heart Nursing College & Hospital - Mathilakam, Cherthala EXCEL ITC south of private bus stand near st. marys G.H.school Cherthala-688524 ph.0478-3251396 Co-operative Training Centre/College This institute is under the management of State Co-operative Union, Kerala. It offers a Junior diploma in co-operation (JDC) and a Higher Diploma in Co-operation & Business Management (HDC&BM). (Govt approved courses) Qualifications: JDC requires SSLC (medium: Malayalam/English) HDC&BM accepts any degree (medium: Malayalam/English) Some seats are reserved for SC/ST, employees of co-operative societies and government departments. Subjects include Co-operation, Types of Co-operative Societies and their functions, Co-operative Laws, Other Laws applicable to Co-operative Societies, Co-operative Audit, Rural Development Management, Banking Accountancy Software Applications. The curriculum includes field studies and viva voce. Other educational institutes MGEF (Mahathma Gandhi Education Foundation) Mahathma Gandhi Education Foundation's had office is at Cherthala. MGEF conducts Information Technology Courses (Multimedia, Animation, Programming, .NET, CAD Engineering, Hardware Courses, Web Designing Courses etc.), Fire and Safety Courses, Fashion Technology, Modeling, Management Courses, Electronics and Electrical Engineering Courses. MGEF has franchises or study centers all over Kerala, Tamil Nadu, Karnataka, Andhra Pradesh and Madhya Pradesh. MGEF has international Offices in Bahrain and Saudi Arabia. MGEF provides 100% placement assistance through its Online Placement Cell and through its International Office at Bahrain and Saudi Arabia. MGEF Head Office is at Gandhi Bhavan, North of Devi Temple, Cherthala, Kerala. Schools Schools in and around Cherthala: Technical Higher Secondary School, Cherthala Holy Family Higher secondary School, Muttom, Cherthala Town St. Anns Public School, Muttom, Cherthala St. Joseph Public School, Pattanakad, Cherthala Little Flower School, St. Francis xaviers L.P school chandiroor St. Mary's Girls High School, Near Pvt. Bus Stand, Cherthala Sree Narayana memorial Government Boys Higher secondary School, Cherthala SNDSY UPS, Sreekandeswaram, Panavally SNHSS, Sreekandeswaram, Panavally Government Girls Higher secondary School, Cherthala Town Govt. Sanskrit High School, Charamangalam, Kayippuram Our lady of mercy, aroor Sree Rajarajeshwari English medium school, Kandamangalam HSS Kandamangalam Government Polytechnic College, Xray Jn., Cherthala Town Government Lower Primary School, Cherthala Town Pattariya Samajam High School, Pallippuram, Cherthala -688541 St. George's H.S. Thankey Govt.U.P.School Velliyakulam, Varanad P.O, Cherthala Govt.H.S.S Chandiroor Govt.H.S Aroor Govt. HSS Thevarvattom, Poochakkal P.O. Al-Ameen Public school Chandiroor Bishop Moore Vidyapith Panickaveettil Sir Sebastian Public School, Vayalar, Cherthala (CBSE school) St. Theres'High school, near cher...-arookuty road, Manappuram Government L.P. School, Konattussery, Kadakkarappally P.O. St.Mary of Leuca English Medium School, Pallippuram, Cherthala-688541 (CBSE School) Vaduthala Jama-ath Higher Secondary School - Arookutty Fr. Xavier Aresseril Memorial English Medium School, Arthunkal- 688530 (CBSE School) References Education in Alappuzha district
5299458
https://en.wikipedia.org/wiki/David%20McGoveran
David McGoveran
David McGoveran (born 1952) is an American computer scientist and physicist, software industry analyst, and inventor. In computer science, he is recognized as one of the pioneers of relational database theory. Education David McGoveran majored in physics and mathematics, and minored in cognition and communication at the University of Chicago from 1973 to 1976, with graduate studies in physics and psycholinguistics. He pursued additional graduate studies from 1976 to 1979 at Stanford University. Career While a student he was employed by the Enrico Fermi Institute's Laboratory for Astrophysics and Space Research (Chicago 1973-4), Dow Chemical Company's Western Applied Science and Technology Laboratories (Walnut Creek, CA 1974), and University of Chicago Hospitals and Clinics (1975-6). After graduation from University of Chicago, he founded the consulting firm of Alternative Technologies(Menlo Park, CA 1976) under the mentoring of H. Dean Brown and Cuthbert Hurd. While starting his consulting practice, he worked at SRI International (1976-9), his first consulting client. Between 1979 and 1981, he taught electronics engineering in the Professional Engineering Institute at Menlo College (Redwood City, CA) and was Chairman of the Computer Science and Business Departments at Condie College (San Jose, CA), developing the schools bachelor program in computer science. Alternative Technologies has provided consulting on the design and development of numerous software systems, specializing in mission critical and distributed applications. Clients have included AT&T, Blue Cross, Digital Equipment, Goldman Sachs, HP, IBM, Microsoft, MCI-Worldcom, Oracle, and many others. McGoveran's software engineering contributions include a collaborative conferencing system (1978); multi-tier relational CIM (computer integrated manufacturing) system (Fasttrack, 1982); relational access manager (1984–89); international electronic funds transfer (1984); trading systems databases (1986–91); OLCP requirements (1986); an object-relational portfolio management (1986–89); first Sybase SQL Server PC client (1987); client-server API requirements (1988); object-relational API requirements (1990); query optimizer requirements (1990); first middleware market analysis and forecast (1991); Database Connectivity Benchmark (1993); numerous high availability and scalable systems (1994–96); and designed BPMS products and established the BPM category (1998-2000) with HP and IBM. He has chaired various professional conferences (1975-2001). He assesses software opportunities and risks for vendors, venture capitalists and other investors; and occasionally serves as an expert in software intellectual property litigation. Research Mathematical Logic Work on applications of mathematical logic has pervaded Mr. McGoveran's career (1971–present). He has done original research and published on the structure of paradoxes, applications of quantum logic to schizophrenia, linguistic logic and computational semantics (under James D. McCawley), fuzzy logic, and applications of logic, including multi-valued logics, to databases. Transaction Management Beginning in 1981, Mr. McGoveran began consulting on the design of transaction processing systems, including distributed transactions. 
Investigations into the complexity and cost of distributed transactions, as well as the difficulty of maintaining transactional consistency in online applications led to research into alternatives to the traditional transaction models that used pessimistic concurrency control and enforced ACID properties. McGoveran defined physical transactions as the unit of recovery, logical transactions as the unit of consistency, and business transactions as the unit of audit The resulting adaptive transaction model introduces a transaction intrinsic definition of consistency, deferring the decision to combine the results of two or more transactions. His work on transaction management resulted in the award of US Patent No 7,103,597. Relational Data Model and Related Research McGoveran's research on E.F. Codd's relational model has focused on the issues of data modeling (database design), missing information, and view updating. The last two are considered by some database researchers to be the most difficult and controversial problems in relational database research. Having worked on the design and development of several early large scale, distributed, commercial relational database applications, McGoveran sought to improve upon the science of database design. This work lead to the development of new analyses of and solutions to the problem of "missing information" and avoiding the use of nulls and therefore many-valued logic the specification and uses of relation predicates (relation or set membership functions) as an application of Leibniz' Law a new design principle (with C. J. Date) now known as the Principle of Orthogonal Design (POOD) His work on logic applied to relational databases and on design without nulls (1993) has been republished several times. McGoveran tackled the problem of view updating with Christopher J. Date starting in 1993 after having developed methods for reversible schema migration for clients on Wall Street. His solution, based on relation predicates, formed the basis for the algorithms found in The Third Manifesto (Christopher J. Date, Hugh Darwen) for updating virtual relations (e.g., views). Date has credited McGoveran with originally suggesting the basic idea for the view updating approach, and which Hugh Darwen says represented a major shift in thinking on the issue. This work has resulted in two patents (U.S. Patent 7,620,664 and U.S. Patent 7,263,512). Some of McGoveran's work on databases is discussed at Fabian Pascal's Database Debunkings web site. EAI and Business Process Management After consulting on numerous data integration and enterprise application integration projects, and related middleware products, McGoveran recognized that process aspects of integration were largely overlooked. Most business process technology focused on analyzing and documenting existing business processes, then manually "reengineering" the processes to eliminate waste, remove bottlenecks, and improve cycle times. These efforts were largely disjoint from process automation systems and distributed control systems (which focused on highly repetitive, often continuous processes), and workflow technologies (which focused on highly repetitive sequential processes like document processing). McGoveran postulated an analogy between data management and process management. 
Just as the relational data model proposed separating the logical model of the data from the physical storage model, it seemed that a logical process model (i.e., the business process model) should be separated from its physical implementation (e.g., as messaging, remote invocation, services, etc.). As with the relational model, this would permit business process design via models that were logically separated from specifics of process implementation, process scheduling, and process optimization. By introducing process measurement and analytics into the proposed process management system, closed loop process control became theoretically possible. The result was a set of requirements and a canonical architecture for the then largely unknown business process management system (BPMS). The first commercial package compliant with this BPMS architecture ChangEngine - was then built and introduced by Hewlett-Packard in 1997-98 under McGoveran's direction. Subsequently, McGoveran introduced these concepts at DCI's EAI conference in 1999, through work as Sr. Technical Editor of the eAI Journal (Thomas Communications) and worked with companies like IBM, Vitria, Candle, Fuego, Savvion, and numerous others to help shape the market and the BPM category. Many workflow and business process reengineering (BPR) companies joined in the effort, transforming themselves into BPM companies during the period 1999-2010. Affiliations Secretary-treasurer of the Alternative Natural Philosophy Association (Cambridge University) from 1982-1986, and served as co-editor of the organizations newsletter with John Amson. Co-founder, Alternative Natural Philosophy Association West (ANPA West) and its non-profit corporation (1984), along with H. Pierre Noyes and Chris Gefwert, organized its first three conferences, and was recipient Second Annual Alternative Natural Philosopher Award in 1990. Co-founder, Database Associates with Colin White, Richard Finkelstein, and Paul Winsberg (1990). Wrote and published (initially with Colin White) the Database Product Evaluation Reports (1989-1996). Founded the 60 member Enterprise Integration Council (1999-2002). ACM Life Member (1983) Amer. Math. Society Life Member (1996) IEEE Member (1978). Consulting editor for an international research journal (1975-6) Associate editor for InfoDB (1990-4) Sr. technical editor of the eAI Journal/Business Integration Journal (1999-2006). He served as a judge in technology awards including the CrossRoads A-List, the eAI Journal and Business Integration Journal Awards, and the IBM Beacon Awards. Selected publications McGoveran has written articles in the fields of relational databases, transaction processing, business intelligence, enterprise application integration, business process management, mathematics, and physics, including over 100 monthly columns for eAI Journal (a.k.a. Business Integration Journal) throughout the life of the journal. Books McGoveran, D., Date, C. J. (1992). A Guide to SYBASE and SQL Server. Reading, MA: Addison-Wesley. . Date, C.J., Darwen, H., McGoveran, D. (1998). Relational Database Writings, 1994-1997. Reading, MA: Addison-Wesley. . Encyclopedia articles McGoveran, D., (1991). The Evaluation of Optimizers. Encyclopedia Computer Science and Technology: Volume 26, Supplement 11. New York, NY:Marcel Dekker. McGoveran, D., (1993). The Evaluation of Optimizers. Encyclopedia of Microcomputers: Volume 13. New York, NY: CRC Press. & . 
References External links Alternative Technologies Living people American technology writers 1952 births
16627085
https://en.wikipedia.org/wiki/Institut%20Gaspard%20Monge
Institut Gaspard Monge
The Gaspard Monge Institute of electronics and computer science is the research and teaching body of the University of Marne la Vallée in the fields of computer science, electronics, telecommunications and networks. It is named for Gaspard Monge. The Institute is composed of four branches: The Computing research laboratory The fields in which the Institute carries out its research are: text algorithms, combinatorial mathematics, computer science applied to linguistics, image synthesis, networks, signal and communications. The postgraduate degree for Fundamental Computer Science can also be prepared within the framework of the laboratory. The laboratory of communication systems This laboratory carries out research in the following fields: electromagnetism, applications and measures, numericals, radio communications, microsystems and microtechnologies, photonics and microwaves. The postgraduate degree in Electronics and Telecommunications can be prepared within the framework of the laboratory. The computer science training unit The teachings of this University department lead to the bachelor's degree in mathematics and computer science and the postgraduate degree in computer science The electronics training unit The degrees that are prepared within this unit are the bachelor's degree in material science and the postgraduate degree in electronics and telecommunications References University of Paris-Est Marne-la-Vallée
7667814
https://en.wikipedia.org/wiki/Shared%20resource
Shared resource
In computing, a shared resource, or network share, is a computer resource made available from one host to other hosts on a computer network. It is a device or piece of information on a computer that can be remotely accessed from another computer transparently, as if it were a resource in the local machine. Network sharing is made possible by inter-process communication over the network. Some examples of shareable resources are computer programs, data, storage devices, and printers, giving rise to shared file access (also known as disk sharing and folder sharing), shared printer access, shared scanner access, and so on. The shared resource is called a shared disk, shared folder or shared document. The term file sharing traditionally means shared file access, especially in the context of operating systems and LAN and intranet services, for example in Microsoft Windows documentation. However, as BitTorrent and similar applications became available in the early 2000s, the term file sharing has increasingly become associated with peer-to-peer file sharing over the Internet. Common file systems and protocols Shared file and printer access require an operating system on the client that supports access to resources on a server, an operating system on the server that supports access to its resources from a client, and an application layer (in the four or five layer TCP/IP reference model) file sharing protocol and transport layer protocol to provide that shared access. Modern operating systems for personal computers include distributed file systems that support file sharing, while hand-held computing devices sometimes require additional software for shared file access. Each of the common file sharing protocols is associated with a "primary operating system", the operating system on which the file sharing protocol in question is most commonly used. On Microsoft Windows, a network share is provided by the Windows network component "File and Printer Sharing for Microsoft Networks", using Microsoft's SMB (Server Message Block) protocol. Other operating systems might also implement that protocol; for example, Samba is an SMB server running on Unix-like operating systems and some other non-MS-DOS/non-Windows operating systems such as OpenVMS. Samba can be used to create network shares which can be accessed, using SMB, from computers running Microsoft Windows. An alternative approach is a shared disk file system, where each computer has access to the "native" filesystem on a shared disk drive. Shared resource access can also be implemented with Web-based Distributed Authoring and Versioning (WebDAV). Naming convention and mapping The share can be accessed by client computers through some naming convention, such as UNC (Universal Naming Convention) used on DOS and Windows computers. This implies that a network share can be addressed as \\ServerComputerName\ShareName, where ServerComputerName is the WINS name, DNS name or IP address of the server computer, and ShareName may be a folder or file name, or its path. The shared folder can also be given a ShareName that is different from the folder's local name at the server side. For example, \\ServerComputerName\c$ usually denotes a drive with drive letter C: on a Windows machine. A shared drive or folder is often mapped at the client computer, meaning that it is assigned a drive letter on the local computer. For example, the drive letter H: is typically used for the user home directory on a central file server. 
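As a small illustration of the naming convention, Python's pathlib can parse UNC names on any platform; the server and share names below are made up for the example.

from pathlib import PureWindowsPath

# \\ServerComputerName\ShareName\path... -- hypothetical server and share.
unc = PureWindowsPath(r"\\fileserver01\projects\reports\q3.docx")

print(unc.drive)  # \\fileserver01\projects   (the server + share part)
print(unc.name)   # q3.docx
print(unc.parts)  # ('\\\\fileserver01\\projects\\', 'reports', 'q3.docx')

# On a Windows client the same name can be opened directly, for example with
# open(r"\\fileserver01\projects\reports\q3.docx"), or the share can first be
# mapped to a drive letter such as H: and then accessed as H:\reports\q3.docx.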
Security issues A network share can become a security liability when access to the shared files is gained (often by devious means) by those who should not have access to them. Many computer worms have spread through network shares. Network shares can also consume extensive communication capacity over non-broadband network access. Because of that, shared printer and file access is normally prohibited by firewalls for computers outside the local area network or enterprise intranet. However, by means of virtual private networks (VPN), shared resources can securely be made available for certified users outside the local network. A network share is typically made accessible to other users by marking any folder or file as shared, or by changing the file system permissions or access rights in the properties of the folder. For example, a file or folder may be accessible only to one user (the owner), to system administrators, to a certain group of users, or to the public, i.e. to all logged-in users. The exact procedure varies by platform. In operating system editions for homes and small offices, there may be a special pre-shared folder that is accessible to all users with a user account and password on the local computer. Network access to the pre-shared folder can be turned on. In the English version of the Windows XP Home Edition operating system, the pre-shared folder is named Shared documents, typically with the path . In Windows Vista and Windows 7, the pre-shared folder is named Public documents, typically with the path . Workgroup topology or centralized server In home and small office networks, a decentralized approach is often used, where every user may make their local folders and printers available to others. This approach is sometimes denoted a workgroup or peer-to-peer network topology, since the same computer may be used as client as well as server. In large enterprise networks, a centralized file server or print server, sometimes denoted the client–server paradigm, is typically used. A client process on the local user computer takes the initiative to start the communication, while a server process on the file server or print server remote computer passively waits for requests to start a communication session. In very large networks, a storage area network (SAN) approach may be used. Online storage on a server outside the local network is currently an option, especially for homes and small office networks. 
Shared file access is based on server-side pushing of folder information, and is normally used over an "always on" Internet socket. File synchronization allows the user to be offline from time to time, and is normally based on agent software that polls synchronized machines at reconnect, and sometimes repeatedly with a certain time interval, to discover differences. Modern operating systems often include a local cache of remote files, allowing offline access and synchronization when reconnected. See also Resource contention Client portals Distributed file systems Network-attached storage (NAS) Tragedy of the commons, the economic theory of a shared-resource system where individuals behave contrary to the common good Virtual private network Web literacy, includes sharing via web technology Web publishing References Pirkola, G. C., "A file system for a general-purpose time-sharing environment", Proceedings of the IEEE, June 1975, volume 63 no. 6, pp. 918–924, ISSN 0018-9219 Pirkola, G. C. and Sanguinetti, J., "The Protection of Information in a General Purpose Time-Sharing Environment", Proceedings of the IEEE Symposium on Trends and Applications 1977: Computer Security and Integrity, vol. 10 no. 4, pp. 106–114 Application layer protocols Local area networks Network file systems
7677
https://en.wikipedia.org/wiki/Computer%20monitor
Computer monitor
A computer monitor is an output device that displays information in pictorial or text form. A monitor usually comprises a visual display, some circuitry, a casing, and a power supply. The display device in modern monitors is typically a thin film transistor liquid crystal display (TFT-LCD) with LED backlighting having replaced cold-cathode fluorescent lamp (CCFL) backlighting. Previous monitors used a cathode ray tube (CRT) and some Plasma (also called Gas-Plasma) displays. Monitors are connected to the computer via VGA, Digital Visual Interface (DVI), HDMI, DisplayPort, USB-C, low-voltage differential signaling (LVDS) or other proprietary connectors and signals. Originally, computer monitors were used for data processing while television sets were used for entertainment. From the 1980s onwards, computers (and their monitors) have been used for both data processing and entertainment, while televisions have implemented some computer functionality. The common aspect ratio of televisions, and computer monitors, has changed from 4:3 to 16:10, to 16:9. Modern computer monitors are easily interchangeable with conventional television sets and vice versa. However, as computer monitors do not necessarily include integrated speakers nor TV tuners (such as digital television adapters), it may not be possible to use a computer monitor as a TV set without external components. History Early electronic computers were fitted with a panel of light bulbs where the state of each particular bulb would indicate the on/off state of a particular register bit inside the computer. This allowed the engineers operating the computer to monitor the internal state of the machine, so this panel of lights came to be known as the 'monitor'. As early monitors were only capable of displaying a very limited amount of information and were very transient, they were rarely considered for program output. Instead, a line printer was the primary output device, while the monitor was limited to keeping track of the program's operation. Computer monitors were formerly known as visual display units (VDU), but this term had mostly fallen out of use by the 1990s. Technologies Multiple technologies have been used for computer monitors. Until the 21st century most used cathode ray tubes but they have largely been superseded by LCD monitors. Cathode ray tube The first computer monitors used cathode ray tubes (CRTs). Prior to the advent of home computers in the late 1970s, it was common for a video display terminal (VDT) using a CRT to be physically integrated with a keyboard and other components of the system in a single large chassis. The display was monochromatic and far less sharp and detailed than on a modern flat-panel monitor, necessitating the use of relatively large text and severely limiting the amount of information that could be displayed at one time. High-resolution CRT displays were developed for the specialized military, industrial and scientific applications but they were far too costly for general use; wider commercial use became possible after the release of a slow, but affordable Tektronix 4010 terminal in 1972. Some of the earliest home computers (such as the TRS-80 and Commodore PET) were limited to monochrome CRT displays, but color display capability was already a possible feature for a few MOS 6500 series-based machines (such as introduced in 1977 Apple II computer or Atari 2600 console), and the color output was a speciality of the more graphically sophisticated Atari 800 computer, introduced in 1979. 
Either computer could be connected to the antenna terminals of an ordinary color TV set or used with a purpose-made CRT color monitor for optimum resolution and color quality. Lagging several years behind, in 1981 IBM introduced the Color Graphics Adapter, which could display four colors with a resolution of pixels, or it could produce pixels with two colors. In 1984 IBM introduced the Enhanced Graphics Adapter which was capable of producing 16 colors and had a resolution of . By the end of the 1980s color CRT monitors that could clearly display pixels were widely available and increasingly affordable. During the following decade, maximum display resolutions gradually increased and prices continued to fall. CRT technology remained dominant in the PC monitor market into the new millennium partly because it was cheaper to produce and offered to view angles close to 180°. CRTs still offer some image quality advantages over LCDs but improvements to the latter have made them much less obvious. The dynamic range of early LCD panels was very poor, and although text and other motionless graphics were sharper than on a CRT, an LCD characteristic known as pixel lag caused moving graphics to appear noticeably smeared and blurry. Liquid crystal display There are multiple technologies that have been used to implement liquid crystal displays (LCD). Throughout the 1990s, the primary use of LCD technology as computer monitors was in laptops where the lower power consumption, lighter weight, and smaller physical size of LCDs justified the higher price versus a CRT. Commonly, the same laptop would be offered with an assortment of display options at increasing price points: (active or passive) monochrome, passive color, or active matrix color (TFT). As volume and manufacturing capability have improved, the monochrome and passive color technologies were dropped from most product lines. TFT-LCD is a variant of LCD which is now the dominant technology used for computer monitors. The first standalone LCDs appeared in the mid-1990s selling for high prices. As prices declined over a period of years they became more popular, and by 1997 were competing with CRT monitors. Among the first desktop LCD computer monitors was the Eizo FlexScan L66 in the mid-1990s, the SGI 1600SW, Apple Studio Display and the ViewSonic VP140 in 1998. In 2003, TFT-LCDs outsold CRTs for the first time, becoming the primary technology used for computer monitors. The main advantages of LCDs over CRT displays are that LCDs consume less power, take up much less space, and are considerably lighter. The now common active matrix TFT-LCD technology also has less flickering than CRTs, which reduces eye strain. On the other hand, CRT monitors have superior contrast, have a superior response time, are able to use multiple screen resolutions natively, and there is no discernible flicker if the refresh rate is set to a sufficiently high value. LCD monitors have now very high temporal accuracy and can be used for vision research. High dynamic range (HDR) has been implemented into high-end LCD monitors to improve color accuracy. Since around the late 2000s, widescreen LCD monitors have become popular, in part due to television series, motion pictures and video games transitioning to high-definition (HD), which makes standard-width monitors unable to display them correctly as they either stretch or crop HD content. 
These types of monitors may also display HD content in the proper width, by filling the extra space at the top and bottom of the image with a solid color ("letterboxing"). Other advantages of widescreen monitors over standard-width monitors are that they make work more productive by displaying more of a user's documents and images, and allow displaying toolbars with documents. They also have a larger viewing area, with a typical widescreen monitor having a 16:9 aspect ratio, compared to the 4:3 aspect ratio of a typical standard-width monitor. Organic light-emitting diode Organic light-emitting diode (OLED) monitors provide higher contrast, better color reproduction and viewing angles than LCDs but they require more power when displaying documents with white or bright backgrounds and have a severe problem known as burn-in, just like CRTs. They are less common than LCD monitors and are often more expensive. Measurements of performance The performance of a monitor is measured by the following parameters: Display geometry: Viewable image size - is usually measured diagonally, but the actual widths and heights are more informative since they are not affected by the aspect ratio in the same way. For CRTs, the viewable size is typically smaller than the tube itself. Aspect ratio - is the ratio of the horizontal length to the vertical length. Monitors usually have the aspect ratio 4:3, 5:4, 16:10 or 16:9. Radius of curvature (for curved monitors) - is the radius that a circle would have if it had the same curvature as the display. This value is typically given in millimeters, but expressed with the letter "R" instead of a unit (for example, a display with "3800R curvature" has a 3800 mm radius of curvature). Display resolution is the number of distinct pixels in each dimension that can be displayed. For a given display size, maximum resolution is limited by dot pitch or DPI. Dot pitch or pixel pitch represents the size of the primary elements of the display. In CRTs, dot pitch is defined as the distance between sub-pixels of the same color. In LCDs it is the distance between the centers of two adjacent pixels. Dot pitch is the reciprocal of pixel density. Pixel density is a measure of how densely packed the pixels on a display are. In LCDs, pixel density is the number of pixels in one linear unit along the display, typically measured in pixels per inch (px/in or ppi). Color characteristics: Luminance - measured in candelas per square meter (cd/m², also called a nit). Contrast ratio is the ratio of the luminosity of the brightest color (white) to that of the darkest color (black) that the monitor is capable of producing simultaneously. For example, a ratio of 20,000:1 means that the brightest shade (white) is 20,000 times brighter than its darkest shade (black). Dynamic contrast ratio is measured with the LCD backlight turned off. Color depth - measured in bits per primary color or bits for all colors. Those with 10bpc (bits per channel) or more can display more shades of color (approximately 1 billion shades) than traditional 8bpc monitors (approximately 16.8 million shades or colors), and can do so more precisely without having to resort to dithering. Gamut - measured as coordinates in the CIE 1931 color space. The names sRGB or Adobe RGB are shorthand notations. Color accuracy - measured in ΔE (delta-E); the lower the ΔE, the more accurate the color representation. A ΔE of below 1 is imperceptible to the human eye. A ΔE of 2–4 is considered good and requires a sensitive eye to spot the difference.
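The display-geometry and color-depth figures above can be reproduced with simple arithmetic. The short Python sketch below uses an assumed 27-inch, 16:9, 2560 x 1440 monitor purely as an illustrative example (these numbers are not taken from the article) to derive width and height from the diagonal, the resulting pixel density and pixel pitch, and the number of displayable colors at 8 and 10 bits per channel.

```python
import math

# Assumed example monitor: 27-inch diagonal, 16:9 aspect ratio, 2560 x 1440 pixels.
diagonal_in = 27.0
aspect_w, aspect_h = 16, 9
res_w, res_h = 2560, 1440

# Width and height follow from the diagonal and the aspect ratio
# (Pythagorean theorem on the aspect-ratio triangle).
unit = diagonal_in / math.hypot(aspect_w, aspect_h)
width_in, height_in = aspect_w * unit, aspect_h * unit

# Pixel density (pixels per inch) and its reciprocal, the pixel pitch.
ppi = res_w / width_in
pitch_mm = 25.4 / ppi

# Number of displayable colors for a given color depth (bits per channel, three primaries).
def colors(bits_per_channel: int) -> int:
    return (2 ** bits_per_channel) ** 3

print(f"width  = {width_in:.1f} in, height = {height_in:.1f} in")
print(f"pixel density = {ppi:.0f} ppi, pixel pitch = {pitch_mm:.3f} mm")
print(f"8 bpc -> {colors(8):,} colors, 10 bpc -> {colors(10):,} colors")
```

For the assumed monitor this yields roughly a 23.5 x 13.2 inch panel at about 109 ppi, and the familiar figures of about 16.8 million colors at 8 bpc versus about 1.07 billion at 10 bpc.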
Viewing angle is the maximum angle at which images on the monitor can be viewed, without excessive degradation to the image. It is measured in degrees horizontally and vertically. Input speed characteristics: Refresh rate is (in CRTs) the number of times in a second that the display is illuminated (the number of times a second a raster scan is completed). In LCDs it is the number of times the image can be changed per second, expressed in hertz (Hz). Maximum refresh rate is limited by response time. The refresh rate determines the maximum number of frames per second (FPS) a monitor is capable of showing. Response time is the time a pixel in a monitor takes to change between two shades. The particular shades depend on the test procedure, which differs between manufacturers. In general, lower numbers mean faster transitions and therefore fewer visible image artifacts such as ghosting. It is commonly quoted as grey to grey (GtG), measured in milliseconds (ms). Input latency is the time it takes for a monitor to display an image after receiving it, typically measured in milliseconds (ms). Power consumption is measured in watts. Size On two-dimensional display devices such as computer monitors the display size or viewable image size is the actual amount of screen space that is available to display a picture, video or working space, without obstruction from the case or other aspects of the unit's design. The main measurements for display devices are: width, height, total area and the diagonal. The size of a display is usually given by monitor manufacturers as the diagonal, i.e. the distance between two opposite screen corners. This method of measurement is inherited from the method used for the first generation of CRT television, when picture tubes with circular faces were in common use. Being circular, it was the external diameter of the glass envelope that described their size. Since these circular tubes were used to display rectangular images, the diagonal measurement of the rectangular image was smaller than the diameter of the tube's face (due to the thickness of the glass). This method continued even when cathode ray tubes were manufactured as rounded rectangles; it had the advantage of being a single number specifying the size, and was not confusing when the aspect ratio was universally 4:3. With the introduction of flat panel technology, the diagonal measurement became the actual diagonal of the visible display. This meant that an eighteen-inch LCD had a larger visible area than an eighteen-inch cathode ray tube. The estimation of the monitor size by the distance between opposite corners does not take into account the display aspect ratio, so that for example a 16:9 widescreen display has less area than a 4:3 screen of the same diagonal. Aspect ratio Until about 2003, most computer monitors had a 4:3 aspect ratio and some had 5:4. Between 2003 and 2006, monitors with 16:9 and mostly 16:10 (8:5) aspect ratios became commonly available, first in laptops and later also in standalone monitors. Reasons for this transition included productive uses for such monitors beyond widescreen computer game play and movie viewing, such as the word processor display of two standard letter pages side by side, as well as CAD displays of large-size drawings and CAD application menus at the same time. In 2008 16:10 became the most commonly sold aspect ratio for LCD monitors and the same year 16:10 was the mainstream standard for laptops and notebook computers.
In 2010, the computer industry started to move over from 16:10 to 16:9 because 16:9 was chosen to be the standard high-definition television display size, and because they were cheaper to manufacture. In 2011, non-widescreen displays with 4:3 aspect ratios were only being manufactured in small quantities. According to Samsung, this was because the "Demand for the old 'Square monitors' has decreased rapidly over the last couple of years," and "I predict that by the end of 2011, production on all 4:3 or similar panels will be halted due to a lack of demand." Resolution The resolution for computer monitors has increased over time. From during the early 1980s, to during the late 1990s. Since 2009, the most commonly sold resolution for computer monitors is . Before 2013 top-end consumer LCD monitors were limited to at , excluding Apple products and CRT monitors. Apple introduced with Retina MacBook Pro at on June 12, 2012, and introduced a Retina iMac at on October 16, 2014. By 2015 most major display manufacturers had released resolution displays. Gamut Every RGB monitor has its own color gamut, bounded in chromaticity by a color triangle. Some of these triangles are smaller than the sRGB triangle, some are larger. Colors are typically encoded by 8 bits per primary color. The RGB value [255, 0, 0] represents red, but slightly different colors in different color spaces such as Adobe RGB and sRGB. Displaying sRGB-encoded data on wide-gamut devices can give an unrealistic result. The gamut is a property of the monitor; the image color space can be forwarded as Exif metadata in the picture. As long as the monitor gamut is wider than the color space gamut, correct display is possible, if the monitor is calibrated. A picture that uses colors that are outside the sRGB color space will display on an sRGB color space monitor with limitations. Still today, many monitors that can display the sRGB color space are not factory adjusted to display it correctly. Color management is needed both in electronic publishing (via the Internet for display in browsers) and in desktop publishing targeted to print. Additional features Universal features Power saving Most modern monitors will switch to a power-saving mode if no video-input signal is received. This allows modern operating systems to turn off a monitor after a specified period of inactivity. This also extends the monitor's service life. Some monitors will also switch themselves off after a time period on standby. Most modern laptops provide a method of screen dimming after periods of inactivity or when the battery is in use. This extends battery life and reduces wear. Indicator light Most modern monitors have two different indicator light colors wherein if video-input signal was detected, the indicator light is green and when the monitor is in power-saving mode, the screen is black and the indicator light is orange. Some monitors have different indicator light colors and some monitors have blinking indicator light when in power-saving mode. Integrated accessories Many monitors have other accessories (or connections for them) integrated. This places standard ports within easy reach and eliminates the need for another separate hub, camera, microphone, or set of speakers. These monitors have advanced microprocessors which contain codec information, Windows Interface drivers and other small software which help in proper functioning of these functions. Ultrawide screens Monitors that feature an aspect ratio of 21:9 or 32:9 as opposed to the more common 16:9. 
32:9 monitors are marketed as super ultrawide monitors. Touch screen These monitors use touching of the screen as an input method. Items can be selected or moved with a finger, and finger gestures may be used to convey commands. The screen will need frequent cleaning due to image degradation from fingerprints. Consumer features Glossy screen Some displays, especially newer LCD monitors, replace the traditional anti-glare matte finish with a glossy one. This increases color saturation and sharpness but reflections from lights and windows are very visible. Anti-reflective coatings are sometimes applied to help reduce reflections, although this only mitigates the effect. Curved designs In about 2009, NEC/Alienware together with Ostendo Technologies, Inc. (based in Carlsbad, CA) were offering a curved (concave) monitor that allows better viewing angles near the edges, covering 75% of peripheral vision in the horizontal direction. This monitor had 2880x900 resolution, 4 DLP rear projection systems with LED light sources and was marketed as suitable both for gaming and office work, although at $6,499 it was rather expensive. While this particular monitor is no longer in production, most PC manufacturers now offer some sort of curved desktop display. 3D Newer monitors are able to display a different image for each eye, often with the help of special glasses, giving the perception of depth. An autostereoscopic screen can generate 3D images without headgear. Professional features Anti-glare and anti-reflection screens Features for medical use or for outdoor placement. Directional screen Narrow viewing angle screens are used in some security-conscious applications. Integrated professional accessories Integrated screen calibration tools, screen hoods, signal transmitters; protective screens. Tablet screens A combination of a monitor with a graphics tablet. Such devices are typically unresponsive to touch without the pressure of one or more special tools. Newer models however are now able to detect touch from any pressure and often have the ability to detect tilt and rotation as well. Touch and tablet screens are used on LCDs as a substitute for the light pen, which can only work on CRTs. Integrated display LUT and 3D LUT tables The option of using the display as a reference monitor; these calibration features can give advanced color management control for achieving a near-perfect image. Local dimming backlight An option on professional LCD monitors, and a basic feature of OLED screens; a professional feature with a mainstream tendency. Backlight brightness/color uniformity compensation A near-mainstream professional feature; an advanced hardware driver for backlight modules with local zones of uniformity correction. Mounting Computer monitors are provided with a variety of methods for mounting them depending on the application and environment. Desktop A desktop monitor is typically provided with a stand from the manufacturer which lifts the monitor up to a more ergonomic viewing height. The stand may be attached to the monitor using a proprietary method or may use, or be adaptable to, a Video Electronics Standards Association (VESA) standard mount. Using a VESA standard mount allows the monitor to be used with an after-market stand once the original stand is removed. Stands may be fixed or offer a variety of features such as height adjustment, horizontal swivel, and landscape or portrait screen orientation.
VESA mount The Flat Display Mounting Interface (FDMI), also known as VESA Mounting Interface Standard (MIS) or colloquially as a VESA mount, is a family of standards defined by the Video Electronics Standards Association for mounting flat panel monitors, TVs, and other displays to stands or wall mounts. It is implemented on most modern flat-panel monitors and TVs. For computer monitors, the VESA Mount typically consists of four threaded holes on the rear of the display that will mate with an adapter bracket. Rack mount Rack mount computer monitors are available in two styles and are intended to be mounted into a 19-inch rack: Fixed A fixed rack mount monitor is mounted directly to the rack with the LCD visible at all times. The height of the unit is measured in rack units (RU) and 8U or 9U are most common to fit 17-inch or 19-inch LCDs. The front sides of the unit are provided with flanges to mount to the rack, providing appropriately spaced holes or slots for the rack mounting screws. A 19-inch diagonal LCD is the largest size that will fit within the rails of a 19-inch rack. Larger LCDs may be accommodated but are 'mount-on-rack' and extend forward of the rack. There are smaller display units, typically used in broadcast environments, which fit multiple smaller LCDs side by side into one rack mount. Stowable A stowable rack mount monitor is 1U, 2U or 3U high and is mounted on rack slides allowing the display to be folded down and the unit slid into the rack for storage. The display is visible only when the display is pulled out of the rack and deployed. These units may include only a display or may be equipped with a keyboard creating a KVM (Keyboard Video Monitor). Most common are systems with a single LCD but there are systems providing two or three displays in a single rack mount system. Panel mount A panel mount computer monitor is intended for mounting into a flat surface with the front of the display unit protruding just slightly. They may also be mounted to the rear of the panel. A flange is provided around the LCD, sides, top and bottom, to allow mounting. This contrasts with a rack mount display where the flanges are only on the sides. The flanges will be provided with holes for thru-bolts or may have studs welded to the rear surface to secure the unit in the hole in the panel. Often a gasket is provided to provide a water-tight seal to the panel and the front of the LCD will be sealed to the back of the front panel to prevent water and dirt contamination. Open frame An open frame monitor provides the LCD monitor and enough supporting structure to hold associated electronics and to minimally support the LCD. Provision will be made for attaching the unit to some external structure for support and protection. Open frame LCDs are intended to be built into some other piece of equipment. An arcade video game would be a good example with the display mounted inside the cabinet. There is usually an open frame display inside all end-use displays with the end-use display simply providing an attractive protective enclosure. Some rack mount LCD manufacturers will purchase desktop displays, take them apart, and discard the outer plastic parts, keeping the inner open-frame LCD for inclusion into their product. Security vulnerabilities According to an NSA document leaked to Der Spiegel, the NSA sometimes swaps the monitor cables on targeted computers with a bugged monitor cable in order to allow the NSA to remotely see what is being displayed on the targeted computer monitor. 
Van Eck phreaking is the process of remotely displaying the contents of a CRT or LCD by detecting its electromagnetic emissions. It is named after Dutch computer researcher Wim van Eck, who in 1985 published the first paper on it, including proof of concept. Phreaking more generally is the process of exploiting telephone networks. See also History of display technology Comparison of CRT, LCD, Plasma, and OLED displays Flat panel display Multi-monitor Vector monitor Virtual desktop Variable refresh rate High frame rate Head-mounted display References External links American inventions Computer peripherals Display devices
11732933
https://en.wikipedia.org/wiki/PlayStation%203%20system%20software
PlayStation 3 system software
The PlayStation 3 system software is the updatable firmware and operating system of the PlayStation 3. The base operating system used by Sony for the PlayStation 3 is a fork of both FreeBSD and NetBSD called CellOS. It uses XrossMediaBar as its graphical shell. The process of updating is almost identical to that of the PlayStation Portable, PlayStation Vita, and PlayStation 4. The software may be updated by downloading the update directly on the PlayStation 3, downloading it from the user's local Official PlayStation website to a PC and using a USB storage device to transfer it to the PlayStation 3, or installing the update from game discs containing update data. The initial slim PS3s SKU shipped with a unique firmware with new features, also seen in software 3.00. Technology System The native operating system of the PlayStation 3 is CellOS, which is believed to be a fork of FreeBSD; TCP/IP stack fingerprinting identifies a PlayStation 3 as running FreeBSD, and the PlayStation 3 is known to contain code from FreeBSD and NetBSD. The 3D computer graphics API software used in the PlayStation 3 is LibGCM and PSGL, based on OpenGL ES and Nvidia's Cg. LibGCM is a low level API, and PSGL is a higher level API, but most developers preferred to use libGCM due to higher levels of performance. This is similar to the later PlayStation 4 console which also has two APIs, the low level GNM and the higher level GNMX. Unlike the Software Development Kit (SDK) for mobile apps, Sony's PlayStation 3 SDK is only available to registered game development companies and contains software tools and an integrated hardware component. Due to the fact that it requires a licensing agreement with Sony (which is considered expensive), a number of open source and homebrew PS3 SDKs are available in addition to a number of leaked PS3 SDKs. Graphical shell The PlayStation 3 uses the XrossMediaBar (XMB) as its graphical user interface, which is also used in the PlayStation Portable (PSP) handheld console, a variety of Sony BRAVIA HDTVs, Blu-ray disc players and many more Sony products. XMB displays icons horizontally across the screen that be seen as categories. Users can navigate through them using the left and right buttons of the D-pad, which move the icons forward or back across the screen, highlighting just one at a time, as opposed to using any kind of pointer to select an option. When one category is selected, there are usually more specific options then available to select that are spread vertically above and below the selected icon. Users may navigate among these options by using the up and down buttons of the D-pad. The basic features offered by XMB implementations varies based on device and software version. Apart from those appearing in the PSP console such as category icons for Photos, Music and Games, the PS3 added Users, TV and Friends to the XMB. Also, XMB offers a degree of multitasking. In-game XMB features were added to the PS3 properly with firmware version 2.41 after causing early implementation problems. While XMB proved to be a successful user interface for Sony products such as PSP and PS3, the next generation Sony video game consoles such as the PlayStation 4 and the PlayStation Vita no longer use this user interface. Cooperation with handheld consoles The PlayStation 3 supports Remote Play with Sony's handheld game consoles, the PlayStation Portable and the PlayStation Vita. 
However, unlike Remote Play between the PlayStation 4 and the PlayStation Vita, the problem with PS3 was that it only supported a "select" few titles and results were often laggy. However, it is clear that Remote Play with the PS3 was the testing bed for its much better integration with the PS4. Also, for users having both the PlayStation 3 and the PlayStation Vita, it is possible to share media files videos, music and images between them by transferring multimedia files directly from the PlayStation 3 to the PlayStation Vita, or vice versa. Furthermore, they can use a service called cross-buy which allows them to buy certain games that support this feature one time, and play them in both Sony platforms. Not only that, but in the case of most such games, their saved games actually transfer back and forth between devices, allowing players to pick up from the moment they left off. There is also a feature called cross-play (or cross-platform play) covering any PlayStation Vita software title that can interact with a PlayStation 3 software title. Different software titles use Cross-Play in different ways. For example, Ultimate Marvel vs. Capcom 3 is a title supporting the Cross-Play feature, and the PS3 version of the game can be controlled using the PS Vita system. In addition, some PS3 games can be played on the PS Vita using the PlayStation Now streaming service. Non-game features Similar to many other consoles, the PlayStation 3 is capable of photo, audio, and video playback in a variety of formats. It also includes various photo slideshow options and several music visualizations. Furthermore, the PlayStation 3 is able to play Blu-ray Disc and DVD movies as well as audio CDs out of the box, and also capable of adopting streaming media services such as Netflix. For a web browser, the PS3 uses the NetFront browser, although unlike its successor PS4 which uses the same modern Webkit core as Safari from Apple, the PS3 web browser receives a low score in HTML5 compliance testing. However, unlike the PS4, the PS3 is able to play Adobe Flash, including full-screen flash. Early versions of the PlayStation 3 system software also provided a feature called OtherOS that was available on the PS3 systems prior to the slimmer models launched in September 2009. This feature enabled users to install an operating system such as Linux, but due to security concerns, Sony later removed this functionality through the 3.21 system software update. According to Sony Computer Entertainment (SCE), disabling this feature will help ensure that PS3 owners will continue to have access to the broad range of gaming and entertainment content from SCE and its content partners on a more secure system. Sony was successfully sued in a class action over the removal of this feature. The settlement was approved in September 2016. Sony agreed to pay up to $55 to as many as 10 million PS3 owners but denied wrongdoing. Furthermore, the PlayStation 3 provides printing support. It can for example print images and web pages when a supported printer is connected via a USB cable or a local network. However, only a selection of printers from Canon, Epson, and Hewlett-Packard are compatible with the PS3. Backwards compatibility All PlayStation 3 consoles are able to play original PlayStation games (PS One discs and downloadable classics). However, not all PlayStation 3 models are backwards compatible with PlayStation 2 games. 
In summary, early PS3 consoles such as the 20GB and 60GB launch PS3 consoles were backwards compatible with PS2 games because they had PS2 chips in them. Some later models, most notably the 80GB Metal Gear Solid PS3 consoles are also backwards compatible, through partial software emulation in this case since they no longer had the PS2 CPU in them, although they do have the PS2 GPU in them, allowing for reduced backward compatibility through hardware-assisted software emulation. All other later models, such as the 40GB, 140GB, PS3 Slim, and PS3 Super Slim are not backwards compatible with PS2 games, though users can still enjoy PS One and PS3 games on them. According to Sony, when they removed backwards compatibility from the PS3, they had already been at a point where they were three years into its lifecycle; by that time the vast majority of consumers that were purchasing the PS3 cite PS3 games as a primary reason, meaning that the PS2 compatibility was no longer necessary. Nevertheless, PS2 Classics which are playable on the PS3 have officially been introduced to the PlayStation Network for purchase afterwards, although they are only a selection of PS2 games republished in digital format, and unlike PS3 games, they lack Trophy support. Later when the PlayStation 4 console was released, it was not backward compatible with either PlayStation 3, PlayStation 2, or PlayStation 1 games, although limited PS2 backward compatibility was later introduced, and PS4 owners might play a selected group of PS3 games by streaming them over the Internet using the PlayStation Now cloud-based gaming service. LV0 keys The PlayStation 3 LV0 keys are a set of cryptographic keys which form the core of the PlayStation 3's security system. According to a news story on Polygon: With the LV0 keys users are able to circumvent restrictions placed by Sony, more commonly known as jailbreaking. The LV0 keys were released online by a group calling themselves "The Three Musketeers", granting users access to some of the most sensitive parts of the PlayStation 3. With access to these areas, users can decrypt security updates and work around the authorized PlayStation firmware. This allows PlayStation 3 firmware updates to be modified on a computer so that they can be run on a modified console. The Three Musketeers decided to release the code after a group of rival hackers obtained the code and planned to sell it. While this is not the first time the PlayStation 3 has been hacked, according to Eurogamer, "The release of the new custom firmware—and the LV0 decryption keys in particular—poses serious issues." It also says that "options Sony has in battling this leak are limited" since "the reveal of the LV0 key basically means that any system update released by Sony going forward can be decrypted with little or no effort whatsoever". History of updates The "initial" release for the PlayStation 3 system software was version 1.10 as appeared on 11 November 2006 in Japan and 17 November 2006 in North America that provided the PlayStation Network services and the Remote Play for the 60 GB model. However, version 1.02 was included with some games. There were a number of updates in the 1.xx versions, which provided new features such as the Account Management, compatibility of USB devices for PlayStation 2 format games, and supports for USB webcams and Bluetooth keyboards and mice. 
Version 1.80 released on 24 May 2007 added a number of relatively small new features, mostly related to media and videos, such as the ability to upscale standard DVD-Videos to 1080p and to downscale Blu-ray video to 720p. Version 1.90 released on 24 May 2007 further added the Wallpaper feature for the background of XMB and the ability to eject a game disc using the controller, to re-order game icons by format and creation date. This update also forced 24 Hz output for Blu-ray over HDMI, and introduced bookmarks and a security function to the web browser. The last version in the 1.xx series was 1.94 released on 23 October 2007 that added support for DualShock 3 controllers. As with the version 1.xx series, there were a number of versions in the 2.xx and 3.xx series, released between 8 November 2007 and 20 September 2011. There were quite a few noticeable changes, and in version 2.10 alone there were new features such as the additions of the Voice Changer feature with the power to make users sound like a person using a voice changer with five presets over hi and low tones, a new music bitmapping process specifically designed for the PS3 to provide enhanced audio playback, as well as supports for DivX and WMV playback and Blu-ray disc profile 1.1 for picture-in-picture. Version 2.50 released on 15 October 2008 was the update in the 2.xx series that contained the largest number of new features or changes, among them were the support for official PS3 Bluetooth headset, in-game screenshots and Adobe Flash 9. A recovery menu (or safe mode) was also introduced in this version. Later versions in the 2.xx series such as the 2.7x, 2.85 or 2.90 were distributed with the PS3 "slim". Similar to versions such as 2.00, versions such as 3.00, 3.10, 3.30, 3.40, and 3.70 all introduced relatively large number of new features or changes, such as supports for new Dynamic Custom Themes, improvements in the Internet Browser, Trophy enhancements, and a new [Video Editor & Uploader] application. The most noticeable change in the version 4.00 released on 30 November 2011 was the added support for the PlayStation Vita handheld game consoles. For example, [PS Vita System] was added as an option and [PS Vita System Application Utility] has been added as a feature under [Game]. With this update, the PlayStation 3 also gained the ability to transfer videos, images, musics, and game data to and from the PlayStation Vita. Version 4.10 released on 8 February 2012 also added improvements to the Internet Browser including some support for HTML5 and its display speed and web page layout accuracy. Later versions in the 4.xx series all made a few changes to the system, mostly to improve the stability and operation quality during the uses of some applications, in addition to adding new features such as displaying closed captions when playing BDs and DVDs and "Check for Update" to the options menu for a game. The PlayStation 3 system software is currently still being updated by Sony. Withdrawal of update 2.40 System software version 2.40, which included the in-game XMB feature and PlayStation 3 Trophies, was released on 2 July 2008; however, it was withdrawn later the same day because a small number of users were unable to restart their consoles after performing the update. The fault was explained to have been because of certain system administrative data being contained in the HDD. The issue was addressed in version 2.41 of the system software released on 8 July 2008. 
Class action suit filed over update 3.0 System software version 3.0 was released on 1 September 2009. Shortly after its release, a number of users complained that the system update caused their system's Blu-ray drive to malfunction. In addition, John Kennedy of Florida filed a class action suit against Sony Computer Entertainment America (SCEA). Kennedy had purchased a PlayStation 3 in January 2009, claiming it worked perfectly until he installed the required firmware update 3.0, at which point the Blu-ray drive in his system ceased functioning properly. Sony later released a statement, "SCEA is aware of reports that PS3 owners are experiencing isolated issues with their PS3 system since installing the most recent system software update (v3.00)," and released software update 3.01 on 15 September 2009. However, after installing 3.01, the plaintiff alleged that not only were the problems not solved, but the new update created new issues as well.
In February 2011, all claims of false advertising in the case were dismissed, but the plaintiffs were allowed to appeal and amend the case, and the other claims that the removal violated the Computer Fraud and Abuse Act were allowed to go forward. In March 2011, the plaintiffs amended their complaint to refute Sony's claims that it was within its rights under the TOS and warranty to remove the feature, adding more details to their claims including breach of warranty, breach of implied warranty, breach of contract, unjust enrichment, and breach of several California unfair business practices laws. In April 2011, SCEA again asked that the case be dismissed, claiming that the plaintiffs' refiled claim was insufficient and that they were hackers who wanted to violate Sony's intellectual property, and asked the judge to grant search rights on their PS3 systems. SCEA also claimed that it was not the division solely responsible for the removal and should not be held responsible, despite conflicting information to the contrary. On 18 April 2011, the plaintiffs fired back at Sony's renewed efforts to have the case dismissed by pointing out that Sony had made many of the same claims before and that they had been dismissed by the court, and also pointed out several legal precedents under California law that refuted Sony's claims. In December 2011, the whole case was dismissed on the grounds that the plaintiffs had failed to prove that they could expect the "Other OS" feature beyond the warranty of the machine. However, this decision was overturned in a 2014 appellate court decision finding that the plaintiffs had indeed made clear and sufficiently substantial claims. Ultimately, in 2016, Sony settled with users who installed Linux or purchased a PlayStation 3 based upon the alternative OS functionality. Withdrawal of update 4.45 System software version 4.45 was released on 18 June 2013; however, it was withdrawn one day later because a small number of users were unable to restart their consoles after performing the update. On 21 June 2013, Morgan Haro, a Community Manager for PlayStation Network, announced that the issue had been identified and a new update was planned to be released to resolve the issue. The system update that addressed this issue, version 4.46, was released on 27 June 2013, and a fix for those affected by system version 4.45 was also provided by Sony. See also Media Go Linux for PlayStation 3 LocationFree Player Qriocity XrossMediaBar Other gaming platforms from Sony: PlayStation 4 system software PlayStation Portable system software PlayStation Vita system software Other gaming platforms from the next generation: Wii U system software Xbox One system software Nintendo 3DS system software Nintendo Switch system software Other gaming platforms from this generation: Wii system software Xbox 360 system software Nintendo DSi system software References External links Official PlayStation 3 System Software Update page (Australia, New Zealand, United Kingdom, United States) Update History PlayStation Blog (firmware announcements) PS3 compatible printers Software Game console operating systems Unix variants 2006 software Proprietary operating systems PlayStation 3
49900621
https://en.wikipedia.org/wiki/Redox%20%28operating%20system%29
Redox (operating system)
Redox is a Unix-like microkernel operating system written in the programming language Rust, which has a focus on safety, stability, and performance. Redox aims to be secure, usable, and free. Redox is inspired by prior kernels and operating systems, such as SeL4, MINIX, Plan 9, and BSD. It is similar to GNU and BSD, but is written in a memory-safe language. It is free and open-source software distributed under an MIT License. Design The Redox operating system is designed to be secure. This is reflected in two design decisions: Using the programming language Rust for implementation Using a microkernel design, similar to MINIX Components Redox provides packages (memory allocator, file system, display manager, core utilities, etc.) that together make up a functional operating system. Redox relies on an ecosystem of software written in Rust by members of the project. Redox kernel – derives from the concept of microkernels, with inspiration from MINIX Ralloc – memory allocator TFS file system – inspired by the ZFS file system Ion shell – the underlying library for shells and command execution in Redox, and the default shell pkgutils – package manager Orbital windowing system – display and window manager, sets up the orbital: scheme, manages the display, and handles requests for window creation, redraws, and event polling relibc – C standard library Command-line applications Redox supports command-line interface (CLI) programs, including: Sodium – vi-like editor that provides syntax highlighting Rusthello – advanced Reversi AI; is highly concurrent, serving as proof of Redox's multithreading abilities; supports various AI strategies, such as brute forcing, minimax, local optimizations, and hybrid AIs Graphical applications Redox supports graphical user interface (GUI) programs, including: NetSurf – a lightweight web browser which uses its own layout engine Calculator – a software calculator which provides functions similar to the Windows Calculator program Editor – simple text editor, similar to Microsoft Notepad File Browser – a file manager that displays icons, names, sizes, and details for files; uses the launcher command to open files when they are clicked Image Viewer – Image viewer for simple file types Pixelcannon – 3D renderer, can be used to benchmark the Orbital desktop Orbterm – ANSI type terminal emulator History Redox was created by Jeremy Soller and was first published on 20 April 2015 on GitHub. As of July 2021, the Redox repository had a total of 79 contributors. References External links Official GitLab instance Free software operating systems Hobbyist operating systems Microkernel-based operating systems Free software programmed in Rust Software using the MIT license Unix variants
667678
https://en.wikipedia.org/wiki/Side-channel%20attack
Side-channel attack
In computer security, a side-channel attack is any attack based on information gained from the implementation of a computer system, rather than weaknesses in the implemented algorithm itself (e.g. cryptanalysis and software bugs). Timing information, power consumption, electromagnetic leaks or even sound can provide an extra source of information, which can be exploited. Some side-channel attacks require technical knowledge of the internal operation of the system, although others such as differential power analysis are effective as black-box attacks. The rise of Web 2.0 applications and software-as-a-service has also significantly raised the possibility of side-channel attacks on the web, even when transmissions between a web browser and server are encrypted (e.g. through HTTPS or WiFi encryption), according to researchers from Microsoft Research and Indiana University. Many powerful side-channel attacks are based on statistical methods pioneered by Paul Kocher. Attempts to break a cryptosystem by deceiving or coercing people with legitimate access are not typically considered side-channel attacks: see social engineering and rubber-hose cryptanalysis. General General classes of side-channel attack include: Cache attack — attacks based on the attacker's ability to monitor cache accesses made by the victim in a shared physical system, as in a virtualized environment or a type of cloud service. Timing attack — attacks based on measuring how much time various computations (such as, say, comparing an attacker's given password with the victim's unknown one) take to perform. Power-monitoring attack — attacks that make use of varying power consumption by the hardware during computation. Electromagnetic attack — attacks based on leaked electromagnetic radiation, which can directly provide plaintexts and other information. Such measurements can be used to infer cryptographic keys using techniques equivalent to those in power analysis or can be used in non-cryptographic attacks, e.g. TEMPEST (aka van Eck phreaking or radiation monitoring) attacks. Acoustic cryptanalysis — attacks that exploit sound produced during a computation (rather like power analysis). Differential fault analysis — in which secrets are discovered by introducing faults in a computation. Data remanence — in which sensitive data are read after supposedly having been deleted (e.g. in a cold boot attack). Software-initiated fault attacks — currently a rare class of side channels; Row hammer is an example in which off-limits memory can be changed by accessing adjacent memory too often (causing state retention loss). Optical — in which secrets and sensitive data can be read by visual recording using a high-resolution camera, or other devices that have such capabilities (see examples below). In all cases, the underlying principle is that physical effects caused by the operation of a cryptosystem (on the side) can provide useful extra information about secrets in the system, for example, the cryptographic key, partial state information, full or partial plaintexts and so forth. The term cryptophthora (secret degradation) is sometimes used to express the degradation of secret key material resulting from side-channel leakage. Examples A cache attack works by monitoring security-critical operations such as AES T-table entries, modular exponentiation or multiplication, or memory accesses. The attacker is then able to recover the secret key depending on the accesses made (or not made) by the victim, deducing the encryption key.
Also, unlike some of the other side-channel attacks, this method does not create a fault in the ongoing cryptographic operation and is invisible to the victim. In 2017, two CPU vulnerabilities (dubbed Meltdown and Spectre) were discovered, which can use a cache-based side channel to allow an attacker to leak memory contents of other processes and the operating system itself. A timing attack watches data movement into and out of the CPU or memory on the hardware running the cryptosystem or algorithm. Simply by observing variations in how long it takes to perform cryptographic operations, it might be possible to determine the entire secret key. Such attacks involve statistical analysis of timing measurements and have been demonstrated across networks. A power-analysis attack can provide even more detailed information by observing the power consumption of a hardware device such as a CPU or cryptographic circuit. These attacks are roughly categorized into simple power analysis (SPA) and differential power analysis (DPA). Machine learning approaches have also been applied to power analysis. Fluctuations in current also generate radio waves, enabling attacks that analyze measurements of electromagnetic (EM) emanations. These attacks typically involve similar statistical techniques as power-analysis attacks. A deep-learning-based side-channel attack using power and EM information across multiple devices has been demonstrated with the potential to break the secret key of a different but identical device with as little as a single trace. Historical analogues to modern side-channel attacks are known. A recently declassified NSA document reveals that as far back as 1943, an engineer with Bell Telephone observed decipherable spikes on an oscilloscope associated with the decrypted output of a certain encrypting teletype. According to former MI5 officer Peter Wright, the British Security Service analyzed emissions from French cipher equipment in the 1960s. In the 1980s, Soviet eavesdroppers were suspected of having planted bugs inside IBM Selectric typewriters to monitor the electrical noise generated as the type ball rotated and pitched to strike the paper; the characteristics of those signals could determine which key was pressed. Power consumption of devices causes heating, which is offset by cooling effects. Temperature changes create thermally induced mechanical stress. This stress can create low-level acoustic emissions from operating CPUs (about 10 kHz in some cases). Recent research by Shamir et al. has suggested that information about the operation of cryptosystems and algorithms can be obtained in this way as well. This is an acoustic cryptanalysis attack. If the surface of the CPU chip, or in some cases the CPU package, can be observed, infrared images can also provide information about the code being executed on the CPU, known as a thermal-imaging attack. Examples of optical side-channel attacks range from gleaning information from the hard disk activity indicator to reading a small number of photons emitted by transistors as they change state. Allocation-based side channels also exist and refer to the information that leaks from the allocation (as opposed to the use) of a resource such as network bandwidth to clients that are concurrently requesting the contended resource.
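The timing-attack example above can be made concrete with a deliberately exaggerated sketch. The Python snippet below compares a guess against a secret with an early-exit loop, so rejection time grows with the length of the correct prefix; the secret value, the per-byte delay, and the trial count are artificial choices made only so the effect is visible on a desktop machine, whereas real attacks rely on statistical analysis of far smaller, noisier timing differences, as the text notes.

```python
import time

SECRET = b"hunter2"  # hypothetical secret value known only to the "victim"

def naive_equals(a: bytes, b: bytes) -> bool:
    # Early-exit comparison: runtime grows with the length of the matching prefix,
    # which is exactly the kind of data-dependent timing a side channel exploits.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
        # Tiny artificial delay per matching byte so the leak is visible in this
        # toy demonstration (real leaks are nanoseconds, not milliseconds).
        time.sleep(0.0005)
    return True

def measure(guess: bytes, trials: int = 50) -> float:
    start = time.perf_counter()
    for _ in range(trials):
        naive_equals(SECRET, guess)
    return time.perf_counter() - start

# Guesses that share a longer correct prefix take measurably longer to reject.
for guess in (b"zzzzzzz", b"huzzzzz", b"huntzzz", b"hunterz"):
    print(guess, f"{measure(guess):.4f} s")
```

Running it typically shows rejection time increasing with the length of the matching prefix, which is precisely the information a timing side channel hands to an attacker byte by byte.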
Countermeasures Because side-channel attacks rely on the relationship between information emitted (leaked) through a side channel and the secret data, countermeasures fall into two main categories: (1) eliminate or reduce the release of such information and (2) eliminate the relationship between the leaked information and the secret data, that is, make the leaked information unrelated, or rather uncorrelated, to the secret data, typically through some form of randomization of the ciphertext that transforms the data in a way that can be undone after the cryptographic operation (e.g., decryption) is completed. Under the first category, displays with special shielding to lessen electromagnetic emissions, reducing susceptibility to TEMPEST attacks, are now commercially available. Power line conditioning and filtering can help deter power-monitoring attacks, although such measures must be used cautiously, since even very small correlations can remain and compromise security. Physical enclosures can reduce the risk of surreptitious installation of microphones (to counter acoustic attacks) and other micro-monitoring devices (against CPU power-draw or thermal-imaging attacks). Another countermeasure (still in the first category) is to jam the emitted channel with noise. For instance, a random delay can be added to deter timing attacks, although adversaries can compensate for these delays by averaging multiple measurements (or, more generally, using more measurements in the analysis). As the amount of noise in the side channel increases, the adversary needs to collect more measurements. Another countermeasure under the first category is to use security analysis software to identify certain classes of side-channel attacks that can be found during the design stages of the underlying hardware itself. Timing attacks and cache attacks are both identifiable through certain commercially available security analysis software platforms, which allow for testing to identify the attack vulnerability itself, as well as the effectiveness of the architectural change to circumvent the vulnerability. The most comprehensive method to employ this countermeasure is to create a Secure Development Lifecycle for hardware, which includes utilizing all available security analysis platforms at their respective stages of the hardware development lifecycle. In the case of timing attacks against targets whose computation times are quantized into discrete clock cycle counts, an effective countermeasure is to design the software to be isochronous, that is, to run in an exactly constant amount of time, independently of secret values. This makes timing attacks impossible. Such countermeasures can be difficult to implement in practice, since even individual instructions can have variable timing on some CPUs. One partial countermeasure against simple power attacks, but not differential power-analysis attacks, is to design the software so that it is "PC-secure" in the "program counter security model". In a PC-secure program, the execution path does not depend on secret values. In other words, all conditional branches depend only on public information. (This is a more restrictive condition than isochronous code, but a less restrictive condition than branch-free code.) Even though multiply operations draw more power than NOP on practically all CPUs, using a constant execution path prevents such operation-dependent power differences (differences in power from choosing one branch over another) from leaking any secret information.
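A minimal sketch of the isochronous, branch-free style described above, written in Python for readability. The hand-rolled comparison accumulates differences with XOR instead of branching on secret bytes; in practice one would simply use the standard library's hmac.compare_digest, and Python itself gives no hard timing guarantees, so this illustrates the principle rather than a hardened implementation.

```python
import hmac

def constant_time_equals(a: bytes, b: bytes) -> bool:
    # Branch-free comparison: every byte is examined regardless of where the
    # first mismatch occurs, so the running time does not depend on the secret.
    if len(a) != len(b):
        return False  # note: this still reveals the length, which is usually public
    diff = 0
    for x, y in zip(a, b):
        diff |= x ^ y  # accumulate differences without branching on secret data
    return diff == 0

secret = b"hunter2"
print(constant_time_equals(secret, b"hunterz"))   # False, takes the same time as any other guess
print(constant_time_equals(secret, b"hunter2"))   # True
print(hmac.compare_digest(secret, b"hunter2"))    # library equivalent of the same idea
```

The only data-dependent early exit left is the length check; everything that depends on the secret bytes themselves is evaluated the same way on every call, which is the property the "isochronous" and "PC-secure" notions above formalize.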
On architectures where the instruction execution time is not data-dependent, a PC-secure program is also immune to timing attacks. Another way in which code can be non-isochronous is that modern CPUs have a memory cache: accessing infrequently used information incurs a large timing penalty, revealing some information about the frequency of use of memory blocks. Cryptographic code designed to resist cache attacks attempts to use memory in only a predictable fashion (such as accessing only the input, outputs and program data, and doing so according to a fixed pattern). For example, data-dependent table lookups must be avoided because the cache could reveal which part of the lookup table was accessed. Other partial countermeasures attempt to reduce the amount of information leaked from data-dependent power differences. Some operations use power that is correlated to the number of 1 bits in a secret value. Using a constant-weight code (such as using Fredkin gates or dual-rail encoding) can reduce the leakage of information about the Hamming weight of the secret value, although exploitable correlations are likely to remain unless the balancing is perfect. This "balanced design" can be approximated in software by manipulating both the data and its complement together. Several "secure CPUs" have been built as asynchronous CPUs; they have no global timing reference. While these CPUs were intended to make timing and power attacks more difficult, subsequent research found that timing variations in asynchronous circuits are harder to remove. A typical example of the second category (decorrelation) is a technique known as blinding. In the case of RSA decryption with secret exponent d and corresponding encryption exponent e and modulus m, the technique applies as follows (for simplicity, the modular reduction by m is omitted in the formulas): before decrypting, that is, before computing the result of y^d for a given ciphertext y, the system picks a random number r and encrypts it with the public exponent e to obtain r^e. Then, the decryption is done on y·r^e to obtain (y·r^e)^d = y^d·r^(e·d) = y^d·r. Since the decrypting system chose r, it can compute its inverse modulo m to cancel out the factor r in the result and obtain y^d, the actual result of the decryption. For attacks that require collecting side-channel information from operations with data controlled by the attacker, blinding is an effective countermeasure, since the actual operation is executed on a randomized version of the data, over which the attacker has no control or even knowledge. A more general countermeasure (in that it is effective against all side-channel attacks) is the masking countermeasure. The principle of masking is to avoid manipulating any sensitive value y directly, but rather manipulate a sharing of it: a set of variables (called "shares") y1, ..., yd such that y = y1 ⊕ y2 ⊕ ... ⊕ yd (where ⊕ is the XOR operation). An attacker must recover all the values of the shares to get any meaningful information. Recently, white-box modeling was utilized to develop a low-overhead generic circuit-level countermeasure against both EM as well as power side-channel attacks. To minimize the effects of the higher-level metal layers in an IC acting as more efficient antennas, the idea is to embed the crypto core with a signature suppression circuit, routed locally within the lower-level metal layers, leading towards both power and EM side-channel attack immunity.
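The blinding scheme just described can be followed in a pure-arithmetic sketch. The numbers below are a made-up toy RSA key chosen only so the values stay readable; real implementations apply the same algebra to full-size keys with constant-time big-integer routines.

```python
# Pure-arithmetic sketch of RSA blinding with a toy key (illustrative values only).
import math
import random

p, q = 61, 53
m = p * q                       # toy modulus, 3233
e = 17                          # public (encryption) exponent
d = 2753                        # secret (decryption) exponent: e*d = 1 (mod (p-1)*(q-1))

ciphertext = pow(123, e, m)     # an attacker-supplied ciphertext y

# Blinding: pick r coprime to m, decrypt y * r^e instead of y, then divide by r,
# so the secret-exponent operation never sees the raw, attacker-chosen ciphertext.
r = random.randrange(2, m)
while math.gcd(r, m) != 1:
    r = random.randrange(2, m)

blinded = (ciphertext * pow(r, e, m)) % m
blinded_plain = pow(blinded, d, m)             # equals (y^d * r) mod m
plaintext = (blinded_plain * pow(r, -1, m)) % m  # modular inverse of r (Python 3.8+)

assert plaintext == pow(ciphertext, d, m)      # same result as unblinded decryption
print(plaintext)                               # 123
```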
See also Brute-force attack Computer surveillance Covert channel Side effect References Further reading Books Articles Differential Power Analysis, P. Kocher, J. Jaffe, B. Jun, in CRYPTO'99. Side Channel Attack: An Approach Based on Machine Learning, L. Lerman, G. Bontempi, O. Markowitch, 2011. Timing Attacks on Implementations of Diffie-Hellman, RSA, DSS, and Other Systems, P. Kocher. Introduction to Differential Power Analysis and Related Attacks, P. Kocher, J. Jaffe, B. Jun, 1998. A Cautionary Note Regarding Evaluation of AES Candidates on Smart Cards, S. Chari, C. Jutla, J. R. Rao, P. Rohatgi, 1999 (Nist.gov). DES and Differential Power Analysis, L. Goubin and J. Patarin, in Proceedings of CHES'99, Lecture Notes in Computer Science Nr 1717, Springer-Verlag. External links Sima, Mihai; Brisson, André (2015), Whitenoise Encryption Implementation with Increased Robustness against Side-Channel Attacks Brisson, André (2015), University of Victoria, British Columbia, Side Channel Attack Resistance Study of Whitenoise New side channel attack techniques COSADE Workshop, International Workshop on Constructive Side-Channel Analysis and Secure Design Cryptographic attacks
38196102
https://en.wikipedia.org/wiki/Marek%20Trojanowicz
Marek Trojanowicz
Marek Trojanowicz (born April 30, 1944 in Warsaw) is a Polish chemist, professor of chemical sciences with a specialization in analytical chemistry, academic staff member, and head of the Laboratory for Flow Analysis and Chromatography, University of Warsaw, Poland. Biographic data Trojanowicz completed his master's studies in 1966 at the Faculty of Chemistry, University of Warsaw. In 1974 he was granted a PhD degree at the same Faculty under the supervision of Prof. Adam Hulanicki in the field of analytical chemistry, with a thesis on the theory of titrations with potentiometric detection. In 1981 he was granted a D.Sc. degree (habilitation) based on a dissertation on membrane ion-selective electrodes and their application in water analysis. He spent a one-year post-doctoral stay at Tohoku University in Sendai, Japan, in the research group of Prof. Nobuyuki Tanaka. In 1991 he was granted the title of professor of chemical sciences. Since 1992 he has been a full professor at the Faculty of Chemistry, University of Warsaw. He has been a visiting professor at 20 universities and research institutes all over the world, including in Japan, France, the United Kingdom, Italy, Brazil, Australia and the USA. Since 1966 he has been employed as an academic staff member at the Faculty of Chemistry, University of Warsaw, and since 1992 simultaneously at the Institute of Nuclear Chemistry and Technology in Warsaw. He has been a member of the advisory editorial boards of several international journals, including Journal of Biochemical and Biophysical Methods and Talanta (Elsevier), Analytical Letters (Taylor and Francis), Microchimica Acta (Springer) and Journal of Flow Injection Analysis (Japan Association of Flow Injection Analysis). From 1992 to 2003 he was Scientific Secretary of the Committee on Analytical Chemistry of the Polish Academy of Sciences. He is a member of the Warsaw Scientific Society, the Polish Chemical Society, the International Electrochemical Society, and the Society of Environmental Toxicology and Chemistry. Scientific interests He has published 320 scientific publications (250 in peer-reviewed journals according to ISI Web of Knowledge, January 2013). In 1988, in the Royal Society of Chemistry journal The Analyst, he presented the development of an enzymatic electrochemical biosensor, showing for the first time that an enzyme immobilized in hydrophobic graphite paste maintains its biocatalytic activity. In 1989 he published, together with Mark E. Meyerhoff, a paper in the journal Analytical Chemistry on a novel electrochemical detection method in ion chromatography, based on the exchange of ions through tubular membranes and measurement of the change in potential of an indicator electrode. In 2001, in the Journal of Chromatography, he published with his research team a study on the simultaneous determination of optical isomers of several neurotransmitters in physiological fluids using capillary electrophoresis, including determination of all diastereoisomers of ephedrine. He has also been granted 6 patents in Poland, Finland, the European Union and the USA. Awards and honors For his scientific activity he has received several Polish and foreign awards, including the Wiktor Kemula Medal of the Polish Chemical Society (2009), the Scientific Honor Award of the Japan Association of Flow-Injection Analysis (2003), prizes of the Minister of National Education of Poland (1975, 1980, 1991), and the prize of the Minister of Environmental Protection of Poland (1972).
In 2012 he was granted the Officer's Cross of the Order Polonia Restituta, and the prize of the Minister of Science and Education for lifetime achievements in science. Books and editorials Advances in Flow Analysis, Wiley-VCH, Marek Trojanowicz (ed.), Weinheim, 2008, pp. 702, Analiza przepływowa. Metody i zastosowania, P. Kościelniak, M. Trojanowicz (ed.), Vol. I, Uniwersytet Jagielloński Press, Kraków, 2005, pp. 256, Analiza przepływowa. Metody i zastosowania, P. Kościelniak, M. Trojanowicz (ed.), Vol. II, Uniwersytet Jagielloński Press, Kraków, 2008, pp. 262, Flow Injection Analysis. Instrumentation and Applications, World Scientific Publishing, Singapore, 2000, pp. 481, Automatyzacja w analizie chemicznej, WNT, Warszawa, 1992, pp. 514. References External links Polish Science Marek Trojanowicz – strona WWW Katalog Biblioteki Jagiellońskiej 1944 births Polish chemists University of Warsaw alumni University of Warsaw faculty Recipients of the Order of Polonia Restituta Living people
565462
https://en.wikipedia.org/wiki/Leonid%20Khachiyan
Leonid Khachiyan
Leonid Genrikhovich Khachiyan (; ; May 3, 1952April 29, 2005) was a Soviet and American mathematician and computer scientist. He was most famous for his ellipsoid algorithm (1979) for linear programming, which was the first such algorithm known to have a polynomial running time. Even though this algorithm was shown to be impractical, it has inspired other randomized algorithms for convex programming and is considered a significant theoretical breakthrough. Early life and education Khachiyan was born on May 3, 1952 in Leningrad to Armenian parents Genrikh Borisovich Khachiyan, a mathematician and professor of theoretical mechanics, and Zhanna Saakovna Khachiyan, a civil engineer. His grandparents were Karabakh Armenians. He had two brothers: Boris and Yevgeniy (Eugene). His family moved to Moscow in 1961, when he was nine. He received a master's degree from the Moscow Institute of Physics and Technology. In 1978 he earned his Ph.D. in computational mathematics/theoretical mathematics from the Computer Center of the Soviet Academy of Sciences and in 1984 a D.Sc. in computer science from the same institution. Career Khachiyan began his career at the Soviet Academy of Sciences, working as a researcher at the Academy's Computer Center in Moscow. He also worked as an adjunct professor at the Moscow Institute of Physics and Technology. In 1979 he stated: "I am a theoretical mathematician and I'm just working on a class of very difficult mathematical problems." Khachiyan immigrated to the United States in 1989. He first taught at Cornell University as a visiting professor. In 1990 he joined Rutgers University as a visiting professor. He became professor of computer science at Rutgers in 1992. By 2005, he held the position of Professor II at Rutgers. Work on linear programming Ellipsoid method Khachiyan is best known for his four-page February 1979 paper that indicated how an ellipsoid method for linear programming can be implemented in polynomial time. The paper was translated into several languages and spread around the world unusually fast. Authors of a 1981 survey of his work noted that it "has caused great excitement and stimulated a flood of technical papers" and was covered by major newspapers. It was originally published without proofs, which were provided by Khachiyan in a later paper published in 1980 and by Peter Gács and Laszlo Lovász in 1981. It was Gács and Lovász who first brought attention to Khachiyan's paper at the International Symposium on Mathematical Programming in Montreal in August 1979. It was further popularized when Gina Kolata reported it in Science Magazine on November 2, 1979. Khachiyan's theory is considered a groundbreaking one that "helped advance the field of linear programming." Giorgio Ausiello noted that the method was not practical, "but it was a real breakthrough for the world of operations research and computer science, since it proved that the design of polynomial time algorithms for linear programming was possible and in fact opened the way to other, more practical, algorithms that were designed in the following years." Personal life and death Khachiyan spoke Russian and English, but not Armenian. Bahman Kalantari noted that "For some, his English accent wasn’t always easy to understand." The 1979 New York Times profile of him described Khachiyan as "a relaxed, friendly young man in a sweater who speaks a little English, which he learned in high school." He was known as "Leo" and "Lenya" to his friends and colleagues. 
Václav Chvátal described him as "selfless, open, patient, sympathetic, understanding, considerate." Michael Todd, another colleague, described him as "cynical about politics," "very modest and kind to his friends," and "intolerant of condescension and pomposity." Khachiyan married Olga Pischikova Reynberg, of Russian-Jewish origin, in 1985. They had two daughters, Anna and Nina, who were teenagers at the time of his death. He became a naturalized U.S. citizen in 2000. He died of a heart attack in South Brunswick, New Jersey on April 29, 2005, at the age of 52. Recognition In 1982 he was awarded the prestigious Fulkerson Prize by the Mathematical Programming Society and the American Mathematical Society for outstanding papers in the area of discrete mathematics, particularly his 1979 article "A polynomial algorithm in linear programming." Khachiyan was considered a "noted expert in computer science whose work helped computers process extremely complex problems." He was called one of the world's most famous computer scientists at the time of his death by Haym Hirsh, chair of the computer science department at Rutgers. "Computer scientists and mathematicians say his work helped revolutionize his field," noted his New York Times obituary. Bahman Kalantari, a friend and colleague at Rutgers, wrote: "Surely, Khachiyan shall always remain to be among the greatest and most legendary figures in the field of mathematical programming." References Notes Citations External links DBLP: Leonid Khachiyan. In Memoriam: Leonid Khachiyan from the Computer Science Department, Rutgers University. SIAM news: Leonid Khachiyan, 1952–2005: An Appreciation. The Mathematics Genealogy Project: Leonid Khachiyan. New York Times : Obituary. 1952 births 2005 deaths 20th-century American mathematicians 21st-century American mathematicians American people of Armenian descent American computer scientists Cornell University faculty Moscow Institute of Physics and Technology alumni Moscow Institute of Physics and Technology faculty People from Saint Petersburg Russian people of Armenian descent Soviet Armenians Armenian scientists Armenian mathematicians Soviet mathematicians Rutgers University faculty Soviet computer scientists Soviet emigrants to the United States Naturalized citizens of the United States
47394800
https://en.wikipedia.org/wiki/Dark0de
Dark0de
dark0de, also known as Darkode, is a cybercrime forum and black marketplace described by Europol as "the most prolific English-speaking cybercriminal forum to date". The site, which was launched in 2007, has served as a venue for the sale and trade of hacking services, botnets, malware, stolen personally identifiable information, credit card information, hacked server credentials, and other illicit goods and services. History In early 2013, it came under a large DDoS attack, prompting a move from bulletproof hosting provider Santrex to Off-shore, the latter being a participant in the Stophaus campaign against Spamhaus. The site has had an ongoing feud with security researcher Brian Krebs. In April 2014, attackers targeted various site users via the Heartbleed exploit, gaining access to private areas of the site. Take down The forum was the target of Operation Shrouded Horizon, an international law enforcement effort led by the Federal Bureau of Investigation, which culminated in the site's seizure and arrests of several of its members in July 2015. According to the FBI, the case is "believed to be the largest-ever coordinated law enforcement effort directed at an online cyber criminal forum". When the 12 charges were announced, United States Attorney David Hickton called the site "a cyber hornet's nest of criminal hackers", "the most sophisticated English-speaking forum for criminal computer hackers in the world" which "represented one of the gravest threats to the integrity of data on computers in the United States". On Monday, September 21, 2015, Daniel Placek appeared on the podcast Radiolab discussing his role in starting Darkode and his eventual cooperation with the United States government in its efforts to take down the site. Revivals Only two weeks after the announcement of the raid, the site reappeared with increased security, employing blockchain-based authentication and operating on the Tor anonymity network. Researchers from MalwareTech suggested the relaunch was not genuine, and almost immediately after, it was hacked and its database leaked. On December 13, a version of the site returned on the original domain name. See also Lizard Squad, a hacking group, said to have used dark0de References External links Darkode archive project News about Dark0de The User's Guide to Darkode: A Complete History and How to Use It Internet forums Cybercrime Tor onion services Darknet markets Hacker groups
41264613
https://en.wikipedia.org/wiki/EMC%20ViPR
EMC ViPR
ViPR Controller is a software-defined storage offering from EMC Corporation announced on May 6, 2013, at EMC World. ViPR abstracts storage from disparate arrays into a single pool of storage capacity that “makes it easier to manage and automate its own data-storage devices and those made by competitors.” ViPR became generally available September 27, 2013. Description and core components ViPR is deployed as software-only virtual appliances on ESX servers and does not require the installation of new hardware. ViPR separates the data plane from the control plane. The control plane is a software layer that manages storage; the data plane is the storage infrastructure, including networks, where storage devices perform reads and writes to disks and/or memory. ViPR enables management of multivendor platforms, including third-party storage. With the ViPR Controller, users abstract physical storage into virtual storage pools, create storage categories or classes (such as high-performance file or “gold/silver/bronze” block), and automate storage delivery to users to access through a self-service catalog. Enterprise Management Associates states “the underlying idea of EMC ViPR is to deliver enterprise storage similar to the way Amazon offers virtual machines, enabling corporate developers to provision storage in a self-service manner.” REST APIs provide a central access and control point to manage storage arrays or devices. REST APIs are used to integrate ViPR with third-party applications and management tools, as well as cloud stacks such as VMware, OpenStack and Microsoft Hyper-V. In addition to the ViPR Controller, ViPR includes ViPR Global Data Services, which enable combinations of data types (e.g. block, file, and object) and protocols. EMC supports object files and Hadoop using a software overlay based on ViPR. The ViPR Object Data Service exposes REST APIs for Atmos (EMC's object storage appliance), Amazon S3 and Swift (the native OpenStack object store service), which means that pools can potentially use both cloud services and local [EMC] VNX and Isilon arrays. ViPR thereby enables data written as objects by cloud applications to be accessible as files by legacy apps. Similar to the way ViPR provides object support, it can provision pools as a Hadoop file system (HDFS). This is significant because it means data stored in a traditional block storage VMAX array can be exposed to big data Hadoop applications without moving it to a separate file repository. Theoretically, this could allow the same set of physical data to serve as a traditional transactional database while simultaneously being incorporated into a big data analytics system, in place (Network Computing). Architecture ViPR is a distributed scale-out software platform. It uses cloud technologies such as Cassandra, an open-source distributed database management system, to handle large amounts of data, workflows and workloads from one management point. ViPR is a software solution, not a hardware offering, running on a virtual machine. Compared with other solutions, which are platforms that provide automation stacks, ViPR stands out by providing a storage platform that plugs into all of these stacks (SiliconAngle). Integration In version 1.0, ViPR supports EMC arrays and storage devices and non-EMC arrays such as NetApp.
ViPR users have the ability to virtualize, provision, monitor, and report on storage use from additional vendor arrays integrated through third-party developed adaptors written to the ViPR REST-based APIs. See also Software defined storage References External links EMC.com EMC ViPR Emerging Trends in Software Defined Storage (InfoStor) EMC ViPR software-defined storage: Why, and can it succeed? What is Software Defined Storage? EMC ViPR announced at EMCWorld 2013 Dell EMC
213525
https://en.wikipedia.org/wiki/Slow%20motion
Slow motion
Slow motion (commonly abbreviated as slo-mo or slow-mo) is an effect in film-making whereby time appears to be slowed down. It was invented by the Austrian priest August Musger in the early 20th century. This can be accomplished through the use of high-speed cameras and then playing the footage produced by such cameras at a normal rate like 30 fps, or in post production through the use of software. Typically this style is achieved when each film frame is captured at a rate much faster than it will be played back. When replayed at normal speed, time appears to be moving more slowly. A term for creating slow motion film is overcranking which refers to hand cranking an early camera at a faster rate than normal (i.e. faster than 24 frames per second). Slow motion can also be achieved by playing normally recorded footage at a slower speed. This technique is more often applied to video subjected to instant replay than to film. A third technique that is becoming common using current computer software post-processing is to fabricate digitally interpolated frames to smoothly transition between the frames that were actually shot. Motion can be slowed further by combining techniques, such as for example by interpolating between overcranked frames. The traditional method for achieving super-slow motion is through high-speed photography, a more sophisticated technique that uses specialized equipment to record fast phenomena, usually for scientific applications. Slow motion is ubiquitous in modern filmmaking. It is used by a diverse range of directors to achieve diverse effects. Some classic subjects of slow-motion include: Athletic activities of all kinds, to demonstrate skill and style. To recapture a key moment in an athletic game, typically shown as a replay. Natural phenomena, such as a drop of water hitting a glass. Slow motion can also be used for artistic effect, to create a romantic or suspenseful aura or to stress a moment in time. Vsevolod Pudovkin, for instance, used slow motion in a suicide scene in his 1933 film The Deserter, in which a man jumping into a river seems sucked down by the slowly splashing waves. Another example is Face/Off, in which John Woo used the same technique in the movements of a flock of flying pigeons. The Matrix made a distinct success in applying the effect into action scenes through the use of multiple cameras, as well as mixing slow-motion with live action in other scenes. Japanese director Akira Kurosawa was a pioneer using this technique in his 1954 movie Seven Samurai. American director Sam Peckinpah was another classic lover of the use of slow motion. The technique is especially associated with explosion effect shots and underwater footage. The opposite of slow motion is fast motion. Cinematographers refer to fast motion as undercranking since it was originally achieved by cranking a handcranked camera slower than normal. It is often used for comic, or occasional stylistic effect. Extreme fast motion is known as time lapse photography; a frame of, say, a growing plant is taken every few hours; when the frames are played back at normal speed, the plant is seen to grow before the viewer's eyes. The concept of slow motion may have existed before the invention of the motion picture: the Japanese theatrical form Noh employs very slow movements. How slow motion works There are two ways in which slow motion can be achieved in modern cinematography. Both involve a camera and a projector. 
A projector refers to a classical film projector in a movie theater, but the same basic rules apply to a television screen and any other device that displays consecutive images at a constant frame rate. Overcranking For purposes of illustration, a projection speed of 10 frames per second (fps) can be assumed (the 24 fps film standard makes slow overcranking rare but nevertheless available on professional equipment). Time stretching The second type of slow motion is achieved during post production. This is known as time-stretching or digital slow motion. This type of slow motion is achieved by inserting new frames in between frames that have actually been photographed. The effect is similar to overcranking as the actual motion occurs over a longer time. Since the necessary frames were never photographed, new frames must be fabricated. Sometimes the new frames are simply repeats of the preceding frames, but more often they are created by interpolating between frames (often this motion interpolation is, effectively, a short dissolve between still frames). Many complicated algorithms exist that can track motion between frames and generate intermediate frames within that scene. Simple frame repetition is similar to half-speed playback, and is not true slow motion but merely a longer display of each frame.
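The simplest form of this interpolation, the "short dissolve" between captured frames, can be sketched in a few lines of Python with NumPy. The function below is only an illustration under the assumption that frames are arrays of identical shape; real interpolators track motion between frames rather than blending them directly.

```python
# Toy sketch of digital time-stretching by cross-dissolving between captured frames.
import numpy as np

def time_stretch(frames, factor):
    """Return a sequence roughly `factor` times longer by inserting blended frames."""
    out = []
    for a, b in zip(frames[:-1], frames[1:]):
        for i in range(factor):
            t = i / factor                                   # position between frame a and frame b
            blend = (1.0 - t) * a.astype(np.float32) + t * b.astype(np.float32)
            out.append(blend.astype(np.uint8))
    out.append(frames[-1])
    return out

# 24 captured frames stretched 4x; played back at the original frame rate,
# the motion appears four times slower (the frames here are just random noise).
clip = [np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8) for _ in range(24)]
slow = time_stretch(clip, 4)
print(len(clip), "->", len(slow))   # 24 -> 93
```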
In action films Slow motion is used widely in action films for dramatic effect, as well as for the famous bullet-dodging effect popularized by The Matrix. Formally, this effect is referred to as speed ramping and is a process whereby the capture frame rate of the camera changes over time. For example, if in the course of 10 seconds of capture the capture frame rate is adjusted from 60 frames per second to 24 frames per second, when played back at the standard film rate of 24 frames per second a unique time-manipulation effect is achieved. For example, someone pushing a door open and walking out into the street would appear to start off in slow motion, but a few seconds later within the same shot the person would appear to walk in "realtime" (everyday speed). The opposite speed ramping is done in The Matrix when Neo re-enters the Matrix for the first time to see the Oracle. As he comes out of the warehouse "load-point", the camera zooms into Neo at normal speed, but as it gets closer to Neo's face time seems to slow down, perhaps visually accentuating Neo pausing and reflecting a moment, and perhaps alluding to future manipulation of time itself within the Matrix later on in the movie. In broadcasting Slow motion is widely used in sport broadcasting, and its origins in this domain extend back to the earliest days of television, one example being the European Heavyweight Title fight in 1939 in which Max Schmeling knocked out Adolf Heuser in 71 seconds. In instant replays, slow motion reviews are now commonly used to show some action in detail (photo finish, goal, ...). Generally, they are made with video servers and special controllers. The first TV slow-motion device was the Ampex HS-100 disk recorder. After the HS-100, Type C videotape VTRs with a slow-motion option were used. There were a few special high frame rate TV systems (300 fps) made to give higher-quality slow motion for TV. 300 fps can be converted to both 50 and 60 fps transmission formats without major issues. Scientific use In scientific and technical applications it is often necessary to slow motion down by a very large factor, for example to examine the details of a nuclear explosion. Examples are sometimes published showing, for example, a bullet bursting a balloon. Video file recording methods Digital camcorders (including bridge cameras, DSLMs, higher-end compact cameras and mobile phones) have historically had two ways of storing slow motion (high-framerate) video in the video file: the real-time method and the menial method. Real-time method The real-time method treats the video as a normal video while encoding it. The output video file contains the same framerate as the image sensor output framerate. The duration of the video in the output file also matches the real-life recording duration. The output video also contains an audio track, like usual videos. This method is used by all GoPro cameras, Sony RX10/RX100 series cameras (except in the time-limited "super-slow-motion" High Frame Rate (HFR) mode), Apple iPhones with high-framerate (slow motion) video recording functionality (starting with the iPhone 5s in late 2013), Sony Xperia flagships since 2014 (Xperia Z2, the first Sony flagship to include 120 fps video recording), LG V series mobile phones and every Samsung Galaxy flagship phone since 2015 (Galaxy S6) for videos with 120 fps or higher. Every video camera that is able to record at 60 fps (e.g. the Asus PadFone 2 (late 2012: 720p@60 fps) and Samsung mobile devices starting with the Galaxy Note 3 (late 2013) with 1080p at 60 fps, labelled "smooth motion") records it using the real-time method. Advantages Video editing software (e.g. Sony Vegas, Kdenlive and the software included in mobile phones) and video playback software (e.g. VLC media player) allow treating such videos as both usual videos and slow-motion videos. During video editing and video playback, the indicated playback speed matches real life. Metadata viewing software (e.g. MediaInfo) shows a framerate and a time that match the real-life conditions during the video recording. Video framerate and duration match real life. Includes an audio track, like normal-framerate videos. These advantages make the real-time method the more useful method for power users. Menial method The menial method saves recorded video files in a stretched way, and also without an audio track. The framerate in the output file does not match the original sensor output framerate; the former is lower. The real-life timespan of the recording (while holding the camera) does not match the length of the video in the output file; the latter is longer. The opposite is the case for time-lapse videos, where the effectively saved framerate is lower than for normal videos. This means that the action visible inside the video runs at slower speeds than in real life, despite the indicated playback speed of ×1. This encoding method is used by the camera software of the following devices (incomplete list): Panasonic Lumix DMC-FZ1000 (2014; 1080p@120fps; 1/4×) Samsung Omnia 2 GT-i8000 (2009; QVGA 320×240@120fps; 1/4×) Sony FDR-AX100 (2014; 720p@120fps; 1/4×) Sony RX100 IV, V, VI and VII: High Frame Rate (HFR) mode records at 240 fps up to 1,000 fps for 3–7 seconds; this is saved at 24–60 fps, i.e. from 1/4× down to 1/40× speed. All Samsung Galaxy flagship devices from late 2012 to late 2014: 2012: Galaxy Note 2 (720×480@120fps); 2013 H1: Galaxy S4 (800×450@120fps); 2013 H1: S4 Zoom (720×480@120fps); 2013 H2: Galaxy Note 3 (1280×720@120fps); 2014 H1: Galaxy S5, Galaxy K Zoom; 2014 H2: Note 4 (1280×720@120fps). Earlier OnePlus flagship devices, e.g. the OnePlus One (1280×720@120fps).
Advantages The output video file is directly playable as slow motion in video players that do not support adjusting the playback speed (e.g. on a Galaxy S3 Mini). The output video file is directly playable in video players and/or on devices that can only handle limited framerates (e.g. on a Galaxy S3 Mini). Comparison example A 120 fps video whose real-life recording duration is 00h:00m:10s can be encoded with either method: with the real-time method the file holds the 1,200 captured frames, plays for 10 seconds at 120 fps and keeps its audio track, whereas with the menial method at 1/4× speed the same 1,200 frames are saved at 30 fps, play for 40 seconds and carry no audio. Example devices that use the menial method for 120 fps video recording include the Samsung Galaxy Note 2, S4, Note 3, S5 and Note 4; example real-time-method recording devices include the iPhone 5s, the Galaxy S6 (including variants), the Galaxy Note 5, and the Sony Xperia Z2, Xperia Z3 and Xperia Z5. The same comparison can be extended to other video recording types (normal, low-framerate, time-lapse) to facilitate understanding for novice readers. Notes See also Motion picture terminology High-speed camera Time-lapse photography Bullet time Video server Multicam (LSM) Temporal posterization References External links Videos Sorprendentes en Slow Motion / Cámara Lenta Interesting High-speed Video Clips Create Slow Motion Videos JackCabbage: Overcrank on the EX-1 JackCabbage: Overcrank and Undercrank on the HVX Cinematic techniques Articles containing video clips Austrian inventions
2083029
https://en.wikipedia.org/wiki/D-Bus
D-Bus
In computing, D-Bus (short for "Desktop Bus") is a message-oriented middleware mechanism that allows communication between multiple processes running concurrently on the same machine. D-Bus was developed as part of the freedesktop.org project, initiated by Havoc Pennington from Red Hat to standardize services provided by Linux desktop environments such as GNOME and KDE. The freedesktop.org project also developed a free and open-source software library called libdbus, as a reference implementation of the specification. This library should not be confused with D-Bus itself, as other implementations of the D-Bus specification also exist, such as GDBus (GNOME), QtDBus (Qt/KDE), dbus-java and sd-bus (part of systemd). Overview D-Bus is an inter-process communication (IPC) mechanism initially designed to replace the software component communications systems used by the GNOME and KDE Linux desktop environments (CORBA and DCOP respectively). The components of these desktop environments are normally distributed in many processes, each one providing only a few —usually one— services. These services may be used by regular client applications or by other components of the desktop environment to perform their tasks. Due to the large number of processes involved —adding up processes providing the services and clients accessing them— establishing one-to-one IPC communications between all of them becomes an inefficient and quite unreliable approach. Instead, D-Bus provides a software-bus abstraction that gathers all the communications between a group of processes over a single shared virtual channel. Processes connected to a bus do not know how it is internally implemented, but D-Bus specification guarantees that all processes connected to the bus can communicate with each other through it. Linux desktop environments take advantage of the D-Bus facilities by instantiating multiple buses, notably: a single system bus, available to all users and processes of the system, that provides access to system services (i.e. services provided by the operating system and also by any system daemons) a session bus for each user login session, that provides desktop services to user applications in the same desktop session, and allows the integration of the desktop session as a whole A process can connect to any number of buses, provided that it has been granted access to them. In practice, this means that any user process can connect to the system bus and to its current session bus, but not to another user's session buses, or even to a different session bus owned by the same user. The latter restriction may change in the future if all user sessions are combined into a single user bus. D-Bus provides additional or simplifies existing functionality to the applications, including information-sharing, modularity and privilege separation. For example, information on an incoming voice-call received through Bluetooth or Skype can be propagated and interpreted by any currently-running music player, which can react by muting the volume or by pausing playback until the call is finished. D-Bus can also be used as a framework to integrate different components of a user application. For instance, an office suite can communicate through the session bus to share data between a word processor and a spreadsheet. D-Bus specification Bus model Every connection to a bus is identified in the context of D-Bus by what is called a bus name. A bus name consists of two or more dot-separated strings of letters, digits, dashes, and underscores. 
An example of a valid bus name is org.freedesktop.NetworkManager. When a process sets up a connection to a bus, the bus assigns to the connection a special bus name called a unique connection name. Bus names of this type are immutable (it is guaranteed they won't change as long as the connection exists) and, more importantly, they can't be reused during the bus lifetime. This means that no other connection to that bus will ever be assigned such a unique connection name, even if the same process closes down the connection to the bus and creates a new one. Unique connection names are easily recognizable because they start with the otherwise forbidden colon character. An example of a unique connection name is :1.1553 (the characters after the colon have no particular meaning). A process can ask for additional bus names for its connection, provided that any requested name is not already being used by another connection to the bus. In D-Bus parlance, when a bus name is assigned to a connection, it is said that the connection owns the bus name. In that sense, a bus name can't be owned by two connections at the same time, but, unlike unique connection names, these names can be reused if they are available: a process may reclaim a bus name released, purposely or not, by another process. The idea behind these additional bus names, commonly called well-known names, is to provide a way to refer to a service using a prearranged bus name. For instance, the service that reports the current time and date on the system bus lies in the process whose connection owns the bus name org.freedesktop.timedate1, regardless of which process it is. Bus names can be used as a simple way to implement single-instance applications (second instances detect that the bus name is already taken). They can also be used to track a service process lifecycle, since the bus sends a notification when a bus name is released due to a process termination.
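As a rough illustration, the name-management operations are themselves exposed by the bus through the org.freedesktop.DBus interface defined in the D-Bus specification, so any language binding can use them. The sketch below assumes the dbus-python bindings (the Python module named dbus); the well-known name com.example.TextEditor is a hypothetical example.

```python
# Minimal sketch of listing, checking and claiming bus names via dbus-python.
import dbus

bus = dbus.SessionBus()

# Proxy for the bus object itself: object /org/freedesktop/DBus on the
# reserved bus name org.freedesktop.DBus, interface org.freedesktop.DBus.
bus_object = bus.get_object('org.freedesktop.DBus', '/org/freedesktop/DBus')
bus_iface = dbus.Interface(bus_object, 'org.freedesktop.DBus')

# List every bus name currently owned by some connection (unique names start
# with a colon, well-known names look like reversed domain names).
for name in bus_iface.ListNames():
    print(name)

# Single-instance check: if the well-known name is already owned, another
# instance of the application is presumably running.
if bus_iface.NameHasOwner('com.example.TextEditor'):
    print('already running')
else:
    bus_iface.RequestName('com.example.TextEditor', dbus.UInt32(0))  # claim the name
```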
Object model Because of its original conception as a replacement for several component oriented communications systems, D-Bus shares with its predecessors an object model in which to express the semantics of the communications between clients and services. The terms used in the D-Bus object model mimic those used by some object oriented programming languages. That doesn't mean that D-Bus is somehow limited to OOP languages; in fact, the most used implementation (libdbus) is written in C, a procedural programming language. In D-Bus, a process offers its services by exposing objects. These objects have methods that can be invoked, and signals that the object can emit. Methods and signals are collectively referred to as the members of the object. Any client connected to the bus can interact with an object by using its methods, making requests or commanding the object to perform actions. For instance, an object representing a time service can be queried by a client using a method that returns the current date and time. A client can also listen to signals that an object emits when its state changes due to certain events, usually related to the underlying service. An example would be when a service that manages hardware devices (such as USB or network drivers) signals a "new hardware device added" event. Clients should instruct the bus that they are interested in receiving certain signals from a particular object, since a D-Bus bus only passes signals to those processes with a registered interest in them. A process connected to a D-Bus bus can request it to export as many D-Bus objects as it wants. Each object is identified by an object path, a string of numbers, letters and underscores separated and prefixed by the slash character, so called because of its resemblance to Unix filesystem paths. The object path is selected by the requesting process, and must be unique in the context of that bus connection. An example of a valid object path is /org/freedesktop/NetworkManager. Forming hierarchies within object paths is not enforced, but also not discouraged. The particular naming convention for the objects of a service is entirely up to the developers of such service, but many developers choose to namespace them using the reserved domain name of the project as a prefix (e.g. object paths beginning with /org/gnome for GNOME projects). Every object is inextricably associated with the particular bus connection where it was exported, and, from the D-Bus point of view, only lives in the context of such connection. Therefore, in order to be able to use a certain service, a client must indicate not only the object path providing the desired service, but also the bus name under which the service process is connected to the bus. This in turn allows several processes connected to the bus to export different objects with identical object paths unambiguously. An interface specifies the members (methods and signals) that can be used with an object. It is a set of declarations of methods (including their input and return parameters) and signals (including their parameters) identified by a dot-separated name resembling the Java language interface notation. An example of a valid interface name is org.freedesktop.DBus.Introspectable. Despite their similarity, interface names and bus names should not be confused. A D-Bus object can implement several interfaces, but must implement at least one, providing support for every method and signal defined by it. The combination of all interfaces implemented by an object is called the object type. When using an object, it is good practice for the client process to provide the member's interface name besides the member's name, but this is only mandatory when there is an ambiguity caused by duplicated member names available from different interfaces implemented by the object; otherwise, the selected member is undefined or erroneous. An emitted signal, on the other hand, must always indicate to which interface it belongs. The D-Bus specification also defines several standard interfaces that objects may want to implement in addition to their own interfaces. Although technically optional, most D-Bus service developers choose to support them in their exported objects since they offer important additional features to D-Bus clients, such as introspection. These standard interfaces are: org.freedesktop.DBus.Peer: provides a way to test if a D-Bus connection is alive. org.freedesktop.DBus.Introspectable: provides an introspection mechanism by which a client process can, at run-time, get a description (in XML format) of the interfaces, methods and signals that the object implements. org.freedesktop.DBus.Properties: allows a D-Bus object to expose the underlying native object's properties or attributes, or simulate them if they don't exist. org.freedesktop.DBus.ObjectManager: when a D-Bus service arranges its objects hierarchically, this interface provides a way to query an object about all sub-objects under its path, as well as their interfaces and properties, using a single method call. The D-Bus specification defines a number of administrative bus operations (called "bus services") to be performed using the object /org/freedesktop/DBus that resides in the bus name org.freedesktop.DBus. Each bus reserves this special bus name for itself, and manages any requests made specifically to this combination of bus name and object path.
The administrative operations provided by the bus are those defined by the object's interface org.freedesktop.DBus. These operations are used for example to provide information about the status of the bus, or to manage the request and release of additional well-known bus names. Communications model D-Bus was conceived as a generic, high-level inter-process communication system. To accomplish such goals, D-Bus communications are based on the exchange of messages between processes instead of "raw bytes". D-Bus messages are high-level discrete items that a process can send through the bus to another connected process. Messages have a well-defined structure (even the types of the data carried in their payload are defined), allowing the bus to validate them and to reject any ill-formed message. In this regard, D-Bus is closer to an RPC mechanism than to a classic IPC mechanism, with its own type definition system and its own marshaling. The bus supports two modes of exchanging messages between a client and a service process: One-to-one request-response: This is the way for a client to invoke an object's method. The client sends a message to the service process exporting the object, and the service in turn replies with a message back to the client process. The message sent by the client must contain the object path, the name of the invoked method (and optionally the name of its interface), and the values of the input parameters (if any) as defined by the object's selected interface. The reply message carries the result of the request, including the values of the output parameters returned by the object's method invocation, or exception information if there was an error. Publish/subscribe: This is the way for an object to announce the occurrence of a signal to the interested parties. The object's service process broadcasts a message that the bus passes only to the connected clients subscribed to the object's signal. The message carries the object path, the name of the signal, the interface to which the signal belongs, and also the values of the signal's parameters (if any). The communication is one-way: there are no response messages to the original message from any client process, since the sender knows neither the identities nor the number of the recipients. Every D-Bus message consists of a header and a body. The header is formed by several fields that identify the type of message, the sender, as well as information required to deliver the message to its recipient (destination bus name, object path, method or signal name, interface name, etc.). The body contains the data payload that the receiver process interprets, for instance the input or output arguments. All the data is encoded in a well-known binary format called the wire format, which supports the serialization of various types, such as integers and floating-point numbers, strings, compound types, and so on; this encoding is also referred to as marshaling. The D-Bus specification defines the wire protocol: how to build the D-Bus messages to be exchanged between processes within a D-Bus connection. However, it does not define the underlying transport method for delivering these messages.
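Both messaging modes can be exercised from a high-level binding. The sketch below assumes the dbus-python bindings together with a GLib main loop, and uses the common desktop notification service (well-known name org.freedesktop.Notifications) purely as an illustration; any service exposing methods and signals would do.

```python
# Minimal sketch: a request-response method call and a signal subscription.
import dbus
from dbus.mainloop.glib import DBusGMainLoop
from gi.repository import GLib

DBusGMainLoop(set_as_default=True)        # a main loop is needed to receive signals
bus = dbus.SessionBus()

# One-to-one request-response: invoke a method on a remote object and get a reply.
proxy = bus.get_object('org.freedesktop.Notifications', '/org/freedesktop/Notifications')
notifications = dbus.Interface(proxy, dbus_interface='org.freedesktop.Notifications')
notif_id = notifications.Notify('example-app', 0, '', 'Hello', 'Sent over the bus',
                                dbus.Array([], signature='s'),
                                dbus.Dictionary({}, signature='sv'), 5000)

# Publish/subscribe: ask the bus to deliver matching broadcast signals here.
def on_closed(closed_id, reason):
    print('notification', closed_id, 'closed, reason', reason)

bus.add_signal_receiver(on_closed,
                        dbus_interface='org.freedesktop.Notifications',
                        signal_name='NotificationClosed')

GLib.MainLoop().run()
```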
Internals Most existing D-Bus implementations follow the architecture of the reference implementation. This architecture consists of two main components. The first is a point-to-point communications library that implements the D-Bus wire protocol in order to exchange messages between two processes; in the reference implementation this library is libdbus. In other implementations libdbus may be wrapped by another higher-level library, language binding, or entirely replaced by a different standalone implementation that serves the same purpose. This library only supports one-to-one communications between two processes. The second is a special daemon process that plays the bus role and to which the rest of the processes connect using any D-Bus point-to-point communications library. This process is also known as the message bus daemon, since it is responsible for routing messages from any process connected to the bus to another. In the reference implementation this role is performed by dbus-daemon, which itself is built on top of libdbus. Another implementation of the message bus daemon is dbus-broker, which is built on top of sd-bus. The libdbus library (or its equivalent) internally uses a native lower-level IPC mechanism to transport the required D-Bus messages between the two processes at both ends of the D-Bus connection. The D-Bus specification doesn't mandate which particular IPC transport mechanisms should be available to use, as it's the communications library that decides what transport methods it supports. For instance, on Unix-like operating systems such as Linux, libdbus typically uses Unix domain sockets as the underlying transport method, but it also supports TCP sockets. The communications libraries of both processes must agree on the selected transport method and also on the particular channel used for their communication. This information is defined by what D-Bus calls an address. Unix-domain sockets are filesystem objects, and therefore they can be identified by a filename, so a valid address would be unix:path=/tmp/.hiddensocket. Both processes must pass the same address to their respective communications libraries to establish the D-Bus connection between them. An address can also provide additional data to the communications library in the form of comma-separated key=value pairs. This way, for example, it can provide authentication information to a specific type of connection that supports it. When a message bus daemon like dbus-daemon is used to implement a D-Bus bus, all processes that want to connect to the bus must know the bus address, the address by which a process can establish a D-Bus connection to the central message bus process. In this scenario, the message bus daemon selects the bus address and the remaining processes must pass that value to their corresponding libdbus or equivalent libraries. dbus-daemon defines a different bus address for every bus instance it provides. These addresses are defined in the daemon's configuration files. Two processes can use a D-Bus connection to exchange messages directly between them, but this is not the way in which D-Bus is normally intended to be used. The usual way is to always use a message bus daemon (i.e. dbus-daemon) as a communications central point to which each process should establish its point-to-point D-Bus connection. When a process (client or service) sends a D-Bus message, the message bus process receives it in the first instance and delivers it to the appropriate recipient. The message bus daemon may be seen as a hub or router in charge of getting each message to its destination by repeating it through the D-Bus connection to the recipient process. The recipient process is determined by the destination bus name in the message's header field, or by the subscription information to signals maintained by the message bus daemon in the case of signal propagation messages.
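In practice, a client typically learns the session bus address from its environment before opening its point-to-point connection to the daemon. The sketch below assumes the dbus-python bindings; the socket paths shown in the comments are only illustrative.

```python
# Minimal sketch of connecting to a bus by address with dbus-python.
import os
import dbus
import dbus.bus

# The session bus publishes its address to clients through an environment
# variable; a typical value looks like "unix:path=/run/user/1000/bus".
addr = os.environ['DBUS_SESSION_BUS_ADDRESS']
print(addr)

# dbus.SessionBus() reads that variable itself; BusConnection accepts an
# explicit address string, which is how a process would reach a private bus
# such as the unix:path=/tmp/.hiddensocket example mentioned above.
conn = dbus.bus.BusConnection(addr)
print(conn.get_unique_name())       # the ":1.x"-style unique connection name
```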
The message bus daemon can also produce its own messages as a response to certain conditions, such as an error message to a process that sent a message to a nonexistent bus name. dbus-daemon improves the feature set already provided by D-Bus itself with additional functionality. For example, service activation allows automatic starting of services when needed: when the first request to any bus name of such a service arrives at the message bus daemon. This way, service processes neither need to be launched during the system initialization or user initialization stage nor need they consume memory or other resources when not being used. This feature was originally implemented using setuid helpers, but nowadays it can also be provided by systemd's service activation framework. Service activation is an important feature that facilitates the management of the process lifecycle of services (for example when a desktop component should start or stop). History and adoption D-Bus was started in 2002 by Havoc Pennington, Alex Larsson (Red Hat) and Anders Carlsson. Version 1.0, considered API stable, was released in November 2006. Heavily influenced by the DCOP system used by versions 2 and 3 of KDE, D-Bus has replaced DCOP in the KDE 4 release. An implementation of D-Bus supports most POSIX operating systems, and a port for Windows exists. It is used by Qt 4 and later by GNOME. In GNOME it has gradually replaced most parts of the earlier Bonobo mechanism. It is also used by Xfce. One of the earlier adopters was the (nowadays deprecated) Hardware Abstraction Layer (HAL). HAL used D-Bus to export information about hardware that has been added to or removed from the computer. The usage of D-Bus is steadily expanding beyond the initial scope of desktop environments to cover an increasing amount of system services. For instance, the NetworkManager network daemon, the BlueZ bluetooth stack and the PulseAudio sound server use D-Bus to provide part or all of their services. systemd uses the D-Bus wire protocol for communication between its components (for example between the systemctl utility and the systemd manager), and is also promoting traditional system daemons to D-Bus services, such as logind. Another heavy user of D-Bus is Polkit, whose policy authority daemon is implemented as a service connected to the system bus. Implementations libdbus Although there are several implementations of D-Bus, the most widely used is the reference implementation libdbus, developed by the same freedesktop.org project that designed the specification. However, libdbus is a low-level implementation that was never meant to be used directly by application developers, but as a reference guide for other reimplementations of D-Bus (such as those included in standard libraries of desktop environments, or in programming language bindings). The freedesktop.org project itself recommends application authors to "use one of the higher level bindings or implementations" instead. The predominance of libdbus as the most used D-Bus implementation caused the terms "D-Bus" and "libdbus" to often be used interchangeably, leading to confusion. GDBus GDBus is an implementation of D-Bus based on GIO streams included in GLib, aiming to be used by GTK+ and GNOME. GDBus is not a wrapper of libdbus, but a complete and independent reimplementation of the D-Bus specification and protocol. MATE Desktop and Xfce (version 4.14), which are also based on GTK+ 3, also use GDBus. QtDBus QtDBus is an implementation of D-Bus included in the Qt library since its version 4.2.
This component is used by KDE applications, libraries and components to access the D-Bus services available in a system. sd-bus In 2013, the systemd project rewrote libdbus in an effort to simplify the code, which also resulted in a significant increase in overall D-Bus performance. In preliminary benchmarks, BMW found that systemd's D-Bus library increased performance by 360%. By version 221 of systemd, the sd-bus API was declared stable. libnih-dbus The libnih project provides a light-weight "standard library" of C support for D-Bus. Additionally, it has good support for cross-compiling. kdbus kdbus was a project that aimed to reimplement D-Bus as a kernel-mediated peer-to-peer inter-process communication mechanism. Besides performance improvements, kdbus would have had advantages arising from other Linux kernel features such as namespaces and auditing, security from the kernel mediating, closing race conditions, and allowing D-Bus to be used during boot and shutdown (as needed by systemd). kdbus inclusion in the Linux kernel proved controversial, and it was dropped in favor of BUS1, a more generic inter-process communication mechanism. zbus zbus is a native Rust library for D-Bus. Its main strength is its macros, which make communicating with services and implementing services straightforward. Language bindings Several programming language bindings for D-Bus have been developed, such as those for Java, C# and Ruby. See also Linux on the desktop Common Language Infrastructure Common Object Request Broker Architecture Component Object Model Distributed Component Object Model Foreign function interface Java remote method invocation Remote procedure call XPCOM References External links D-Bus home page at Freedesktop.org D-Bus specification Introduction to D-Bus on the Freedesktop.org wiki D-Bus Tutorial DBus Overview Application layer protocols C++ libraries Collabora Free network-related software Freedesktop.org Inter-process communication Remote procedure call Software using the Academic Free License